[00:08:10] 6Labs, 5Patch-For-Review: Separate flannel's etcd from k8s' etcd - https://phabricator.wikimedia.org/T125371#1988247 (10yuvipanda) 5Open>3Resolved Done! (a lot more commits that didn't reference this task, though)
[00:11:19] chasemp: andrewbogott valhallasw`cloud just as an update: k8s and flannel (both require etcd) are now on separate etcd clusters
[00:11:19] and these are also real etcd clusters, with 3 small instances each
[00:11:19] so properly highly available
[00:11:19] etcd is to k8s what bdb is to gridengine...
[00:11:20] (just a fyi)
[00:11:20] we at least have a chance of understanding etcd when it goes wrong :)
[00:11:20] yes, and an upstream that'll listen :)
[00:11:23] plus we use it in prod to
[00:11:25] *too
[03:12:13] 6Labs, 5Patch-For-Review: Figure out how to deal with SSL cert issues for kubernetes masters - https://phabricator.wikimedia.org/T119814#1988799 (10yuvipanda) Docs at https://wikitech.wikimedia.org/wiki/Puppet#SANs_for_puppet_certs
[03:30:26] (03CR) 10Tim Landscheidt: "I personally don't like virtual environments & Co. because a) they promote a culture where a programmer doesn't have to look left or right" (031 comment) [labs/toollabs] - 10https://gerrit.wikimedia.org/r/234934 (https://phabricator.wikimedia.org/T91231) (owner: 10Tim Landscheidt)
[03:36:19] PROBLEM - ToolLabs Home Page on toollabs is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:36:48] 6Labs, 10Tool-Labs: Setup DNS for kubernetes services - https://phabricator.wikimedia.org/T111914#1988817 (10yuvipanda)
[03:37:05] it's fine, shinken-wm
[03:40:10] RECOVERY - ToolLabs Home Page on toollabs is OK: HTTP OK: HTTP/1.1 200 OK - 778307 bytes in 4.068 second response time
[09:05:42] 6Labs, 10Labs-Infrastructure: Instance console does not give output / keystroke access - https://phabricator.wikimedia.org/T64847#1989167 (10hashar) 5Resolved>3Open Reopening, AFAIK we still cannot access instance consoles.
[12:14:29] 6Labs, 10Tool-Labs: Tool Labs: shared Pywikibot code not available - https://phabricator.wikimedia.org/T125505#1989468 (10Incola) 3NEW
[12:21:22] 6Labs, 10Tool-Labs: Tool Labs: shared Pywikibot code not available - https://phabricator.wikimedia.org/T125505#1989485 (10valhallasw) As far as I can see it's there: ``` valhallasw@tools-bastion-01:~$ ls /shared/pywikipedia/core ChangeLog LICENSE scripts CREDITS...
[12:22:59] 6Labs, 10Tool-Labs, 10pywikibot-core: Tool Labs: shared Pywikibot code not available - https://phabricator.wikimedia.org/T125505#1989488 (10valhallasw)
[12:25:34] 6Labs, 10Tool-Labs, 10pywikibot-core: Tool Labs: shared Pywikibot code not available - https://phabricator.wikimedia.org/T125505#1989499 (10Ato_01) I have also been receiving error messages for a couple of hours: "python: can't open file '/shared/pywikipedia/core/pwb.py': [Errno 2] No such file or directory"
[12:34:18] 6Labs, 10Tool-Labs, 10pywikibot-core: Tool Labs: shared Pywikibot code not available - https://phabricator.wikimedia.org/T125505#1989533 (10Incola) The problem was on the host tools-bastion-01 but also on the hosts that run the jobs submitted with jsub. This is the log file of a bot task that runs every five...
[12:42:09] 6Labs, 10Tool-Labs, 10pywikibot-core: Tool Labs: shared Pywikibot code not available - https://phabricator.wikimedia.org/T125505#1989550 (10Ladsgroup) I checked and it's working for me. As a wild guess, it's maybe a permission issue on the folder that prevents accessing the file
[13:11:30] 6Labs, 10Tool-Labs, 10pywikibot-core: Tool Labs: shared Pywikibot code not available - https://phabricator.wikimedia.org/T125505#1989605 (10Ato_01) Yes, me too. It is working again.
[13:40:10] valhallasw`cloud, ping
[13:41:01] Steinsplitter, ping
[13:41:29] ?
[13:56:20] CP678|laptop: what?
[13:56:53] valhallasw`cloud, does WMF provide dumps of the text table or do you suppose that would be way too big?
[13:57:27] CP678|laptop: ls /public/dumps/public/nlwiki/20160111/
[13:57:39] that should have the text, I think
[13:57:45] I think nlwiki-20160111-pages-articles1.xml.bz2 ?
[13:57:54] https://dumps.wikimedia.org documents which one is which
[13:58:57] Hmm...
[13:58:59] * CP678|laptop thinks.
[13:59:04] CP678|laptop: nlwiki-20160111-pages-meta-history1.xml.bz2 has the full edit history, nlwiki-20160111-pages-meta-current1.xml.bz2 just has the current pages
[13:59:40] I'm still hammering on the find-the-time-a-piece-of-text-was-added-as-quickly-as-possible problem
[14:00:26] I could swear caching previous results and doing predictive binary searches could speed things up. But not by much, and the memory costs are unacceptable.
[14:01:07] * CP678|laptop has an aha moment
[14:01:52] CP678|laptop: it's also not clear to me why you want to know that revision
[14:02:09] To get the time it was added
[14:02:17] yes, but why?
[14:02:34] by taking a step back, sometimes other approaches become visible
[14:02:35] It's part of an algorithm to accurately fetch a working archive copy of a dead link on Wikipedia
[14:02:56] It's proven to be the most reliable method.
[14:02:58] huh?
[14:03:03] And also the slowest
[14:03:28] also, if you're just looking for 'when was link X added', you might consider a single scan, starting at revision 0, working forwards
[14:03:35] valhallasw`cloud, Cyberbot II attaches wayback copies of dead sources.
[14:03:35] that's a lot of preprocessing, but should be easy to keep up later
[14:04:46] In an attempt to get a most-likely-working copy, the assumption being followed is that archives made at the time the link was added are most likely to be working and to have what is needed to keep the article sourced
[14:04:58] fair enough
[14:05:00] CP678 check that algorithm wikiblame use
[14:05:13] *what
[14:05:47] wikiblame just does a binary search (or a linear backwards search), I think
[14:05:55] zhuyifei1999_, valhallasw`cloud suggested a binary search on the API, since I can't tap the DB directly.
[14:06:12] but in CP678|laptop's use case, the links were often added long ago, so it's less effective
[14:06:20] So I implemented that, but it was still SLOOOWWWW
[14:06:24] I think wikiblame also just grabs revisions from the api
[14:06:31] but the memory improvements were enormous
[14:06:45] I'm pretty sure dump parsing is even slower
[14:06:57] CP678|laptop: why do you need so much memory? you don't need to save each article text, I think?
[14:07:20] So I implemented revision caching and advanced prediction to reduce the number of curl_execs
[14:07:45] but the memory tradeoff was unacceptable considering the minuscule performance benefit.
[14:07:57] Ah. Right, and at ~1M/page for large pages, that can take some memory.
[14:08:14] valhallasw`cloud, exactly
[14:08:39] valhallasw`cloud, I can try a simultaneous binary search
[14:09:34] That oughta reduce the curl_execs by a factor of roughly 1000, theoretically.
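(A minimal sketch of the binary search being discussed here — it is not Cyberbot's actual code. Python with `requests` is an assumption, and the wiki, page title and target URL are placeholders. It fetches the full list of revision IDs cheaply first, then fetches wikitext only for the ~log2(n) probed revisions:)

```python
# Sketch: find the earliest revision of a page whose wikitext contains a
# given URL, with O(log n) content fetches against the MediaWiki API.
import requests

API = "https://en.wikipedia.org/w/api.php"  # assumption: enwiki

def revision_ids(session, title):
    """All revision IDs of `title`, oldest first (ids only, 500 per request)."""
    ids = []
    params = {"action": "query", "format": "json", "prop": "revisions",
              "titles": title, "rvprop": "ids", "rvlimit": "max",
              "rvdir": "newer", "continue": ""}
    while True:
        data = session.get(API, params=params).json()
        page = next(iter(data["query"]["pages"].values()))
        ids.extend(rev["revid"] for rev in page.get("revisions", []))
        if "continue" not in data:
            return ids
        params.update(data["continue"])

def revision_text(session, revid):
    data = session.get(API, params={"action": "query", "format": "json",
                                    "prop": "revisions", "revids": revid,
                                    "rvprop": "content"}).json()
    page = next(iter(data["query"]["pages"].values()))
    return page["revisions"][0].get("*", "")  # empty if the text is hidden

def first_revision_with(session, title, url):
    """Bisect on 'does this revision contain the link yet?'. Assumes the
    link stays in the page once added; a blanked (vandalized) probe
    revision steers the result to just after the vandalism, which -- as
    discussed later in the log -- still yields a usable timestamp."""
    ids = revision_ids(session, title)
    lo, hi = 0, len(ids) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if url in revision_text(session, ids[mid]):
            hi = mid        # already present: first addition is at or before mid
        else:
            lo = mid + 1    # not yet present: first addition must be later
    return ids[lo] if ids and url in revision_text(session, ids[lo]) else None

session = requests.Session()
session.headers["User-Agent"] = "link-dating-sketch/0.1 (example)"
print(first_revision_with(session, "Example", "http://example.com/source"))
```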
[14:09:58] I'd love to see your code :)
[14:10:58] I've got experimental code
[14:10:59] CP678|laptop: uh. You need 1000 curl requests for a single page?
[14:11:09] valhallasw`cloud, no this was over 20 pages
[14:11:12] because log2(702385937) is only 30
[14:11:24] And I've been trying to elegantly reduce that
[14:12:00] For pages with fewer than 100 revisions, I just download the entire history.
[14:13:24] WTF?????
[14:13:46] That dump has a compression ratio of nearly 100%
[14:14:00] 219MB decompresses to 74 GB???
[14:14:05] lol
[14:14:22] I just nearly blew my SSD up.
[14:14:31] compressing text files is very effective
[14:14:40] Indeed
[14:14:53] So much for that idea
[14:15:18] you can just read the gzip directly? :P
[14:16:13] I can?
[14:16:24] My computer always tries to decompress it first.
[14:16:42] then there would be no seeking => advanced xml reader required
[14:16:58] I'm pretty sure you don't want to build a DOM with a 74GB file either
[14:17:12] No...
[14:17:17] I don't
[14:17:48] I don't even have that space at my disposal. I'm using my trusty laptop right now.
[14:18:12] I only know pywikibot had/has an xmlreader for this, not sure about any other tools, except $ gunzip | grep
[14:19:10] I'll be AFK for a bit
[14:27:11] 6Labs, 6Developer-Relations, 10wikitech.wikimedia.org, 7Epic: [EPIC] Make wikitech more friendly for the multiple audiences it supports - https://phabricator.wikimedia.org/T123425#1989771 (10Qgil)
[14:34:33] zhuyifei1999_: zgrep :-p
[14:34:53] saves you from piping 80GB :-)
[14:34:59] (and zcat, etc)
[14:42:40] oh really
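(A sketch of reading the compressed dump as a stream, along the lines suggested above — decompress on the fly and parse the XML incrementally, so neither the 74 GB of XML nor a DOM ever has to exist. Python's `bz2` and `ElementTree.iterparse` are assumptions; the filename is the nlwiki dump mentioned earlier, and the export-0.10 namespace is an assumption matching dumps of that era:)

```python
# Sketch: stream a pages-meta-history dump without decompressing to disk.
import bz2
import xml.etree.ElementTree as ET

DUMP = "nlwiki-20160111-pages-meta-history1.xml.bz2"
NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # assumption: schema 0.10

def revisions(path):
    """Yield (page_title, timestamp, wikitext) for every revision, in dump order."""
    with bz2.open(path, "rb") as stream:
        title = None
        for _event, elem in ET.iterparse(stream, events=("end",)):
            if elem.tag == NS + "title":
                title = elem.text
            elif elem.tag == NS + "revision":
                yield title, elem.findtext(NS + "timestamp"), elem.findtext(NS + "text") or ""
                elem.clear()   # drop the revision text: keeps memory flat
            elif elem.tag == NS + "page":
                elem.clear()   # drop the emptied <revision> husks too

# e.g. the forward linear scan suggested earlier: first revision adding a link
needle = "http://example.com/source"  # placeholder URL
for page, ts, text in revisions(DUMP):
    if needle in text:
        print(page, ts)
        break
```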
[14:48:47] bd808: https://wikitech.wikimedia.org/wiki/User:Merlijn_van_Deen/NewPortals -- I'm not too happy with the alignment of the search boxes, but the tool labs search box does work really well
[14:48:58] bd808: but maybe it should be in the portal rather than the front page
[14:49:51] valhallasw`cloud: oooh neat.
[14:50:54] bd808: on a sidenote, what do you think of killing the name 'Tool Labs' and just calling it 'Tools' consistently? I feel the 'Labs' part is just confusing for people.
[14:51:18] (maybe even 'wikimedia tools', but that might require some sort of sign-off from legal)
[14:51:49] chasemp had some thoughts about naming too at https://wikitech.wikimedia.org/wiki/Labs_labs_labs/future#Integration_with_Community_Tech
[14:51:55] valhallasw`cloud: yeah there has been a lot of talk on that
[14:52:17] I basically agree wholeheartedly and think the 'labs' moniker in most cases is more harmful than anything
[14:52:31] * bd808 slinks off to finish morning routine
[14:52:49] and morning/afternoon!
[14:53:20] or maybe even more explicitly 'Tool Hosting', and reserve the term 'tool' for the actual tools
[14:53:28] I like that as well
[14:53:28] yeah, the current wording is a bit confusing to me too
[14:53:36] because we're not in the business of providing tools, we're in the business of providing hosting for them
[14:53:38] but I didn't want to say anything because I was so excited about the new homepage :P
[14:54:31] strategically, it might be worth thinking of Labs as generic-purpose infrastructure hosting that includes tool/container hosting, VMs, bare metal servers etc.
[14:54:47] https://wikitech.wikimedia.org/wiki/Labs_labs_labs/future#General some problem outline
[14:54:55] our language here is like 2 years too old basically
[14:55:58] I pitched rebranding labs as the cloud aspect, a wikimedia-specific public hosting
[15:01:07] if we couldn't help ourselves we could even use the word 'cloud'
[15:01:12] but it would pain me
[15:01:18] Yeah, 'Wikimedia Cloud' doesn't even sound so bad
[15:01:31] We could do a contest and see what the community comes up with
[15:02:35] you just want to vote on naming it eXXXtreme Hostin3
[15:03:40] but in all seriousness if that is viable I think it may be a good idea
[15:03:53] I noticed in the old toolserver archies people called it wikicloud or something
[15:04:00] archives
[15:04:10] which is a pretty confusing name itself I guess
[15:17:47] zhuyifei1999_, ping
[15:17:56] pong
[15:18:19] zhuyifei1999_, https://github.com/cyberpower678/Cyberbot_II/tree/test-code/IABot
[15:18:24] That's the experimental code
[15:18:37] that I've been developing for nearly a year now
[15:19:54] that's a lot of stuff in two commits
[15:21:19] Well there is a very messy live
[15:21:22] version
[15:21:30] That has more commits
[15:23:06] zhuyifei1999_, but I did write all of that from scratch though.
[15:23:16] nice
[15:23:43] Hence it taking me almost a year
[15:23:51] I started that last may
[15:27:29] oh I was wondering, does archive.org have any limits on its api usage?
[15:27:53] such as https://github.com/cyberpower678/Cyberbot_II/blob/test-code/IABot/API.php#L239
[15:29:26] last sept or oct (I don't remember) I was asked to implement an archiving function for flickreviewr
[15:33:14] zhuyifei1999_, yes
[15:33:22] it will 503 you if you go too fast
[15:33:48] That being said, I'm serving as a catalyst to support batch querying
[15:33:55] ugh
[15:34:19] then my bot will definitely go too fast
[15:34:33] I'm going to be beta testing their new API soon.
[15:35:12] k
[15:47:39] CP678|laptop: I just realized you can also turn the question around. Ask IA what versions of the page they have, and use those to get revisions of the article around those times
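(A sketch of the "turn the question around" idea: ask the Wayback Machine for its closest snapshot via archive.org's public availability API, backing off on the 503s mentioned above. The URL and timestamp below are placeholders:)

```python
# Sketch: closest Wayback snapshot to a given time, with 503 backoff.
import time
import requests

AVAILABILITY = "https://archive.org/wayback/available"

def closest_snapshot(url, timestamp=None, retries=3):
    """Return (snapshot_url, snapshot_timestamp) or None.
    `timestamp` is YYYYMMDDhhmmss; any prefix such as '20060101' works."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    for attempt in range(retries):
        resp = requests.get(AVAILABILITY, params=params)
        if resp.status_code == 503:      # rate limited: back off, don't hammer
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()
        closest = resp.json().get("archived_snapshots", {}).get("closest")
        if closest and closest.get("available"):
            return closest["url"], closest["timestamp"]
        return None
    return None

print(closest_snapshot("http://example.com/source", "20060101"))
```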
[15:48:45] CP678|laptop: I see a potential problem with this binary search approach: what if the revision at the "needle" is badly vandalized?
[15:49:32] chasemp: and you could actually use prop=externallinks instead of parsing the page yourself
[15:51:41] zhuyifei1999_, unlikely
[15:52:14] Unless the link added a source to the vandalism, vandalism will most likely be the removal of content
[15:53:40] yeah, then the search would go to after the revision instead of before the revision
[15:53:43] valhallasw`cloud, that might be slower, and harder since their API is a bit low on the rate limit
[15:54:11] But it is a good idea
[15:54:26] But it would require too big of a rewrite at the moment
[15:54:41] zhuyifei1999_, you're not making much sense
[15:54:51] huh?
[15:54:51] I fail to see how vandalism affects the bot
[15:55:55] zhuyifei1999_, I fail to see how vandalism affects the bot
[15:56:01] by badly vandalized I mean those edits that say "replaced content with ..." vandalism
[15:56:11] No effect
[15:56:29] Oh wait
[15:56:45] Well then it would be the revision after the vandalism
[15:56:55] It shouldn
[15:56:56] yep
[15:57:00] t be a big deal
[15:57:14] Most likely it will still yield a good result
[15:57:25] ok fine then
[15:57:36] I think that risk is acceptable
[15:58:45] * CP678|laptop scraps his new revision cacher
[15:59:24] * CP678|laptop instead replaces it with a multi-search binary searcher.
[15:59:44] AFK
[16:25:49] grrrit-wm: is not working
[16:25:52] valhallasw`cloud: ^
[16:27:50] * valhallasw`cloud prods grrrit-wm
[16:28:13] !log tools.lolrrit-wm kubectl --user=lolrrit-wm --namespace=lolrrit-wm delete pods/grrrit-e3yvs
[16:28:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.lolrrit-wm/SAL, Master
[16:28:58] Thanks
[16:46:41] Where do I see my instance's name?
[16:47:21] * Tanvir asks for help as a n00b.
[16:49:24] I was on Special:NovaInstance. I saw none.
[17:01:21] Tanvir: what do you mean by 'your instance's name'?
[17:01:50] if you're working on tool labs, you don't have an instance
[17:02:53] Tanvir: https://wikitech.wikimedia.org/wiki/Help:Tool_Labs#Getting_access_to_Tool_Labs
[17:18:31] zhuyifei1999_, valhallasw`cloud : I'm back
[17:54:27] valhallasw`cloud, then how do I proceed from bastion to run my bot?
[17:54:48] I ran groups, it returns project-tools so I have that access.
[18:03:28] http://commtech.wmflabs.org/ gives a 502 error. Any idea how to fix that?
[18:06:46] kaldari, ping
[18:08:22] kaldari, I'm not sure if you're receiving me or not, but I would like to discuss Cyberbot II
[18:08:59] sure, I'm in a meeting for the next hour. Can you ping me an hour from now?
[18:09:08] I can try
[18:09:12] thanks
[18:10:52] Tanvir: follow the tool labs guide
[18:11:43] valhallasw`cloud, I am trying to find the right help page to guide me.
[18:12:01] Maybe you have a better idea where to look?
[18:12:03] Tool labs is such a terrible
[18:12:05] name
[18:12:13] Couldn
[18:12:33] Couldn't we call it community tools, or tools, commtools?
[18:12:34] 6Labs, 10Labs-Infrastructure: Instance console does not give output / keystroke access - https://phabricator.wikimedia.org/T64847#1990542 (10scfc) Do you mean by "we" you as in project administrators or "we" as in Labs ops? AFAIK Labs ops //has// console access to instances via OpenStack. Do you want projec...
[18:13:02] Tanvir: it's all on that page
[18:13:42] Okay, I will dig more. Thanks valhallasw`cloud.
[18:13:51] Tanvir: under getting started and then more specifically in later sections
[18:14:03] Okay
[18:31:15] I think linode is having serious issues so my bouncer is dead fyi
[18:57:28] valhallasw`cloud: is grrrit-wm back up?
[18:57:48] YuviPanda: I think so? I killed the pod and the log said it was reporting things
[18:57:52] ok!
[18:57:54] cool :)
[18:58:43] YuviPanda: except it's still called grrrit-wm1
[18:58:51] YuviPanda: why doesn't it rename itself?
[18:59:58] YuviPanda: I'll restart it again
[18:59:58] good question
[19:00:10] I don't fully know, maybe the underlying IRC library doesn't handle that?
[19:00:24] hm, could be
[19:00:44] there we go
[19:01:00] nagf is down too, fallout from my flannel work yesterday I think
[19:05:03] hmm
[19:07:28] andrewbogott: around ?
[19:07:50] matanya: I am, what's up?
[19:08:17] andrewbogott: I don't know if you are aware but the service https://tools.wmflabs.org/video2commons/
[19:08:18] is up
[19:08:34] it relies on encoding01 as a backend
[19:08:49] that's great!
[19:09:21] I wanted to ask what the capacity for multiple backends is, once we get more encoding/transcoding running in this service
[19:10:18] and by 'backends' you mean VMs?
[19:10:32] encoding VMs, yes
[19:10:40] * andrewbogott looks at some graphs
[19:10:54] the frontend is webui and redis
[19:11:14] backend is job manager and encoding
[19:11:48] * andrewbogott curses ganglia
[19:13:39] matanya, it would take me a while to gather actual proper statistics. But there's certainly room for another VM or two if you need them.
[19:14:00] of this size, right?
[19:14:24] I mean, if you prefer some other type of large local storage and a compute node, that works too
[19:15:44] Yeah, there's a fair bit of storage available at the moment. Ideally we'd have a better solution for instance storage but we don't really have that yet.
[19:16:20] matanya: do you need your quota raised or do you already have headroom?
[19:16:58] for the time being we are good
[19:17:12] if i'll need more, it will be on /srv
[19:17:19] but not at the moment
[19:18:51] matanya: all of your IO is happening on instance-local storage, right? Not NFS?
[19:19:00] indeed
[19:19:55] that's good, just checking :)
[19:26:37] YuviPanda: are you using any magic puppet file transfer tricks for k8s? I remember something that you needed a private puppet master so you could use puppet for credential management, but from what I can see, you're not doing that anymore?
[19:28:07] (I'm still pondering about letsencrypt on labs proxies, but the only option I can think of is by proxying /.well-known/* requests to all proxies in turn -- I'm wondering if puppet could just copy the certs around)
[19:30:10] kaldari, ping
[19:30:16] It's been an hour I believe
[19:30:18] or more
[19:30:50] hello
[19:35:45] CP678|laptop: I was thinking that another component that might be useful for us to build is a centralized logging and reporting interface. It could live on Tool Labs and just have a simple API input, i.e. each time Cyberbot completed fixing a page, it would ping the API with the info about what it did. The API would then store all the data in a logging database. It would also have an output API, so that you could find out when the last time an article was fixed was (if ever), and it would also have a web interface for the Internet Archive that would show pretty stats and graphs.
[19:36:38] kaldari, I love it
[19:37:01] The logging API could also be used by the various bots running on other wikis (es, de, fr, and it so far) so that Cyberbot would know which articles didn't need to be re-fixed soon.
[19:37:33] We just learned that Italian Wikipedia also has their own bot running
[19:38:08] kaldari, sure
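(A minimal sketch of the logging service kaldari describes, assuming Flask and SQLite; the endpoints, fields and schema are invented for illustration, not an existing Tool Labs API:)

```python
# Sketch: centralized link-fix logging with an input API and an output API.
import sqlite3
from flask import Flask, g, jsonify, request

app = Flask(__name__)
DB = "linkfixes.sqlite"  # placeholder; Tool Labs would use its user databases

def db():
    if "db" not in g:
        g.db = sqlite3.connect(DB)
        g.db.execute("""CREATE TABLE IF NOT EXISTS fixes (
            wiki TEXT, page TEXT, bot TEXT, links_fixed INTEGER,
            fixed_at TEXT DEFAULT CURRENT_TIMESTAMP)""")
    return g.db

@app.teardown_appcontext
def close_db(_exc):
    if "db" in g:
        g.pop("db").close()

@app.route("/api/fix", methods=["POST"])
def log_fix():
    """Input API: a bot pings this each time it finishes fixing a page."""
    f = request.get_json()
    db().execute("INSERT INTO fixes (wiki, page, bot, links_fixed) VALUES (?, ?, ?, ?)",
                 (f["wiki"], f["page"], f["bot"], f.get("links_fixed", 0)))
    db().commit()
    return jsonify(ok=True), 201

@app.route("/api/last-fixed")
def last_fixed():
    """Output API: when was this article last fixed (null if never)?"""
    row = db().execute("SELECT MAX(fixed_at) FROM fixes WHERE wiki = ? AND page = ?",
                       (request.args["wiki"], request.args["page"])).fetchone()
    return jsonify(last_fixed=row[0])

if __name__ == "__main__":
    app.run()
```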
[19:38:24] And I was thinking of scrapping the regex and building my own string parsing function
[19:39:16] I have an idea of how to approach this
[19:40:08] oh yeah?
[19:40:17] kaldari, I also realized that some templates are landing in the url field of the database
[19:40:34] And then I remembered that enwiki sometimes uses templates for websites.
[19:40:47] Widely used websites have templates for example
[19:40:53] CP678|laptop: any examples?
[19:41:12] And if they go down, they can easily be fixed by replacing the pointer url in the template
[19:41:34] The bot acknowledges some of these templates, but it doesn't know what to do with them.
[19:42:42] that sounds like a hard problem to solve
[19:42:48] kaldari, I'm trying to find an example. I'm scavenging the draft DB
[19:42:58] kaldari, actually no it's not
[19:43:29] We could create a template pointer class, and have the bot internally replace the template with the working url but save it as a template
[19:43:44] It already parses templates quite well.
[19:43:46] YuviPanda: oh, look: https://docs.puppetlabs.com/guides/exported_resources.html . But that does require a private master
[19:44:11] kaldari, the difficult part is getting all the templates that convert to urls.
[19:44:42] CP678|laptop: I guess you would have to do something like that, but it sounds like a pain to keep up to date (especially for multiple wikis). I guess it would have to be part of the on-wiki config
[20:25:11] kaldari, sorry I had to step out.
[20:25:34] kaldari, great idea.
[20:25:55] It shouldn't be too painful. Websites don't update every day.
[20:30:49] kaldari, Template:Allmusic is a url generator
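(A sketch of the "template pointer" idea: map URL-generating citation templates, like Template:Allmusic, to the links they expand to, so the bot can treat a template the way it treats a bare URL. The parameter patterns below are invented for illustration; as kaldari says above, the real mapping would have to live in the on-wiki config:)

```python
# Sketch: resolve URL-generating templates in wikitext to concrete links.
import re

# template name (lowercased) -> function(params) -> URL; entries are examples
TEMPLATE_URLS = {
    "allmusic": lambda p: "http://www.allmusic.com/%s/%s"
                          % (p.get("class", "artist"), p.get("id", "")),
}

# naive: does not handle nested templates
TEMPLATE_RE = re.compile(r"\{\{\s*([^|{}]+?)\s*((?:\|[^{}]*)?)\}\}")

def template_links(wikitext):
    """Yield (template_name, url) for each recognized URL-generating template."""
    for match in TEMPLATE_RE.finditer(wikitext):
        name = match.group(1).strip().lower()
        build = TEMPLATE_URLS.get(name)
        if build is None:
            continue
        params, pos = {}, 0
        for part in match.group(2).lstrip("|").split("|"):
            if not part:
                continue
            key, eq, value = part.partition("=")
            if eq:
                params[key.strip()] = value.strip()
            else:
                pos += 1
                params[str(pos)] = key.strip()  # unnamed params are numbered
        yield name, build(params)

for name, url in template_links("{{Allmusic|class=artist|id=mn0000000000}}"):
    print(name, url)   # allmusic http://www.allmusic.com/artist/mn0000000000
```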
[20:31:58] having an odd issue getting a new user into our labs instances. I added gehel as a projectadmin to the search project in wikitech. He's added his ssh key. He's able to login to bastion but not to our instances. auth.log says failed public key, and then access denied for user gehel by PAM account configuration [preauth]
[20:34:17] ebernhardson: if you run ldaplist -l passwd gehel, does that return their public key?
[20:34:40] 6Labs: Gusy up Labs proxy 502 page - https://phabricator.wikimedia.org/T125576#1991433 (10Andrew) 3NEW
[20:34:50] valhallasw`cloud: it gives a key, checking with them to see
[20:35:02] ebernhardson: oh! do they have agent forwarding (or better yet: proxycommand) set up?
[20:35:09] 6Labs: Gussy up Labs proxy 502 page - https://phabricator.wikimedia.org/T125576#1991447 (10Andrew)
[20:35:33] valhallasw`cloud: should, he just set it up today based on https://wikitech.wikimedia.org/wiki/SSH_access#Labs
[20:35:43] valhallasw`cloud: and i can verify in auth.log that the connection is coming in through bastion
[20:36:12] ebernhardson: ok, so he's connecting from his own computer to x.wmflabs.org directly? not 'ssh bastion.wmflabs.org' and on bastion 'ssh <instance>'?
[20:36:33] valhallasw`cloud: will find out in a sec, i think he took a quick smoke break before retrying :)
[20:37:10] ebernhardson: the auth log should show the actual keys tried, I think
[20:37:38] not entirely sure about that, though
[20:37:43] it gives an RSA fingerprint, not sure how to compare that to the public key
[20:38:02] gehel: over here :)
[20:38:14] ebernhardson: on the instance try
[20:38:17] usr/sbin/ssh-key-ldap-lookup
[20:38:30] w/ a leading / :)
[20:38:32] gehel: another thing to try is 'ssh -vv <host>', and see if that shows anything obvious (should show the exact keys tried)
[20:38:52] yup the ssh key looks to match
[20:39:08] gehel: verify you're using `ssh suggesty.eqiad.wmflabs` w/ the ProxyCommand line in ~/.ssh/config ?
[20:39:16] ok so then yeah -vvv to see if he's presenting the right one
[20:39:27] ssh -vv gives me the fingerprint of the key I expect
[20:39:41] gehel: can you ssh to the bastion directly?
[20:39:47] yep, I got the proxy command
[20:39:58] yes but can you ssh to the bastion itself
[20:40:18] I also tried to log in to bastion with agent forwarding (I know, that's bad) same error
[20:40:34] ok so if you can't login to the bastion then the problem is there :)
[20:40:39] agent forwarding is disabled except on the special ops bastion
[20:40:41] suggesty will never work otherwise
[20:41:11] nope, I can login to the bastion with no problem
[20:41:52] can you login to any vm in the project at all / ebernhardson did you add him to the project?
[20:42:03] chasemp: i added him to project
[20:42:13] and he can see the right stuff in wikitech which confirms it took
[20:42:35] and it seems that bastion.wmflabs.org allows forwarding.
[20:42:38] ok I'm curious now
[20:42:41] what's the username?
[20:42:48] same as irc, gehel
[20:44:11] chasemp: I'm happy to provide interesting problems ;-)
[20:44:43] ebernhardson: I think it's the wrong project
[20:44:47] 52471(project-shiny-r)
[20:44:57] or at least ldap doesn't see gehel as a member
[20:45:07] oh, i'm an idiot
[20:45:14] suggesty is in a different project :P
[20:45:20] also known as, did you add him to the project :D
[20:45:21] gehel: try cirrus-browser-bot.eqiad.wmflabs
[20:45:28] cool
[20:45:43] i added him to search project, suggesty got created in a different project because we ran out of quota, and then we never recreated it after getting more quota :P
[20:46:10] ebernhardson, chasemp : that works
[20:46:30] so the problem was PEBKAC ;)
[20:46:33] Thx a lot !
[20:46:34] adding you to shiny-r project too
[20:54:00] YuviPanda: related to discourse, i've been asked about how we can setup an email address for people to create topics / reply to topics by emailing. any idea what the appropriate route is there? discourse basically just needs a mail server it can fetch mail from
[20:54:32] ebernhardson: for testing, I'd actually recommend just using sendgrid or something
[20:54:45] ebernhardson: someone will kill me for it now :D
[20:54:57] ebernhardson: I know we can set it up on our own infra as well, but mail is a big black box for me...
[20:55:31] hmm, 12,000 emails for free. i suppose good enough for testing
[20:55:48] YuviPanda: yea they asked about setting up mail on the server, and i have no interest in setting up or figuring out SMTP servers ;)
[20:56:12] yeah
[20:56:40] ebernhardson: you might need MX records, which we can do. let me know if you do need 'em
[20:57:16] kk
[21:06:08] well that's promising, the welcome to sendgrid email ended up in spam ;)
[21:06:52] what is the email thing for? topics on what? I don't get it
[21:08:04] ebernhardson: heh, nice
[21:10:17] chasemp: discourse is a platform that some people on the wikimedia-l list asked about trying out as a replacement to mailing lists. It has a "friendly" web interface, and can also be interacted with like a normal mailing list via emails
[21:10:32] chasemp: i set up an instance in labs since they ran into some issues understanding how to set it up (it runs in docker)
[21:11:00] it's basically a test instance for trying it out and evaluating if it could potentially replace mailing lists for some use cases
[21:11:54] https://discourse.wmflabs.org/
[21:12:24] ah thanks
[21:13:01] email gets pretty complicated as you get into trust relationship stuff but if you are looking to propose this into prod I would ask mutante
[21:13:17] he redid all the email stuff...last quarter or so? and has the most context on current setup I think
[21:13:22] i think they might, eventually, propose its use in prod. but it's very much an exploratory phase right now
[21:17:37] actually I like the UI
[21:27:49] YuviPanda: turns out sendgrid is the wrong direction, it sends emails but i need to receive emails :( sending emails somehow magically worked through mx1001.wikimedia.org
[21:28:23] * ebernhardson will poke around some more
[21:37:32] ebernhardson: I think tools is the only mail-receiving host -- basically, you need a mail server with an MX record setup
[21:37:51] I think the easiest might be to somehow use tools' infra as long as you're testing
[21:38:35] ebernhardson: or just poll gmail? that seems to be one of the standard options
[21:39:28] yea discourse suggests gmail in their docs so i just started setting that up, but of course i've run into oddities there now :) after signing up discourse.wmflabs.org with my ebernhardson@wikimedia.org account, now admin.google.com only tells me about wikimedia.org :)
[21:39:34] fun! :)
[21:40:11] 'after signing up discourse.wmflabs.org'?
[21:40:45] to get email to a particular domain, instead of @gmail.com, you need to sign up to some thingie
[21:41:00] google apps
[21:41:37] ah, but that's not going to work unless you get google's mail server set up in wmflabs' DNS
[21:42:01] ebernhardson: I would actually just register wmflabsdiscourse@gmail.com
[21:42:09] hmm, yea that looks like it will be easier
[21:47:16] ebernhardson: and I *think* it might be possible to set up discourse@tools.wmflabs.org as a forward to that address (although I'm not sure if discourse would understand that setup)
[21:48:58] but I would just start with the wmflabsdiscourse option for now
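(A sketch of the "just poll gmail" option: fetch unseen messages over IMAP, the way Discourse's polling mode consumes reply-by-email. The account name is the placeholder from the discussion; real Gmail access would also need an app password or OAuth rather than a plain password:)

```python
# Sketch: poll an inbox over IMAP for unseen messages.
import email
import imaplib

HOST = "imap.gmail.com"
USER = "wmflabsdiscourse@gmail.com"  # placeholder account from the discussion
PASSWORD = "app-specific-password"   # placeholder credential

def plain_body(msg):
    """First text/plain part, decoded; naive but enough for a sketch."""
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                return part.get_payload(decode=True) or b""
        return b""
    return msg.get_payload(decode=True) or b""

def fetch_unread():
    """Return (subject, sender, body) for every unseen message in INBOX."""
    out = []
    imap = imaplib.IMAP4_SSL(HOST)
    try:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        _typ, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _typ, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            out.append((msg["Subject"], msg["From"],
                        plain_body(msg).decode(errors="replace")))
    finally:
        imap.logout()
    return out

for subject, sender, _body in fetch_unread():
    print(sender, "->", subject)
```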
[22:04:10] YuviPanda: The last completed query at quarry is 9 hours old, and the latest has been queued for 5 hours. Is that normal?
[22:04:28] http://quarry.wmflabs.org/query/1850 <-- has been waiting for 5 hours
[22:10:56] Luke081515: ouch, I'll check it in a bit
[22:27:57] 6Labs, 10Tool-Labs, 6Security-Team: consider making individual tools on tool labs have their own domain - https://phabricator.wikimedia.org/T125589#1991911 (10Bawolff) 3NEW
[22:28:19] 6Labs, 10Tool-Labs, 6Security-Team: consider making individual tools on tool labs have their own domain - https://phabricator.wikimedia.org/T125589#1991919 (10Bawolff)
[22:30:53] 6Labs, 10Tool-Labs, 6Security-Team: consider making individual tools on tool labs have their own domain - https://phabricator.wikimedia.org/T125589#1991932 (10valhallasw) It requires a *.tools.wmflabs.org ssl certificate, but otherwise there's no technical reason why this would not be possible. As for other...
[22:31:11] 6Labs, 10Tool-Labs, 6Security-Team: consider making individual tools on tool labs have their own X.tools.wmflabs.org subdomain - https://phabricator.wikimedia.org/T125589#1991942 (10valhallasw)
[22:36:55] 6Labs, 10Labs-Infrastructure, 10DBA, 6operations: Set up additional filters for Echo tables - https://phabricator.wikimedia.org/T125591#1991970 (10jcrespo)
[22:37:07] 6Labs, 10DBA, 6operations: Set up additional filters for Echo tables - https://phabricator.wikimedia.org/T125591#1991971 (10jcrespo)
[22:39:35] 6Labs, 10DBA, 6operations: Set up additional filters for Echo tables - https://phabricator.wikimedia.org/T125591#1991983 (10jcrespo) Two separate jobs to do here: * Add echo tables to puppet:manifests/realm.pp * Delete existing hidden tables
[22:41:04] 6Labs, 10Tool-Labs, 6Security-Team: consider making individual tools on tool labs have their own X.tools.wmflabs.org subdomain - https://phabricator.wikimedia.org/T125589#1991994 (10Bawolff) >>! In T125589#1991932, @valhallasw wrote: > It requires a *.tools.wmflabs.org ssl certificate, but otherwise there's...
[22:46:42] 6Labs, 10Tool-Labs, 6Security-Team: consider making individual tools on tool labs have their own X.tools.wmflabs.org subdomain - https://phabricator.wikimedia.org/T125589#1992012 (10yuvipanda) The only problem would be that we already have advertised in some places login.tools.wmflabs.org, but that can be mo...
[22:49:36] YuviPanda, is something wrong with Quarry?
[22:49:43] Stuff is staying queued for hours: http://quarry.wmflabs.org/query/runs/all
[22:57:55] matt_flaschen: am on it, should be fixed soon
[23:06:04] 6Labs, 10MediaWiki-extensions-OpenStackManager, 10wikitech.wikimedia.org, 5Patch-For-Review, 5WMF-deploy-2016-02-09_(1.27.0-wmf.13): Wikitech often loses track of internal openstack/nova session - https://phabricator.wikimedia.org/T101199#1992128 (10Krenair) 5Open>3Resolved
[23:06:21] kaldari, you there?
[23:06:35] (03PS4) 10Tim Landscheidt: Add list-user-databases command [labs/toollabs] - 10https://gerrit.wikimedia.org/r/234934 (https://phabricator.wikimedia.org/T91231)
[23:07:25] (03CR) 10Tim Landscheidt: Add list-user-databases command [labs/toollabs] - 10https://gerrit.wikimedia.org/r/234934 (https://phabricator.wikimedia.org/T91231) (owner: 10Tim Landscheidt)
[23:07:41] Thanks YuviPanda
[23:08:49] (03CR) 10jenkins-bot: [V: 04-1] Add list-user-databases command [labs/toollabs] - 10https://gerrit.wikimedia.org/r/234934 (https://phabricator.wikimedia.org/T91231) (owner: 10Tim Landscheidt)
[23:23:10] (03CR) 10Tim Landscheidt: "recheck" [labs/toollabs] - 10https://gerrit.wikimedia.org/r/234934 (https://phabricator.wikimedia.org/T91231) (owner: 10Tim Landscheidt)
[23:59:16] kaldari, I hope you don't mind me reassigning the priority
[23:59:28] Cyberpower678: go for it
[23:59:34] Cool
[23:59:45] I'm fine-tuning the latest code for deployment
[23:59:48] Cyberpower678: also please add any thoughts or suggestions that you have