[00:39:43] Krinkle: Freenode just had to do some jiggering of SSL stuff - they blogged about it, lemme find the link [00:40:15] https://blog.freenode.net/2013/07/server-hosting-and-trust/ SSL certs changed [00:51:58] Ryan_Lane: Freenode SSL certs changed https://blog.freenode.net/2013/07/server-hosting-and-trust/ -- could that be causing some of these issues? [00:52:11] shouldn't [00:52:27] the issues are timeouts [00:52:49] nm [00:52:57] is kma500 still in the office? [00:53:02] (I doubt it) [01:20:47] sumanah-usually, I was wondering which host they got rid of... [01:54:56] [bz] (8NEW - created by: 2Antoine "hashar" Musso, priority: 4Unprioritized - 6normal) [Bug 51874] vhtcpd needs to support purge request send over unicast - https://bugzilla.wikimedia.org/show_bug.cgi?id=51874 [07:46:28] . [07:47:23] !log bots deleting all application servers [07:47:25] Logged the message, Master [08:22:44] Cyberpower678: :< [08:22:44] Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 79 bytes) in /data/project/xtools/public_html/pcount/counter.php on line 223 [08:26:34] Damianz: meh [08:26:44] the nagiosbuilder is fetching non existent instances [08:27:01] can u fix? [08:30:04] o.o [08:42:59] Coren|Away: where is ident server for tool labs [08:43:54] !rb :o [08:43:55] broken? 
report a bug: https://bugzilla.wikimedia.org/enter_bug.cgi?product=Wikimedia%20Labs [08:44:43] [bz] (8NEW - created by: 2Peter Bena, priority: 4Unprioritized - 6normal) [Bug 51935] wm-bot to tools - https://bugzilla.wikimedia.org/show_bug.cgi?id=51935 [08:47:22] [bz] (8NEW - created by: 2Peter Bena, priority: 4Unprioritized - 6normal) [Bug 51936] Set up a server for query relaying - https://bugzilla.wikimedia.org/show_bug.cgi?id=51936 [08:50:52] [bz] (8NEW - created by: 2Peter Bena, priority: 4Lowest - 6minor) [Bug 51937] execution hosts are randomly unresponsive - https://bugzilla.wikimedia.org/show_bug.cgi?id=51937 [08:54:47] [bz] (8NEW - created by: 2silke.meyer, priority: 4Unprioritized - 6major) [Bug 51890] Bots compenent in bugzilla is obsolete - https://bugzilla.wikimedia.org/show_bug.cgi?id=51890 [09:13:07] petan: I might be off bsql01 today ;p [09:13:15] /might/ [09:13:29] ok [09:24:52] !screenfix [09:24:53] script /dev/null [09:51:05] [bz] (8NEW - created by: 2Peter Bena, priority: 4Unprioritized - 6normal) [Bug 51943] Resolve vmem issues in memory limits - https://bugzilla.wikimedia.org/show_bug.cgi?id=51943 [09:51:37] !vmem [09:51:37] qstat -F h_vmem [09:51:41] @search stack [09:51:41] Results (Found 3): os-change, queue, blueprint-dns, [10:42:23] [bz] (8RESOLVED - created by: 2Antoine "hashar" Musso, priority: 4Unprioritized - 6normal) [Bug 51874] vhtcpd needs to support purge request send over unicast - https://bugzilla.wikimedia.org/show_bug.cgi?id=51874 [11:00:29] [bz] (8NEW - created by: 2Antoine "hashar" Musso, priority: 4Unprioritized - 6critical) [Bug 51955] puppet broken on all instances (../private/manifests/passwords.pp does not exist) - https://bugzilla.wikimedia.org/show_bug.cgi?id=51955 [11:18:50] [bz] (8REOPENED - created by: 2Antoine "hashar" Musso, priority: 4Unprioritized - 6normal) [Bug 51700] https://login.wikimedia.beta.wmflabs.org/ trapped in an infinite self-redirect - https://bugzilla.wikimedia.org/show_bug.cgi?id=51700 
[11:24:05] hashar, hmm - http://en.wikipedia.beta.wmflabs.org/wiki/Special:Contributions/Maintenance_script indicates that role::beta::autoupdater isn't working [11:26:45] MaxSem: gadget sync is done with misc::beta::sync-site-resources i think [11:27:24] maybe the class is not applied [11:28:07] June 18th 22:13 hashar: Applying MaxSem 'misc::beta::sync-site-resources' to deployment-bastion. That syncs .css articles from production to beta! [11:28:26] wikitech lists the class being applied on deployment-bastion [11:28:56] and it is in apache crontab # Puppet Name: sync-site-resources [11:28:57] * 12 * * * /usr/local/bin/sync-site-resources >/dev/null 2>&1 [11:30:03] manually running it [11:30:35] !log deployment-prep manually running sync-site-resources : su - apache -s /bin/bash then /usr/local/bin/sync-site-resources [11:30:38] Logged the message, Master [11:31:22] MaxSem: that did a bunch of updates http://en.wikipedia.beta.wmflabs.org/wiki/Special:Contributions/Maintenance_script [11:55:58] addshore, what did you try to do? [12:41:37] can someone around try to run puppetd -tv on one of their instances [12:41:38] ? [12:41:53] it is broken for me Error 400 on SERVER: Could [12:41:54] not parse for environment production: No file(s) found for import of [12:41:54] '../private/manifests/passwords.pp' at /etc/puppet/manifests/base.pp:10 on node [12:41:55] i-0000031a.pmtpa.wmflabs [12:42:23] checking [12:42:39] I guess the labs puppet master is not cloning the labs/private repo [12:43:01] same [12:43:11] No file(s) found for import of '../private/manifests/passwords.pp' [12:43:55] That is a real shame because I was gonna puppet some stuff this morning. [12:44:49] thanks :-) [12:45:00] apparently the private dir got moved on puppet master [12:45:02] my bug is https://bugzilla.wikimedia.org/show_bug.cgi?id=51955 [12:45:23] and I bet labs puppet master ended up being broken [12:45:54] paravoid: do you have access to labs puppetmaster?
[12:46:10] it got broken apparently by some changes made to how private repos is fetched on puppetmaster [12:46:16] my bug is https://bugzilla.wikimedia.org/show_bug.cgi?id=51955 [12:46:28] No file(s) found for import of '../private/manifests/passwords.pp' at /etc/puppet/manifests/base.pp:10 [12:46:29] :( [12:52:13] I am out for a snack [12:52:23] will need to catch someone from ops to fix up the paths on puppet master [12:52:41] hashar: I saw that puppet doesn't work :( [12:52:51] it complains about something :/ [12:52:51] petan: yup puppetmaster broken bug 51955 [12:53:04] the private repo checkout got moved to a different place [12:53:26] nothing I can do :/ [12:53:38] I am out to get a snack be back soon [12:53:43] you can switch to puppetmaster self :P [12:54:44] hello there! [12:55:22] anything I can do for https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Access_Request/Jens_Ohlig to get it approved? :) [12:58:13] sure [12:58:47] petan: like what? ^^ [13:00:20] done [13:00:22] petan: \o/ [13:00:25] thanks! [13:00:28] yw [13:06:33] re [13:15:20] !screenfix [13:15:20] script /dev/null [14:23:10] andrewbogott: the poor puppet master on virt0 gives me some HTML :-] [14:23:21]
Could not prepare for execution: Got 1 failure(s) while initializing: change from absent to directory failed: Could not set 'directory on ensure: Permission denied - /etc/puppet/manifests [14:23:22] Yeah, I can't tell /what/ it's trying to do [14:25:46] anyone here? [14:26:03] Pratyya: Nope. We're all absent. :-) What can I help you with? [14:26:55] Coren|Away: do child processes count towards the memory limit too, in SGE? [14:27:02] if i spawn a git and it does things, does that count? [14:27:11] how can my bot work for 24 hours. I mean when I shutdown my pc it'll still work Coren|Away [14:27:52] Pratyya: have you read https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [14:27:56] Pratyya: specifically, https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Continuous_jobs_.28such_as_bots.29 [14:28:05] Coren|Away: he wants information about putting it on labs.. [14:28:11] [bz] (8NEW - created by: 2Peter Bena, priority: 4Lowest - 6minor) [Bug 51937] execution hosts are randomly unresponsive - https://bugzilla.wikimedia.org/show_bug.cgi?id=51937 [14:28:50] Pratyya: Your best bet is to run it on tool labs, which is designed for that very purpose. [14:28:55] !toolsdoc [14:28:55] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [14:29:05] This link ^^ gives a number of hints to get you started. [14:29:23] Don't hesitate to ask here if you need further information or help. [14:29:52] :) see Pratyya... told you this was where to go. :p [14:30:13] yeah Technical_13 [14:31:36] [bz] (8NEW - created by: 2Peter Bena, priority: 4Normal - 6normal) [Bug 51936] Set up a server for query relaying - https://bugzilla.wikimedia.org/show_bug.cgi?id=51936 [14:32:38] but Coren the problem is I want to run my bot in 3 wikis. it's pywikibot. but it only runs in one wiki. [14:34:09] Pratyya: I'm not sure I understand what the problem is. Several people here run bots on more than one wiki.
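Alongside the Help link Coren gives above, submitting a bot as a continuous job looked roughly like this at the time (a hedged sketch: `jstart` is the continuous-job helper described in the Tools docs; the job name and script are illustrative):

```
# Submit the bot as a continuous job: the grid restarts it if it dies,
# and it keeps running after you log off.
jstart -N mybot python my-pywikibot-script.py

# Later, check that it is still running:
qstat
```

Running the same bot against three wikis would then just be three such jobs (or one job whose script iterates over the wikis).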
[15:03:45] [bz] (8NEW - created by: 2Peter Bena, priority: 4Low - 6normal) [Bug 51935] wm-bot to tools - https://bugzilla.wikimedia.org/show_bug.cgi?id=51935 [15:21:04] Coren: how is identd set up? [15:21:10] for irc bots [15:21:21] I saw that bots on tools project get ident's from server [15:22:09] petan: The simplest possible way; there's an identd running on all exec hosts. [15:22:17] ah... [15:22:37] firewall allows the ircd to query it from exec nodes? [15:22:44] I mean, each exec node has its own public IP? [15:22:49] * Coren nods. [15:23:10] That was necessary anyways to avoid unhelpful NAT making all bots appear from the same IP. [15:23:19] ok [15:23:49] Coren: are we going to clear the mailq some day? :P [15:24:27] it seems that e-mails which have .forward can't be delivered for some reason, not sure if all of them [15:25:04] No, that's certainly not the case, though some are in exponential backoff. I'd rather not flush it so as to not send them all at once. [15:25:21] ok [15:25:55] Coren: BTW, on Toolserver we had an Icinga alert for mail queue older than x/containing more than y messages. Would be nice to have that on Tools so that hiccups are noticed. [15:25:56] we are now on 10k, few hours ago it was about 4k [15:26:22] petan: Hm. WTF is sending out thousands of emails / hour? [15:26:36] local-spbot [15:26:39] not sure, maybe just I don't remember correctly [15:26:45] let me check my history if that's possible [15:27:09] forwarding to e�@ekuss.de [15:27:13] i.e.: broken. [15:27:50] "mailq | wc -l" is 10812, so "only" about 3k. [15:28:31] ah indeed [15:28:40] I see only 3 destinations, too.
[15:28:45] well, then mailq | wc -l was about 4k a while ago :P [15:28:57] local-bingle, local-bugello and local-spot [15:29:05] local-spbot* [15:29:33] 1174 local-bingle@tools.wmflabs.org [15:29:37] 2175 local-bugello@tools.wmflabs.org [15:29:41] 255 local-spbot@tools.wmflabs.org [15:29:42] local-spbot has a broken .forward [15:30:06] local-bingle has a maintainer with no home (never logged in) [15:30:26] Same with local-bugello [15:30:44] What's causing the mails for bingle then? [15:30:48] So mail to that maintainer is stalled. [15:31:02] (Both have more than one maintainer) [15:31:12] k [15:31:24] bingle and buggle would be awjr's (he's on vacation now) [15:31:31] * YuviPanda is a maintainer too [15:31:38] Yeah, but it's maxsem that never logged in. [15:31:54] Add a manual .forward without maxsem for the time being? [15:32:11] ah [15:32:12] right [15:32:15] who pronounces my name in vain? [15:33:17] * YuviPanda hides behind Coren [15:34:49] okay, logged into tools-login [15:35:02] any other place? [15:35:59] That should be enough for Tools. [15:36:28] You might want to add a ~/.forward if you don't want to read mail online. [15:37:17] does that file accept /dev/null as a destination?:P [15:38:04] MaxSem: Actually, it does, but that marks you as a neglectful maintainer. :-) [15:38:20] Better to change the tools' .forwards then to exclude your username/only include users who are interested in them :-). [15:38:54] awjr is the 'real' maintainer, I think we're just on it 'in case things go horribly wrong and need a kick' [15:39:20] (A good argument can be made that if your tool sends you mail you don't want to read, then it shouldn't have sent it in the first place) [15:39:35] YuviPanda: The trick would be to adjust the /tool's/ .forward accordingly. [15:39:46] right [15:40:04] MaxSem: But yeah, just logging in to tools-login.wmflabs.org will create your home and unclog that email.
:-) [15:46:21] Cyberpower678: get the edit count and nice graphs for addbot on en wiki :( [15:47:30] Coren: how do I make become load a .bashrc file [15:47:37] Cyberpower678: Also, you might want to (a) make sure your tools don't send that much email and (b) clean up your mailbox once in a while. [15:47:42] now when I become the tool it doesn't have any profile loaded [15:48:03] petan: It's a login shell. You can source .bashrc from the .bash_profile if you want, or just put what you need in there. [15:48:11] where [15:48:12] Coren, ? [15:48:16] What mail. [15:48:28] -rw------- 1 cyberpower678 wikidev 650838460 Jul 24 15:19 /var/mail/cyberpower678 [15:48:53] petan: .bash_profile [15:48:56] * Cyberpower678 scratches his head. He doesn't have any tools that send emails. :/ [15:49:02] yay [15:49:34] addshore, 2000000+ edits is a lot. Let's see [15:49:36] Cyberpower678: They will if you have noisy crontabs, or use the -m option to qsub et al. [15:50:01] addshore, There's no way I can fix that unless Coren ups the limit. :p [15:50:12] Ups what limit? [15:50:17] :P [15:50:29] Coren, memory allocation. :p [15:50:30] I wonder how much memory it would actually require [15:50:40] Cyberpower678: Your jobs aren't -quiet, so cron will send you an email anytime one starts. [15:50:44] Probably 1 or 2 gigs. [15:50:49] Cyberpower678: Use -mem [15:51:02] "Failed to add Mgrover to deployment-prep." [15:51:07] what's wrong? [15:51:16] Coren, addshore and I are talking about the webserver. [15:51:17] MaxSem: Does he have shell? [15:51:23] Cyberpower678: Oh! [15:51:25] Which you set a max limit on. [15:51:27] she does [15:51:57] Yes, I have. Otherwise, I'd need to allocate /way/ too many resources for them. But... pro tip: the web servers are submit hosts! [15:52:10] Send the work off to the grid. [15:52:18] Cyberpower678: makes sense! [15:52:26] Let me see if I can conserve some resources.
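The ~/.forward files discussed above are plain lists of destination addresses, one per line; mail to the tool account is re-sent to each of them. A sketch of what a tool's file might contain (addresses purely illustrative):

```
maintainer-one@example.org
maintainer-two@example.org
```

As Coren notes, /dev/null is also accepted as a destination, at the cost of nobody ever reading the tool's mail.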
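Coren's point above — `become` starts a login shell, which reads ~/.bash_profile rather than ~/.bashrc — is conventionally handled by sourcing the latter from the former:

```shell
# ~/.bash_profile — read by login shells, such as the one `become` starts.
# Source ~/.bashrc so the same settings apply in both kinds of shell.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```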
[15:52:35] evaluate how much memory you think a request would take [15:52:48] if it is too much send it to the grid (the user would have to wait a bit longer but i dont suppose they would mind) [15:53:02] addshore, that depends largely on how many edits the user being looked up has. [15:53:07] addshore: Probably not that much longer, even. [15:53:28] Coren: just the time for OGE to pick up the job which I think is under 10 seconds [15:53:50] addshore, that would mean I'd have to take the tool down for a considerable amount of time just to overhaul its guts. :( [15:53:56] not sure what it is configured at :) but that's what it seems :) [15:54:04] Cyberpower678: why? [15:54:10] where is the code? :> [15:54:13] link me :D [15:54:27] Because the processing scripts are everywhere. [15:54:31] addshore: I think it's 5 times per min, so 12s max 6s average. [15:54:43] !logs [15:54:43] http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-labs/ [15:54:45] :) [15:55:43] Coren: Well done by the way :) Everything seems to be holding up with more and more use :) [15:56:21] -mem Request amount of memory for the job. [15:56:23] (number prefixed by 'k', 'm' or 'g') [15:56:25] Coren: prefixed?? [15:56:27] addshore: Save the congratulations for once I have gotten rid of the [bleep] [bleep]ing [bleep] controller issue on the NFS server. [15:56:29] like -mem m20 [15:56:30] ? [15:56:43] Err, no, that's an error in the usage. Suffixed. [15:56:48] :o [15:57:48] Coren: 60% more bandwidth used on the webservers (a prediction) this month compared to last month :) [15:58:15] everything's going up :) and im sure nfs will behave soon :) [16:00:20] interesting how 2.7 GB (67%) of status codes have been 404 [16:02:23] Coren: wm-bot doesn't want to connect to freenode from tools :( [16:02:29] is there something the bot needs to do? [16:02:35] like contact identd [16:02:45] petan: No. What is the error you are getting?
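As the exchange above establishes, the amount handed to -mem is suffixed with 'k', 'm' or 'g' (the usage text saying "prefixed" was a typo); hypothetical invocations:

```
jsub -mem 512m my-bot.sh     # request 512 MiB of h_vmem for the job
jsub -mem 2g heavy-task.sh   # request 2 GiB
qstat -F h_vmem              # inspect per-host h_vmem headroom (!vmem)
```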
[16:03:14] addshore: Broken links, mostly to /~geohack [16:03:16] heh [16:03:26] all I see Coren :) [16:03:32] Coren: LOG [07/24/2013 16:02:12]: Waiting for all instances to connect to irc [16:03:41] LOG [07/24/2013 16:02:12]: DEBUG: Waiting for wm-bot [16:03:41] at some point this might be related to nickname being used lol [16:03:48] ok, let me fix that [16:03:50] Successful hits on favicon.ico 1948344.4 % heh [16:05:40] FUCK tools-login is so lagged what the hell is going on there [16:06:27] petan: Load is <3, and it's snappy for me. Perhaps you have lagginess when crossing the Atlantic? [16:06:36] maybe... :/ [16:06:45] wtf... [16:07:16] local-wm-bot@tools-login:~$ tail core.err [16:07:17] Killed [16:07:26] o.O [16:07:48] Plenty of ram available, and the vm isn't thrashing. [16:08:24] Well, maybe not "plenty". I may have been a bit overly conservative when I specced that VM. [16:09:23] Biggest user of ram atm is an emacs session. [16:12:30] Just look at xtools folder. All of what you see belongs to the edit counter and is integrated into various other tools. [16:12:31] * YuviPanda mentions vim [16:12:35] addshore, ^ [16:14:31] Coren: I am running it on grid, not login [16:16:24] WTF [16:16:44] now it is running according to qstat but doesn't write anything to logs o.O [16:16:57] meh, now it is [16:17:11] maybe some nfs caching?
[16:17:21] oh lol [16:17:23] I am an idiot [16:17:35] complaining about log files and I don't see that bot already joined this channel [16:17:44] wm-bota: hi [16:17:48] You are root identified by name .*@wikimedia/Petrb [16:17:48] You are root identified by name .*@wikimedia/Petrb [16:17:52] :o [16:18:18] ok let's see for how long is wm-bota going to last before it dies :P [16:18:32] shall it be a test of tools project [16:19:18] 692827 core wm-bot Task / Running 2013-07-24 16:16:08 CPU: 0.5s VMEM: 711M/1.7G [16:19:19] haha [16:19:23] 711M of vmem [16:19:29] Coren, mgrover still has problems: while she's in the members list for deployment-prep, she can't log into deployment-bastion [16:19:30] that bot barely needs 20mb of resident memory [16:20:06] @add #wm-bot [16:20:06] This channel is already in db [16:22:26] Coren: can we change config of identd [16:22:31] Coren: so it doesn't have local- in it [16:22:49] that doesn't provide us with so many combinations, given that only 4 characters of tool name is used [16:23:09] two tools with the same 4 leading chars would have the exact same ident [16:23:15] &add #wm-bot [16:23:15] This channel is already in db [16:23:23] &help [16:23:23] I am running http://meta.wikimedia.org/wiki/WM-Bot version wikimedia bot v. 1.20.0.16 my source code is licensed under GPL and located at https://github.com/benapetr/wikimedia-bot I will be very happy if you fix my bugs or implement new features [16:25:30] !ping [16:25:30] pong [16:25:37] coren, the labs puppetmaster is misbehaving. I'm going to step away but will work on it more this afternoon… have forwarded info to Ryan in the meantime. [16:25:44] But, don't be confused when labs instances don't update :) [16:26:36] good morning! [16:29:48] coren: I had a basic question about accounts in tool labs/what to call them. There are three, no?
1) Labs account (which one must create to get shell access), 2) Tool Labs user account (created in response to request for access to TL) and 3) Tool Labs tools account (created by users). Is that correct? [16:32:34] * YuviPanda pokes Coren about 'does memory limits include memory used by child processes too?' [16:37:06] addshore, there's just too many edits. For me to truly be able to offer limit free results, I would need my own webserver. :/ [16:38:26] addshore, have you touched Peachy yet? [16:42:25] andrewbogott_afk: Noted. [16:43:24] kma500: Not quite. What you refer to in (2) is the same account as in (1), it's just requesting access, not creating a new account. (I.e.: there is the Labs/Wikitech account, which is your [personal] identity for everything labs). [16:44:01] kma500: Tool accounts aren't "users" per se, but services. They have a user ID and all that, but that's a technical aspect. It's not a personal account like the Labs account. [16:45:10] For most purposes, it's easier to see "Tool accounts" as groups with flexible memberships than as identities. [16:46:58] okay. thanks for the clarifications! [16:47:23] * YuviPanda pokes Coren about 'does memory limits include memory used by child processes too?' [16:51:54] YuviPanda: Yes, but in an odd way. The hard limit propagates to the children, but doesn't limit the sum. SGE, OTOH, monitors the sum of all children. [16:52:26] Coren: right. so if I shell out to another process, and that blows up, my job will be killed with it too [16:53:09] YuviPanda: Yes. The "safe" way to do this is to manage your children yourself by giving them a set fraction of your total budget (i.e.: setrlimit() between your fork() and the exec()) [16:53:36] Memory management of multiprocess programs is nontrivial.
[16:53:50] Coren: hmm, so my use case is that *normally* I'll just need less than 512MB of VMEM, but for one thing alone (doing anything with mediawiki-core) I seem to need a lot more [16:54:02] and this job is continuous [16:54:16] so I'll have to have this be a larger limit even tho it won't be used most times [16:54:16] * Coren ponders. [16:54:37] And that one thing with mediawiki-core can't be spun off? [16:55:44] Coren: well, not easily. Since I'll want to make sure that there's only one job accessing one repo at a time, and would have to implement some form of locking [16:55:45] Coren: if it is all one process, this is trivial [16:55:49] Perhaps the simplest solution that would cover your use case would be to allow the exec hosts to submit jobs themselves. There's nothing that /prevents/ that, nominally, but I was a little worried about runaway jobs eating all the slots. [16:55:58] *isn't [16:56:35] Coren: yeah, forkbombs via that way... :) [16:57:45] YuviPanda: Another solution might just be to go ahead and allocate for your worst case scenario and if you end up taking too much resources we just spin you out on your own queue/vm [16:58:00] Coren: hmm, alright [16:58:39] The reason this needs to be done this way is that, in the end, we /have/ to allocate for the worst possible case if we want to guarantee that every job has the resources it needs. [16:58:59] So that one job blowing up can't bring others down. [17:00:40] Coren: yeah [17:00:43] Coren: so I'll just max out memory. [17:01:27] (PS3) coren: Tool Labs: new custom error messages [labs/toollabs] - https://gerrit.wikimedia.org/r/75480 [17:06:20] Coren: do you know if it's possible to import data from a labs page to wikipedia by ajax? Or who would know it?
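Coren's setrlimit()-between-fork()-and-exec() advice has a rough shell analogue: give the child its own, smaller limit inside a subshell before exec'ing it, so a blow-up in the child hits the child's budget first. A minimal sketch (the 512 MiB figure is illustrative):

```shell
# Run a command under its own virtual-memory budget (in KiB). The subshell
# plays the role of fork(), ulimit the role of setrlimit(), and exec then
# replaces the subshell with the child, limit already in place.
run_child_with_budget() {
    budget_kb=$1; shift
    ( ulimit -v "$budget_kb" 2>/dev/null; exec "$@" )
}

run_child_with_budget 524288 echo "child ran inside a 512 MiB budget"
```

A child exceeding its share is then killed by its own limit rather than dragging the parent job past the sum that SGE monitors.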
[17:07:55] Alchimista: I see no obstacle that would prevent it provided your javascript was written right; I seem to remember there is some important black magic to take into account because of the single-source browser rules, but I'm no javascript expert. [17:08:37] Your best bet, I think, is to ask on labs-l or wikitech-l where javascript gurus are known to lurk. :-) [17:08:55] you need to use jsonp, *or* you can set appropriate headers from your tool [17:09:02] the former is much easier than the latter [17:09:09] provided you are only *reading* content (only GET, no POST) [17:09:23] Coren: i'm not a js expert, that's why i've asked. xss are critical, and some browsers block other pages requests. this is a new territory for me [17:09:45] YuviPanda: are you familiar with it? perhaps you could give me some advice :D [17:10:07] Alchimista: I know barely enough to be aware of the issues involved, I'd be of little to no help in implementation. [17:10:28] Alchimista: are you just going to read data from your tool? [17:10:39] Alchimista: use JSONP [17:11:09] YuviPanda: just read. i was thinking on a json, and then a gadget to read it on wiki [17:11:19] yeah that should work [17:15:59] thanks YuviPanda [17:16:41] Alchimista: :) [17:21:46] YuviPanda: Do you have experience with writing java unit tests by any wild chance? [17:22:06] i've written trivial ones with JUnit [17:22:09] but really trivial [17:23:37] Hmmm...I'm kinda stuck. See, Myrrix is a recommendation engine. I have got a Java class that uses Myrrix's client classes to get recommendations from a myrrix service running on a specific port. [17:23:56] the service is tomcat, but it's irrelevant. [17:24:13] But unit tests can't have dependencies on services, can they? [17:25:29] well, I am not so strict about that :) You'll find that many flame wars have been fought over this... [17:25:57] Oh god.
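YuviPanda's JSONP suggestion — the tool wraps its JSON in a callback named by the requester, so an on-wiki gadget can read it cross-origin with a plain GET — can be sketched server-side. Everything here (function name, payload) is illustrative, not an existing API:

```shell
# Emit a JSONP response: a JavaScript "call" of the requested callback,
# with the JSON payload as its argument.
jsonp_response() {
    callback=$1   # from the request's ?callback= parameter
    payload=$2    # the JSON the tool exposes (read-only, GET requests)
    printf 'Content-Type: application/javascript\r\n\r\n'
    printf '%s(%s);\n' "$callback" "$payload"
}

# What the gadget's cross-origin <script> request would receive back:
jsonp_response "mwCallback" '{"pages": 42}'
```

On the wiki side, a gadget consumes this via an injected `<script src=...?callback=mwCallback>` (or jQuery's `dataType: "jsonp"`); setting CORS headers from the tool is the "appropriate headers" alternative mentioned above.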
:D [17:26:09] my unit tests were testing a mediawiki api [17:26:58] so obviously I've a dependency on a running mediawiki instance [17:26:58] theoretically they aren't 'unit tests' [17:27:30] nileshc: but they're *useful* to me, so they stay that way. I'd have to create a mock interface for all of the mediawiki api to have them really be 'unit tests', and frankly I don't think that's that useful [17:27:35] Exactly. To be pedantic, those are integration tests. [17:27:51] YuviPanda: Exactly! [17:28:07] indeed, but I try to *not* be pedantic :P [17:28:11] useful tests are useful [17:31:18] YuviPanda: Yes, good tests by any other name are still good tests. :) [17:36:32] (CR) coren: [C: 2 V: 2] "Reflects current contents." [labs/toollabs] - https://gerrit.wikimedia.org/r/75480 (owner: coren) [17:36:33] (Merged) coren: Tool Labs: new custom error messages [labs/toollabs] - https://gerrit.wikimedia.org/r/75480 (owner: coren) [17:40:15] coren: is there any existing information about sftp access/or anything we should say about that specifically in the docs? I couldn't find much beyond the fact that it is supported. [17:41:12] kma500: sftp is part of the ssh functionality; it uses the same transport and authentication. [17:42:09] Coren: you need to mention take with scp, I think. Since people get confused about scp'ing to their tool accounts [17:42:33] okay. I'm asking because I saw a user noting that sftp was not mentioned anywhere in the docs. Maybe I'll just mention it under the ssh section. [17:42:45] kma500: That'd be the right place. [17:42:54] Coren: wait, we support s*ftp*? I thought we only did scp [17:43:06] is sftp also over ssh? I thought it was not... [17:43:43] YuviPanda: Yeah, I suck at noticing those kind of things. To me things like unix owners, groups and file permissions no longer ever register as significant.
:-) [17:43:48] YuviPanda: http://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol [17:44:05] Not FTP over SSL, which was obsolete before it was ever implemented. [17:44:08] kma500: Coren yeah, you should mention 'take' along with sftp/scp [17:44:15] ah, nice Coren [17:45:12] YuviPanda--can you point me to more info about take? [17:45:47] hmm, Coren is take documented? :D [17:45:47] kma500: let me look [17:46:06] Coren: no docs for take on !toolsdoc [17:46:28] YuviPanda: I don't think it is; it took long enough for me to get opsen to grudgingly accept it and that fell between the cracks. [17:46:34] right [17:46:56] kma500: so no docs yet. [17:46:58] scfc_de still needs to finish https://gerrit.wikimedia.org/r/#/c/71112/ too. [17:47:13] * Coren poked scfc_de by "accident" there. :-) [17:47:22] ;-) [17:47:42] YuviPanda. okay. what should I mention about it in the docs? [17:47:57] kma500: I think Coren will be able to explain this better, since he wrote it. [17:48:02] (the guide that is) [17:48:52] kma500: That thing is currently (a) indispensable and (b) completely undocumented (it doesn't even have a usage blurb). The short of it: [17:49:22] Usage: take FILE... [17:49:45] Change ownership of the FILE(s) and directories recursively to that of the calling user [17:50:03] provided that (a) they own the containing directory and (b) they are a member of the group owning the file. [17:50:42] Can you give me an example for when someone would do that? [17:50:45] This is the easy "I need the tool to own these files I created as my user" solution. [17:51:02] If you scp things to your tool's home, for instance, they are going to be owned by you. [17:51:13] Ah. I see. [17:51:30] In many cases, you'd want them owned by the tool instead. With take, the tool can take over ownership of the files. [17:52:05] *most cases :) [17:52:15] Yeah, most. [17:52:36] So, if I wanted my tool to take the scp'd files, I'd become my tool and then take the files?
[17:52:38] The converse might also be useful (having a maintainer take over a file created by the tool) but that's more marginal. [17:52:44] kma500: Exactly. [17:53:07] Coren, can I get my own webserver? [17:53:17] :p [17:53:19] okay. thanks! [17:53:27] Cyberpower678: Probably not. Why do you ask? [17:53:51] I'm still getting complaints that memory keeps running out. [17:54:01] addshore, is an example. [17:54:14] Actually, yes you certainly can. I can recommend xlhost.com, they have cheap hosting and are very reliable -- I've had them host my stuff for nearly a decade now. :-) [17:54:41] Cyberpower678: Send the heavy lifting to the grid. [17:54:54] Coren, err, I need replication. [17:55:17] the jobs system seems to work pretty good, in my experience so far [17:55:28] Cyberpower678: Like I said; send the actual work to the grid. You should really only be using the webserver for presentation, ideally. [17:55:31] Lovely. I have to overhaul the tools now. -_- [17:55:32] i send my bot there and do query stuff [17:55:54] Cyberpower678: With "-sync y", the tool is executed synchronously so you don't have to worry about dealing with scheduling. [17:56:33] Coren, when I begin to overhaul the tools, I'm going to be stealing a lot of your time. :p [17:56:38] Cyberpower678: Your web application uses unbounded memory; that's a bug at best. :-) [17:56:55] Cyberpower678: It's not stolen since I'm actually paid specifically to do that. :-) [17:57:06] * aude ddos Cyberpower678 's tool :) [17:57:15] agree with coren [17:57:27] * Cyberpower678 plants a virus into aude [17:57:31] heh [17:57:44] Coren, I'll make sure it's stolen. [17:57:46] :D [18:08:00] kma500: I have some computer problems but will be back in ~30 min :) [18:09:01] Hope the problems resolve easily, sumanah! [18:11:44] Coren: very basic question about categories of clients. Are putty and winscp considered graphical file managers? Is that how people think of them?
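kma500's summary above — scp the files in as yourself, then become the tool and take them — as a session sketch (these commands exist only on Tool Labs hosts; file and tool names are illustrative):

```
# On your own machine: files copied in arrive owned by you, not the tool.
scp mytool.tar.gz tools-login.wmflabs.org:/data/project/mytool/

# On tools-login: switch to the tool account, then claim ownership.
become mytool
take mytool.tar.gz    # recursive chown to the calling (tool) account,
                      # allowed because it owns the containing directory
                      # and is in the file's group
```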
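Coren's web-servers-are-submit-hosts tip — keep the webserver for presentation and push the heavy lifting to the grid — might look like this from a web tool (a hedged sketch; the script name and limits are illustrative):

```
# From the webserver (a submit host): run the expensive part as a grid job
# and wait for it, so the web process itself stays small.
jsub -sync y -mem 1g count-edits.sh "$username"   # blocks until the job ends
```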
[18:11:56] <^d> I just tried adding 4 instances to the beta project for elastic, 2 of them came up fine, other 2 are showing ERROR on wikitech and I can't ssh to them. [18:12:21] <^d> deployment-es0 and -es1 are fine, -es2 and -es3 are the broken ones. [18:12:26] kma500: winscp is, not putty (the latter is just a combo ssh-client/console) [18:12:58] thanks [18:15:57] <^d> Coren: Any ideas? ^ [18:17:02] ^d: Ryan_Lane is a better bet to help you with that. He can probably answer you before I figured out where to look. :-) If he's not around, I'll look into it. [18:18:09] <^d> Ryan_Lane and I should probably just work out some sort of indentured servitude contract...it's going to take me 7 years to catch up to all I owe him ;-) [18:18:10] ^d: error? [18:18:11] huh [18:18:12] weird [18:28:39] ^d: seems it's that the network node isn't responding quickly enough [18:29:00] I'm going to make some changes to it to make this less likely [18:32:34] <^d> Okie dokie. Deleting + recreating worked fine, -es2 and 3 are now up [18:32:39] cool [18:39:41] [bz] (8REOPENED - created by: 2Chris McMahon, priority: 4Unprioritized - 6major) [Bug 50623] Entering AFTv5 feedback causes error - https://bugzilla.wikimedia.org/show_bug.cgi?id=50623 [18:39:42] [bz] (8NEW - created by: 2Chris McMahon, priority: 4Unprioritized - 6major) [Bug 50622] Special:NewPagesFeed intermittently fails on beta cluster; causes test failure - https://bugzilla.wikimedia.org/show_bug.cgi?id=50622 [19:05:03] is there a cgi-bin for projects? [19:07:27] (PS1) Yuvipanda: Ignore Merges by Jenkins-Bot [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/75676 (via SuchABot) [19:14:06] (PS1) Yuvipanda: Fix typo from last commit [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/75677 (via SuchABot) [19:15:10] (CR) Yuvipanda: [C: 2 V: 2] "Reviewers would've helped find the last one!" 
[labs/tools/grrrit] - https://gerrit.wikimedia.org/r/75677 (owner: SuchABot) [19:19:29] [bz] (8RESOLVED - created by: 2Antoine "hashar" Musso, priority: 4Unprioritized - 6normal) [Bug 51700] https://login.wikimedia.beta.wmflabs.org/ trapped in an infinite self-redirect - https://bugzilla.wikimedia.org/show_bug.cgi?id=51700 [19:20:39] [bz] (8NEW - created by: 2Chris Steipp, priority: 4Unprioritized - 6normal) [Bug 51622] Add loginwiki to beta - https://bugzilla.wikimedia.org/show_bug.cgi?id=51622 [19:28:49] [bz] (8RESOLVED - created by: 2Antoine "hashar" Musso, priority: 4Unprioritized - 6critical) [Bug 51955] puppet broken on all instances (../private/manifests/passwords.pp does not exist) - https://bugzilla.wikimedia.org/show_bug.cgi?id=51955 [19:30:10] coren: What is the storage limit of a tool account? I found 50GB somewhere, but not sure if that applies. [19:30:43] [bz] (8NEW - created by: 2spage, priority: 4Unprioritized - 6normal) [Bug 51580] configure beta labs for SUL2 - https://bugzilla.wikimedia.org/show_bug.cgi?id=51580 [19:30:44] [bz] (8NEW - created by: 2Antoine "hashar" Musso, priority: 4Low - 6normal) [Bug 48501] [OPS] beta: get SSL certificates - https://bugzilla.wikimedia.org/show_bug.cgi?id=48501 [19:31:03] kma500: There is no quota set atm, we have plenty of storage. We may or may not impose one if people go overboard in the future, but that's a human problem and not a technical one. [19:31:13] thanks! 
[19:31:15] [bz] (8RESOLVED - created by: 2Liangent, priority: 4Normal - 6normal) [Bug 50056] ?status page is escaping "<" incorrectly - https://bugzilla.wikimedia.org/show_bug.cgi?id=50056 [19:38:24] [bz] (8RESOLVED - created by: 2Liangent, priority: 4Normal - 6normal) [Bug 48811] jsub breaks whitespaces in arguments - https://bugzilla.wikimedia.org/show_bug.cgi?id=48811 [19:44:22] [bz] (8RESOLVED - created by: 2Tim Landscheidt, priority: 4Normal - 6normal) [Bug 49159] Relax suPHP's paranoia - https://bugzilla.wikimedia.org/show_bug.cgi?id=49159 [19:47:10] coren: I have drafts of sections 2-4 up on etherpad. Could you review when you have a moment? [19:47:35] kma500: Can you gimme the linky? I don't have it handy. [19:47:46] http://etherpad.wmflabs.org/pad/p/Tool_Labs_Sprint_July_23 [19:47:55] ty [19:57:10] hi kma500 et alia :) how's it going, any particular triumphs so far in the doc sprint? [19:57:13] Coren: can I mail tool account/maintainers from the tools home page? Where do I do this? (to request access, for example) [19:57:25] hi sumanah! [19:57:59] kma500: Incoming mail is pending an okay from Legal and the resolution of a minor skirmish between Erik and Luis. :-) [19:58:28] skrimish? [19:58:36] small quarrel [19:58:40] okay. there's a bunch of stuff about it in the docs. Should we pull it for now? [19:59:21] hey Coren: i think you can delete the 'mobile-stats' group on labs, AFAIK there are no instances associated with it. [19:59:34] sumanah--no triumphs, just plugging though. bit quiet on the etherpad [19:59:51] kma500: I'm still reading over it. I don't know if we should pull it or just disclaim it as [not yet live but soon] [20:00:47] okay. If it's coming soon, disclaimer sounds good. [20:01:02] kma500: re: ' · Ganglia, Icinga, and Nagios systems to systems to help monitor tools ' line on the etherpad, we don't actually have that [20:01:47] I just started a page about that, and we'll probably have that in... 4-5 months? 
not a priority even [20:02:15] oh. [20:02:22] kma500: The only "real" dispute is about what domain the email addresses should be under; it's actually ready to turn on. [20:02:56] kma500: shall I just go ahead and remove that line? [20:03:21] yuvipanda--if it's not true it should go! [20:03:26] I'm confused though. [20:03:34] kma500: in http://etherpad.wmflabs.org/pad/p/Tool_Labs_Sprint_July_23 - are there particular bits that you specifically wish other people would help with? I think line 465: Redis (Security and A note about memcache) qualify..... [20:03:36] kma500: gone :) [20:03:56] I can help with Redis and the security bit, since I set that up [20:03:57] * YuviPanda scrolls [20:03:59] YuviPanda: it seems like you could add it back with a note saying "this doesn't exist yet; to follow progress see [page Yuvi made] [20:04:17] I can't remember now where I saw that these things were listed. Are they relevant at all to Tool Labs at the moment? [20:04:33] those things being ganglia, etc [20:04:44] I think they are relevant [20:04:52] they are profiling and monitoring tools [20:04:54] kma500: we have ganglia, etc for labs in general. But they make sure that labs by itself doesn't go down. They're used by our opsen. [20:05:00] Tools users can not do anything with them at all [20:05:11] YuviPanda: not YET, right? [20:05:23] I want to change that, but that's going to take another 4-5 months. [20:05:36] kma500: https://blog.wikimedia.org/2013/02/05/how-the-technical-operations-team-stops-problems-in-their-tracks/ has more information about ganglia and Nagios. icinga is another monitoring tool (compare to Nagios basically) [20:05:52] So, we could note that in the docs--that it's in the long-term plan.. [20:05:54] sumanah: true, and nobody is 'officially' working on it, nor is it on any roadmap :) [20:06:07] it's just a pet project of mine, just as redis was [20:06:13] Okay. 
So, kma500 maybe it could go in the FAQ instead [20:06:14] ah, I see yuvipanda [20:06:23] kma500: Your explanation of tool accounts is teh 1337 [20:06:25] yes. FAQ is better [20:06:35] I don't think we should put that in *features*. We can definitely add a note elsewhere but [20:06:42] question: are there any plans for adding monitoring or profiling tools? answer: Yes, in the very long term and not guaranteed. [link] [20:06:57] let me find [link] [20:06:57] moment [20:06:57] yup, sumanah. [20:07:06] https://wikitech.wikimedia.org/wiki/User:Yuvipanda/Icinga_for_tools [20:07:07] is link [20:07:14] hi folks, quick and must be very basic API question - I'm getting imageInfo okay for an Image - but what's the action/ prop to get the summary as displayed on the Image page: https://commons.wikimedia.org/wiki/File:Albert_Einstein_Head.jpg [20:07:38] hey chippy. API questions - you should probably ask in #mediawiki [20:07:39] chippy: mind if I answer you in #mediawiki ? [20:07:45] how does one take ownership of a file via terminal ? [20:07:49] okay sorry, thanks! [20:07:51] Betacommand: use the 'take' command [20:07:53] Betacommand: take $file [20:07:56] Betacommand: 'take $file' [20:08:17] thanks both [20:08:30] kma500: we can add Redis to the 'features' list, though. Mind if I add it? [20:08:50] other areas where I could use help are 9. Developing on Tool Labs (especially 9.9, but also 9.5), and any tips/tricks/useful knowledge (can cut that section, of course) [20:08:54] kma500: redis should probably be made an 'official' feature since I plan on using it for my cron-replacement. :-) [20:09:01] okay! [20:09:46] Coren: do we have a process in place for requesting Gerrit repos?
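[Editor's note] The `take $file` answer above is Tool Labs' recursive take-ownership helper. As a rough sketch only (not the real implementation): before changing anything, a take-style tool essentially has to find the entries under a path that the caller does not already own, which plain `find` can express. The `/tmp/take_demo` path below is made up for the demonstration:

```shell
# List files under a directory that are NOT owned by the current user --
# the candidates a take-style tool would chown to the caller.
mkdir -p /tmp/take_demo/sub
touch /tmp/take_demo/sub/data.txt

# We just created everything here ourselves, so this prints nothing.
find /tmp/take_demo ! -user "$(whoami)"
```

The actual ownership change requires the privileges the real `take` command runs with; the sketch only shows the discovery step.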
[20:09:53] coren: i had to look up teh 1337 [20:10:09] YuviPanda: "Ask Coren or Ryan" :-) [20:10:14] :) [20:10:22] Coren: that would be the answer to kma500's 9.5 [20:11:05] [bz] (8NEW - created by: 2Chris McMahon, priority: 4Unprioritized - 6major) [Bug 51988] login broken on beta labs - https://bugzilla.wikimedia.org/show_bug.cgi?id=51988 [20:11:16] Well, "a Gerrit admin" really. The problem is that since deleting gerrit projects is (nearly) impossible, we don't want to automate any part of it. [20:11:32] Coren: deleting projects isn't really impossible anymore, IIRC; ^d just deleted one for me yesterday! [20:11:49] YuviPanda: Oooo! Improvements! [20:11:55] indeed [20:12:32] I think it's still troublesome though. We'll have to ask him about the propriety of having some sort of self-serve mechanism in the future. In the meantime, "as a Gerrit admin" is the way. :-) [20:13:33] Coren: true [20:13:49] kma500: and as for 9.9, I can offer to write the Python part of that tomorrow. [20:14:13] great. Thanks, YuviPanda! [20:14:28] kma500: :) I can also write for nodejs. Will do those tomorrow! [20:14:40] I'm going to head off to sleep now. [20:14:49] good night! [20:15:16] kma500: hmm, also - clarification re: 9.5 [20:15:26] kma500: do you mean 'code that is used by multiple tools'? [20:15:40] kma500: or just 'source code files' used by multiple tools? [20:15:40] yes [20:15:58] err, to rephrase [20:16:01] the former, by multiple tools [20:16:11] 'source code used by multiple tools', or 'files to be used by multiple tools' [20:16:12] not source code [20:16:15] former is things like libraries [20:16:27] latter is things like config files, datasets, images, static files, etc [20:16:59] You know. Now that you break that out, it seems like we should talk about both. [20:17:10] indeed [20:17:29] (1) should be done via git submodules or some such, (2) is more complicated [20:17:38] Coren: can one tool add another tool to its group? [20:19:14] YuviPanda: Huh.
I don't think the UI would let you right now, but there's no technical reason why not. [20:19:31] Coren: right, because if so that can be the solution to (2) [20:20:35] It's not an unreasonable use case. You'd have to open a bz for it, and point andrewbogott at it [20:20:58] Coren: can't you do that from the commandline? [20:21:18] let me open bz [20:22:43] [bz] (8NEW - created by: 2Yuvi Panda, priority: 4Unprioritized - 6normal) [Bug 51990] Allow one tool to add another tool to its group - https://bugzilla.wikimedia.org/show_bug.cgi?id=51990 [20:22:46] Coren: ^ [20:23:35] YuviPanda: I could go mess directly in LDAP, but that'd be an unmaintainable hack. [20:23:48] Coren: ooooh, right. I forgot all our stuff comes from ldap [20:23:49] right [20:24:12] Coren: should I cc andrewbogott, or just mention his nick on IRC multiple times ( andrewbogott, andrewbogott, andrewbogott) and point him to https://bugzilla.wikimedia.org/show_bug.cgi?id=51990? [20:24:13] :) [20:24:19] nah, no need to CC, I think [20:24:56] Heh. [20:25:56] kma500: I expanded out 9.5 a tiny little bit. [20:26:12] great. thanks! [20:26:35] gnite :) [20:26:38] (for real, hopefully) [20:26:46] :) [20:30:28] <^demon> YuviPanda: Yeah, you can, but still needs manual cleanup on github side. [20:30:40] <^demon> And gitblit and jenkins, if we're really cleaning up [20:30:43] ^demon: that can be scripted :) [20:30:58] chrismcmahon: yeah login dead :/ [20:31:30] hashar: you've probably seen it but https://bugzilla.wikimedia.org/show_bug.cgi?id=51988 [20:39:04] chrismcmahon: weird [20:39:10] I have no idea what may be wrong. [20:40:12] !log deployment-prep apt-get upgrading apache32 and apache33. Running puppet on them [20:40:12] Logged the message, Master [20:41:47] !log deployment-prep restarted memcached on both apache boxes. Might clear their caches. [20:41:50] Logged the message, Master [20:42:34] coren, is that just a matter of adding the service user from service group a as a member of service group b?
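[Editor's note] The git-submodule route YuviPanda suggests above for case (1) — a library shared by several tools — might look like the sketch below. The repository names and paths are made up for the demonstration, and newer git versions require explicitly allowing `file://`-style submodule URLs:

```shell
# Sketch: one shared library repo pulled into a tool's repo as a submodule.
set -e
demo=$(mktemp -d)
cd "$demo"

# Stand-in for the shared library repository.
git init -q shared-lib
(
  cd shared-lib
  git config user.email demo@example.org
  git config user.name demo
  echo 'shared code' > lib.py
  git add lib.py
  git commit -qm 'initial library commit'
)

# A tool's own repository adds the library as a submodule.
git init -q my-tool
cd my-tool
git config user.email demo@example.org
git config user.name demo
# git >= 2.38 blocks file-protocol submodules unless explicitly allowed
git -c protocol.file.allow=always submodule add "$demo/shared-lib" shared-lib
git commit -qm 'add shared-lib submodule'
cat shared-lib/lib.py
```

Each tool then pins a specific commit of the shared library and updates it deliberately with `git submodule update`, which is what makes this approach suit shared code rather than shared mutable files (case (2)).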
[20:43:01] andrewbogott: Basically. I see no technical obstacle beyond "the UI won't let you". [20:43:09] ok [20:43:10] <^demon> hashar: I added a couple of new instances to deployment-prep for elastic search. [20:43:22] <^demon> hashar: deployment-es[0-3] [20:43:45] \O/ [20:44:08] if you switch to cirrus search make sure to ping folks on labs-l [20:45:19] chrismcmahon: try getting some auth people involved. I have no idea what is wrong [20:45:30] chrismcmahon: might be an issue in central auth or some weird memcached session issue [20:45:36] I am off to bed [20:45:46] still have to prepare tomorrow lunch for my daughter [20:45:55] ok [20:46:06] I have dumped some stuff on https://bugzilla.wikimedia.org/show_bug.cgi?id=51988 [20:46:27] seems the user memcached session is saved with an id and retrieved with another one [20:46:39] I have no clue how it is working anyway [20:48:27] anyway off to bed [20:57:01] MaxSem: did this merge just break non-Mobile login on beta labs? https://gerrit.wikimedia.org/r/#/c/75772/ (see scrollback ^^) [20:58:00] looking [20:59:16] thanks [21:02:22] chrismcmahon, I have a suspicion that it's due to desktop beta working off varnish not squid like prod [21:02:34] let's check this... [21:03:48] MaxSem: pretty sure only MF uses varnish and not the main pages [21:04:09] chrismcmahon, nope - desktop beta now goes through varnish [21:04:24] or that's what headers are saying:) [21:05:03] huh [21:06:12] what's the equivalent of site.pp for the beta cluster? I'm trying to figure out which machine is 'role::beta::logging::mediawiki' [21:06:25] which machine has the role, I mean. [21:06:36] do i need to query ldap or something? [21:06:37] ori-l, check instance settings [21:06:50] it has checkboxes for roles [21:07:29] right, but i'd prefer not to have to go through each machine [21:07:55] chrismcmahon, I've temp-reverted my change - waiting for beta to get updated [21:07:56] there's no programmatic way to query applied roles? 
not that I know. Ryan_Lane might know otherwise:) [21:08:56] you can query ldap [21:09:06] ou=hosts,dc=wikimedia,dc=org [21:10:26] Hi, I am getting an nginx http status 413 Request Entity when uploading via instance-proxy.wmflabs.org - Do we know what the max file size is, and is this configurable, [21:11:20] Ryan_Lane: do you have an example query in your .bash_history? trying to avoid spending the next hour with the ldapsearch man page [21:11:36] it's not essential, we're not going to have uploads but use images from commons... but could help with testing [21:13:13] ldapsearch -x -D 'cn=proxyagent,ou=profile,dc=wikimedia,dc=org' -W -b 'ou=hosts,dc=wikimedia,dc=org' 'puppetvar=instanceproject=deployment-prep' [21:13:27] the info for proxy agent is in /etc/ldap.conf [21:23:17] petan: non existent? [21:34:24] YuviPanda, is the group-in-group thing something that will be useful on many occasions in the future, or just a one-off? [21:34:34] It's going to make the GUI pretty messy :) [21:37:01] andrewbogott: The use case is generic enough; I can see why tools might want to share configuration files. [21:37:08] ok [21:37:12] andrewbogott: And what do you mean, "make"? :-P [21:37:24] well, messier [21:37:31] actually just taller! [21:38:04] kma500: where are you in the Etherpad? :) [22:21:36] Ryan_Lane: thanks for that :) [22:21:57] ori-l: oh, for the query? worked for you? [22:22:19] just trying it now.. [22:24:54] It's asking me for an ldap password, but I haven't given it a username [22:25:03] yes you did :) [22:25:13] 'cn=proxyagent,ou=profile,dc=wikimedia,dc=org' [22:25:18] and the password for that user is in /etc/ldap/ldap.conf [22:25:41] hunter2 [22:28:41] coren: There's a note in the docs about database shards/replicas. Should that be there? [22:30:53] " [22:30:53] The database server setup follows the same scheme as in production where the different databases are spread on a number of shards.
Those servers can be reached with names following the pattern sN.labsdb where N is the shard number. For instance, the English Wikipedia database is on the s1.labsdb server. [22:34:44] Ryan_Lane: worked! [22:34:49] greay [22:34:51] *great [22:35:02] -D = bind dn [22:35:15] -W = prompt for password (-w is an alternative) [22:35:23] -x is a simple bind [22:35:35] -b is the basedn of the search [22:36:07] 'puppetvar=instanceproject=deployment-prep' is the search pattern. that specific search lets you find everything for the specified project [22:39:25] I should update ldaplist for labs [22:39:28] it's easier to use [22:41:25] oh I remember the niceness of ldaplist. That was easyish to use, yeah [22:42:40] that's a solaris tool that I reimplemented in python [22:42:46] like 6 years ago [22:43:19] I haven't changed the features to ensure it didn't become incompatible, but no one uses solaris anymore :D [22:43:49] coren: also, could you clarify the difference between a 'cluster' and a 'shard'? [22:47:05] kma500: we shard databases into clusters [22:47:06] so..... [22:47:17] we have 800 or so wikis (probably more) [22:47:29] good luck kma500 [22:47:31] we might stick 500 or so into a shard called s5 [22:47:45] s5 itself is a cluster of mysql servers [22:47:52] with a master and a number of slaves [22:48:55] we have shards s1-s7 [22:49:16] kma500: http://noc.wikimedia.org/dbtree/ [22:49:20] Coren, the more I think about this the more I don't understand, which makes me think I don't quite know how service groups work... [22:49:28] Can you talk me through the use case for this? [22:49:38] Thanks, ryan. I'm reading along/thinking about this [22:49:43] yw [22:58:34] https://wikitech.wikimedia.org/wiki/Help:Instances#Searching_for_instances_by_Puppet_role [23:07:21] Coren: I'm about to kill the webservers [23:12:18] ori-l: ah. nice
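[Editor's note] Ryan's shard/cluster explanation above reduces to a lookup: wikis are grouped into shards s1-s7, each shard is a MySQL cluster (a master plus slaves), and the Tool Labs replica for shard sN answers at sN.labsdb. A minimal sketch of that mapping follows; only the enwiki -> s1 assignment is stated above, and the other entries are placeholders, not the real shard layout (see http://noc.wikimedia.org/dbtree/ for the actual one):

```shell
# Map a wiki database name to its sN.labsdb replica host.
# Only enwiki -> s1 is confirmed in the log; the rest are hypothetical.
labsdb_host() {
  case "$1" in
    enwiki)         echo "s1.labsdb" ;;   # stated above
    bgwiki|cswiki)  echo "s2.labsdb" ;;   # placeholder assignment
    frwikibooks)    echo "s3.labsdb" ;;   # placeholder assignment
    *) echo "unknown wiki: $1" >&2; return 1 ;;
  esac
}

labsdb_host enwiki   # -> s1.labsdb
```

A tool would then connect to the host this returns rather than hard-coding a server name, so wikis can be moved between shards without breaking clients.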