[01:07:43] PROBLEM Total processes is now: WARNING on bots-salebot.pmtpa.wmflabs 10.4.0.163 output: PROCS WARNING: 167 processes [01:12:42] RECOVERY Total processes is now: OK on bots-salebot.pmtpa.wmflabs 10.4.0.163 output: PROCS OK: 92 processes [03:20:20] 01/07/2013 - 03:20:19 - Creating a home directory for neo at /export/keys/neo [03:25:21] 01/07/2013 - 03:25:20 - Updating keys for neo at /export/keys/neo [04:26:34] PROBLEM Free ram is now: CRITICAL on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: Critical: 5% free memory [06:30:42] PROBLEM Total processes is now: WARNING on parsoid-spof.pmtpa.wmflabs 10.4.0.33 output: PROCS WARNING: 162 processes [06:35:42] RECOVERY Total processes is now: OK on parsoid-spof.pmtpa.wmflabs 10.4.0.33 output: PROCS OK: 149 processes [06:37:22] PROBLEM Free ram is now: WARNING on bots-2.pmtpa.wmflabs 10.4.0.42 output: Warning: 14% free memory [06:45:18] 01/07/2013 - 06:45:18 - Updating keys for beetstra at /export/keys/beetstra [08:40:53] PROBLEM dpkg-check is now: CRITICAL on bots-liwa.pmtpa.wmflabs 10.4.1.65 output: DPKG CRITICAL dpkg reports broken packages [08:42:33] Beetstra your instance is waiting for you :P [08:42:42] Yep, I saw [08:42:56] will reboot it once and then you can use it [08:42:59] you have root there [08:43:04] did you create /mnt/share, and did you install the perl packages (or can I do that .. ah ..) [08:43:17] it's all being created now [08:43:20] I have a script for that [08:43:25] good [08:44:05] I saw it spawned into existance yesterday, will start using it one of these days and then also update that in the docs [08:44:16] ok [08:45:12] btw only you have root, so you don't need to worry that someone break it, but everyone can ssh into it [08:45:34] that could be also restricted, but dunno if it's necessary [08:46:11] That is fine, as long as there are no other bots running on it (I may ask others to be able to (re-)start the linkwatcher for me). [08:46:22] ok [08:46:28] linkwatcher has so much to do, interference is 'bad' .. [08:46:38] you should create a control script for it then [08:47:05] parsing most edits on over 750 wikis is quite a bit of work [08:47:20] or you can create special user for your bot which everyone can sudo to [08:48:10] OK, I'll figure that out then [08:50:15] ok [08:50:16] it's done [08:50:21] thanks! [08:50:53] RECOVERY dpkg-check is now: OK on bots-liwa.pmtpa.wmflabs 10.4.1.65 output: All packages OK [09:35:22] PROBLEM Free ram is now: WARNING on dumps-bot2.pmtpa.wmflabs 10.4.0.60 output: Warning: 19% free memory [10:06:52] RECOVERY Free ram is now: OK on swift-be3.pmtpa.wmflabs 10.4.0.124 output: OK: 23% free memory [10:27:22] RECOVERY Free ram is now: OK on bots-2.pmtpa.wmflabs 10.4.0.42 output: OK: 22% free memory [10:36:02] !beta updating mediawiki config to latest master [10:36:03] !log deployment-prep updating mediawiki config to latest master [10:36:06] Logged the message, Master [11:01:25] !log deployment-prep Fixed up extension static assets ( {{bug|43692}} ). [11:01:27] Logged the message, Master [11:19:33] PROBLEM Total processes is now: WARNING on bots-2.pmtpa.wmflabs 10.4.0.42 output: PROCS WARNING: 151 processes [11:37:34] !logs bots beetstra: Starting to initialise bots-liwa for running LinkWatcher - installing perl modules [11:37:34] logs http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-labs [11:38:07] !log bots beetstra: Starting to initialise bots-liwa for running LinkWatcher - installing perl modules [11:38:09] Logged the message, Master [11:46:40] * Beetstra looks at petan .. 
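petan's suggestion above of a dedicated bot account that trusted users can sudo to could look roughly like this; a minimal sketch, where the account name, group name, and file path are illustrative assumptions rather than the actual bots-liwa setup:

    # Hypothetical sketch: a dedicated account for the bot, plus a sudoers
    # drop-in letting members of an assumed "bots" group run commands as it,
    # e.g. to (re-)start LinkWatcher without needing full root.
    sudo useradd --system --create-home linkwatcher
    sudo tee /etc/sudoers.d/linkwatcher <<'EOF'
    %bots ALL=(linkwatcher) NOPASSWD: ALL
    EOF
    sudo chmod 0440 /etc/sudoers.d/linkwatcher

A maintainer in that group could then run something like "sudo -u linkwatcher ./control-script restart"; the control script itself is the piece Beetstra still has to write.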
[11:46:47] You said I was admin on bots-liwa [11:47:03] I don't seem to be able to sudo, su or install any perl modules ... [11:49:34] RECOVERY Total processes is now: OK on bots-2.pmtpa.wmflabs 10.4.0.42 output: PROCS OK: 144 processes [11:50:00] wait .. maybe I needed to freshly login again [11:56:43] PROBLEM Free ram is now: UNKNOWN on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: NRPE: Call to fork() failed [12:00:24] PROBLEM Current Load is now: CRITICAL on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: CHECK_NRPE: Error - Could not complete SSL handshake. [12:00:24] PROBLEM Total processes is now: CRITICAL on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: CHECK_NRPE: Error - Could not complete SSL handshake. [12:00:34] PROBLEM SSH is now: CRITICAL on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: Server answer: [12:00:44] PROBLEM Disk Space is now: CRITICAL on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: CHECK_NRPE: Error - Could not complete SSL handshake. [12:00:54] PROBLEM dpkg-check is now: CRITICAL on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: CHECK_NRPE: Error - Could not complete SSL handshake. [12:01:24] PROBLEM Current Users is now: CRITICAL on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: CHECK_NRPE: Error - Could not complete SSL handshake. [12:01:34] PROBLEM Free ram is now: CRITICAL on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: CHECK_NRPE: Error - Could not complete SSL handshake. [12:10:52] PROBLEM dpkg-check is now: CRITICAL on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: DPKG CRITICAL dpkg reports broken packages [12:18:02] Beetstra really? [12:18:12] that's weird [12:25:01] No, it was weird, Petan [12:25:13] I had to log out from bastion, and log back in, and log back into the instance [12:25:20] Just cross-login did not do it [12:25:26] cpan works just under my account now [12:26:23] ok so it works? [12:26:30] I was about to create a ticket [12:26:42] For now, it does .. bot is not running yet, but I am installing modules [12:26:48] ok [12:35:21] 01/07/2013 - 12:35:20 - Creating a home directory for kipcool at /export/keys/kipcool [12:40:19] 01/07/2013 - 12:40:19 - Updating keys for kipcool at /export/keys/kipcool [13:05:14] !log bots beetstra: Installed perl modules POE, WWW::Mechanize, XML::Simple (forced) and POE::Component::IRC on bots-liwa [13:05:15] Logged the message, Master [13:05:27] !log bots beetstra: linkwatcher is now running from bots-liwa [13:05:28] Logged the message, Master [13:05:52] RECOVERY dpkg-check is now: OK on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: All packages OK [13:06:45] Beetstra: do you intend to write the 4 pages that you have linked in the doc? [13:07:18] 4 pages linked in the doc? [13:07:57] there are 4 red links: coibot, xlinkbot, unblockbot and linkwatcher [13:08:07] I am actually thinking to link them to their main accounts on wiki [13:08:13] ok [13:14:25] giftpflanze, this better? [13:16:57] "also by" is irritating me now [13:25:06] It's a wiki .. SOFIXIT ;-) [13:26:28] I actually don't know who wrote that .. 
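The module installation described here (and logged a little further down) is ordinary CPAN usage; a rough sketch, where the -f force flag for XML::Simple is only an assumption based on the "(forced)" note in the later !log entry:

    # Sketch: install the bot's Perl dependencies from CPAN as a normal user;
    # the module names are the ones listed in the !log entry below.
    cpan POE WWW::Mechanize POE::Component::IRC
    cpan -f XML::Simple   # forced install, skipping failing tests (assumed)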
[13:26:49] it's your text - ok, it's not :D [13:27:39] it was dzahn [13:28:27] :-D [13:43:43] PROBLEM Current Load is now: WARNING on wikidata-dev-9.pmtpa.wmflabs 10.4.1.41 output: WARNING - load average: 5.48, 5.58, 5.14 [13:53:52] RECOVERY Current Load is now: OK on wikidata-dev-9.pmtpa.wmflabs 10.4.1.41 output: OK - load average: 3.73, 4.43, 4.81 [14:09:42] PROBLEM Total processes is now: WARNING on bastion1.pmtpa.wmflabs 10.4.0.54 output: PROCS WARNING: 155 processes [14:10:53] PROBLEM Current Load is now: CRITICAL on wikidata-puppet-repo2.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [14:11:03] PROBLEM Free ram is now: CRITICAL on wikidata-puppet-repo2.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [14:11:33] PROBLEM Current Users is now: CRITICAL on wikidata-puppet-repo2.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [14:12:13] PROBLEM Disk Space is now: CRITICAL on wikidata-puppet-repo2.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [14:12:23] PROBLEM Total processes is now: CRITICAL on wikidata-puppet-repo2.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [14:12:53] PROBLEM dpkg-check is now: CRITICAL on wikidata-puppet-repo2.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [14:14:44] RECOVERY Total processes is now: OK on bastion1.pmtpa.wmflabs 10.4.0.54 output: PROCS OK: 140 processes [14:20:27] !beta apache32 and apache33 have disk full again. [14:20:27] !log deployment-prep apache32 and apache33 have disk full again. [14:20:29] Logged the message, Master [14:20:50] hashar some idea what is cause? [14:21:26] yeah same old story [14:21:35] /data/project on deployment-prep has been broken for a few months [14:21:43] I don't remember if I heard old story :P [14:21:51] mm [14:21:56] that cause the GlusterFS client to emit ton of logs in /var/log/glusterfs [14:21:59] I am cleaning that out [14:22:02] and updating the bug report [14:22:02] ah [14:22:03] that [14:22:20] we really should redirect glusterfs to /dev/null [14:22:23] in /var/log [14:23:45] no [14:23:54] we should have Ryan to fix the Gluster volume [14:23:57] and rotate the log file [14:23:59] !beta manually emptied out /var/log/glusterfs/data-project.log on apache32 and apache33. [14:23:59] !log deployment-prep manually emptied out /var/log/glusterfs/data-project.log on apache32 and apache33. [14:24:02] Logged the message, Master [14:25:23] RECOVERY Disk Space is now: OK on deployment-apache32.pmtpa.wmflabs 10.4.0.166 output: DISK OK [14:26:33] RECOVERY Disk Space is now: OK on deployment-apache33.pmtpa.wmflabs 10.4.0.187 output: DISK OK [14:26:45] !beta apache32 / apache33 filling is logged as {{bug|43703}}. This is caused by gluster client log files not being rotated which is {{bug|41104}} [14:26:45] 43703}}. This is caused by gluster client log files not being rotated which is {{bug|41104}}: !log deployment-prep apache32 / apache33 filling is logged as {{bug [14:27:18] stupid bot [14:27:26] !log deployment-prep apache32 / apache33 filling is logged as {{bug|43703}}. This is caused by gluster client log files not being rotated which is {{bug|41104}} [14:27:28] Logged the message, Master [14:29:37] hashar I don't know how this bot could recognize when you want to use pipe for which purpose :P [14:29:59] some people want it to use what is after pipe as prefix for message some want it to be just a pipe [14:30:23] petan: as a prefix ?? 
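The longer-term fix discussed above for the full disks on apache32/apache33 is the one tracked in {{bug|41104}}: rotate the GlusterFS client logs instead of emptying them by hand. A minimal sketch of such a rotation rule, assuming the /var/log/glusterfs path mentioned in the log; the frequency, retention, and use of copytruncate are illustrative choices, not the eventual puppet change:

    # Illustrative logrotate drop-in for the gluster client logs; values are
    # assumptions. copytruncate is used because the client keeps the file open.
    sudo tee /etc/logrotate.d/glusterfs-client <<'EOF'
    /var/log/glusterfs/*.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate
    }
    EOF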
[14:30:30] my message | some prefix [14:30:30] yes [14:30:32] could be written: [14:30:35] some prefix my message [14:30:36] !ping | hashar [14:30:36] hashar: pong [14:30:40] you see? [14:31:02] disable that for the !log keyword maybe ? [14:31:03] you did !beta blablah {{bug|43... [14:31:14] I am not sure there is any point in using: !log | petan [14:31:23] but you weren't doing !log [14:31:26] if one wanted to ping, he could just log !log [14:31:28] you were doing !beta :D [14:31:30] ah [14:31:39] 2 bots [14:31:53] so make it ignore for my !beta alias :-] [14:31:58] (if at all possible) [14:32:15] mm it could be [14:36:03] PROBLEM dpkg-check is now: CRITICAL on wikidata-puppet-repo.pmtpa.wmflabs 10.4.1.52 output: DPKG CRITICAL dpkg reports broken packages [14:54:55] PROBLEM host: wikidata-puppet-repo3.pmtpa.wmflabs is DOWN address: 10.4.0.23 CRITICAL - Host Unreachable (10.4.0.23) [14:55:21] 01/07/2013 - 14:55:21 - Updating keys for kipcool at /export/keys/kipcool [15:04:11] !beta apt updated and upgraded apache32, apache33 and jobrunner08 [15:04:11] !log deployment-prep apt updated and upgraded apache32, apache33 and jobrunner08 [15:04:14] Logged the message, Master [15:08:43] PROBLEM dpkg-check is now: CRITICAL on deployment-jobrunner08.pmtpa.wmflabs 10.4.1.30 output: DPKG CRITICAL dpkg reports broken packages [15:13:42] RECOVERY dpkg-check is now: OK on deployment-jobrunner08.pmtpa.wmflabs 10.4.1.30 output: All packages OK [15:15:10] lol [15:28:53] PROBLEM Current Load is now: CRITICAL on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [15:29:33] PROBLEM Current Users is now: CRITICAL on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [15:30:13] PROBLEM Disk Space is now: CRITICAL on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [15:30:53] PROBLEM Free ram is now: CRITICAL on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [15:32:23] PROBLEM Total processes is now: CRITICAL on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [15:34:33] PROBLEM dpkg-check is now: CRITICAL on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: Connection refused by host [15:57:23] RECOVERY Total processes is now: OK on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: PROCS OK: 90 processes [15:58:52] RECOVERY Current Load is now: OK on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: OK - load average: 0.17, 0.45, 0.26 [15:59:32] RECOVERY Current Users is now: OK on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: USERS OK - 1 users currently logged in [15:59:32] RECOVERY dpkg-check is now: OK on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: All packages OK [16:00:13] RECOVERY Disk Space is now: OK on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: DISK OK [16:00:53] RECOVERY Free ram is now: OK on wikidata-puppet-repoo.pmtpa.wmflabs 10.4.0.23 output: OK: 1022% free memory [16:36:42] PROBLEM Total processes is now: WARNING on parsoid-spof.pmtpa.wmflabs 10.4.0.33 output: PROCS WARNING: 157 processes [16:45:33] PROBLEM Current Load is now: WARNING on parsoid-roundtrip7-8core.pmtpa.wmflabs 10.4.1.26 output: WARNING - load average: 8.65, 7.46, 5.97 [17:20:33] PROBLEM Current Load is now: WARNING on parsoid-roundtrip3.pmtpa.wmflabs 10.4.0.62 output: WARNING - load average: 5.04, 5.70, 5.30 [17:26:53] PROBLEM Current Load is now: WARNING on ve-roundtrip2.pmtpa.wmflabs 10.4.0.162 output: WARNING - load average: 8.09, 
6.52, 5.51 [17:31:53] RECOVERY Free ram is now: OK on swift-be4.pmtpa.wmflabs 10.4.0.127 output: OK: 27% free memory [18:08:54] PROBLEM Current Load is now: WARNING on bots-liwa.pmtpa.wmflabs 10.4.1.65 output: WARNING - load average: 5.43, 5.92, 5.26 [18:13:26] mike_wang: How are things going with puppet? Have you had any luck getting nginx set up on your instance? [18:13:52] RECOVERY Current Load is now: OK on bots-liwa.pmtpa.wmflabs 10.4.1.65 output: OK - load average: 5.41, 4.91, 4.96 [18:36:53] PROBLEM Current Load is now: WARNING on bots-liwa.pmtpa.wmflabs 10.4.1.65 output: WARNING - load average: 6.41, 5.55, 5.24 [18:51:53] RECOVERY Current Load is now: OK on ve-roundtrip2.pmtpa.wmflabs 10.4.0.162 output: OK - load average: 2.62, 3.86, 4.60 [18:55:33] RECOVERY Current Load is now: OK on parsoid-roundtrip3.pmtpa.wmflabs 10.4.0.62 output: OK - load average: 2.76, 3.65, 4.74 [19:01:42] RECOVERY Total processes is now: OK on parsoid-spof.pmtpa.wmflabs 10.4.0.33 output: PROCS OK: 150 processes [19:24:01] andrewbogott: I read a book Pro Puppet and did some test on my laptop last week. I will try to set up nginx on my de instance. [19:24:31] mike_wang: Cool. It should be pretty simple if you just follow the model of existing manifests. [19:25:04] mike_wang: And, don't hesitate to regard your VM as disposable… I tend to build and destroy several VMs anytime I'm doing puppet development since puppet is good about installing but not so good at cleaning up for a 2nd try. [19:25:16] yes. It should be very simple I think. [19:52:49] Ryan_Lane: andrewbogott: hello labs masters :-] [19:53:17] howdy [19:53:24] Ryan_Lane: andrewbogott: I have hit a bug in labsconsole that prevents me from adding new puppet classes to my project :-) [19:53:42] ah. yes. I added a bug to bugzilla for this [19:53:47] you don't see "Add class"? [19:53:54] exactly [19:54:04] and adding a group show an empty / unnamed group :/ [19:54:15] yeah [19:54:18] my bug report https://bugzilla.wikimedia.org/show_bug.cgi?id=43705 [19:54:22] it does actually show it [19:54:32] but it does add the group [19:54:53] RECOVERY Free ram is now: OK on swift-be2.pmtpa.wmflabs 10.4.0.112 output: OK: 30% free memory [19:54:55] ahh I see your bug ( https://bugzilla.wikimedia.org/show_bug.cgi?id=43613 ) [19:54:58] marking mine a dupe [19:55:32] Ryan_Lane: any hope to have it fixed? :-] [19:55:39] well, I'm looking [19:55:48] great :-] [19:55:55] save sometime to have a lunch though!!!! [19:56:27] also I have started looking at git-deploy and I have added deployment roles for beta at https://gerrit.wikimedia.org/r/#/c/42549/ [19:56:44] (forgot to add you as a reviewer oops) [19:57:57] I'm really confused by this change [19:58:15] oh [19:58:22] you made a common config class [19:58:24] nevermind [19:59:22] however, you not using it [19:59:23] Ryan_Lane: I wanted to avoid copy pasting :-) [20:02:01] and added a bunch of tabs [20:03:11] Ryan_Lane: the .*common classes are called with a 'require' under beta / production subclasses [20:03:21] yes... [20:03:22] but... [20:03:33] that doesn't mean they are brought into scope [20:03:40] oh [20:03:42] that means they can be referenced [20:03:59] I'm adding inline comments [20:04:55] Ryan_Lane: I have sent a second patchset that fix the whitespace [20:05:01] I did use tabs instead of spaces [20:08:51] ok. 
added inline comments [20:08:56] thanks [20:09:00] so, we'll likely need to modify the system some for this to work in labs [20:09:28] we'll need to do regex matches on grains, rather than instance ids [20:09:40] which means we'll need to modify the runner [20:09:46] and the sync script [20:09:57] what are grains? [20:10:05] they are like facts in puppet [20:10:35] on one of the labs instances, as root, run this: salt-call grains.items [20:11:03] we'll want to use fqdn and instanceproject [20:11:37] and we'll want a compound match for project and fqdn [20:12:24] I think can do this easily enough in the sync script [20:19:58] Ryan_Lane: so yeah salt-call grains.items give me the instanceproject. Seems there is some hope [20:20:13] mind you, I barely know what salt is hehe [20:20:14] right [20:20:16] heh [20:20:27] so, I was going to add a "project" argument to the runner [20:21:20] if project is passed it, it would change the match to be compound, and so match based in grain_pcre: and grain: [20:22:51] so: "P@fqdn: and G@instanceproject:" [20:22:59] http://docs.saltstack.org/en/latest/topics/targeting/compound.html [20:28:54] Ryan_Lane: I think I am still a noob regarding salt. Your explanations looks like chinese to me :-) [20:29:11] but I am gonna read the doc and learn stuff so I elevate from noob to rookie [20:29:20] well, I did give you a link ;) [20:38:10] Ryan_Lane: with obscure matching syntax, regex and yaml snippet :-D [20:38:24] hashar: you can get back yaml or json ;) [20:38:29] at least now I know where the doc is [20:38:34] --out json [20:38:35] yaml is my choice [20:38:45] I use json. heh [20:39:04] the top two things I hate about json: lack of comments, trailing comma not being supported in strucutures [20:39:14] either way, this is likely something I need to add, not you [20:40:03] to do this in a secure way, I need this: https://github.com/saltstack/salt/issues/3181 [20:44:53] Ryan_Lane: fine. I could use a workaround in puppet probably by using the instance id I-00001234 [20:45:04] and just add the instance name as a comment next to each I-xxxx [20:58:26] hey all, anyone know why after updating and rebooting an instance mysqld.sock disappears? [20:58:40] and how to correct the issue? [21:38:22] spagewmf: Does this look OK to you? It's meant to fix the capcha issue you were having at the last minute on Friday: https://gerrit.wikimedia.org/r/#/c/42502/ [21:41:59] andrewbogott: would you mind taking a look at https://bugzilla.wikimedia.org/show_bug.cgi?id=41104 ? [21:42:14] I need to finish up the deployment crap so that I can get back to labs work. heh [21:42:23] yep, looking now [21:43:00] thanks [21:44:34] mike_wang: can you go through https://bugzilla.wikimedia.org/buglist.cgi?cmdtype=runnamed&namedcmd=Wikimedia%20Labs&list_id=171559 and sort bugs that are infrastructure related into the infrastructure component? [21:44:44] Ryan_Lane: Do you think this is really a rotation issue or a gluster-spamming issue? [21:44:53] a bit of both [21:44:58] but the logs don't get rotated at all [21:45:22] ok. Are there existing log-rotation schemes in puppet that you like? [21:45:25] Ryan_Lane: I suspect the GlusterFS volume to be corrupted for deployment-prep project. [21:46:08] andrewbogott: look at examples in files/logrotate [21:46:12] hashar: it's possible. [21:46:21] Ryan_Lane: ok, thanks. [21:50:37] andrewbogott Thanks, I'm testing it. It limits what I can override but may be enough. 
Otherwise at the end require_once orig/LocalSettings_ReallyOverrideFinal.php :) [21:51:06] spagewmf: What overrides are prevented? [21:52:00] andrewbogott if you wanted to change settings introduced by role_config_lines, and if one of the extensions from role_requires has settings that must be set after loading it. [21:52:22] let me test it and see if either applies to us. [21:53:38] Ah, I see... [21:54:27] andrewbogott It should WFM, it doesn't look like our labs instance piramido has any role_requires or role_config_lines added. Ship it! [21:55:24] Well, you may have convinced me that they should be the other way around... [21:57:55] Hm… nope, needs to go in as is. [22:01:27] Ryan_Lane: I can not login bugzilla. error message: The search named Wikimedia Labs does not exist. [22:09:22] ah [22:09:50] mike_wang: https://bugzilla.wikimedia.org/buglist.cgi?list_id=171575&resolution=---&resolution=LATER&resolution=DUPLICATE&query_format=advanced&product=Wikimedia%20Labs [22:10:25] mike_wang: if it's listed as a bug for beta, it's not infrastructure, even if it looks like infrastructure [22:10:32] same with bugs for bots [22:10:43] Ryan_Lane: I can see the pages now [22:10:45] cool [22:11:08] all of the ones I'm talking about are likely listed in the "General" component [22:16:02] moving out, *waves* [22:19:21] ok. lunch [22:27:51] Change on 12mediawiki a page OAuth/status was modified, changed by Jdforrester (WMF) link https://www.mediawiki.org/w/index.php?diff=625336 edit summary: [+0] Change latest referer to 2012-12-monthly. [22:29:38] Change on 12mediawiki a page OAuth was modified, changed by Jdforrester (WMF) link https://www.mediawiki.org/w/index.php?diff=625337 edit summary: [+25] + a clearing BR so the TOC isn't muddled by the float-left image. [22:47:12] Ryan_Lane: of all the 110 bugs, I found 3 are infrastructure related. [22:47:26] 42578 All Linux [22:47:28] 41797 Other All [22:47:29] 36338 ALL Linux [22:47:44] Is that what you need? [22:47:50] https://bugzilla.wikimedia.org/show_bug.cgi?id=42578 [22:47:53] that's not infrastructure [22:47:54] it's bot [22:47:56] *bots [22:48:25] by infrastructure I mean it deals with the labs infrastructure and not one of the projects inside of labs [22:48:39] for instance: https://bugzilla.wikimedia.org/show_bug.cgi?id=41104 [22:49:11] that bug affects all instances. it's specific to the filesystem we're using [22:49:33] another: https://bugzilla.wikimedia.org/show_bug.cgi?id=40526 [22:49:54] that's specific to firewall rules not being applied when someone changes the security groups via labsconsole [22:50:08] it's not specific to any project, but is a labs infrastructure bug [22:50:45] https://bugzilla.wikimedia.org/show_bug.cgi?id=40526 needs to have its "Component" field changed from "General" to "Infrastructure" [22:51:04] there's a bunch of other bugs like that [22:51:38] if the bug has a component that's already selected as bots or as beta, it's not a labs infrastructure bug [22:52:02] it'll either have "other" or "General" as the component [22:52:52] OK. I am clear. [22:52:58] if you're not sure if one is labs infrastructure or not, ask, and I'll check [22:54:13] sure [23:18:12] Ryan_Lane: Is there any reason why we care about having geo replication turned on in gluster? Are we going to use that to distribute filesystems across data centers sometime soon? [23:19:25] we have that turned on? [23:19:37] I think it may be on by default and/or by accident. [23:19:44] Going to turn it off for this troubled volume if you have no objection. 
[23:19:49] no objection [23:19:58] ok [23:40:10] Ryan_Lane: Bug 39788 - Enable passwordless sudo for instances . Do you think it is infrastructure related? [23:41:53] PROBLEM Free ram is now: WARNING on aggregator1.pmtpa.wmflabs 10.4.0.79 output: Warning: 19% free memory [23:42:57] mike_wang: yes [23:44:43] thax [23:44:47] thx [23:56:56] !tunnel [23:56:56] ssh -f user@bastion.wmflabs.org -L <local port>:server:<remote port> -N Example for sftp "ssh chewbacca@bastion.wmflabs.org -L 6000:bots-1:22 -N" will open bots-1:22 as localhost:6000
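The !tunnel reply above boils down to: forward a local port through bastion to a port on an instance. A short usage sketch based on the bot's own example; the local port, instance name, and the sftp step are taken from or assumed around that example rather than prescribed anywhere:

    # Open the tunnel from the example, then talk to the instance through it;
    # replace the user, local port, and instance name with your own.
    ssh -f chewbacca@bastion.wmflabs.org -L 6000:bots-1:22 -N
    sftp -P 6000 chewbacca@localhost   # one assumed way to use the forwarded port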