[00:05:23] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[00:08:13] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[00:11:35] <^demon> Ryan_Lane: You know, having labsconsole get updated as part of the deployment cycle would be nice. Too bad it's set up totally differently :\
[00:11:44] yeah
[00:11:46] well...
[00:11:52] it *could* be done the same way
[00:11:59] there's no reason it can't
[00:12:29] <^demon> Eh, there's probably a lot of shit on the cluster you don't want.
[00:12:59] we'd need to include OpenStackManager, OATHAuth, LdapAuthentication, SMW, SRF, SF, DynamicSidebar, and Validator in the wmf branches
[00:13:29] yeah
[00:13:36] <^demon> That's the easy part. I'm just thinking of all the extensions that are globally enabled that you don't want.
[00:13:40] true
[00:13:42] <^demon> CentralAuth immediately comes to mind.
[00:13:45] indeed
[00:13:51] that wouldn't work so well
[00:14:16] <^demon> So yeah, we don't want to necessarily make it *like* a cluster wiki, but having some way to update it easily during the deployment cycle would be nice.
[00:14:32] <^demon> We can easily add the extensions to the branch, maybe just have some script we run on labsconsole's side.
[00:14:57] <^demon> Well, "script" being `git checkout`
[00:15:10] this may be more doable with the new deployment system
[00:15:23] <^demon> If we just add the extensions to the branch, you just have to update the branch you're tracking.
[00:15:28] yep
[00:15:55] thankfully right now it's not too hard for me to update
[00:15:57] <^demon> I'm not gonna do it tonight, but I'll lob a bug in BZ so I don't forget for me or Sam to add those extensions to the wmf deploy branches.
[00:16:05] cool. thanks
[00:16:19] the hard part of that will be determining proper branch points
[00:16:28] maybe the SMW folks would be interested in that
[00:16:33] I can do the ones I manage
[00:17:26] <^demon> Well, make-wmf-branch lets us specify a branch point, so we can easily set them to w/e.
[00:17:31] <^demon> (If we don't want HEAD)
[00:17:58] well, I don't think we want HEAD
[00:18:04] ^demon: possibly related, I'd sure like to see everything deployed from Jenkins in the long run: https://bugzilla.wikimedia.org/show_bug.cgi?id=39701
[00:18:12] for instance we really want the 1.8.x branch for SMW and SRF
[00:18:27] SF doesn't have branch points, so we'd need to manage that somehow
[00:18:40] or ask if Yaron would be kind enough to do so
[00:19:04] chrismcmahon: deployed from jenkins?
[00:19:07] you mean in labs, right?
[00:19:09] ah
[00:19:10] yeah
[00:19:22] chrismcmahon: that should be doable with the new deployment system
[00:19:38] s/should/will/
[00:19:54] Ryan_Lane: prod would be cool too. :) in the long run we're aiming for a DevOp/Continuous Deployment world as I understand it, so making Jenkins central will be important
[00:20:04] DevOps even
[00:20:06] we don't trust gerrit and jenkins enough for that
[00:20:19] for the same reason we don't do that for puppet
[00:21:09] Jenkins at least is designed for feedback mechanisms. it's a big project and a long time scale.
[00:21:16] <^demon> Ryan_Lane: https://bugzilla.wikimedia.org/show_bug.cgi?id=42756
[00:21:22] yeah, but the worry is that gerrit will get owned
[00:21:36] if we manually deploy, we have a final safeguard
[00:22:29] ^demon: thanks
[00:22:37] <^demon> There is a "Deployment" plugin for Jenkins. I've never used it, but was always curious.
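For context, the branch-point idea discussed above amounts to checking each bundled extension out at a chosen upstream branch rather than HEAD when the deploy branch is cut. A minimal sketch in plain git, assuming a conventional extensions/ checkout; the paths, branch names, and placeholder sha are illustrative, not the actual make-wmf-branch mechanics:

    # Pin SMW and SRF to their 1.8.x series instead of HEAD
    cd extensions/SemanticMediaWiki
    git fetch origin && git checkout origin/1.8.x
    cd ../SemanticResultFormats
    git fetch origin && git checkout origin/1.8.x

    # SF has no release branches, so it would have to be pinned to a
    # known-good commit instead (placeholder sha):
    cd ../SemanticForms
    git checkout <known-good-sha>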
[00:22:48] <^demon> If it was manually triggered...one-click deploys would be possible ;-)
[00:23:03] heh
[00:23:14] terrifying ;)
[00:24:12] <^demon> I need to write a script for manganese/formey. So I can just have it pull my latest wars from jenkins and deploy them.
[00:24:23] <^demon> Almost-insta-deploy :)
[00:25:55] fwiw, open req: http://hire.jobvite.com/Jobvite/Job.aspx?j=oZrQWfwW&c=qSa9VfwQ
[00:26:31] ^demon: take a look at the new deployment system first
[00:26:51] it's written to be generic
[00:27:16] chrismcmahon: yeah
[00:27:58] I think we backed off the DevOps jargon for that req though.
[00:28:49] <^demon> Ryan_Lane: I'm not entirely sure...I have to do a `mvn package` or have git deploy fetch from some non-git location.
[00:28:57] <^demon> I don't store the wars/jars in git, for obvious reasons.
[00:29:56] ^demon: if it's a private git repo what's it matter?
[00:30:06] it'll grow large, but we can fuck with the history as much as we want
[00:30:25] of course, that said, it doesn't necessarily have to use git either :)
[00:30:34] <^demon> I suppose we could do that.
[00:31:06] I'm using git because it's efficient for what we're doing with mediawiki
[00:31:26] <^demon> For gerrit, not so much. It's all binary files that don't delta well.
[00:32:01] yeah
[00:32:13] we could just use http for that
[00:32:13] <^demon> I mean I could toss some wars in a repo and see, but at ~25M each, I doubt even a really aggressive repack will do much.
[00:32:32] deployment is behind a web server anyway
[00:33:30] we'd just need to use (or add) another module for that style of deployment
[00:34:09] I think there's already a module for this, though
[00:35:15] or... package all the things... from ci, pushed to mirror automagically with rainbow ponies
[00:35:38] http://docs.saltstack.org/en/latest/ref/modules/all/salt.modules.cp.html#module-salt.modules.cp
[00:35:48] Damianz: that's going to be a lot more work
[00:35:58] <^demon> Eh, not as bad as I thought. One version of the war in history - whole clone is 52M
[00:35:59] we'd need to build a package building service
[00:36:00] work is fun though!
[00:36:05] <^demon> Second version in history, then repacked, 59M
[00:36:11] <^demon> 7M per version, so not bad.
[00:36:19] salt.modules.cp.get_url(path, dest, env='base')
[00:36:29] cli example: salt '*' cp.get_url http://www.slashdot.org /tmp/index.html
[00:36:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[00:36:45] then....
[00:36:52] salt.modules.file.check_hash(path, hash)
[00:36:53] and yeah... build services are a pita... I'm scoping one atm as we have stupid stuff like sles/suse as well as sensible stuff like debian to support :(
[00:37:44] Damianz: heh
[00:37:48] that's a pain
[00:38:50] <^demon> Ryan_Lane: I'll think about it tomorrow. Maybe using a git repo won't be bad. I can easily stash core + all the plugins and deploy the whole thing at once.
[00:38:59] All I can say is yay for virtualisation ;)
[00:39:02] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[00:39:14] ^demon: ok. if not we can do it via another method
[00:40:51] <^demon> Oh, one last thing. https://gerrit.wikimedia.org/r/#/c/37155/ is for the gerrit hooks. Roan requested it so we continue getting proper irc notifs on merges.
[00:41:28] <^demon> Now, it's dinner time.
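The two salt module calls pasted above are enough for a crude fetch-and-verify deploy of a war. A sketch from the CLI; the target, URL, path, and checksum are made up, and the exact check_hash argument format depends on the salt version (see the linked docs):

    # Pull the artifact over http onto the target minions
    salt 'gerrit*' cp.get_url http://deploy.example.org/gerrit.war /srv/deploy/gerrit.war
    # Verify the download against a known checksum before swapping it in
    salt 'gerrit*' file.check_hash /srv/deploy/gerrit.war md5=d41d8cd98f00b204e9800998ecf8427e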
[00:41:42] merged
[01:06:33] PROBLEM Total processes is now: WARNING on bots-salebot i-00000457.pmtpa.wmflabs output: PROCS WARNING: 173 processes
[01:06:53] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[01:09:03] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[01:16:33] RECOVERY Total processes is now: OK on bots-salebot i-00000457.pmtpa.wmflabs output: PROCS OK: 97 processes
[01:36:52] PROBLEM Free ram is now: CRITICAL on dumps-bot1 i-000003ed.pmtpa.wmflabs output: Critical: 4% free memory
[01:37:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[01:39:42] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[02:01:32] PROBLEM Total processes is now: WARNING on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS WARNING: 154 processes
[02:06:53] PROBLEM Current Load is now: WARNING on parsoid-roundtrip4-8core i-000004ed.pmtpa.wmflabs output: WARNING - load average: 11.87, 9.91, 7.17
[02:08:23] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[02:08:43] PROBLEM Current Load is now: WARNING on parsoid-roundtrip7-8core i-000004f9.pmtpa.wmflabs output: WARNING - load average: 8.30, 7.95, 5.75
[02:09:43] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[02:13:42] PROBLEM Current Load is now: WARNING on parsoid-roundtrip3 i-000004d8.pmtpa.wmflabs output: WARNING - load average: 8.71, 8.27, 5.89
[02:14:52] PROBLEM Current Load is now: WARNING on ve-roundtrip2 i-0000040d.pmtpa.wmflabs output: WARNING - load average: 6.59, 6.85, 5.37
[02:34:52] RECOVERY Current Load is now: OK on ve-roundtrip2 i-0000040d.pmtpa.wmflabs output: OK - load average: 3.62, 4.12, 4.76
[02:38:32] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[02:41:42] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[02:41:52] RECOVERY Free ram is now: OK on wikistream-1 i-0000016e.pmtpa.wmflabs output: OK: 27% free memory
[02:47:52] PROBLEM Current Load is now: WARNING on ve-roundtrip2 i-0000040d.pmtpa.wmflabs output: WARNING - load average: 6.97, 6.86, 5.97
[02:51:33] RECOVERY Total processes is now: OK on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS OK: 149 processes
[03:03:52] PROBLEM Current Load is now: WARNING on parsoid-roundtrip6-8core i-000004f8.pmtpa.wmflabs output: WARNING - load average: 7.29, 6.25, 5.39
[03:08:32] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[03:09:52] PROBLEM Free ram is now: WARNING on wikistream-1 i-0000016e.pmtpa.wmflabs output: Warning: 13% free memory
[03:11:43] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[03:38:32] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[03:42:22] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[04:08:52] RECOVERY Current Load is now: OK on parsoid-roundtrip6-8core i-000004f8.pmtpa.wmflabs output: OK - load average: 3.74, 4.31, 4.90
[04:09:02] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[04:12:11] Could a bots project sysadmin please create me a directory under "/data/project/fastily"?
[04:12:22] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[04:12:26] petan Damianz ?
[04:39:05] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[04:42:22] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[04:47:53] RECOVERY Current Load is now: OK on ve-roundtrip2 i-0000040d.pmtpa.wmflabs output: OK - load average: 4.67, 4.20, 4.78
[04:48:43] RECOVERY Current Load is now: OK on parsoid-roundtrip3 i-000004d8.pmtpa.wmflabs output: OK - load average: 3.40, 4.03, 4.86
[05:09:32] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[05:10:32] PROBLEM Total processes is now: WARNING on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS WARNING: 153 processes
[05:12:22] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[05:23:43] RECOVERY Current Load is now: OK on parsoid-roundtrip7-8core i-000004f9.pmtpa.wmflabs output: OK - load average: 4.03, 4.52, 4.81
[05:26:52] PROBLEM Free ram is now: WARNING on dumps-bot1 i-000003ed.pmtpa.wmflabs output: Warning: 11% free memory
[05:39:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[05:42:22] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[06:10:22] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[06:12:22] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[06:28:33] PROBLEM Total processes is now: WARNING on vumi-metrics i-000004ba.pmtpa.wmflabs output: PROCS WARNING: 151 processes
[06:28:33] PROBLEM Total processes is now: CRITICAL on incubator-apache i-00000211.pmtpa.wmflabs output: PROCS CRITICAL: 204 processes
[06:33:33] RECOVERY Total processes is now: OK on vumi-metrics i-000004ba.pmtpa.wmflabs output: PROCS OK: 146 processes
[06:40:32] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[06:42:22] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[06:48:11] Change on mediawiki a page Developer access was modified, changed by Psubhashish link https://www.mediawiki.org/w/index.php?diff=613635 edit summary: /* User:Psubhashish */ Re
[06:48:32] PROBLEM Total processes is now: WARNING on incubator-apache i-00000211.pmtpa.wmflabs output: PROCS WARNING: 199 processes
[07:11:43] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[07:12:23] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[07:41:52] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[07:43:12] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[08:11:52] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[08:13:12] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[08:41:53] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[08:43:13] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[08:55:33] RECOVERY Total processes is now: OK on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS OK: 150 processes
[09:12:43] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[09:13:13] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[09:41:53] RECOVERY Current Load is now: OK on parsoid-roundtrip4-8core i-000004ed.pmtpa.wmflabs output: OK - load average: 4.01, 4.37, 4.95
[09:43:13] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[09:43:23] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[10:13:13] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[10:13:33] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[10:43:13] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[10:43:33] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[11:14:02] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[11:14:03] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[11:17:33] PROBLEM Current Users is now: WARNING on wikidata-dev-9 i-0000052a.pmtpa.wmflabs output: USERS WARNING - 6 users currently logged in
[11:25:58] Hi! I'm searching the labs help for hints on whether and how I can customize the default labs Nagios settings to monitor more services. Can't find anything - any hints?
[11:32:32] RECOVERY Current Users is now: OK on wikidata-dev-9 i-0000052a.pmtpa.wmflabs output: USERS OK - 5 users currently logged in
[11:44:03] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[11:44:12] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[12:14:03] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[12:14:14] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[12:44:03] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[12:44:13] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[13:14:32] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[13:14:32] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[13:16:53] PROBLEM Free ram is now: CRITICAL on dumps-bot1 i-000003ed.pmtpa.wmflabs output: Critical: 5% free memory
[13:26:52] PROBLEM Free ram is now: WARNING on dumps-bot1 i-000003ed.pmtpa.wmflabs output: Warning: 6% free memory
[13:36:52] PROBLEM Free ram is now: CRITICAL on dumps-bot1 i-000003ed.pmtpa.wmflabs output: Critical: 5% free memory
[13:44:43] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[13:44:43] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[14:15:23] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[14:15:33] PROBLEM Current Users is now: WARNING on wikidata-dev-9 i-0000052a.pmtpa.wmflabs output: USERS WARNING - 6 users currently logged in
[14:16:43] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[14:45:23] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[14:46:43] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[14:56:22] Change on mediawiki a page Developer access was modified, changed by Siebrand link https://www.mediawiki.org/w/index.php?diff=613808 edit summary: /* User:Alexander.dolidze */
[14:56:36] Change on mediawiki a page Developer access was modified, changed by Siebrand link https://www.mediawiki.org/w/index.php?diff=613809 edit summary: /* User:Alexander.dolidze */
[14:58:14] Change on mediawiki a page Developer access was modified, changed by Siebrand link https://www.mediawiki.org/w/index.php?diff=613812 edit summary: /* User:Psubhashish */
[14:58:32] PROBLEM Total processes is now: CRITICAL on incubator-apache i-00000211.pmtpa.wmflabs output: PROCS CRITICAL: 208 processes
[15:03:33] PROBLEM Total processes is now: WARNING on incubator-apache i-00000211.pmtpa.wmflabs output: PROCS WARNING: 199 processes
[15:15:33] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[15:16:44] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[15:41:52] PROBLEM Free ram is now: CRITICAL on dumps-bot1 i-000003ed.pmtpa.wmflabs output: Critical: 4% free memory
[15:46:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[15:47:22] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[16:16:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[16:17:22] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[16:45:56] Hello! I could use some help to debug why the Wikidata demo repo has session failures! Can anyone help me? I checked the item in http://www.mediawiki.org/wiki/Manual:Errors_and_symptoms#Sorry.21_We_could_not_process_your_edit_due_to_a_loss_of_session_data._Please_try_again._If_it_still_doesn.27t_work.2C_try_logging_out_and_logging_back_in. But still no change.
[16:46:53] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[16:47:23] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[16:47:56] What would be the first things you would check?
[17:05:26] Silke_WMDE, are the session files being written correctly?
[17:09:38] Platonides: I'm not sure
[17:10:33] Actually, I realized the failure occurs when editing Wikidata items but not when editing wikitext (e.g. on the front page).
[17:15:22] PROBLEM Total processes is now: WARNING on nova-precise1 i-00000236.pmtpa.wmflabs output: PROCS WARNING: 151 processes
[17:17:32] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[17:18:12] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[17:25:32] RECOVERY Current Users is now: OK on wikidata-dev-9 i-0000052a.pmtpa.wmflabs output: USERS OK - 2 users currently logged in
[17:47:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[17:48:12] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[18:17:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[18:18:12] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[18:47:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[18:48:12] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[19:09:01] jeremyb: ?
[19:09:11] matanya
[19:10:25] huh
[19:10:27] 12/06/2012 - 19:10:27 - Creating a home directory for matanya at /export/keys/matanya
[19:15:36] 12/06/2012 - 19:15:36 - Updating keys for matanya at /export/keys/matanya
[19:17:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[19:17:49] sorry jeremyb got disconnected
[19:18:11] I'd like to know how to port my tools from the TS, can you help me with that?
[19:18:12] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[19:27:01] matanya: not today... you can just ask the channel and someone may answer. or poke me in a day or two?
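Silke_WMDE's session-loss question is never fully resolved in-channel, but Platonides' first check (are the session files being written?) takes only a couple of commands. A sketch assuming default file-based PHP sessions on a Debian-ish instance; the directory varies with session.save_path:

    # Where does PHP think sessions live?
    php -i | grep -i session.save_path
    # Are session files appearing there, and can the web server's user write to it?
    ls -la /var/lib/php5/
    sudo -u www-data touch /var/lib/php5/write-test && echo writable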
[19:47:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[19:48:12] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[20:04:32] PROBLEM Total processes is now: WARNING on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS WARNING: 152 processes
[20:17:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[20:18:12] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[20:29:33] RECOVERY Total processes is now: OK on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS OK: 149 processes
[20:48:13] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[20:48:33] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[21:16:52] PROBLEM Free ram is now: WARNING on dumps-bot1 i-000003ed.pmtpa.wmflabs output: Warning: 11% free memory
[21:19:02] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[21:19:12] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[21:49:03] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[21:49:33] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[21:55:28] 12/06/2012 - 21:55:28 - Updating keys for mwang at /export/keys/mwang
[22:03:33] Damianz: do you know who else is accepting shell requests? two days ago sumanah told me wangatlargo, but no response until now
[22:03:54] Hi there Merlissimo -- have you tried emailing labs-l?
[22:03:55] Ryan_Lane paravoid andrewbogott
[22:04:05] it's for https://labsconsole.wikimedia.org/wiki/Shell_Request/Bene
[22:04:07] dunno if wang knows how :)
[22:04:17] Merlissimo: Damianz - maybe you should put that list on https://labsconsole.wikimedia.org/wiki/Shell_Request
[22:04:53] technically it's anyone in https://labsconsole.wikimedia.org/w/index.php?title=Special:ListUsers&group=cloudadmin
[22:05:00] pretty sure that was linked from somewhere at some point
[22:06:07] kinda tempting to suggest labs has a schedule of who picks up random crap, like ops has now
[22:06:28] though if we go by ops, it's Ryan's problem today :D
[22:06:34] Damianz: I can create it right now, but indeed having the request on a secret page is not likely to lead to success :)
[22:07:02] andrewbogott: That's where requests go....
[22:07:17] hm...
[22:07:36] '(No outstanding requests)' is bs though
[22:07:40] stupid semantic mw
[22:07:55] Sorry, I'm confused… this is different from http://www.mediawiki.org/wiki/Project:Labsconsole_accounts ?
[22:07:59] yes
[22:08:02] I think so
[22:08:12] yeah
[22:08:15] that one should go away
[22:08:16] mike_wang: hi, ping, do you have a moment?
[22:08:23] basically atm they need that and a shell request
[22:08:45] once self registration opens up (finally *looks at hashar*) then the mediawiki one will go away and shell requests will be for ssh only
[22:08:56] it's stupid atm because opening up registration got pushed back
[22:09:30] so http://www.mediawiki.org/wiki/Project:Labsconsole_accounts = ldap/gerrit account, shell request = labs
[22:09:36] OK. If I were to want to answer the question "What are the pending shell requests?" where would I go for that list?
[22:09:43] Though it's technically all labs *brain implode*
[22:11:45] {{#ifeq:{{{Comments|}}}|false|[[Category:todo]]}} at the template would help if semantic mediawiki does not
[22:11:49] so apparently it's broken
[22:12:04] Merlissimo: I granted shell access to Bene, let me know if you need anything else.
[22:12:11] https://labsconsole.wikimedia.org/wiki/Help:Contents#Requesting_Shell_Access
[22:12:14] second item on there
[22:12:17] should be a link
[22:12:20] to a search result
[22:12:21] andrewbogott: thx
[22:12:25] but stupid semantic wiki is stupid
[22:12:55] :D
[22:12:58] https://labsconsole.wikimedia.org/wiki/Special:Ask/-5B-5BCategory:Shell-20Access-20Requests-5D-5D-5B-5BIs-20Completed::false-5D-5D/-3FShell-20Justification/-3FModification-20date/format%3Dbroadtable/sort%3DModification-20date/order%3Dasc/headers%3Dshow/searchlabel%3DOutstanding-20Requests/default%3D(No-20outstanding-20requests)/offset%3D0
[22:13:03] I upgraded it. is it broken?
[22:13:11] thx also to Damianz and sumanah
[22:13:12] well the search works
[22:13:16] but it's not changing
[22:13:20] just showing the default hmm
[22:13:20] not changing?
[22:13:35] Damianz: I think that legitimately counts as a 'secret' page :)
[22:13:36] well see that url
[22:13:37] 4 requests
[22:13:47] yet help:contents says (No outstanding requests)
[22:13:54] and (No completed requests)
[22:13:56] so it's broken
[22:14:20] andrewbogott: Hey you can like make it an rss feed and un-secretify it!
[22:14:29] Or just blame Ryan :D
[22:14:47] andre__: what counts as a secret page?
[22:14:49] err
[22:14:52] andrewbogott: ^^
[22:15:06] you can find all "request" pages via Content:Help
[22:15:26] /normally/
[22:15:28] 12/06/2012 - 22:15:28 - Updating keys for mwang at /export/keys/mwang
[22:15:28] shell requests is even in the sidebar
[22:15:38] err
[22:15:41] Help:Contents
[22:15:46] oh
[22:15:48] it is indeed broken
[22:15:49] -.-
[22:15:57] * Damianz frowns
[22:16:07] the page needed to be refreshed
[22:16:09] it's working now
[22:16:41] on another note, can we get rid of the first form and just have 1 confusing process rather than 2?
[22:16:46] yes
[22:16:51] if you can make the form :)
[22:16:57] >.<
[22:17:04] kidding
[22:17:06] it doesn't need to be you
[22:17:07] it took me like 2387592386759823675986235 hours to make the project one :P
[22:17:14] OK, the sidebar is working for me. having fulfilled a request, what should I do to make it stop appearing on that page?
[22:17:18] but I should likely work on other things
[22:17:21] tick the box that says done
[22:17:25] yep
[22:17:57] Um...
[22:18:01] where is that box?
[22:18:13] I see a table with three columns
[22:18:26] hit edit as form
[22:18:27] or w/e
[22:18:39] or add Completed:true to the table
[22:18:42] it's what the form does
[22:18:46] because smw is just weird
[22:18:50] Oh, ok, I see it!
[22:19:03] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[22:19:12] * andrewbogott does these other two as well
[22:19:33] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[22:21:08] Unrelatedly… Ryan_Lane, could I get a review of this? https://gerrit.wikimedia.org/r/#/c/37250/
[22:21:56] I am pretty sure that modifying the dict key like that doesn't break anything, but it makes me nervous nonetheless
[22:21:59] * Damianz goes back to debating about killing nrpe with fire and replacing it with salt modules
[22:25:52] hm
[22:25:59] salt could actually be a replacement for nrpe
[22:26:06] that's an interesting idea
[22:26:20] it could even be run from the monitoring server via peer runs
[22:28:36] Suman: Hi
[22:28:36] so mike_wang - welcome to Wikimedia! I think Labs is exciting and I am glad you are working on it :-)
[22:28:45] Thx
[22:28:48] !tabcompletion | mike_wang
[22:28:52] oh darn
[22:29:00] Well the way I imagine our setup is chef keeps state, saltstack provides on demand stuff (basically never ssh into a server)... but reusing what I can present as a limited web interface for users for high level monitoring both application and 'remote probes' wise seems appealing
[22:29:39] The only bit I'm hmmmm about is dependencies for monitoring... but nrpe is a pile of crap anyway, length limitations, deploying/updating plugins etc...
[22:29:53] mike_wang: Instead of manually typing another person's nickname in IRC, you can type the beginning of their name and hit TAB on your keyboard to get it autocompleted, like on the command line. This avoids misspellings. Just make sure it's the right person!
[22:30:49] mike_wang: anyway -- so it would be cool if you'd add your name on https://labsconsole.wikimedia.org/wiki/Help:Access and https://www.mediawiki.org/wiki/Wikimedia_Labs in the relevant places so people know you're working on this stuff
[22:31:01] Yay to maintaing one repo of niceness over hacks of scripts all over the show though!
[22:31:15] maintaining* totally can't spell
[22:31:39] sumanah: it works.
[22:31:47] Damianz: nrpe is also terribly insecure
[22:31:59] sumanah: I typed TAB
[22:32:07] mike_wang == wangatlargo?
[22:32:11] yes
[22:32:18] That totally just ruined my 'GOT WANG' snigger
[22:32:48] Ryan_Lane: Oh yeah... remove commands enabled, unfirewalled etc... got some boxes like that atm (going to be deleted and re-created next week).
[22:32:59] yay tabs :-)
[22:33:02] * Damianz loves finding crazy stuff on inherited servers
[22:33:05] Drugs are bad
[22:33:10] Damianz: Yes I changed my nick name for easy recognition
[22:33:13] mike_wang: have you been introduced to the major folks here who talk a lot?
[22:33:26] mike_wang: Welcome anyway :)
[22:33:31] Not yet
[22:33:34] and dammit... remote not remove
[22:33:37] * Damianz headdesks
[22:33:47] mike_wang: https://wikimediafoundation.org/wiki/Staff has the people who work for the WMF already - like Andrew and Ryan
[22:34:00] I hate that page
[22:34:06] whoever did the expand teams thing needs shooting
[22:34:23] mike_wang: petan is Petr Bena, a volunteer who maintains the wm-bot, if I recall correctly, and who has in the past worked on the Beta Cluster that's hosted in Labs https://www.mediawiki.org/wiki/Beta_cluster
[22:34:27] * Damianz thinks 7 people work for wmf every time he sees that
[22:34:54] mike_wang: Damianz is Damian Zaremba, a volunteer who likes to be negative
[22:35:01] And does other things too
[22:35:04] sumanah <3
[22:35:14] mike_wang: Damianz and petan live in Europe
[22:35:52] mike_wang: https://www.mediawiki.org/wiki/Developers and its subpages may be interesting for when you're wondering "who knows a lot about x?" or "who maintains service y?"
[22:35:57] Talking of doing stuff I have about 60 cbng reviews to look at... bleh
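The "salt as a replacement for nrpe" idea floated above is straightforward to prototype, since the master can invoke stock nagios plugins or salt's built-in status module directly on minions, sidestepping NRPE's command-length and plugin-distribution headaches. A sketch; the targets, thresholds, and plugin path are illustrative:

    # Run an existing nagios plugin remotely, no NRPE daemon required
    salt 'parsoid-*' cmd.run '/usr/lib/nagios/plugins/check_procs -w 150 -c 200'
    # Built-in modules cover simple checks without any plugin at all
    salt 'parsoid-*' status.procs      # process table
    salt 'parsoid-*' status.meminfo    # memory statistics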
[22:36:00] and include some IRC handles
[22:36:08] Damianz: I also don't like the "expand team" thing
[22:36:18] I can never find people when I'm lookjing
[22:36:20] *looking
[22:36:53] mike_wang: Platonides is a volunteer in Europe who is working on the bots project, if I recall correctly -- that's pretty important -- https://labsconsole.wikimedia.org/wiki/Help:Move_your_bot_to_Labs is a reasonably important guide and can use improvement
[22:37:11] mike_wang: (btw is this helpful to you?)
[22:37:14] Platonides works on a bunch of things
[22:37:49] sumanah: thanks for introducing the people to me. It helps a lot
[22:37:54] :D glad to help!
[22:38:52] mike_wang: in the QA team -- see https://www.mediawiki.org/wiki/Wikimedia_Platform_Engineering#Quality_Assurance for the team members and their major activities -- Chris McMahon and Željko Filipin are working on a bunch of automated testing that sometimes depends on Labs (most importantly stuff related to the beta cluster)
[22:39:36] mike_wang: chrismcmahon is here (and lives in the US) and Željko Filipin lives in Europe. Michelle Grover is in the US and works on QA for WMF mobile -- not in this channel right now evidently
[22:39:49] hi mike_wang
[22:40:34] mike_wang: so they'll end up talking with you about what "beta labs" needs. Similarly, Antoine Musso (hashar) and Brad Jorsch (anomie) from the MediaWiki Core team https://www.mediawiki.org/wiki/Wikimedia_Platform_Engineering#MediaWiki_Core work on Jenkins, Zuul, and related things, in case you ever have to do stuff related to that
[22:40:52] chrismcmahon: hi
[22:41:24] hashar makes pretty
[22:41:35] in general mike_wang I find that https://www.mediawiki.org/wiki/Wikimedia_Engineering provides an okay overview of a bunch of stuff that WMF engineering is doing and you can click through to the Platform, Features, Mobile, and Language hubs to understand those activities better
[22:42:31] I usually work remotely from New York City so I know it can be kind of isolating and overwhelming that there is lots of stuff happening in Wikimedia Engineering and you don't hear about it quite as easily if you are a remote worker
[22:42:32] PROBLEM Total processes is now: WARNING on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS WARNING: 154 processes
[22:42:32] PROBLEM Current Users is now: WARNING on kripke i-00000268.pmtpa.wmflabs output: USERS WARNING - 6 users currently logged in
[22:42:44] sumanah: I will read https://www.mediawiki.org/wiki/Wikimedia_Engineering
[22:42:53] * Damianz wonders if mike_wang has had his initiation yet, or if he should avoid wikipedia for the next few weeks
[22:43:08] mike_wang: the monthly reports https://blog.wikimedia.org/c/corporate/wmf-monthly-reports/ https://www.mediawiki.org/wiki/Wikimedia_engineering_report/2012/November also REALLY help
[22:43:47] mike_wang: Ryan_Lane just mentioned in a meeting today that we need to get better at communicating throughout engineering about what people are up to and what decisions are being made. https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings/2012-12-06 has the notes & a link to the video of that meeting
[22:44:23] anyway mike_wang I may have kind of overwhelmed you :-)
[22:44:27] mike_wang: interestingly enough, just now over on #wikimedia-operations we were discussing supporting more extensions on the beta labs cluster. If that's something that interests you as you get acquainted with things, let me know, I can talk about the current situation and where we want to go.
[22:46:13] sumanah: You really overwhelmed me. I will save this chat and read it in the evening.
[22:46:20] imo beta should be 2 separate things (pre-prod and water testing), the former lets people do qa before deploy, the latter lets people know if they broke their unit tests... not really sure how you can support more than prod and maintain proper qa... but I guess that's a huge discussion... or lets just auto spin up clusters, import random data sets from gluster, water test and destroy FTW
[22:46:30] chrismcmahon: ^ see what Damianz just suggested
[22:46:50] hey that wasn't a suggestion... that was a glad I don't work in qa because i'd go crazy :P
[22:47:06] mike_wang: just out of curiosity, on Day One when you started with WMF, what kind of onboarding discussion or training or docs did you get?
[22:47:14] Though I actually like sitting near our qa guys... except they play with hardware not websites
[22:48:06] sumanah: Well he doesn't work in ops until he's taken prod down at least once, so you might want to ask next week :P
[22:48:22] Damianz: I actually have a pretty clear idea of where I want beta labs to go, but zuul was a requirement, and we'll have to hack on Jenkins, and some other stuff. It's a process.
[22:49:02] sumanah: On day one, I read https://labsconsole.wikimedia.org/wiki/Main_Page
[22:49:03] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[22:49:33] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[22:49:35] chrismcmahon: I know you've mentioned this before... I can just see about 20 ways it could go, but having a vision is good :)
[22:49:48] sumanah: ryan helped me set up my account on day one
[22:49:55] * Damianz is convinced his idea of testing is half crazy anyway
[22:51:37] that's it?
[22:52:02] that's pretty good tbf
[22:52:24] it took me 3 months to be told how to evacuate the building if it's on fire at work
[22:52:43] "not fatally bad" does not equal "pretty good" Damianz :-)
[22:53:06] 'it could be worse'
[22:54:55] sumanah: I also took a phone meeting. just listening, did not say anything.
[22:55:20] mike_wang: oh I think I was in that meeting, maybe - was it Monday/
[22:55:21] ?
[22:55:48] sumanah: yes monday
[22:56:28] yeah, I was the one talking about increasing our ability to usefully nurture volunteers
[22:56:56] I should actually introduce *myself* - I am Sumana Harihareswara and I am the manager of our Engineering Community team -- nurturing our open source community
[22:57:01] Random question - how do you find timezones at wmf? We're 16 hours different east to west with the uk sitting in the middle, makes us <> cn co-ordination hard sometimes I find - things like 7am conference calls for us work around the issues.
[22:57:11] yeah it's annoying
[22:57:33] RECOVERY Total processes is now: OK on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS OK: 149 processes
[22:59:27] sumanah: I am Mike Wang. I am now a part-time consultant. I work 5 hours each day. Usually in the morning from 9:30-11:30 EST, in the afternoon from 2:30-5:30 EST.
[23:00:00] Cool, nice to meet you. :-)
[23:00:29] sumanah: I think it is my dinner time now. Talk to you tomorrow.
[23:00:42] Bye!
[23:00:59] I will save all of this and read it this evening. bye!
[23:03:58] Good to meet you, Mike.
[23:04:56] I created a new instance, proveit, https://labsconsole.wikimedia.org/wiki/Nova_Resource:I-0000052f .
[23:05:11] But when I try to ssh in, I get "Unable to create and initialize directory '/home/mattflaschen'."
[23:06:08] I can see the system info (system load, users logged in, etc.)
[23:06:59] hmm
[23:07:07] * Damianz pokes Ryan_Lane
[23:07:12] did you make that r/o yet?
[23:07:19] I totally don't keep up with stuff
[23:07:22] it's a freaking ping storm on irc today
[23:07:24] might just be the script broke again
[23:07:30] Damianz: what's broken?
[23:07:36] ping storm? ryan over multicast!
[23:07:38] ah
[23:07:43] Ryan_Lane: user home dir in project
[23:07:48] lemme see
[23:08:18] i forgot this is what my job is normally like. I need to go on vacation more often
[23:10:31] then you'd have 4 weeks of backlog instead of 2
[23:10:52] I hate vacation generally... spend most of the time thinking about the time lost that I could be doing work :P
[23:11:08] the script must be broken
[23:11:12] I'm done with backlog
[23:11:33] people realize I'm back and everyone pings me at once :D
[23:11:47] the export doesn't list the server
[23:11:52] I wonder if it's missing an A record
[23:12:10] it's their secret love for you
[23:12:18] it's missing an A record
[23:12:49] I wonder what triggers this bug
[23:12:51] it's really rare now
[23:13:27] I like mine well done
[23:13:39] hockey puck?
[23:14:28] mmmmmm
[23:15:23] superm401: works now
[23:15:36] I can't wait for moniker :)
[23:15:45] Ryan_Lane, great, thanks.
[23:15:49] yw
[23:17:32] RECOVERY Current Users is now: OK on kripke i-00000268.pmtpa.wmflabs output: USERS OK - 5 users currently logged in
[23:19:42] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[23:20:33] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[23:28:48] Hmm
[23:28:57] Is labs still dual-dutied for controllers?
[23:29:06] Ie the 2 controllers are just on virtual nodes?
[23:30:48] Damianz: eh?
[23:31:05] yeah that was a retarded sentence
[23:31:09] :D
[23:31:27] they're not dedicated boxes, just both parts of openstack on 1 box (ie nova/controller)
[23:31:56] well, it depends
[23:32:09] we're running the scheduler, glance, and keystone on virt0
[23:32:19] we're running the api and network node on virt2
[23:32:31] we're running compute on virt1-8
[23:35:01] Ah... I thought you were running mirro
[23:35:05] bleh
[23:35:12] Ah... I thought you were running all on 2 for failover*
[23:36:10] * Damianz needs to read about openstack this weekend to argue for its implementation in our 3 new datacenters over vmware vsphere
[23:36:50] chrismcmahon: I need to go right now, but check out https://bugzilla.wikimedia.org/show_bug.cgi?id=40605 .
[23:49:42] PROBLEM host: i-0000051a.pmtpa.wmflabs is DOWN address: i-0000051a.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[23:51:42] PROBLEM host: i-000004de.pmtpa.wmflabs is DOWN address: i-000004de.pmtpa.wmflabs PING CRITICAL - Packet loss = 100%
[23:53:09] I've got another question about the proveit instance I created.
[23:53:33] In Configure instance, I checked webserver::apache2 a while ago.
[23:53:41] But it doesn't seem to actually be installed.
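A class ticked in the Configure-instance panel only takes effect once the puppet agent actually runs, so forcing a run is the usual first step when a checked class has not shown up. A sketch using the 2012-era invocation; newer agents spell it `puppet agent --test`:

    # Trigger an immediate puppet run and watch for webserver::apache2 being applied
    sudo puppetd --test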