[00:13:44] PROBLEM Current Load is now: CRITICAL on testing-newnode i-000001ec output: CHECK_NRPE: Error - Could not complete SSL handshake. [00:14:24] PROBLEM Current Users is now: CRITICAL on testing-newnode i-000001ec output: CHECK_NRPE: Error - Could not complete SSL handshake. [00:15:04] PROBLEM Disk Space is now: CRITICAL on testing-newnode i-000001ec output: CHECK_NRPE: Error - Could not complete SSL handshake. [00:15:44] PROBLEM Free ram is now: CRITICAL on testing-newnode i-000001ec output: CHECK_NRPE: Error - Could not complete SSL handshake. [00:16:54] PROBLEM Total Processes is now: CRITICAL on testing-newnode i-000001ec output: CHECK_NRPE: Error - Could not complete SSL handshake. [00:17:34] PROBLEM dpkg-check is now: CRITICAL on testing-newnode i-000001ec output: CHECK_NRPE: Error - Could not complete SSL handshake. [00:28:08] PROBLEM host: testing-newnode is DOWN address: i-000001ec check_ping: Invalid hostname/address - i-000001ec [02:59:28] RECOVERY Current Users is now: OK on reportcard2 i-000001ea output: USERS OK - 2 users currently logged in [03:00:48] RECOVERY Disk Space is now: OK on reportcard2 i-000001ea output: DISK OK [03:00:48] RECOVERY Free ram is now: OK on reportcard2 i-000001ea output: OK: 83% free memory [03:01:58] RECOVERY Total Processes is now: OK on reportcard2 i-000001ea output: PROCS OK: 87 processes [03:03:48] RECOVERY Current Load is now: OK on reportcard2 i-000001ea output: OK - load average: 0.01, 0.09, 0.09 [03:30:38] PROBLEM Disk Space is now: WARNING on mobile-feeds i-000000c1 output: DISK WARNING - free space: / 573 MB (5% inode=84%): [03:42:38] PROBLEM Free ram is now: WARNING on utils-abogott i-00000131 output: Warning: 16% free memory [03:47:28] PROBLEM Free ram is now: WARNING on orgcharts-dev i-0000018f output: Warning: 16% free memory [03:50:58] PROBLEM Free ram is now: WARNING on nova-daas-1 i-000000e7 output: Warning: 15% free memory [03:57:28] PROBLEM Free ram is now: WARNING on test3 i-00000093 output: 
Warning: 11% free memory [04:02:28] RECOVERY Free ram is now: OK on test3 i-00000093 output: OK: 96% free memory [04:02:38] PROBLEM Free ram is now: CRITICAL on utils-abogott i-00000131 output: Critical: 4% free memory [04:04:38] PROBLEM dpkg-check is now: CRITICAL on bz-dev i-000001db output: DPKG CRITICAL dpkg reports broken packages [04:05:58] PROBLEM Free ram is now: WARNING on test-oneiric i-00000187 output: Warning: 14% free memory [04:07:28] PROBLEM Free ram is now: CRITICAL on orgcharts-dev i-0000018f output: Critical: 4% free memory [04:07:38] RECOVERY Free ram is now: OK on utils-abogott i-00000131 output: OK: 97% free memory [04:12:28] RECOVERY Free ram is now: OK on orgcharts-dev i-0000018f output: OK: 96% free memory [04:15:58] PROBLEM Free ram is now: CRITICAL on nova-daas-1 i-000000e7 output: Critical: 4% free memory [04:19:32] RECOVERY dpkg-check is now: OK on bz-dev i-000001db output: All packages OK [04:20:52] PROBLEM Free ram is now: CRITICAL on test-oneiric i-00000187 output: Critical: 5% free memory [04:20:52] RECOVERY Free ram is now: OK on nova-daas-1 i-000000e7 output: OK: 93% free memory [04:25:52] RECOVERY Free ram is now: OK on test-oneiric i-00000187 output: OK: 97% free memory [05:08:55] should we get labs bugs reported in here? [05:09:18] * jeremyb has been duly punished for commented on a bug while asleep: left out a word. ;-( [05:09:25] bye! 
[06:25:32] RECOVERY Disk Space is now: OK on deployment-transcoding i-00000105 output: DISK OK [06:52:52] RECOVERY Disk Space is now: OK on aggregator1 i-0000010c output: DISK OK [07:35:56] !requests | Ryan_Lane [07:35:57] Ryan_Lane: this is a backlog of all requests needed to be done by ops https://labsconsole.wikimedia.org/wiki/Requests [07:36:14] that request isn't even close to easy [07:36:22] there's only on there [07:36:48] *one [07:36:57] ah [07:37:02] I've been thinking about how to separate out root, but it's not simple [07:37:04] I didn't know if you even see it [07:37:15] ok, but how it's done on bastion? [07:37:16] it's best to make bugs for this stuff [07:37:23] on bastion no one has root [07:37:25] bugs? [07:37:29] bugzilla [07:37:34] wikimedia labs product [07:37:49] ah, you mean instead of request page [07:37:52] yes [07:37:55] perhaps we should remove it then [07:38:02] I had no clue this page existed till you pinged me about it one day [07:38:16] yeah would be good to remove it [07:38:19] ok, I thought it was someone from ops who created it [07:38:52] so, roles aren't groups [07:38:58] and the instances have no clue about them [07:39:07] so, we can't just make another role [07:39:09] ok but that's just a matter of some programming to implement it [07:39:27] I know it doesn't support it now, that's why I requested it :) [07:39:47] ok I guess that these groups are written to ldap? 
[07:39:57] they aren't groups [07:39:59] they are roles [07:40:07] and the aren't usable by the system [07:40:11] *they [07:40:29] they only mean something for things that support roles [07:40:56] we could update the extension on labs to use the group in ldap then, so that when you give someone the role, it update the group too [07:41:11] I'd really prefer not to handle it that way [07:41:14] it's error prone [07:41:48] ok, the only way would be to create a project like bastion and generate root password for instance [07:42:00] so that creator could su as root and define roles in sudoers [07:42:04] no. it should be sudo [07:42:23] sure [07:42:39] you would just need to create the sudoers definition for first time by hand [07:42:42] there shouldn't be a root password at all, for any instance [07:42:49] puppet does it now [07:42:55] but how do you manage it on every instance? [07:43:04] and how do you keep it updated? [07:43:13] maybe using puppet [07:43:21] if each project would have own repository for puppet [07:43:28] sudoers could be there [07:43:47] but that means manually modifying the puppet manifest [07:44:11] you plan to make puppet for each project separate, or not? [07:44:20] no [07:44:24] only a branch for each [07:44:28] ok, so branch [07:44:36] and I'm actually leaning towards feature branches, and not long-lived branches [07:44:44] hm... [07:44:57] but each project could have that one too, for such stuff [07:45:15] there's no referential integrity that way [07:45:18] it might be wanted to customize some classes for certain projects [07:45:33] if a user is deleted, there's no way to delete them in the sudoers [07:45:49] how is it done now? [07:46:09] either the ldap server or openstackmanager handles it [07:46:21] hm. 
I wonder if I can make per-project sudoers [07:46:48] right now everything goes to ou=SUDOers,dc=wikimedia,dc=org [07:47:10] if I made a sudoers definition underneath each project, it could work [07:47:28] then the instances would pull sudoer info from their own project, and defining "ALL" for hosts would only be for that project [07:47:42] we can manage global sudoers via puppet (we only allow that for ops anyway) [07:47:42] hm [07:48:22] then, if we wanted to delete an entire project, the sudoers ou would die with it [07:48:55] ok [07:48:56] then project admins could granularly give sudo, as well [07:49:06] can you open a bug? :) [07:49:32] I could actually automatically create a policy on project creation [07:49:39] with ALL, ALL (all rights on all instances) [07:49:48] then project admins can modify that, if they want to disable it [07:50:27] removing a member from a project would then also remove them from sudoers [07:51:48] I'll create the bug [07:51:53] and put in the design idea [07:52:25] hm, this should go in openstackmanager extension [07:52:33] the bug, that is [08:04:56] petan: https://bugzilla.wikimedia.org/show_bug.cgi?id=35850 [08:08:29] I think this approach is clever enough for me to write a blog post on my blog when I implement it :D [08:08:47] you have a blog? :D [08:09:00] of course :) [08:09:02] @regsearch ryan [08:09:02] No results found! :| [08:09:59] http://ryandlane.com/blog/ [08:13:59] ok. bed time [08:14:11] * Ryan_Lane waves [08:14:23] petan: I'll be much closer to your timezone all of june :D [08:19:33] oh really? 
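Ryan's proposal above (a sudoers container under each project instead of the global ou=SUDOers,dc=wikimedia,dc=org, auto-created with an ALL/ALL policy at project creation) might look roughly like this in LDIF. The project DN layout here is a guess for illustration only; just the sudoRole attribute names come from the standard sudo LDAP schema:

```ldif
# hypothetical per-project container (DN layout assumed, not stated in the chat)
dn: ou=sudoers,cn=testproject,ou=projects,dc=wikimedia,dc=org
objectClass: organizationalUnit
ou: sudoers

# default policy created automatically at project creation: ALL, ALL
dn: cn=default,ou=sudoers,cn=testproject,ou=projects,dc=wikimedia,dc=org
objectClass: sudoRole
cn: default
sudoUser: ALL
sudoHost: ALL
sudoCommand: ALL
```

Deleting the project subtree would then take its sudo policies with it, and project admins could narrow sudoUser/sudoCommand without touching the global OU.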
[08:19:40] I think we might meet in Berlin heh [08:20:03] I already signed for the meeting there [08:30:52] PROBLEM Disk Space is now: CRITICAL on aggregator1 i-0000010c output: DISK CRITICAL - free space: / 214 MB (2% inode=93%): [09:12:42] PROBLEM Free ram is now: WARNING on bots-3 i-000000e5 output: Warning: 18% free memory [09:22:42] RECOVERY Free ram is now: OK on bots-3 i-000000e5 output: OK: 22% free memory [09:50:12] PROBLEM Puppet freshness is now: CRITICAL on puppet-lucid i-00000080 output: Puppet has not run in last 20 hours [11:17:44] Change abandoned: Dzahn; "will be added to own repo/project instead" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4277 [11:18:12] Change abandoned: Dzahn; "will be added to own repo/project instead" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4278 [11:18:33] Change abandoned: Dzahn; "will be added to own repo/project instead" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4279 [11:18:38] Change abandoned: Dzahn; "will be added to own repo/project instead" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4280 [11:18:52] Change abandoned: Dzahn; "will be added to own repo/project instead" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4281 [11:19:22] Change abandoned: Dzahn; "will be added to own repo/project instead" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4264 [11:19:33] Change abandoned: Dzahn; "will be added to own repo/project instead" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4273 [11:20:38] New review: Dzahn; "(no comment)" [operations/puppet] (test); V: 1 C: 2; - https://gerrit.wikimedia.org/r/4259 [11:20:38] Change merged: Dzahn; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4259 [11:43:45] PROBLEM Free ram is now: WARNING on bots-3 i-000000e5 output: Warning: 18% free memory [12:03:45] RECOVERY Free ram is now: OK on bots-3 i-000000e5 output: OK: 20% free memory [12:16:45] PROBLEM Free ram is now: 
WARNING on bots-3 i-000000e5 output: Warning: 17% free memory [12:19:23] hmm, can't SSH [12:50:45] PROBLEM Disk Space is now: CRITICAL on mobile-feeds i-000000c1 output: DISK CRITICAL - free space: / 285 MB (2% inode=84%): [12:53:03] Thehelpfulone: where [12:53:43] petan: it was on labs, it was taking a very long time to start the session, I tried it a few times and it worked eventually [12:54:06] which instance [12:54:27] !nagios | Thehelpfulone [12:54:27] Thehelpfulone: http://nagios.wmflabs.org/nagios3 [12:54:31] there is ssh status [12:54:35] for each instance you need [12:54:46] it was to just get into bastion [12:54:57] bastion is there as well :) [12:55:04] but probably on your side [14:10:37] !ping | [14:10:37] pong [14:10:50] !ping |||| [14:10:50] |||: pong [14:11:05] !ping . |||| [14:11:05] |||: pong [14:11:15] !ping . |||| . . [14:11:15] ||| . .: pong [14:15:02] ^demon: how do i check out wmf extensions + mediawiki core using one git command? [14:15:08] is it even possible to do that [14:15:11] <^demon> You can't. [14:15:14] ah [14:15:22] <^demon> Just yet. You'll be able to later today :) [14:15:25] is it possible to checkout all wmf extensions using one command [14:15:33] <^demon> No. [14:15:49] Make one repo with everything as submodules then just submodule update and :P [14:16:11] <^demon> That's how you'll be able to clone the wmf branch later today :) [14:16:13] ok is it possible to get easily parseable list of wmf extensions and then recursively pull them? [14:16:16] <^demon> We're adding extensions as submodules. [14:16:18] ah ok [14:16:21] gitweb really needs a better look, totally doesn't match up to svn for browsing stuff :( [14:16:37] because as for now updating the test site was doing svn up [14:16:45] how do I do that with git [14:16:48] <^demon> Isn't the test site going to run master though?
[14:16:54] of course [14:16:55] it is [14:17:03] I already use git on labs [14:17:12] http://labs.wikimedia.beta.wmflabs.org/wiki/Special:Version [14:17:14] <^demon> Ok, doing core is easy right now. [14:17:24] yes it is, but extensions? how to do that [14:17:33] extensions are still not using git [14:17:42] <^demon> All wmf-deployed extensions are in git. [14:17:49] the folder extensions is actually svn now [14:18:11] ok but I need to checkout all wmf extensions and all non-wmf extensions to one folder [14:18:15] <^demon> Well once we upgrade to 2.3 you'll be able to clone the meta-repo. [14:18:26] <^demon> For now you'll just have to clone them all or setup something locally with submodules. [14:18:30] gerrit has a whole lot of projects now :o [14:18:48] ok so I checkout all extensions using svn and rest of wmf extension using git [14:18:59] one checkout for each wmf extension? [14:19:11] Just git clone the extensions and use submodules so you can just update all the submodules at once? [14:19:25] Damianz: no idea how to do that, if u know it, do it [14:19:31] and document how u did it [14:19:38] simples [14:19:54] right now I do git pull and svn up ex* [14:20:01] and site is running head [14:20:02] git submodule add [14:20:12] then after just git submodule update and magic things happen [14:20:26] <^demon> Yeah, make a new directory, `git init`, then do that git submodule add that Damianz said. [14:20:36] <^demon> Then `git submodule update` is a one-liner to pull all changes. [14:20:38] example for extension called "meh" [14:21:02] You might have to switch to master as IIRC submodules stick to the head sha1 unless you tell it otherwise... [14:21:23] I know how to do git init [14:21:25] that's all [14:21:26] But yeah, that's how I roll all the production sites for work (with fabric). [14:21:27] :P [14:21:45] so I need to add module for each extension we have? [14:21:52] <^demon> Yes [14:21:56] in future if new extension is added I need to add it manually?
[14:21:59] <^demon> Damianz: They stick to the submodule head sha1 at time of `submodule add` until you update it. [14:22:01] that sucks a bit [14:22:11] <^demon> Yes, which is why I'm trying to upgrade to 2.3 very soon. [14:22:16] Ahh, I knew it was one or the other :) [14:22:46] ok so I should wait? [14:22:57] is it possible to mix svn and git repo [14:23:02] that's what I did now and it works [14:23:05] kinda [14:23:16] I hope git wouldn't randomly delete some stuff [14:23:17] I wish git submodules would work with git-svn, can see why not though [14:23:24] <^demon> By the way, https://gerrit.wikimedia.org/mediawiki-extensions.txt is a list of all extensions in git right now. [14:23:32] <^demon> The cron's a little delayed, but it should update again later today. [14:24:21] ^demon: if u give me example to checkout extension "ab" I will make a script to checkout all in the list [14:24:46] I guess git clone "some url" [14:24:55] then git pull *? [14:25:09] wtf o.0 [14:25:18] that's how it could work in svn [14:25:25] git pull == svn up [14:25:27] or not [14:25:40] roughly, yes [14:25:44] imagine you have a bunch of repositories in one folder and u need to svn up them all [14:25:44] gerrit's html is insane [14:25:49] svn up * is solution for that [14:26:08] so git pull * should do the same or not [14:26:39] Git submodules are like git repos in a git repo, the difference is you can say folder x in repo y at tag z and no matter where you clone out the main repo all the submodules will be right [14:26:49] Svn in svn is just a folder as far as svn is concerned iirc [14:26:58] <^demon> `git submodule foreach` is so super useful :) [14:27:06] Damianz: ok what is benefit of using modules in my case [14:27:18] rather than just cloning each repo [14:27:25] You're running head so git pull; git submodule update will update everything [14:27:41] ok git pull * would do the same or not [14:27:47] no [14:27:53] what's difference [14:28:06] git pull * just won't work in general
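^demon's recipe above (make a new directory, `git init`, `git submodule add` each extension, then `git submodule update` to pull everything) can be exercised end to end against a throwaway local repository. This is a minimal sketch; the paths and names are made up for the demo, and the `protocol.file.allow` override is only needed because newer git blocks file-based submodules by default:

```shell
set -e
tmp=$(mktemp -d)

# a stand-in "extension" repo playing the role of e.g. FeaturedFeeds
git init -q "$tmp/ext-demo"
( cd "$tmp/ext-demo" \
  && echo 'demo extension' > README \
  && git add README \
  && git -c user.name=demo -c user.email=demo@example.org commit -qm 'initial' )

# the meta repository that tracks extensions as submodules
git init -q "$tmp/meta"
cd "$tmp/meta"
git -c protocol.file.allow=always \
    submodule --quiet add "$tmp/ext-demo" extensions/ext-demo
git -c user.name=demo -c user.email=demo@example.org \
    commit -qm 'track ext-demo as a submodule'

# the one-liner update ^demon mentions; --init also covers fresh clones
git -c protocol.file.allow=always submodule update --init --recursive
```

A cron job then only needs `git pull && git submodule update --init` in the meta repo. Note Damianz's caveat holds: submodules check out a detached HEAD at the recorded sha1, so `git submodule update --remote` is the variant that moves each one to its remote branch tip instead.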
[14:28:11] why [14:28:27] Because * isn't a remote or branch [14:28:49] * will expand to name of folders in shell [14:28:51] The syntax of git pull is [14:28:59] git will get the names not "*" symbol [14:29:16] Yeah, are all the names a repo or a ref? [14:29:24] should be [14:29:31] it should be a folder full of repos [14:29:41] I don't think it works how you think it works [14:29:43] some might be svn though [14:30:01] I don't think I know how it works [14:30:16] You need to run git pull from within a repo [14:30:25] for x in * ; do cd "$x"; git pull; cd ..; done might work [14:30:29] but just use submodules [14:30:38] ok how is that worse than submodules [14:30:44] because I still see it far easier [14:31:13] I can manage to write a cron job which would manage the repos [14:31:21] but dunno how to do that using modules, so that it's automatic [14:34:22] Because how do you know every folder is a git repo [14:34:29] What happens if you want one extension at a certain branch [14:35:46] the extensions in other branches would be physically in other folder [14:35:50] there is het [14:36:02] so that there is folder with full mediawiki per branch [14:36:11] including extensions [14:36:26] php-trunk contains the HEAD [14:36:29] so latest rev [14:36:40] php-branch-name contains branch [14:37:09] I just need to have a full mediawiki core + extensions and have a simple way to "svn up" it all [14:37:30] in git :P [14:38:10] is that possible I guess [14:41:34] ^demon: where do I find how to checkout extension [14:41:42] git clone https://gerrit.wikimedia.org/r/p/mediawiki/core.git is core [14:41:52] what is address of FeaturedFeeds [14:41:59] or GlobalUsage [14:42:09] <^demon> mediawiki/extensions/FeaturedFeeds.git [14:42:10] <^demon> Etc etc.
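Given the clone-URL prefix and the extension paths ^demon just gave, the "script to checkout all in the list" petan asks for could be sketched like this. It is a dry run that only prints the commands, and the list format (one mediawiki/extensions/Name path per line, as the chat's examples suggest) is an assumption:

```shell
set -e
cd "$(mktemp -d)"
BASE='https://gerrit.wikimedia.org/r/p'   # clone-URL prefix from the chat
LIST=extensions.txt

# stand-in for fetching https://gerrit.wikimedia.org/mediawiki-extensions.txt
cat > "$LIST" <<'EOF'
mediawiki/extensions/FeaturedFeeds
mediawiki/extensions/GlobalUsage
EOF

cmds=$(while read -r ext; do
  [ -n "$ext" ] || continue
  name=${ext##*/}                         # FeaturedFeeds, GlobalUsage, ...
  if [ -d "$name/.git" ]; then
    echo "(cd $name && git pull)"         # already cloned: just update
  else
    echo "git clone $BASE/$ext.git $name"
  fi
done < "$LIST")
printf '%s\n' "$cmds"
```

Dropping the echo layer runs the commands for real; fetching the list with curl at the top of each cron run would pick up newly deployed extensions automatically.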
[14:42:15] ok [14:50:35] The 'simple way' to 'svn up' 'in git' is to use submodules ;) [15:00:25] Damianz: ok but is it simple to manage these submodules [15:00:31] like without touching the shell [15:00:40] I want it to be fully automatic [15:01:18] I'm not 100% sure what you're trying to achieve but I'm pretty sure it can be done simply [15:01:49] ok, now I do svn up and it remove add all extensions which were branched or removed [15:02:08] I suppose that submodules needs to be removed or added by hand [15:02:35] they wouldn't add / remove itself when there is new wmf extension or when someone decide to remove some [15:03:22] I mean submodules are ok if I want to keep static repository of extensions to be up to date but this "repository" of wmf extension may change over time [15:03:37] and I don't want to manually insert new submodule when new extension is deployed [15:03:42] Hmm [15:03:59] So currently you have a repo cloned that has all the extensions in place then you just svn up? [15:04:07] yes [15:04:22] one command update all 4 branches of core and extensions [15:05:00] right now it doesn't work since I updated to git [15:05:09] core of trunk is using git [15:05:14] other branches still svn [15:05:19] extensions are mostly using svn [15:05:32] right now I started process to remove all wmf extensions and clone git repo for each [15:05:48] using the list from ^demon [15:06:15] but I need to automate it so that when this list is changed it update the repository / delete old extensions and add new [15:06:27] I wanted to know if it's possible to do that smarter [15:07:05] I could make some program to diff the lists and remove / add modules to git as you suggest [15:07:20] but it seems to be too hard core to me, compared how simple it is using svn [15:07:44] I still don't think I understand how you have svn setup atm [15:08:00] I have checkout of core [15:08:07] then I have checkout of trunk/extensions [15:08:16] that extensions is extensions folder in core
[15:08:23] so that when I do svn up it update everything [15:08:32] svn go recursively and update all repositories [15:08:40] no matter if they are related to root repository [15:09:14] that makes stuff a lot easier now [15:09:23] Yeah, so 1 git submodule would do that fine but extensions now don't live in one repo [15:09:39] would they ever will [15:09:44] live in one [15:09:57] I don't know :/ [15:10:06] There could be a meta repo that contained submodules for every extension [15:10:12] hm [15:10:17] how can I mix it with core [15:10:28] Which would have the same effect as svn, the way svn is now is horrible tbh [15:11:02] horrible but simple and works [15:27:35] ok it works [15:27:48] now we are running all wmf extensions from git on deployment site [15:27:55] Special:Version is funny [15:28:15] mixed svn and git [15:52:31] PROBLEM Free ram is now: WARNING on bots-3 i-000000e5 output: Warning: 15% free memory [16:22:31] RECOVERY Free ram is now: OK on bots-3 i-000000e5 output: OK: 21% free memory [16:23:51] PROBLEM Disk Space is now: WARNING on deployment-transcoding i-00000105 output: DISK WARNING - free space: / 78 MB (5% inode=53%): [16:42:31] PROBLEM Free ram is now: WARNING on bots-3 i-000000e5 output: Warning: 17% free memory [16:55:09] ugh, nfs fail [16:57:31] RECOVERY Free ram is now: OK on bots-3 i-000000e5 output: OK: 20% free memory [17:03:43] PROBLEM Current Load is now: CRITICAL on essex-puppet i-000001ed output: Connection refused by host [17:04:23] PROBLEM Current Users is now: CRITICAL on essex-puppet i-000001ed output: Connection refused by host [17:05:03] PROBLEM Disk Space is now: CRITICAL on essex-puppet i-000001ed output: Connection refused by host [17:06:31] Dantman, can you add me to gareth project? [17:10:03] PROBLEM host: essex-puppet is DOWN address: i-000001ed check_ping: Invalid hostname/address - i-000001ed [17:11:00] Has instance creation been failing for everyone, today? 
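The "program to diff the lists and remove / add modules" petan describes above is small in shell: compare the previous run's extension list with today's and emit the submodule churn. `comm` does the set differences; the list contents here are hypothetical, and the plain `git rm` removal path assumes a reasonably modern git (older versions also needed `git submodule deinit`):

```shell
set -e
cd "$(mktemp -d)"

# hypothetical snapshots of mediawiki-extensions.txt from two cron runs
printf '%s\n' FeaturedFeeds GlobalUsage | sort > old.txt
printf '%s\n' GlobalUsage Math          | sort > new.txt

added=$(comm -13 old.txt new.txt)     # lines only in new.txt
removed=$(comm -23 old.txt new.txt)   # lines only in old.txt

# dry run: print the submodule maintenance commands
for e in $added; do
  echo "git submodule add https://gerrit.wikimedia.org/r/p/mediawiki/extensions/$e.git extensions/$e"
done
for e in $removed; do
  echo "git rm extensions/$e"
done
```

Commit the result and the next `git submodule update --init` on every deployment host converges to the new extension set, which is roughly the behavior `svn up` gave for free.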
[17:12:13] Ryan_Lane: This logfile makes it look like there's a corrupt file in the deb repo. Is that likely? https://labsconsole.wikimedia.org/w/index.php?title=Special:NovaInstance&action=consoleoutput&project=openstack&instanceid=i-000001ee [17:13:44] PROBLEM Current Load is now: CRITICAL on login-test2 i-000001ef output: Connection refused by host [17:13:44] PROBLEM Current Users is now: CRITICAL on nova-essex-test i-000001ee output: Connection refused by host [17:14:24] PROBLEM Disk Space is now: CRITICAL on nova-essex-test i-000001ee output: Connection refused by host [17:14:24] PROBLEM Current Users is now: CRITICAL on login-test2 i-000001ef output: CHECK_NRPE: Error - Could not complete SSL handshake. [17:15:04] PROBLEM Free ram is now: CRITICAL on nova-essex-test i-000001ee output: Connection refused by host [17:15:04] PROBLEM Disk Space is now: CRITICAL on login-test2 i-000001ef output: CHECK_NRPE: Error - Could not complete SSL handshake. [17:15:44] PROBLEM Free ram is now: CRITICAL on login-test2 i-000001ef output: CHECK_NRPE: Error - Could not complete SSL handshake. [17:16:14] PROBLEM Total Processes is now: CRITICAL on nova-essex-test i-000001ee output: Connection refused by host [17:16:54] PROBLEM dpkg-check is now: CRITICAL on nova-essex-test i-000001ee output: Connection refused by host [17:16:54] PROBLEM Total Processes is now: CRITICAL on login-test2 i-000001ef output: Connection refused or timed out [17:17:34] PROBLEM dpkg-check is now: CRITICAL on login-test2 i-000001ef output: Connection refused or timed out [17:18:14] PROBLEM Current Load is now: CRITICAL on nova-essex-test i-000001ee output: Connection refused by host [17:18:45] andrewbogott: I'm not sure how that happens [17:18:57] andrewbogott: but if you delete/recreate, it'll likely run fine [17:19:07] That's my second try, same failure. [17:19:20] But, third's the charm... [17:21:22] <^demon> Ryan_Lane: I reverted that temp. 
hack for gerrit cron https://gerrit.wikimedia.org/r/#change,4430 [17:33:42] andrewbogott: yeah, I don't know what's up with it [17:33:46] it isn't very easy to debug [17:34:06] PROBLEM Current Users is now: CRITICAL on nova-essex-test i-000001f2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [17:34:16] Maybe related to instance size... I had three failures with a medium instance, now I have a small one that seems to be building properly. [17:34:26] PROBLEM Disk Space is now: CRITICAL on nova-essex-test i-000001f2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [17:34:45] yeah, larger instances tend to fail more. I don't know why [17:35:06] PROBLEM Free ram is now: CRITICAL on nova-essex-test i-000001f2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [17:36:16] PROBLEM Total Processes is now: CRITICAL on nova-essex-test i-000001f2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [17:36:56] PROBLEM dpkg-check is now: CRITICAL on nova-essex-test i-000001f2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [17:38:06] PROBLEM Current Load is now: CRITICAL on nova-essex-test i-000001f2 output: CHECK_NRPE: Error - Could not complete SSL handshake. 
[17:38:16] PROBLEM host: login-test2 is DOWN address: i-000001f1 check_ping: Invalid hostname/address - i-000001f1 [17:41:14] RECOVERY Total Processes is now: OK on nova-essex-test i-000001f2 output: PROCS OK: 83 processes [17:41:54] RECOVERY dpkg-check is now: OK on nova-essex-test i-000001f2 output: All packages OK [17:43:14] RECOVERY Current Load is now: OK on nova-essex-test i-000001f2 output: OK - load average: 0.04, 0.38, 0.58 [17:43:44] PROBLEM Current Load is now: CRITICAL on login-test3 i-000001f3 output: Connection refused by host [17:44:04] RECOVERY Current Users is now: OK on nova-essex-test i-000001f2 output: USERS OK - 1 users currently logged in [17:44:24] PROBLEM Current Users is now: CRITICAL on login-test3 i-000001f3 output: Connection refused by host [17:44:24] RECOVERY Disk Space is now: OK on nova-essex-test i-000001f2 output: DISK OK [17:45:04] PROBLEM Disk Space is now: CRITICAL on login-test3 i-000001f3 output: Connection refused by host [17:45:04] RECOVERY Free ram is now: OK on nova-essex-test i-000001f2 output: OK: 86% free memory [17:45:44] PROBLEM Free ram is now: CRITICAL on login-test3 i-000001f3 output: Connection refused by host [17:46:54] PROBLEM Total Processes is now: CRITICAL on login-test3 i-000001f3 output: CHECK_NRPE: Error - Could not complete SSL handshake. [17:47:35] PROBLEM dpkg-check is now: CRITICAL on login-test3 i-000001f3 output: CHECK_NRPE: Error - Could not complete SSL handshake. [17:59:54] PROBLEM dpkg-check is now: CRITICAL on nova-essex-test i-000001f2 output: DPKG CRITICAL dpkg reports broken packages [18:03:43] PROBLEM Current Load is now: CRITICAL on login-test4 i-000001f4 output: Connection refused by host [18:04:33] PROBLEM Current Users is now: CRITICAL on login-test4 i-000001f4 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:05:03] PROBLEM Disk Space is now: CRITICAL on login-test4 i-000001f4 output: CHECK_NRPE: Error - Could not complete SSL handshake. 
[18:05:43] PROBLEM Free ram is now: CRITICAL on login-test4 i-000001f4 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:06:53] PROBLEM Total Processes is now: CRITICAL on login-test4 i-000001f4 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:07:34] PROBLEM dpkg-check is now: CRITICAL on login-test4 i-000001f4 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:13:44] PROBLEM Current Load is now: CRITICAL on login-test5 i-000001f5 output: Connection refused by host [18:14:24] PROBLEM Current Users is now: CRITICAL on login-test5 i-000001f5 output: Connection refused by host [18:15:04] PROBLEM Disk Space is now: CRITICAL on login-test5 i-000001f5 output: Connection refused by host [18:15:44] PROBLEM Free ram is now: CRITICAL on login-test5 i-000001f5 output: Connection refused by host [18:16:54] PROBLEM Total Processes is now: CRITICAL on login-test5 i-000001f5 output: Connection refused by host [18:17:34] PROBLEM dpkg-check is now: CRITICAL on login-test5 i-000001f5 output: Connection refused by host [18:23:43] PROBLEM dpkg-check is now: CRITICAL on login-test6 i-000001f6 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:25:03] PROBLEM Current Load is now: CRITICAL on login-test6 i-000001f6 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:25:43] PROBLEM Current Users is now: CRITICAL on login-test6 i-000001f6 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:26:18] PROBLEM Disk Space is now: CRITICAL on login-test6 i-000001f6 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:26:53] PROBLEM Free ram is now: CRITICAL on login-test6 i-000001f6 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:28:13] PROBLEM Total Processes is now: CRITICAL on login-test6 i-000001f6 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:33:31] Ryan_Lane: Are you planning to upgrade to precise before moving to essex? 
[18:36:15] andrewbogott: yep [18:37:30] I guess it'll be a few weeks before we can get precise packages in labs. [18:53:44] PROBLEM Current Load is now: CRITICAL on nova-essex-test i-000001f9 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:54:24] PROBLEM Current Users is now: CRITICAL on nova-essex-test i-000001f9 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:55:04] PROBLEM Disk Space is now: CRITICAL on nova-essex-test i-000001f9 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:55:44] PROBLEM Free ram is now: CRITICAL on nova-essex-test i-000001f9 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:56:51] PROBLEM Total Processes is now: CRITICAL on nova-essex-test i-000001f9 output: CHECK_NRPE: Error - Could not complete SSL handshake. [18:57:31] PROBLEM dpkg-check is now: CRITICAL on nova-essex-test i-000001f9 output: CHECK_NRPE: Error - Could not complete SSL handshake. [19:36:51] RECOVERY Total Processes is now: OK on nova-essex-test i-000001f9 output: PROCS OK: 100 processes [19:38:51] RECOVERY Current Load is now: OK on nova-essex-test i-000001f9 output: OK - load average: 0.92, 0.77, 0.43 [19:39:21] RECOVERY Current Users is now: OK on nova-essex-test i-000001f9 output: USERS OK - 1 users currently logged in [19:40:01] RECOVERY Disk Space is now: OK on nova-essex-test i-000001f9 output: DISK OK [19:40:44] RECOVERY Free ram is now: OK on nova-essex-test i-000001f9 output: OK: 89% free memory [19:43:34] PROBLEM Free ram is now: WARNING on bots-3 i-000000e5 output: Warning: 17% free memory [19:50:14] PROBLEM Puppet freshness is now: CRITICAL on puppet-lucid i-00000080 output: Puppet has not run in last 20 hours [22:12:37] Huh [22:12:44] OpenGrok has bizarre PHP support [22:12:56] Which pretends that only "#" may make comments [22:13:06] And that true, false and null are not keywords [22:16:38] <^demon> Someone might want to tell them?
[22:18:10] I agree true, false & null are not keywords they are values that all under the types that php support. [22:18:17] s/all/fall/ [22:20:42] Well, maybe, but it is a bad idea to offer search index over them [22:21:17] ^demon: well, I am now fixing their code, but I am not sure whether I am doing it 100% right-and-clean [22:26:31] heh, it lets you search for the variable 'false' ? [22:32:26] Yes [22:32:32] I fixed that [22:32:50] I also wanna make it handle PHP built-in functions correctly [22:53:49] Ryan_Lane: What is your presentation tool of choice? I predict that you do not use MS Powerpoint. [22:54:00] I use open office [22:54:05] many people use keynote [22:55:10] I guess there's no reason we can't just swap laptops mid-presentation. Although I'm inclined to use OO.o anyway. [22:55:26] I'm doing a presentation? :) [22:55:40] <^demon> Ryan_Lane: 2.3 came out today :D [22:55:43] I guess we should talk about that at some point :) [22:55:48] :D [22:55:51] ^demon: cool [23:00:50] Ryan_Lane: how does one get a public IP for a labs instance? [23:02:22] dschoon: tell me which project and why [23:02:53] To host documentation for analytics projects. [23:04:03] Something like analytics.wmflabs.org [23:09:34] 04/10/2012 - 23:09:34 - Creating a project directory for analytics [23:09:35] 04/10/2012 - 23:09:34 - Creating a home directory for dsc at /export/home/analytics/dsc [23:09:57] PROBLEM Disk Space is now: WARNING on bz-dev i-000001db output: DISK WARNING - free space: / 78 MB (5% inode=43%): [23:10:34] 04/10/2012 - 23:10:34 - Updating keys for dsc [23:15:01] andrewbogott: when do you get into town? [23:15:30] Sunday noon [23:16:06] ty Ryan_Lane [23:16:11] Oh! And I see the summit is /before/ the conference. 
I thought it was the other way around :/ [23:25:54] PROBLEM host: kant1 is DOWN address: i-000001a7 check_ping: Invalid hostname/address - i-000001a7 [23:26:04] RECOVERY Puppet freshness is now: OK on kripke i-000001fa output: puppet ran at Tue Apr 10 23:25:54 UTC 2012 [23:43:44] PROBLEM Current Load is now: CRITICAL on kirke i-000001fd output: Connection refused by host [23:43:44] PROBLEM Current Users is now: CRITICAL on kripke i-000001fc output: Connection refused by host [23:44:24] PROBLEM Current Users is now: CRITICAL on kirke i-000001fd output: Connection refused by host [23:44:24] PROBLEM Disk Space is now: CRITICAL on kripke i-000001fc output: Connection refused by host [23:45:04] PROBLEM Disk Space is now: CRITICAL on kirke i-000001fd output: Connection refused by host [23:45:04] PROBLEM Free ram is now: CRITICAL on kripke i-000001fc output: Connection refused by host [23:45:44] PROBLEM Free ram is now: CRITICAL on kirke i-000001fd output: Connection refused by host [23:46:14] PROBLEM Total Processes is now: CRITICAL on kripke i-000001fc output: Connection refused by host [23:46:54] PROBLEM Total Processes is now: CRITICAL on kirke i-000001fd output: Connection refused by host [23:46:59] PROBLEM dpkg-check is now: CRITICAL on kripke i-000001fc output: Connection refused by host [23:47:34] PROBLEM dpkg-check is now: CRITICAL on kirke i-000001fd output: Connection refused by host [23:48:14] PROBLEM Current Load is now: CRITICAL on kripke i-000001fc output: Connection refused by host