[03:37:14] PROBLEM Free ram is now: WARNING on orgcharts-dev i-0000018f output: Warning: 15% free memory
[03:42:14] PROBLEM Free ram is now: WARNING on utils-abogott i-00000131 output: Warning: 17% free memory
[03:51:54] PROBLEM Free ram is now: WARNING on nova-daas-1 i-000000e7 output: Warning: 15% free memory
[03:52:14] PROBLEM Free ram is now: WARNING on test-oneiric i-00000187 output: Warning: 15% free memory
[03:57:14] PROBLEM Free ram is now: CRITICAL on orgcharts-dev i-0000018f output: Critical: 4% free memory
[04:02:14] RECOVERY Free ram is now: OK on orgcharts-dev i-0000018f output: OK: 95% free memory
[04:02:14] PROBLEM Free ram is now: CRITICAL on utils-abogott i-00000131 output: Critical: 5% free memory
[04:07:14] RECOVERY Free ram is now: OK on utils-abogott i-00000131 output: OK: 97% free memory
[04:11:54] PROBLEM Free ram is now: CRITICAL on test3 i-00000093 output: Critical: 5% free memory
[04:11:54] PROBLEM Free ram is now: CRITICAL on nova-daas-1 i-000000e7 output: Critical: 5% free memory
[04:12:14] PROBLEM Free ram is now: CRITICAL on test-oneiric i-00000187 output: Critical: 3% free memory
[04:16:54] RECOVERY Free ram is now: OK on test3 i-00000093 output: OK: 96% free memory
[04:17:14] RECOVERY Free ram is now: OK on test-oneiric i-00000187 output: OK: 97% free memory
[04:21:54] RECOVERY Free ram is now: OK on nova-daas-1 i-000000e7 output: OK: 93% free memory
[05:42:04] PROBLEM Puppet freshness is now: CRITICAL on puppet-lucid i-00000080 output: Puppet has not run in last 20 hours
[08:46:45] :o
[08:46:49] hi
[10:57:58] suhasmonk: hi, i think i just read your mail
[10:58:30] you asked about a mail you never got?
[10:58:45] mutante, yeah. i got the mail. I was typing in the wrong mail id. thanks though :)
[10:58:52] ah, ok :)
[14:07:35] New patchset: Jgreen; "redirect labs ldap manage-exports stderr to /dev/null, death to cronspam" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4410
[14:07:48] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/4410
[14:08:37] New review: Jgreen; "(no comment)" [operations/puppet] (test); V: 1 C: 2; - https://gerrit.wikimedia.org/r/4410
[14:08:39] Change merged: Jgreen; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4410
[14:23:54] New patchset: Jgreen; "should have added 2>&1 after >/dev/null not before" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4411
[14:24:07] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/4411
[14:24:14] New review: Jgreen; "(no comment)" [operations/puppet] (test); V: 1 C: 2; - https://gerrit.wikimedia.org/r/4411
[14:24:17] Change merged: Jgreen; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4411
[14:43:12] Ryan_Lane: hi
[14:43:18] !requests
[14:43:18] this is a backlog of all requests needed to be done by ops https://labsconsole.wikimedia.org/wiki/Requests
[14:43:22] Ryan_Lane: ^
[14:43:25] can you check it
[15:06:12] petan: are you aware of the db error upon attempting login for http://en.wikipedia.beta.wmflabs.org: Unknown column 'user_options' in 'field list' (deployment-sql)
[15:09:00] <^demon> chrismcmahon: That column isn't supposed to be used anymore.
[15:09:04] <^demon> :\
[15:11:15] <^demon> Well I'd create an account and try myself...but it's timing out :\
[15:11:38] ^demon: it's been doing that for a few days now
[15:11:48] <^demon> It was dropped from the schema recently.
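
(The two patchsets above, r4410 and r4411, come down to redirection order, which the shell applies left to right. A minimal sketch of the difference; `manage-exports` stands in here for whatever command the actual cron entry runs, since the cron line itself never appears in the log:

    # Wrong order: 2>&1 duplicates stderr onto wherever stdout currently
    # points (the cron mail), and only afterwards is stdout sent to
    # /dev/null -- stderr still escapes, and the cronspam continues.
    manage-exports 2>&1 >/dev/null

    # Right order (the r4411 fix): stdout goes to /dev/null first, then
    # stderr is duplicated onto that same descriptor, silencing both.
    manage-exports >/dev/null 2>&1

)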
[15:11:59] <^demon> Obviously something's still referencing it. Let's try and find a stacktrace.
[15:12:18] <^demon> Ah, got the error on registering.
[15:12:28] <^demon> User::addToDatabase()
[15:14:25] <^demon> Hmmm. I wonder if the schema was updated but the code wasn't?
[15:16:22] <^demon> The project list is totally broken on labsconsole?
[15:16:25] <^demon> :\
[15:18:43] ^demon: Broken in what sense?
[15:18:56] It's working for me, or else is broken in a subtle way that I'm not seeing.
[15:18:56] <^demon> I can't see any projects
[15:19:05] Click on 'show project filter'
[15:19:07] ?
[15:36:32] ^demon: Did that help?
[15:36:46] <^demon> Nope.
[15:39:31] What are you seeing?
[15:43:03] PROBLEM Puppet freshness is now: CRITICAL on puppet-lucid i-00000080 output: Puppet has not run in last 20 hours
[15:44:06] <^demon> command-line terminals, since I'm busy doing other stuff right now ;-)
[15:56:11] chrismcmahon: fixed
[15:56:52] thanks petan
[15:56:55] ^demon: try another browser :P
[15:57:06] the problem is in the session
[15:57:18] there is a critical bug in the current extension which Ryan didn't manage to fix
[15:57:46] it sometimes happens that MediaWiki stops responding to your current session
[15:58:15] also chrismcmahon, if you had created a ticket in bugzilla I would have fixed it a few minutes after that
[15:58:25] I don't watch irc so much
[15:58:38] I need to go now
[15:58:45] see you in a while :P
[15:58:47] bb
[17:14:33] RECOVERY Disk Space is now: OK on aggregator1 i-0000010c output: DISK OK
[17:14:50] ^demon: the project list is empty for you?
[17:16:09] <^demon> Yeah, on labsconsole.
[17:16:32] <^demon> I can screenshot if you'd like.
[17:19:34] did you try logging out and back in?
[17:19:41] mediawiki destroys sessions for some reason
[17:20:04] I'd love to figure out why
[17:28:55] ^demon: ?
[17:28:58] did that work?
[17:29:30] I've pushed out new code recently, and I need to push some more out. I'd like to make sure it's in a working state now before I push the new update :)
[17:29:51] <^demon> Yeah, that fixed it.
[17:31:31] ok
[17:31:39] I *really* want to track down that bug
[17:32:12] it's not something I can trigger, which is the problem
[17:33:00] Hmm, can puppet do mode => stuff onlyif => ?
[17:33:18] Damianz: eh?
[17:33:25] umm
[17:33:30] in an exec, for sure
[17:33:37] As far as I can tell, no, unless I do an exec with a test.
[17:33:45] PROBLEM Free ram is now: CRITICAL on vumi i-000001e5 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[17:33:51] but yes, otherwise too
[17:34:33] file { "blah": ensure => present; } something { "blah2": require => File["blah"]; }
[17:35:05] PROBLEM Total Processes is now: CRITICAL on vumi i-000001e5 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[17:35:09] though that may not be what you want
[17:35:45] PROBLEM dpkg-check is now: CRITICAL on vumi i-000001e5 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[17:36:46] Hmm, require could work if I pass it a package.
[17:36:55] PROBLEM Current Load is now: CRITICAL on vumi i-000001e5 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[17:36:58] Overriding permissions on stuff packages dump out is a pita.
[17:37:35] PROBLEM Current Users is now: CRITICAL on vumi i-000001e5 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[17:38:15] PROBLEM Disk Space is now: CRITICAL on vumi i-000001e5 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[17:38:40] what's opengrok?
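
(On the mode-after-package question above: Puppet has no onlyif on file resources, but the require metaparameter Ryan_Lane sketches orders a file resource after the package that ships it, which appears to be what Damianz wants. A minimal runnable version of that pattern, expressed as a shell one-liner; the package name and path are invented for illustration:

    # ensure the package is installed before its dumped-out config
    # gets its ownership/mode corrected
    puppet apply -e '
      package { "somepkg": ensure => present }
      file { "/etc/somepkg/somepkg.conf":
        ensure  => file,
        mode    => "0644",
        require => Package["somepkg"],
      }
    '

This only orders the two resources; unlike onlyif on an exec, the file resource is always managed once the package is in place.)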
[17:39:19] No idea
[17:39:34] it's broken in some way
[17:41:22] ah. had a broken instance
[17:50:09] Ryan_Lane: https://launchpad.net/~jerith/+archive/vumi-snapshots/+packages
[17:54:23] preilly: dpkg-source -x blah.dsc
[17:55:16] preilly: dpkg-source -x blah.dsc
[18:00:13] RECOVERY Total Processes is now: OK on vumi i-000001e5 output: PROCS OK: 91 processes
[18:00:43] RECOVERY dpkg-check is now: OK on vumi i-000001e5 output: All packages OK
[18:01:53] RECOVERY Current Load is now: OK on vumi i-000001e5 output: OK - load average: 0.07, 0.73, 0.60
[18:02:33] RECOVERY Current Users is now: OK on vumi i-000001e5 output: USERS OK - 1 users currently logged in
[18:03:13] RECOVERY Disk Space is now: OK on vumi i-000001e5 output: DISK OK
[18:03:53] RECOVERY Free ram is now: OK on vumi i-000001e5 output: OK: 89% free memory
[18:04:58] preilly: dpkg-buildpackage -S -rfakeroot
[18:05:06] Ryan_Lane: yeah
[18:05:22] Ryan_Lane: ImportError: cannot import name setup
[18:05:23] dh_auto_clean: python setup.py clean -a returned exit code 1
[18:05:24] make: *** [clean] Error 1
[18:05:24] dpkg-buildpackage: error: fakeroot debian/rules clean gave error exit status 2
[18:17:28] preilly: Is something broken?
[18:17:52] jerith: no
[18:18:03] jerith: what do I need to change to test these?
[18:18:10] jerith: they've installed correctly
[18:18:36] You need to do some rabbitmq setup.
[18:19:23] jerith: do you have anytime to log in to the VM named vumi and do that?
[18:20:39] s/anytime/any time/
[18:20:45] I don't have my laptop handy.
[18:21:38] jerith: damn
[18:21:47] jerith: but, I understand
[18:21:57] jerith: and greatly appreciate all of your help
[18:22:24] If you look at the github repo at github.com/praekelt/vumi there is a rabbit setup script in the utils dit.
[18:22:57] rabbitmqctl add_user vumi vumi
[18:22:57] rabbitmqctl add_vhost /develop
[18:22:58] rabbitmqctl set_permissions -p /develop vumi '.*' '.*' '.*'
[18:22:59] *dir
[18:23:11] Yes, that.
[18:23:56] jerith: okay, what else?
[18:24:05] You can probably puppet that.
[18:24:26] jerith: yeah
[18:25:08] Then you should be able to set up the app.
[18:26:15] jerith: if I knew how
[18:26:27] ok. code deploy on labsconsole....
[18:27:08] There's a first-steps tutorial at vumi.rtfd.org
[18:27:50] <^demon> Ryan_Lane: OpenStackManager is in git now btw :)
[18:27:58] oh. cool
[18:28:03] last deploy from svn, then :)
[18:28:16] That will tell you if messages are doing the right thing.
[18:28:31] jerith: http://vumi.readthedocs.org/en/latest/getting-started.html
[18:29:03] PROBLEM Free ram is now: WARNING on bots-2 i-0000009c output: Warning: 19% free memory
[18:29:16] jerith: it says, "twistd -n --pidfile=telnettransport.pid start_worker --worker-class vumi.transports.telnet.TelnetServerTransport --set-option=transport_name:telnet --set-option=telnet_port:9010"
[18:29:17] That seems reasonable.
[18:29:26] Yes, that.
[18:29:37] jerith: I want to start telnet?
[18:30:46] This is a tutorial test thing, not the real thing.
[18:30:49] jerith: okay so I did that and telnet'ed
[18:30:55] jerith: and it worked
[18:31:12] Cool.
[18:31:30] Now it gets complicated.
[18:32:26] And easier to do with config files than all on the command line.
[18:33:12] Can you get at the configs on the labs machine we were using before?
[18:33:23] jerith: I've also rebuilt the packages on our box
[18:33:32] jerith: as we don't trust third party repos
[18:33:40] jerith: where are those files located?
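
(Collecting the packaging commands Ryan_Lane fed preilly above into one sequence, since they arrive scattered through the conversation. A sketch of the standard Debian source-rebuild loop; blah.dsc is the placeholder name from the chat, and the unpacked directory depends on the real package name:

    # unpack the source package (needs the .dsc plus its tarballs alongside)
    dpkg-source -x blah.dsc
    cd blah-*/

    # rebuild a source-only package; fakeroot fakes root privileges for
    # the clean/build targets in debian/rules
    dpkg-buildpackage -S -rfakeroot

Note that preilly's "ImportError: cannot import name setup" fired during debian/rules clean, before any build step ran, so it points at the package's setup.py rather than at the build itself.)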
[18:34:10] /var/praekelt/something
[18:34:42] is it /var/praekelt/vumi/vumi/config
[18:35:40] There will be one each for the gtalk transports and one for the wiki app
[18:35:51] yaml files
[18:36:23] -rw-r--r-- 1 vumi vumi 148 2012-02-24 18:13 wikipedia_xmpp_sms.yaml
[18:36:24] -rw-r--r-- 1 vumi vumi 134 2012-02-24 18:13 wikipedia_xmpp.yaml
[18:36:36] Those are the ones.
[18:36:39] -rw-r--r-- 1 vumi vumi 3469 2012-02-24 18:22 supervisord.wikipedia.conf
[18:37:09] jerith: cd: /var/praekelt/vumi/: No such file or directory
[18:37:16] jerith: on the test VM
[18:37:57] You can probably put that in /etc/vumi or something.
[18:39:23] Everything's in system instead of var now.
[18:42:08] jerith: okay, I got those files in /etc/vumi now
[18:45:17] If you want to use supervisord, you can update the .conf to point at where the configs are now.
[18:47:26] --config=./config/wikipedia.yaml
[18:47:29] becomes
[18:47:38] --config=/etc/vumi/wikipedia.yaml
[18:49:42] If you'd prefer to use some other process monitor, you can ignore the supervisor conf and just use the command lines from it.
[18:50:18] jerith: supervisor is fine
[18:51:22] jerith: okay, so I changed those lines; now what?
[18:53:44] You should probably change log paths and stuff too.
[18:54:24] Then supervisord -nc /etc/vumi/thingy
[18:54:43] Where thingy is the sup'rd config.
[18:58:32] If everything goes according to plan, you now have the app up and running on gtalk.
[18:58:35] environment=PYTHONPATH=/var/praekelt/vumi/vumi-wikipedia/
[18:59:00] Ah. Kill that.
[18:59:43] It was a hack because we had no setup.py back then.
[19:00:11] Error: Cannot open an HTTP server: socket.error reported errno.ENOENT (2)
[19:03:13] jerith: any ideas?
[19:04:07] what are the supervisord socket lines up at the top of the config?
[19:04:56] jerith: it was the sock file
[19:05:25] Does that point somewhere sane?
[19:05:39] jerith: now it does
[19:05:54] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/4157
[19:05:57] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4157
[19:06:05] jerith: 2012-04-06 19:04:56,913 INFO spawned: 'wikipedia_xmpp_transport_1' with pid 5857
[19:06:05] 2012-04-06 19:04:57,587 INFO exited: wikipedia_xmpp_transport_1 (exit status 1; not expected)
[19:06:07] 2012-04-06 19:04:57,587 INFO gave up: wikipedia_xmpp_transport_1 entered FATAL state, too many start retries too quickly
[19:06:08] 2012-04-06 19:04:57,588 INFO exited: wikipedia_worker_0 (exit status 1; not expected)
[19:06:10] 2012-04-06 19:04:57,588 INFO gave up: wikipedia_worker_0 entered FATAL state, too many start retries too quickly
[19:06:11] 2012-04-06 19:04:57,588 INFO exited: wikipedia_xmpp_transport_sms_1 (exit status 1; not expected)
[19:06:11] 2012-04-06 19:04:58,590 INFO gave up: wikipedia_xmpp_transport_sms_1 entered FATAL state, too many start retries too quickly
[19:07:17] Look at the logs for one of the workers.
[19:08:17] jerith: it was a pid issue
[19:08:32] \o/
[19:08:54] jerith: so, the gtalk instances are active
[19:09:05] jerith: but, don't seem to respond to anything
[19:10:02] You need to talk to the USSD one.
[19:11:34] Also, check the logs and see if the xmpp messages are getting to the transports.
[19:13:22] jerith: I see
[19:13:23] 2012-04-06 19:07:47+0000 [WorkerAMQClient,client] XMPPTransport wikipedia_xmpp started.
[19:13:24] 2012-04-06 19:07:48+0000 [Uninitialized] Starting Wikipedia_xmppOutboundDynamicConsumer with {'exchange_name': 'vumi', 'queue_name': 'wikipedia_xmpp.outbound', 'routing_key': 'wikipedia_xmpp.outbound', 'exchange_type': 'direct', 'start_paused': False, 'durable': True}
[19:13:24] 2012-04-06 19:07:48+0000 [WorkerAMQClient,client] Consumer starting...
[19:14:13] That doesn't look happy.
[19:14:34] Is debug on in the xmpp configs?
[19:15:07] jerith: no
[19:15:14] Also, stop it on the old labs machine if it's running there.
[19:15:19] jerith: I did
[19:15:26] Turn on debug.
[19:15:34] debug: true
[19:15:50] jerith: like that ^^
[19:16:13] I'm not entirely sure.
[19:16:31] You might have to look at the code.
[19:17:46] vumi/transports/xmpp/something in github.
[19:18:46] :type debug: bool
[19:18:47] :param debug:
[19:18:48] Whether or not to show all the XMPP traffic. Defaults to False.
[19:19:19] Yes, then you had it right.
[19:19:55] 2012-04-06 19:19:15+0000 [XmlStream,client] RECV: ''
[19:20:45] jerith:
[19:20:45] 2012-04-06 19:20:22+0000 [XmlStream,client] RECV: 'test'
[19:21:09] Did you get a response?
[19:21:15] jerith: nope
[19:21:35] Maybe look at the worker's log.
[19:22:13] I'm not sure if the app actually logs anything useful.
[19:22:19] root@vumi:/var/log/vumi# tail -f wikipedia_worker_0.log
[19:22:19] --- ---
[19:22:21] File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 823, in _inlineCallbacks
[19:22:22] result = g.send(result)
[19:22:24] File "/usr/lib/pymodules/python2.6/vumi_wikipedia/wikipedia.py", line 113, in consume_user_message
[19:22:25] session = self.session_manager.load_session(user_id)
[19:22:27] File "/usr/lib/python2.6/dist-packages/vumi/application/session.py", line 73, in load_session
[19:22:28] return self.r_server.hgetall(ukey)
[19:22:28] exceptions.AttributeError: 'Redis' object has no attribute 'hgetall'
[19:22:39] Ah.
[19:22:44] Um.
[19:23:27] Maybe the redis client library is too old.
[19:23:44] python-redis?
[19:23:46] I didn't think of that.
[19:23:50] Yes.
[19:24:25] Maybe see if there's a backport of a newer one?
[19:25:43] PROBLEM host: shop-analytics-main1 is DOWN address: i-000001d0 check_ping: Invalid hostname/address - i-000001d0
[19:27:03] We need at least redis 2.0 support.
[19:27:15] jerith: ii python-redis 0.6.1-1 Persistent key-value database with network interface (Python library)
[19:27:53] Is that the one in Lucid?
[19:29:25] jerith: Setting up python-redis (2.4.5-1~ppa1) ...
[19:29:37] jerith: yes, that is the lucid one
[19:30:25] Cool. Where is the new one from?
[19:30:26] jerith: okay, that fixed it
[19:30:40] jerith: https://launchpad.net/~cmsj/+archive/redis-stable/+packages
[19:30:51] Cool.
[19:32:49] Is it all working now?
[19:33:44] PROBLEM Current Load is now: CRITICAL on shop-analytics-main i-000001e6 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[19:34:24] PROBLEM Current Users is now: CRITICAL on shop-analytics-main i-000001e6 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[19:34:48] jerith: yes
[19:35:01] Yay!
[19:35:04] PROBLEM Disk Space is now: CRITICAL on shop-analytics-main i-000001e6 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[19:35:12] jerith: I'm just building a local version of python-redis_2.4.5-1~ppa1.dsc
[19:35:21] Cool.
[19:35:44] PROBLEM Free ram is now: CRITICAL on shop-analytics-main i-000001e6 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[19:36:11] Do you people maintain a lot of debs?
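
(The traceback above is the stock Lucid client library predating Redis hashes: python-redis 0.6.1 has no hgetall(), while vumi's session manager needs a redis 2.0-capable client, as jerith says. A sketch of the check and the upgrade path preilly used; it assumes add-apt-repository is available (on Lucid it comes with python-software-properties), with the PPA taken from the link above:

    # confirm the stale client: Lucid ships python-redis 0.6.1-1
    dpkg-query -W python-redis

    # pull 2.4.5-1~ppa1 from the PPA (or rebuild the .dsc locally, as
    # preilly went on to do, to avoid trusting a third-party repo)
    sudo add-apt-repository ppa:cmsj/redis-stable
    sudo apt-get update
    sudo apt-get install python-redis

)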
[19:36:26] jerith: a few
[19:36:54] PROBLEM Total Processes is now: CRITICAL on shop-analytics-main i-000001e6 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[19:37:23] Because you can probably update the snapshot packages yourself if you need to.
[19:37:34] PROBLEM dpkg-check is now: CRITICAL on shop-analytics-main i-000001e6 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[19:37:34] PROBLEM Disk Space is now: CRITICAL on aggregator1 i-0000010c output: DISK CRITICAL - free space: / 248 MB (2% inode=93%):
[19:38:12] We should have a vumi 0.4 release soonish and then you can switch to that.
[19:38:55] jerith: okay
[19:39:34] * jerith disappears for a bit.
[19:39:50] I'll check in again later.
[19:42:45] jerith: okay, cool
[19:55:23] preilly: wikimedia-task-dns-auth (0.18) hardy-wikimedia; urgency=low
[19:56:03] Ryan_Lane: so, python-iso8601 (0.1.4-0) lucid-wikimedia; urgency=low
[19:56:09] yep
[19:59:29] Ryan_Lane:
[19:59:30] class mobile::vumi {
[19:59:30] package { "python-iso8601":
[19:59:31] ensure => "0.1.4-0" }
[19:59:32] }
[19:59:37] * andrewbogott just got an eye exam and, hence, can't really read any more.
[19:59:45] :D
[19:59:50] So I guess I'll be out for the next few hours and/or rest of the day.
[20:00:25] * Ryan_Lane nods
[20:00:26] ok
[20:00:40] * andrewbogott just assumes he is typing into an appropriate channel
[20:16:53] preilly: for some of these, latest may be better than the version number
[20:32:48] RECOVERY Disk Space is now: OK on aggregator1 i-0000010c output: DISK OK
[20:44:11] PROBLEM Free ram is now: CRITICAL on bots-2 i-0000009c output: Critical: 5% free memory
[20:54:08] PROBLEM Free ram is now: WARNING on bots-2 i-0000009c output: Warning: 6% free memory
[20:59:09] PROBLEM Free ram is now: CRITICAL on bots-2 i-0000009c output: Critical: 5% free memory
[21:00:48] PROBLEM Disk Space is now: CRITICAL on aggregator1 i-0000010c output: DISK CRITICAL - free space: / 0 MB (0% inode=93%):
[21:20:48] RECOVERY Disk Space is now: OK on aggregator1 i-0000010c output: DISK OK
[21:33:18] New patchset: Sara; "nth iteration of adding ganglia for labs." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4489
[21:33:30] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/4489
[21:34:00] New review: Sara; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/4489
[21:34:02] Change merged: Sara; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4489
[21:53:43] PROBLEM Current Load is now: CRITICAL on aggregator2 i-000001e7 output: Connection refused by host
[21:54:23] PROBLEM Current Users is now: CRITICAL on aggregator2 i-000001e7 output: Connection refused by host
[21:55:03] PROBLEM Disk Space is now: CRITICAL on aggregator2 i-000001e7 output: Connection refused by host
[21:55:43] PROBLEM Free ram is now: CRITICAL on aggregator2 i-000001e7 output: Connection refused by host
[21:56:53] PROBLEM Total Processes is now: CRITICAL on aggregator2 i-000001e7 output: Connection refused by host
[21:57:33] PROBLEM dpkg-check is now: CRITICAL on aggregator2 i-000001e7 output: Connection refused by host
[22:04:08] PROBLEM Free ram is now: WARNING on bots-2 i-0000009c output: Warning: 7% free memory
[22:13:43] PROBLEM Current Load is now: CRITICAL on aggregator2 i-000001e8 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[22:14:23] PROBLEM Current Users is now: CRITICAL on aggregator2 i-000001e8 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[22:15:03] PROBLEM Disk Space is now: CRITICAL on aggregator2 i-000001e8 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[22:15:43] PROBLEM Free ram is now: CRITICAL on aggregator2 i-000001e8 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[22:16:53] PROBLEM Total Processes is now: CRITICAL on aggregator2 i-000001e8 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[22:17:33] PROBLEM dpkg-check is now: CRITICAL on aggregator2 i-000001e8 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[23:32:11] New patchset: Sara; "Another iteration of adding ganglia for labs." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4496
[23:32:23] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/4496
[23:34:41] New review: Sara; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/4496
[23:34:43] Change merged: Sara; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4496
[23:53:43] PROBLEM Current Load is now: CRITICAL on login-test i-000001e9 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[23:54:23] PROBLEM Current Users is now: CRITICAL on login-test i-000001e9 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[23:55:03] PROBLEM Disk Space is now: CRITICAL on login-test i-000001e9 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[23:55:43] PROBLEM Free ram is now: CRITICAL on login-test i-000001e9 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[23:56:53] PROBLEM Total Processes is now: CRITICAL on login-test i-000001e9 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[23:57:33] PROBLEM dpkg-check is now: CRITICAL on login-test i-000001e9 output: CHECK_NRPE: Error - Could not complete SSL handshake.
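
(A pattern that runs through this whole log: each newly created instance — vumi, shop-analytics-main, aggregator2, login-test — opens with a burst of "CHECK_NRPE: Error - Could not complete SSL handshake" or "Connection refused" alerts, and the vumi ones recover about half an hour later, consistent with the NRPE daemon simply not being installed and configured until the instance's first Puppet run completes. A sketch of how one might confirm that by hand; the plugin path is the usual Debian/Ubuntu location, and login-test is just the most recent example from the log:

    # from the monitoring host: with no -c argument, check_nrpe only asks
    # the remote daemon for its version, isolating the handshake problem
    /usr/lib/nagios/plugins/check_nrpe -H login-test

    # on the instance itself: is the daemon running and listening on 5666?
    service nagios-nrpe-server status
    netstat -lntp | grep 5666

)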