[01:26:52] 06Labs, 10MediaWiki-Vagrant: Git Pull failed for SecurePoll extension using MediaWiki-vagrant - https://phabricator.wikimedia.org/T147958#2715192 (10Huji) 05Open>03declined It was a dirty clone. I decided to recreate the instance using MediaWiki-vagrant. [04:11:25] Change on 12wikitech.wikimedia.org a page Nova Resource:Tools/Access Request/Lpeters4umd was modified, changed by Tim Landscheidt link https://wikitech.wikimedia.org/w/index.php?diff=900150 edit summary: [07:11:05] Change on 12wikitech.wikimedia.org a page Nova Resource:Tools/Access Request/Mcawish2016 was created, changed by Mcawish2016 link https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Access_Request/Mcawish2016 edit summary: Created page with "{{Tools Access Request |Justification=to see page view record |Completed=false |User Name=Mcawish2016 }}" [09:29:56] Change on 12www.mediawiki.org a page Wikimedia Labs/Tool Labs/List of Toolserver Tools was modified, changed by Waldir link https://www.mediawiki.org/w/index.php?diff=2261767 edit summary: fix some metadata about Dispenser's tools [11:25:55] 06Labs, 10Labs-Infrastructure, 10DBA, 10MassMessage, and 3 others: mysqld process hang in db1069 - S2 mysql instance - https://phabricator.wikimedia.org/T145077#2715913 (10Marostegui) Looks like MariaDB assigned the bug to someone already (after me asking for an update): ``` Elena Stepanova reassigned MDEV-... [11:30:08] 06Labs, 10Labs-Infrastructure, 10DBA, 10MassMessage, and 3 others: mysqld process hang in db1069 - S2 mysql instance - https://phabricator.wikimedia.org/T145077#2715915 (10jcrespo) I think we specifically disabled parallel replication? Maybe multisource is the cause? In which case it is still a bug for me. 
[11:35:37] 06Labs, 10Labs-Infrastructure, 10DBA, 10MassMessage, and 3 others: mysqld process hang in db1069 - S2 mysql instance - https://phabricator.wikimedia.org/T145077#2715921 (10Marostegui) Yes - I am unsure why she said so: ``` MariaDB SANITARIUM localhost (none) > show global variables like 'slave_parallel_mo... [14:20:26] (03PS1) 10Gehel: maps - adding dummy passwords for postgresql monitoring and replication [labs/private] - 10https://gerrit.wikimedia.org/r/315959 (https://phabricator.wikimedia.org/T147194) [14:28:27] (03CR) 10Volans: [C: 031] "LGTM" [labs/private] - 10https://gerrit.wikimedia.org/r/315959 (https://phabricator.wikimedia.org/T147194) (owner: 10Gehel) [14:29:29] (03CR) 10Gehel: [C: 032] maps - adding dummy passwords for postgresql monitoring and replication [labs/private] - 10https://gerrit.wikimedia.org/r/315959 (https://phabricator.wikimedia.org/T147194) (owner: 10Gehel) [14:29:37] (03CR) 10Gehel: [V: 032] maps - adding dummy passwords for postgresql monitoring and replication [labs/private] - 10https://gerrit.wikimedia.org/r/315959 (https://phabricator.wikimedia.org/T147194) (owner: 10Gehel) [15:18:26] !log tools.sal Cleaned up !log spam from serial IRC griefer [15:18:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.sal/SAL, Master [15:26:32] (03CR) 10Ricordisamoa: [C: 032] Add and use SparqlElementProvider [labs/tools/ptable] - 10https://gerrit.wikimedia.org/r/315671 (https://phabricator.wikimedia.org/T122706) (owner: 10Ricordisamoa) [15:29:53] (03Merged) 10jenkins-bot: Add and use SparqlElementProvider [labs/tools/ptable] - 10https://gerrit.wikimedia.org/r/315671 (https://phabricator.wikimedia.org/T122706) (owner: 10Ricordisamoa) [15:36:22] anyone noticed there's some kind of outage on https://tools.wmflabs.org/geohack/geohack.php ? 
[15:36:51] nginx 'bad gateway' [15:38:44] I haven't, but I don't maintain geohack [15:38:53] You might try asking Magnus Manske or Kolossos [15:39:03] 10Tool-Labs-tools-Wikidata-Periodic-Table, 10Wikidata: Create a WDQS-based ElementProvider - https://phabricator.wikimedia.org/T122706#2716952 (10Ricordisamoa) 05Open>03Resolved Now deployed and working. [15:39:14] 10Labs-project-other, 10Wikimedia-Bugzilla: bugs.wmflabs: bad gateway bugzilla.wmflabs: can't connect to db - https://phabricator.wikimedia.org/T138883#2716956 (10Hydriz) 05Open>03Resolved a:03Hydriz http://bugs.wmflabs.org should be used, and that error is now resolved so it is usable now. [15:43:40] (03CR) 10Ricordisamoa: Add and use SparqlElementProvider (032 comments) [labs/tools/ptable] - 10https://gerrit.wikimedia.org/r/315671 (https://phabricator.wikimedia.org/T122706) (owner: 10Ricordisamoa) [15:48:40] don't know how to reach those people. Maybe one of these bug trackers is the place to report it [15:49:09] I'm not really from round here, so I'll leave it with you :-) bye! [15:49:36] nope [15:50:05] not my responsibility to keep geohack up [15:50:38] * Dispenser wonders when those two will add me as a maintainer again... [15:51:08] !log tools.geohack restart, seems to have stopped serving connections [15:51:24] harry-wood: should be back again [15:52:49] yuvipanda hi, it seems wm-bot has gone? [15:53:13] maybe? I've no idea about wm-bot :) [15:53:17] and the bot that logs [15:57:23] !log tools drain tools-worker-1012, seems stuck [15:57:27] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL, Master [15:58:02] chasemp: tools-worker-1012 is hung with cause unknown, I'm considering just rebooting [15:58:27] yuvipanda: ok, I don't have time today to look either [15:58:41] * yuvipanda nods [15:59:05] andrewbogott: ^ [16:00:49] yuvipanda: if you don't mind, leave it alone for a bit, I want to see if I can get anything from the console... 
[16:00:52] then I'll reboot it [16:01:33] RECOVERY - Host tools-secgroup-test-102 is UP: PING OK - Packet loss = 0%, RTA = 0.44 ms [16:01:43] andrewbogott: ok! [16:01:53] I haven't checked to see if it's the same problem, though. I only couldn't ssh, so drained it... [16:05:53] PROBLEM - Host tools-secgroup-test-102 is DOWN: CRITICAL - Host Unreachable (10.68.21.170) [16:09:47] matt_flaschen, I restarted udp2log-mw on deployment-fluorine02 again [16:10:04] so you can find the error you were running into [16:10:19] don't know what the proper fix for that udp2log-on-jessie issue is [16:16:01] yuvipanda: did tools-worker-1012 recover? It seems fine to me... [16:16:21] and nothing very interesting in dmesg [16:17:03] andrewbogott: yeah, seems ok now. not sure what happened [16:17:15] ok :) [16:17:29] want me to delete it anyway, or should we repool? [16:17:45] andrewbogott: let's repool and see if it happens again [16:18:19] ok — can you do that? (Only because I'll have to look up the instructions) [16:18:26] andrewbogott: sure [16:18:32] thx [16:19:57] !log tools repoooled tools-worker-1012, seems to have recovered?! [16:20:03] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL, Master [16:42:24] RECOVERY - Host tools-secgroup-test-103 is UP: PING OK - Packet loss = 0%, RTA = 0.75 ms [16:47:32] Thanks, Krenair. I appreciate it. [16:58:20] PROBLEM - Host tools-secgroup-test-103 is DOWN: CRITICAL - Host Unreachable (10.68.21.22) [17:05:51] PROBLEM - Puppet run on tools-services-02 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [17:06:51] PROBLEM - Puppet run on tools-services-01 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [17:13:37] Heh, from a meeting. 
"Every time you kill an unused instance, an angel gets its wings" [17:13:44] cc andrewbogott ^ :) [17:14:10] nodepool must be in the wing giving business [17:14:14] ;) [17:14:18] hehe [17:14:34] :) [17:17:01] RECOVERY - Host secgroup-lag-102 is UP: PING OK - Packet loss = 0%, RTA = 0.78 ms [17:22:00] PROBLEM - Host secgroup-lag-102 is DOWN: CRITICAL - Host Unreachable (10.68.17.218) [17:22:22] PROBLEM - Puppet run on tools-webgrid-lighttpd-1411 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [17:38:04] PROBLEM - SSH on tools-exec-1410 is CRITICAL: Server answer [17:41:34] PROBLEM - Puppet run on tools-webgrid-lighttpd-1403 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [17:47:49] PROBLEM - Puppet run on tools-webgrid-lighttpd-1416 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [17:48:03] Change on 12wikitech.wikimedia.org a page Nova Resource:Tools/Access Request/Nithum was created, changed by Nithum link https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Access_Request/Nithum edit summary: Created page with "{{Tools Access Request |Justification=To edit WikiDetox demo. 
|Completed=false |User Name=Nithum }}" [17:53:04] RECOVERY - SSH on tools-exec-1410 is OK: SSH OK - OpenSSH_6.9p1 Ubuntu-2~trusty1 (protocol 2.0) [18:04:03] PROBLEM - SSH on tools-exec-1410 is CRITICAL: Server answer [18:16:18] PROBLEM - Host tools-exec-cyberbot is DOWN: CRITICAL - Host Unreachable (10.68.16.39) [18:34:21] PROBLEM - Puppet run on tools-webgrid-lighttpd-1418 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [18:36:47] PROBLEM - Puppet run on tools-webgrid-lighttpd-1405 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [18:39:04] RECOVERY - SSH on tools-exec-1410 is OK: SSH OK - OpenSSH_6.9p1 Ubuntu-2~trusty1 (protocol 2.0) [19:22:20] RECOVERY - Puppet run on tools-webgrid-lighttpd-1411 is OK: OK: Less than 1.00% above the threshold [0.0] [19:31:48] RECOVERY - Puppet run on tools-webgrid-lighttpd-1405 is OK: OK: Less than 1.00% above the threshold [0.0] [19:34:19] RECOVERY - Puppet run on tools-webgrid-lighttpd-1418 is OK: OK: Less than 1.00% above the threshold [0.0] [19:41:33] RECOVERY - Puppet run on tools-webgrid-lighttpd-1403 is OK: OK: Less than 1.00% above the threshold [0.0] [19:44:27] chasemp: hello :] More Jenkins jobs are on Nodepool and it looks all fine! 
[19:44:44] chasemp: I even deleted 7 m1.large instances as a result [19:45:07] so in short, Nodepool/CI are all happy [19:45:10] thanks a ton :] [19:46:04] RECOVERY - Puppet run on tools-webgrid-lighttpd-1409 is OK: OK: Less than 1.00% above the threshold [0.0] [19:48:20] RECOVERY - Puppet run on tools-webgrid-lighttpd-1410 is OK: OK: Less than 1.00% above the threshold [0.0] [19:56:33] hashar: cool I've been watching a bit from the sidelines as you go [19:57:32] chasemp: at least I am confident I got all metrics to watch / track contention [20:10:59] hashar: hopefully OpenStack will not go down as in the past ;) [20:12:21] things are a bit more robust now on several fronts but we'll have to wait and see [20:12:22] PROBLEM - Puppet run on tools-webgrid-lighttpd-1210 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [20:12:35] Luke081515: ops tuned a bunch of default settings to enhance labs :] and really wmflabs is rather resilient/stable etc [20:12:36] PROBLEM - Puppet run on tools-webgrid-lighttpd-1403 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:12:57] I have been using labs since the early days, and all the improvement is really impressive [20:13:00] PROBLEM - Puppet run on tools-exec-1409 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [20:13:12] PROBLEM - Puppet run on tools-exec-gift is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:13:36] PROBLEM - Puppet run on tools-webgrid-lighttpd-1412 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:14:15] PROBLEM - Puppet run on tools-exec-1212 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:14:21] PROBLEM - Puppet run on tools-webgrid-lighttpd-1410 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:14:39] PROBLEM - Puppet run on tools-exec-1217 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:14:43] 
PROBLEM - Puppet run on tools-bastion-02 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:14:52] PROBLEM - Puppet run on tools-exec-1208 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:14:52] PROBLEM - Puppet run on tools-webgrid-lighttpd-1205 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:15:18] sounds nice :) [20:15:28] PROBLEM - Puppet run on tools-bastion-03 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:17:00] PROBLEM - Puppet run on tools-webgrid-generic-1402 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [20:17:04] PROBLEM - Puppet run on tools-webgrid-lighttpd-1409 is CRITICAL: CRITICAL: 62.50% of data above the critical threshold [0.0] [20:17:04] PROBLEM - Puppet run on tools-webgrid-lighttpd-1408 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:17:20] hashar definitely more stable, as all the tests that were enabled before the problem in August have been re-enabled :) [20:17:20] andrewbogott: puppet seems to be hanging again? ^ what fixed it last time? [20:17:28] PROBLEM - Puppet run on tools-webgrid-lighttpd-1209 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:17:29] Haven't noticed tests taking long now [20:17:43] chasemp: yeah, I'm looking at it. We didn't really figure out the cause last time [20:18:08] (Presumably something with the tools puppetmaster) [20:18:24] chasemp: still just things in tools, right? 
[20:18:29] andrewbogott: afaik [20:18:48] PROBLEM - Puppet run on tools-webgrid-lighttpd-1415 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:18:50] PROBLEM - Puppet run on tools-webgrid-lighttpd-1416 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:19:26] PROBLEM - Puppet run on tools-cron-01 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:20:01] the puppetmaster is clearly still running, and the cpu isn't pegged. So I don't know why it's so slow [20:21:59] PROBLEM - Puppet run on tools-webgrid-lighttpd-1401 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:22:51] PROBLEM - Puppet run on tools-webgrid-lighttpd-1404 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:22:51] andrewbogott: I'm tempted to do a force update for the git repo there and reboot as voodoo [20:23:03] I did the force update already... [20:23:11] I'm going to restart the puppetmaster first and see what that gets us [20:23:19] PROBLEM - Puppet run on tools-webgrid-lighttpd-1411 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:23:22] (other than making everything freak out in the meantime) [20:23:58] 'service puppetmaster stop' is hanging, which seems like it might mean something :) [20:24:40] PROBLEM - Puppet run on tools-docker-registry-01 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [20:24:40] PROBLEM - Puppet run on tools-checker-02 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:25:01] PROBLEM - Puppet run on tools-exec-1201 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [20:25:13] PROBLEM - Puppet run on tools-exec-1221 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:25:15] PROBLEM - Puppet run on tools-precise-dev is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:25:19] PROBLEM - Puppet run on 
tools-webgrid-lighttpd-1418 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:25:25] PROBLEM - Puppet run on tools-exec-1403 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [20:25:26] PROBLEM - Puppet run on tools-exec-1210 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:25:29] PROBLEM - Puppet run on tools-webgrid-lighttpd-1407 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:25:30] PROBLEM - Puppet run on tools-worker-1002 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:25:31] PROBLEM - Puppet run on tools-k8s-etcd-01 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:25:37] PROBLEM - Puppet run on tools-bastion-05 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [20:25:43] ^ my fault! [20:25:48] PROBLEM - Puppet run on tools-worker-1012 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [20:25:50] PROBLEM - Puppet run on tools-proxy-01 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [20:25:54] PROBLEM - Puppet run on tools-exec-1205 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [20:26:00] PROBLEM - Puppet run on tools-worker-1011 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [20:26:02] PROBLEM - Puppet run on tools-mail-01 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [20:26:05] PROBLEM - Puppet run on tools-webgrid-generic-1401 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:26:05] PROBLEM - Puppet run on tools-webgrid-lighttpd-1402 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:26:09] PROBLEM - Puppet run on tools-exec-1206 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:26:11] PROBLEM - Puppet run on tools-grid-master is CRITICAL: CRITICAL: 33.33% of data above 
the critical threshold [0.0] [20:26:11] PROBLEM - Puppet run on tools-worker-1016 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:26:35] PROBLEM - Puppet run on tools-exec-1213 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:26:41] PROBLEM - Puppet run on tools-exec-1402 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:26:49] PROBLEM - Puppet run on tools-webgrid-lighttpd-1201 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [20:26:51] chasemp: this puppetmaster is using a class (and a service) that yuvi is in the process of deprecating. So most likely this box is slated for a rebuild sometime soon... [20:26:55] PROBLEM - Puppet run on tools-worker-1005 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:26:59] PROBLEM - Puppet run on tools-exec-1215 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:27:01] PROBLEM - Puppet run on tools-checker-01 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [20:27:01] (It's still using puppet::self instead of ::standalone) [20:27:03] PROBLEM - Puppet run on tools-elastic-02 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:27:03] PROBLEM - Puppet run on tools-webgrid-generic-1404 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:27:09] andrewbogott: hm ok [20:27:17] PROBLEM - Puppet run on tools-exec-1203 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [20:27:19] PROBLEM - Puppet run on tools-k8s-master-02 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [20:27:21] PROBLEM - Puppet run on tools-webgrid-lighttpd-1206 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:27:23] PROBLEM - Puppet run on tools-exec-1202 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:27:26] PROBLEM - Puppet run on 
tools-worker-1007 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:27:38] PROBLEM - Puppet run on tools-worker-1013 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:27:40] PROBLEM - Puppet run on tools-webgrid-lighttpd-1413 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:27:42] PROBLEM - Puppet run on tools-k8s-master-01 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [20:27:48] PROBLEM - Puppet run on tools-exec-1408 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:27:48] chasemp: I think restarting the puppetmaster fixed it for now [20:27:50] PROBLEM - Puppet run on tools-webgrid-lighttpd-1405 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:27:52] PROBLEM - Puppet run on tools-puppetmaster-02 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:27:54] PROBLEM - Puppet run on tools-worker-1006 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [20:27:58] PROBLEM - Puppet run on tools-k8s-etcd-03 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [20:28:04] PROBLEM - Puppet run on tools-worker-1020 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [20:28:08] PROBLEM - Puppet run on tools-webgrid-lighttpd-1208 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:28:12] PROBLEM - Puppet run on tools-webgrid-lighttpd-1204 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:28:14] PROBLEM - Puppet run on tools-exec-1407 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:28:14] PROBLEM - Puppet run on tools-flannel-etcd-01 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:28:16] PROBLEM - Puppet run on tools-static-11 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:28:17] PROBLEM - Puppet 
run on tools-webgrid-lighttpd-1406 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:28:19] PROBLEM - Puppet run on tools-worker-1008 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:28:23] PROBLEM - Puppet run on tools-grid-shadow is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:28:25] PROBLEM - Puppet run on tools-exec-1214 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:28:34] PROBLEM - Puppet run on tools-exec-1220 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:28:35] PROBLEM - Puppet run on tools-exec-1401 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:28:36] PROBLEM - Puppet run on tools-worker-1009 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:28:38] PROBLEM - Puppet run on tools-exec-1406 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [20:28:39] PROBLEM - Puppet run on tools-worker-1021 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:28:41] PROBLEM - Puppet run on tools-exec-1209 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:28:42] PROBLEM - Puppet run on tools-proxy-02 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:28:42] PROBLEM - Puppet run on tools-worker-1018 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:28:46] PROBLEM - Puppet run on tools-puppetmaster-01 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [20:28:48] PROBLEM - Puppet run on tools-exec-1405 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [20:28:52] PROBLEM - Puppet run on tools-mail is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:28:52] PROBLEM - Puppet run on tools-elastic-01 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:29:04] PROBLEM - 
Puppet run on tools-redis-1002 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:29:10] PROBLEM - Puppet run on tools-prometheus-02 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:29:25] PROBLEM - Puppet run on tools-logs-02 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:29:32] PROBLEM - Puppet run on tools-prometheus-01 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:29:32] PROBLEM - Puppet run on tools-worker-1017 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:29:32] PROBLEM - Puppet run on tools-redis-1001 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [20:29:34] PROBLEM - Puppet run on tools-worker-1023 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:29:40] PROBLEM - Puppet run on tools-exec-1218 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:29:40] PROBLEM - Puppet run on tools-webgrid-generic-1403 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:29:48] PROBLEM - Puppet run on tools-worker-1001 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:29:48] PROBLEM - Puppet run on tools-worker-1010 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:30:02] PROBLEM - Puppet run on tools-exec-1219 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [20:30:18] 06Labs, 10Tool-Labs: Tools puppet runs hanging - https://phabricator.wikimedia.org/T148244#2718154 (10Andrew) [20:30:18] PROBLEM - Puppet run on tools-k8s-etcd-02 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:30:18] PROBLEM - Puppet run on tools-exec-1207 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:30:20] PROBLEM - Puppet run on tools-flannel-etcd-02 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] 
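[Editor's note] The remediation discussed above — a force update of the puppetmaster's git checkout, then restarting the puppetmaster service, then verifying an agent run from a client — can be sketched roughly as follows. This is a hedged reconstruction, not a record of the exact commands used: the host (tools-puppetmaster-01), repo path, and init-style service name are assumptions based on the conversation (Tools still ran the older puppet::self setup at this time, so details may differ).

```shell
# On the project puppetmaster (assumed: tools-puppetmaster-01).
# 1. Check whether the master process is alive and whether the CPU is pegged.
sudo service puppetmaster status
uptime

# 2. "Force update" the puppet git checkout (assumed repo location).
cd /var/lib/git/operations/puppet
sudo git fetch origin
sudo git reset --hard origin/production

# 3. Restart the master. Note that during this incident even
#    'service puppetmaster stop' hung, which itself was a clue.
sudo service puppetmaster restart

# 4. From any client node in the project, confirm an agent run completes.
sudo puppet agent --test
```

Restarting the master briefly makes every client's monitored puppet run fail (the "freak out in the meantime" above), which is why the alert flood below is followed by a matching wave of recoveries.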
[20:30:28] PROBLEM - Puppet run on tools-exec-1404 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:36:07] 10Tool-Labs-tools-Wikidata-Periodic-Table, 10Wikidata: Create a WDQS-based ElementProvider - https://phabricator.wikimedia.org/T122706#2718182 (10ArthurPSmith) I see you've closed - looks good by the way. Anyway, on the question of retaining WDQ - no I don't think that's necessary, I think Magnus would like to... [20:37:39] RECOVERY - Puppet run on tools-webgrid-lighttpd-1413 is OK: OK: Less than 1.00% above the threshold [0.0] [20:38:11] RECOVERY - Puppet run on tools-exec-gift is OK: OK: Less than 1.00% above the threshold [0.0] [20:38:47] RECOVERY - Puppet run on tools-puppetmaster-01 is OK: OK: Less than 1.00% above the threshold [0.0] [20:39:40] RECOVERY - Puppet run on tools-webgrid-generic-1403 is OK: OK: Less than 1.00% above the threshold [0.0] [20:40:23] RECOVERY - Puppet run on tools-exec-1403 is OK: OK: Less than 1.00% above the threshold [0.0] [20:40:38] RECOVERY - Puppet run on tools-bastion-05 is OK: OK: Less than 1.00% above the threshold [0.0] [20:41:34] RECOVERY - Puppet run on tools-exec-1213 is OK: OK: Less than 1.00% above the threshold [0.0] [20:41:41] RECOVERY - Puppet run on tools-exec-1402 is OK: OK: Less than 1.00% above the threshold [0.0] [20:42:05] RECOVERY - Puppet run on tools-webgrid-generic-1404 is OK: OK: Less than 1.00% above the threshold [0.0] [20:42:23] RECOVERY - Puppet run on tools-exec-1202 is OK: OK: Less than 1.00% above the threshold [0.0] [20:42:49] RECOVERY - Puppet run on tools-webgrid-lighttpd-1405 is OK: OK: Less than 1.00% above the threshold [0.0] [20:42:53] RECOVERY - Puppet run on tools-puppetmaster-02 is OK: OK: Less than 1.00% above the threshold [0.0] [20:43:29] RECOVERY - Puppet run on tools-exec-1401 is OK: OK: Less than 1.00% above the threshold [0.0] [20:44:09] RECOVERY - Puppet run on tools-prometheus-02 is OK: OK: Less than 1.00% above the threshold [0.0] [20:45:12] 
RECOVERY - Puppet run on tools-exec-1221 is OK: OK: Less than 1.00% above the threshold [0.0] [20:45:15] RECOVERY - Puppet run on tools-precise-dev is OK: OK: Less than 1.00% above the threshold [0.0] [20:45:19] RECOVERY - Puppet run on tools-webgrid-lighttpd-1418 is OK: OK: Less than 1.00% above the threshold [0.0] [20:45:27] RECOVERY - Puppet run on tools-exec-1404 is OK: OK: Less than 1.00% above the threshold [0.0] [20:46:09] RECOVERY - Puppet run on tools-exec-1206 is OK: OK: Less than 1.00% above the threshold [0.0] [20:47:23] RECOVERY - Puppet run on tools-webgrid-lighttpd-1206 is OK: OK: Less than 1.00% above the threshold [0.0] [20:47:27] RECOVERY - Puppet run on tools-worker-1007 is OK: OK: Less than 1.00% above the threshold [0.0] [20:47:37] RECOVERY - Puppet run on tools-worker-1013 is OK: OK: Less than 1.00% above the threshold [0.0] [20:48:14] RECOVERY - Puppet run on tools-exec-1407 is OK: OK: Less than 1.00% above the threshold [0.0] [20:48:16] RECOVERY - Puppet run on tools-flannel-etcd-01 is OK: OK: Less than 1.00% above the threshold [0.0] [20:48:16] RECOVERY - Puppet run on tools-static-11 is OK: OK: Less than 1.00% above the threshold [0.0] [20:48:28] RECOVERY - Puppet run on tools-exec-1214 is OK: OK: Less than 1.00% above the threshold [0.0] [20:48:38] RECOVERY - Puppet run on tools-exec-1406 is OK: OK: Less than 1.00% above the threshold [0.0] [20:48:44] RECOVERY - Puppet run on tools-proxy-02 is OK: OK: Less than 1.00% above the threshold [0.0] [20:49:04] RECOVERY - Puppet run on tools-redis-1002 is OK: OK: Less than 1.00% above the threshold [0.0] [20:49:30] RECOVERY - Puppet run on tools-prometheus-01 is OK: OK: Less than 1.00% above the threshold [0.0] [20:49:34] RECOVERY - Puppet run on tools-worker-1023 is OK: OK: Less than 1.00% above the threshold [0.0] [20:49:38] RECOVERY - Puppet run on tools-checker-02 is OK: OK: Less than 1.00% above the threshold [0.0] [20:49:48] RECOVERY - Puppet run on tools-worker-1001 is OK: OK: Less than 
1.00% above the threshold [0.0]
[20:50:02] RECOVERY - Puppet run on tools-exec-1201 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:50:25] RECOVERY - Puppet run on tools-exec-1210 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:50:29] RECOVERY - Puppet run on tools-worker-1002 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:51:03] RECOVERY - Puppet run on tools-mail-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:52:17] RECOVERY - Puppet run on tools-k8s-master-02 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:52:33] RECOVERY - Puppet run on tools-webgrid-lighttpd-1403 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:52:42] RECOVERY - Puppet run on tools-k8s-master-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:53:36] RECOVERY - Puppet run on tools-webgrid-lighttpd-1412 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:53:50] RECOVERY - Puppet run on tools-mail is OK: OK: Less than 1.00% above the threshold [0.0]
[20:54:50] RECOVERY - Puppet run on tools-webgrid-lighttpd-1205 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:54:50] RECOVERY - Puppet run on tools-exec-1208 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:54:58] PROBLEM - Free space - all mounts on tools-worker-1003 is CRITICAL: CRITICAL: tools.tools-worker-1003.diskspace._var_lib_docker.byte_percentfree (No valid datapoints found) tools.tools-worker-1003.diskspace._public_dumps.byte_percentfree (No valid datapoints found) tools.tools-worker-1003.diskspace.root.byte_percentfree (<30.00%)
[20:55:20] RECOVERY - Puppet run on tools-flannel-etcd-02 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:55:52] RECOVERY - Puppet run on tools-services-02 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:56:52] RECOVERY - Puppet run on tools-services-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:57:22] RECOVERY - Puppet run on tools-webgrid-lighttpd-1210 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:57:24] RECOVERY - Puppet run on tools-webgrid-lighttpd-1209 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:58:01] RECOVERY - Puppet run on tools-exec-1409 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:58:52] RECOVERY - Puppet run on tools-elastic-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:59:15] RECOVERY - Puppet run on tools-exec-1212 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:59:29] RECOVERY - Puppet run on tools-worker-1017 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:59:35] RECOVERY - Puppet run on tools-exec-1217 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:59:41] RECOVERY - Puppet run on tools-bastion-02 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:00:17] RECOVERY - Puppet run on tools-k8s-etcd-02 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:00:33] RECOVERY - Puppet run on tools-k8s-etcd-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:00:48] RECOVERY - Puppet run on tools-worker-1012 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:01:02] RECOVERY - Puppet run on tools-worker-1011 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:01:15] RECOVERY - Puppet run on tools-grid-master is OK: OK: Less than 1.00% above the threshold [0.0]
[21:01:16] RECOVERY - Puppet run on tools-worker-1016 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:01:58] RECOVERY - Puppet run on tools-webgrid-generic-1402 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:02:04] RECOVERY - Puppet run on tools-webgrid-lighttpd-1409 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:02:06] RECOVERY - Puppet run on tools-webgrid-lighttpd-1408 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:02:52] RECOVERY - Puppet run on tools-worker-1006 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:03:02] RECOVERY - Puppet run on tools-worker-1020 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:03:24] RECOVERY - Puppet run on tools-grid-shadow is OK: OK: Less than 1.00% above the threshold [0.0]
[21:03:29] RECOVERY - Puppet run on tools-exec-1220 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:03:36] RECOVERY - Puppet run on tools-worker-1009 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:03:40] RECOVERY - Puppet run on tools-worker-1021 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:03:40] RECOVERY - Puppet run on tools-exec-1209 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:03:44] RECOVERY - Puppet run on tools-worker-1018 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:03:48] RECOVERY - Puppet run on tools-webgrid-lighttpd-1415 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:03:50] RECOVERY - Puppet run on tools-webgrid-lighttpd-1416 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:04:22] RECOVERY - Puppet run on tools-webgrid-lighttpd-1410 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:04:24] RECOVERY - Puppet run on tools-cron-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:05:20] RECOVERY - Puppet run on tools-exec-1207 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:05:26] RECOVERY - Puppet run on tools-bastion-03 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:06:48] RECOVERY - Puppet run on tools-webgrid-lighttpd-1201 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:06:56] RECOVERY - Puppet run on tools-exec-1215 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:06:58] RECOVERY - Puppet run on tools-webgrid-lighttpd-1401 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:07:04] RECOVERY - Puppet run on tools-elastic-02 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:07:04] RECOVERY - Puppet run on tools-checker-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:07:20] RECOVERY - Puppet run on tools-exec-1203 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:07:58] RECOVERY - Puppet run on tools-k8s-etcd-03 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:08:08] RECOVERY - Puppet run on tools-webgrid-lighttpd-1208 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:08:10] RECOVERY - Puppet run on tools-webgrid-lighttpd-1204 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:08:20] RECOVERY - Puppet run on tools-webgrid-lighttpd-1411 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:08:23] RECOVERY - Puppet run on tools-worker-1008 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:08:49] RECOVERY - Puppet run on tools-exec-1405 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:09:23] RECOVERY - Puppet run on tools-logs-02 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:09:33] RECOVERY - Puppet run on tools-redis-1001 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:09:37] RECOVERY - Puppet run on tools-exec-1218 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:09:39] RECOVERY - Puppet run on tools-docker-registry-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:09:47] RECOVERY - Puppet run on tools-worker-1010 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:09:57] RECOVERY - Puppet run on tools-exec-1219 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:10:29] RECOVERY - Puppet run on tools-webgrid-lighttpd-1407 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:10:49] RECOVERY - Puppet run on tools-proxy-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:10:51] RECOVERY - Puppet run on tools-exec-1205 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:11:05] RECOVERY - Puppet run on tools-webgrid-lighttpd-1402 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:11:06] RECOVERY - Puppet run on tools-webgrid-generic-1401 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:11:56] RECOVERY - Puppet run on tools-worker-1005 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:12:49] RECOVERY - Puppet run on tools-exec-1408 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:12:52] RECOVERY - Puppet run on tools-webgrid-lighttpd-1404 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:13:18] RECOVERY - Puppet run on tools-webgrid-lighttpd-1406 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:49:12] Striker, Phabricator, Security-Reviews, Patch-For-Review: Unable to mirror repository from git.legoktm.com into diffusion - https://phabricator.wikimedia.org/T143969#2718352 (dpatrick) In addition to @faidon's concerns above about client exploitation by a malicious server, I'm wondering how we mi...
[21:49:30] hi, can somebody help me? I installed openjdk on my 14.04, but I now get: major version 52 is newer than 51, the highest major version supported by this compiler.
[21:49:30] It is recommended that the compiler be upgraded.
[21:49:35] how can I upgrade that thing?
[21:54:15] yuvipanda: Do you know how?
[21:54:51] no idea, sorry. seems java specific?
[21:56:41] yeo
[21:56:43] *yep
[21:59:41] the same way you installed that openjdk version?
[22:00:57] I used sudo apt-get install openjdk-8-jre
[22:01:33] openjdk-8 on trusty is fairly unsupported, I think you have to use debian jessie if you want openjdk-8
[22:02:03] Luke081515, do sudo apt-get install openjdk-8-jdk ?
[22:03:02] major version 52 is newer than 51, the highest major version supported by this compiler.
[22:03:02] It is recommended that the compiler be upgraded.
[22:03:03] still
[22:03:26] :/
[22:03:55] Luke081515: the openjdk-8 we have on Trusty is terribly outdated / not updated
[22:04:13] it got added as a one-off ages ago for a one-shot project that got abandoned
[22:04:15] you need debian jessie if you want openjdk-8
[22:04:21] ah, ok
[22:04:23] would probably be better to drop it entirely
[22:04:28] so yeah, Jessie
[22:04:33] luckily I have one jessie instance in my project :D
[22:04:42] :D
[22:04:46] Jessie 8.5 is enough?
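The "major version 52 is newer than 51" error discussed above is Java's class-file version check: major version 52 is Java 8 bytecode, 51 is Java 7, so a JRE/compiler that only understands 51 rejects classes built for Java 8. A minimal sketch of where those numbers live, using a synthetic `/tmp/Fake.class` header (the path and file are assumptions for illustration, not anything from the channel):

```shell
# Every .class file begins with the magic number CAFEBABE, then a
# 2-byte minor version and a 2-byte major version (big-endian).
# Major 52 = Java 8, major 51 = Java 7 -- the two numbers compared
# in the error above. Write a synthetic header so this is self-contained:
printf '\xca\xfe\xba\xbe\x00\x00\x00\x34' > /tmp/Fake.class   # 0x34 = 52

# Read bytes 6-7 (the major version) and combine them big-endian:
major=$(od -An -j6 -N2 -tu1 /tmp/Fake.class | awk '{print $1 * 256 + $2}')
echo "class file major version: $major"
```

On a real compiled class, `javap -verbose SomeClass | grep 'major version'` reports the same number, which tells you which JDK the code was built for.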
[22:05:24] you would get the latest version anyway :]
[22:05:33] anyway, sleep time for me *wave*
[22:05:41] ok, thx :)
[22:05:58] Striker, Phabricator, Security-Reviews, Patch-For-Review: Unable to mirror repository from git.legoktm.com into diffusion - https://phabricator.wikimedia.org/T143969#2718378 (bd808) >>! In T143969#2718352, @dpatrick wrote: > - DoS via storage exhaustion (large repo being mirrored, not necessarily...
[22:10:03] Labs, Tool-Labs, Patch-For-Review: support python3 uwsgi apps - https://phabricator.wikimedia.org/T104374#2718393 (yuvipanda) @Ricordisamoa yes, I do! Can I co-ordinate with you on IRC or email one of these days to move all your uwsgi-plain tools? :) https://wikitech.wikimedia.org/wiki/Help:Tool_Labs...
[22:12:06] yuvipanda: FYI I'm getting the same error on a new instance just deployed: af-master.automation-framework.eqiad.wmflabs
[22:12:33] I have the hiera puppetmaster defined at project level FWIW
[22:14:30] it's quite late here so I'm going to bed, but wanted to keep you posted. on 'af-master' you can do whatever you want, even destroy it; the other 2 I'm still working on :)
[22:17:27] volans: ok :) I've been out this morning giving a talk, will look later for sure
[22:17:53] ok thanks, I've updated the wiki using hostname -f where possible to find the FQDN
[22:18:35] ttyl
[22:26:08] Striker, Phabricator, Security-Reviews, Patch-For-Review: Unable to mirror repository from git.legoktm.com into diffusion - https://phabricator.wikimedia.org/T143969#2718421 (dpatrick) >>! In T143969#2718378, @bd808 wrote: >>>! In T143969#2718352, @dpatrick wrote: >> - Using the git client to mak...
[22:36:00] Striker, Phabricator, Security-Reviews, Patch-For-Review: Unable to mirror repository from git.legoktm.com into diffusion - https://phabricator.wikimedia.org/T143969#2584962 (Platonides) Does in some case phd generate an email to some users on import? (ie. as a DoS to their mailbox through a repo...
[22:36:39] Striker: Add a web shell allowing people to perform actions as their tool from striker - https://phabricator.wikimedia.org/T144713#2608023 (bd808) from irc chat: ``` [22:34] so in striker [22:34] we'll have a 'launch web console' thing [22:34] click that [22:34]
Striker: Add a web shell allowing people to perform actions as their tool from striker - https://phabricator.wikimedia.org/T144713#2718454 (bd808) p:Triage>Normal
[22:42:12] Labs, Striker, Operations, LDAP: Store Wikimedia unified account name (SUL) in LDAP directory - https://phabricator.wikimedia.org/T148048#2718457 (bd808) p:Triage>Normal
[22:54:01] Labs, Labs-Infrastructure, Operations, ops-eqiad: labvirt1005 - HP RAID controller issue (battery?) - https://phabricator.wikimedia.org/T148255#2718489 (Dzahn)
[22:59:53] Labs, Labs-Infrastructure, Operations, ops-eqiad: labvirt1005 - HP RAID controller issue (battery?) - https://phabricator.wikimedia.org/T148255#2718504 (Dzahn) btw, when searching phab i saw a couple older resolved tickets, like "doesnt boot up" T100030 and "memory errors" T97521 all on this sam...
[23:23:27] RECOVERY - MAGIC SELF HEAL
[23:26:11] lol
[23:26:34] (PS1) BryanDavis: Validate new usernames with action=query&list=users&usprop=cancreate [labs/striker] - https://gerrit.wikimedia.org/r/316025 (https://phabricator.wikimedia.org/T147024)
[23:26:36] (PS1) BryanDavis: Check request ip for account creation blocks on Wikitech [labs/striker] - https://gerrit.wikimedia.org/r/316026 (https://phabricator.wikimedia.org/T147024)
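volans's remark about using `hostname -f` to find an instance's FQDN can be sketched as follows; the example output shape is illustrative, not a real host from the channel:

```shell
# hostname alone prints only the short host name; hostname -f asks the
# resolver for the fully qualified domain name. On a Labs instance the
# FQDN has the form <instance>.<project>.eqiad.wmflabs (illustrative),
# which is what the project wiki pages want recorded.
hostname       # short name, e.g. af-master
hostname -f    # FQDN, e.g. af-master.automation-framework.eqiad.wmflabs
```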