[01:06:14] Beta-Cluster-Infrastructure, Release-Engineering-Team (Watching / External), Services (watching), services-tooling: RFC: Streamline Node.hs testing+deployment - https://phabricator.wikimedia.org/T147581 (Krinkle)
[01:06:19] Beta-Cluster-Infrastructure, Release-Engineering-Team (Watching / External), Services (watching), services-tooling: RFC: Streamline Node.js testing+deployment - https://phabricator.wikimedia.org/T147581 (Krinkle)
[01:39:18] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[01:44:17] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[01:52:50] Phabricator, User-MModell: Add support for task types - https://phabricator.wikimedia.org/T93499 (MGChecker) Well, currently most tasks don't have any defined subtype, even though suitable subtypes exist for many of them. This greatly reduces the usefulness of this feature, as you can't really use it to fi...
[03:30:15] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[03:40:19] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[04:11:18] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[04:21:18] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[04:27:19] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[06:04:17] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[06:13:29] Phabricator, User-MModell: Add support for task types - https://phabricator.wikimedia.org/T93499 (Aklapper) > When creating a task using the simple creation form, users should be required to choose a subtype, defaulting to task. I'd not want this option shown for //all and any// users. Adds just more clutter...
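The flapping "Free space - all mounts" alert above tracks the graphite metric deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree and goes critical once less than 11.11% of the root filesystem's bytes are free. A minimal shell sketch of roughly what that number means (an illustration of the metric only, not the actual monitoring check):

    # Percentage of free bytes on the root mount, compared against the
    # 11.11% threshold quoted in the alert text above (GNU df assumed).
    pct_used=$(df --output=pcent / | tail -n1 | tr -dc '0-9')
    pct_free=$((100 - pct_used))
    awk -v free="$pct_free" 'BEGIN {
        if (free < 11.11)
            print "CRITICAL: root byte_percentfree " free "% (<11.11%)"
        else
            print "OK: root byte_percentfree " free "%"
    }'

On deployment-deploy01 the value kept oscillating around that threshold throughout the day, hence the repeated PROBLEM/RECOVERY churn below.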
[06:14:19] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[06:35:19] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[06:45:17] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[07:46:18] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[07:56:17] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[08:02:17] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[08:12:17] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[08:33:16] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[09:32:26] Phabricator: Add link to atom feed in each main blog page - https://phabricator.wikimedia.org/T205181 (Peter)
[09:34:17] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[09:50:47] Phabricator (Upstream), Upstream: Add link to atom feed in each main blog page - https://phabricator.wikimedia.org/T205181 (Peachey88)
[11:59:26] PROBLEM - SSH on integration-slave-docker-1021 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:09:16] RECOVERY - SSH on integration-slave-docker-1021 is OK: SSH OK - OpenSSH_6.7p1 Debian-5+deb8u7 (protocol 2.0)
[12:15:27] PROBLEM - SSH on integration-slave-docker-1021 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:20:13] RECOVERY - SSH on integration-slave-docker-1021 is OK: SSH OK - OpenSSH_6.7p1 Debian-5+deb8u7 (protocol 2.0)
[12:41:23] PROBLEM - SSH on integration-slave-docker-1021 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:50:44] Beta-Cluster-Infrastructure, Operations, Wikidata, wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (Krenair) >>! In T125976#4607109, @thcipriani wrote: > and `openldap::maintenance` isn't probably needed on this mac...
[16:18:27] Continuous-Integration-Infrastructure, Release-Engineering-Team (Someday), Tidy: Figure out the package conflict between libtidy-dev from sury and hhvm-tidy - https://phabricator.wikimedia.org/T169008 (Izno)
[16:23:19] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[16:26:16] Beta-Cluster-Infrastructure, Wikidata: Persistently high maxlag on wikidata.beta.wmflabs.org - https://phabricator.wikimedia.org/T201983 (Dalba) Open>Invalid The wiki has fixed itself!
[16:33:18] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[16:52:14] PROBLEM - Puppet errors on deployment-kafka-jumbo-1 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[16:52:15] PROBLEM - Puppet errors on deployment-kafka-main-2 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[16:52:22] PROBLEM - Puppet errors on deployment-parsoid09 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[16:52:23] PROBLEM - Puppet errors on deployment-ircd is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0]
[16:52:25] PROBLEM - Puppet errors on deployment-sca02 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[16:53:01] PROBLEM - Puppet errors on deployment-cpjobqueue is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[16:53:10] PROBLEM - Puppet errors on deployment-restbase02 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[16:53:26] PROBLEM - Puppet errors on deployment-imagescaler02 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[16:53:28] hmm
[16:53:29] PROBLEM - Puppet errors on deployment-maps03 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[16:53:32] Krenair ^
[16:53:33] PROBLEM - Puppet errors on deployment-redis06 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[16:53:43] PROBLEM - Puppet errors on deployment-certcentral-testclient03 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[16:53:54] that'll be me
[16:53:58] cheers
[16:54:09] lol
[16:54:09] PROBLEM - Puppet errors on deployment-zotero01 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[16:54:17] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[16:54:25] PROBLEM - Puppet errors on deployment-mediawiki-07 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[16:54:25] PROBLEM - Puppet errors on deployment-db03 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[16:54:25] PROBLEM - Puppet errors on deployment-mwmaint01 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[16:54:37] PROBLEM - Puppet errors on deployment-ms-fe02 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[16:54:44] PROBLEM - Puppet errors on deployment-sentry01 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0]
[16:54:44] PROBLEM - Puppet errors on deployment-aqs01 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[16:54:50] PROBLEM - Puppet errors on deployment-db04 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[16:54:50] PROBLEM - Puppet errors on deployment-ores01 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[16:54:56] PROBLEM - Puppet errors on deployment-cache-text04 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[16:54:56] PROBLEM - Puppet errors on deployment-etcd-01 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[16:55:02] PROBLEM - Puppet errors on deployment-cache-upload04 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0]
[16:55:16] PROBLEM - Puppet errors on deployment-pdfrender02 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[16:55:20] PROBLEM - Puppet errors on deployment-redis05 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[16:55:32] PROBLEM - Puppet errors on deployment-logstash2 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[16:55:44] PROBLEM - Puppet errors on deployment-aqs02 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[16:57:11] PROBLEM - Puppet errors on deployment-fluorine02 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0]
[16:57:11] PROBLEM - Puppet errors on deployment-restbase01 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[16:57:11] PROBLEM - Puppet errors on deployment-deploy01 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[16:57:13] PROBLEM - Puppet errors on deployment-mathoid is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[16:57:23] PROBLEM - Puppet errors on deployment-kafka-main-1 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0]
[16:57:31] hm
[16:57:35] PROBLEM - Puppet errors on deployment-mediawiki-09 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0]
[16:57:37] PROBLEM - Puppet errors on deployment-conf03 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0]
[16:57:41] PROBLEM - Puppet errors on deployment-imagescaler01 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0]
[16:57:43] deployment-kafka-jumbo-1 seems ok now
[16:57:48] hopefully it will recover
[16:58:03] PROBLEM - Puppet errors on deployment-eventlog05 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[16:58:31] PROBLEM - Puppet errors on deployment-jobrunner03 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[16:58:41] Gerrit, Release-Engineering-Team (Kanban), Developer-Advocacy, GitHub-Mirrors, and 2 others: Add CODE_OF_CONDUCT.md to Wikimedia repositories - https://phabricator.wikimedia.org/T165540 (Tgr) Resolved>Open IMO before we close this task we should * decide on the final language. There was t...
[16:59:18] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[17:01:27] Gerrit, Release-Engineering-Team (Kanban), Developer-Advocacy, GitHub-Mirrors, and 2 others: Add CODE_OF_CONDUCT.md to Wikimedia repositories - https://phabricator.wikimedia.org/T165540 (Tgr)
[17:05:17] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[17:07:09] RECOVERY - Puppet errors on deployment-restbase01 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:11] RECOVERY - Puppet errors on deployment-fluorine02 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:11] RECOVERY - Puppet errors on deployment-deploy01 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:13] RECOVERY - Puppet errors on deployment-mathoid is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:14] RECOVERY - Puppet errors on deployment-kafka-main-2 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:18] RECOVERY - Puppet errors on deployment-kafka-jumbo-1 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:24] RECOVERY - Puppet errors on deployment-parsoid09 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:24] RECOVERY - Puppet errors on deployment-kafka-main-1 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:24] RECOVERY - Puppet errors on deployment-sca02 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:24] RECOVERY - Puppet errors on deployment-ircd is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:34] RECOVERY - Puppet errors on deployment-mediawiki-09 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:07:36] RECOVERY - Puppet errors on deployment-conf03 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:08:03] RECOVERY - Puppet errors on deployment-cpjobqueue is OK: OK: Less than 1.00% above the threshold [0.0]
[17:08:05] RECOVERY - Puppet errors on deployment-eventlog05 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:08:09] RECOVERY - Puppet errors on deployment-restbase02 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:08:27] RECOVERY - Puppet errors on deployment-imagescaler02 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:08:31] RECOVERY - Puppet errors on deployment-maps03 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:08:31] RECOVERY - Puppet errors on deployment-jobrunner03 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:08:35] RECOVERY - Puppet errors on deployment-redis06 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:08:43] RECOVERY - Puppet errors on deployment-certcentral-testclient03 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:11] RECOVERY - Puppet errors on deployment-zotero01 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:24] RECOVERY - Puppet errors on deployment-mediawiki-07 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:26] RECOVERY - Puppet errors on deployment-db03 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:28] RECOVERY - Puppet errors on deployment-mwmaint01 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:38] RECOVERY - Puppet errors on deployment-ms-fe02 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:42] RECOVERY - Puppet errors on deployment-aqs01 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:42] RECOVERY - Puppet errors on deployment-sentry01 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:48] RECOVERY - Puppet errors on deployment-db04 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:52] RECOVERY - Puppet errors on deployment-ores01 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:56] RECOVERY - Puppet errors on deployment-cache-text04 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:09:56] RECOVERY - Puppet errors on deployment-etcd-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:10:01] RECOVERY - Puppet errors on deployment-cache-upload04 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:10:17] RECOVERY - Puppet errors on deployment-pdfrender02 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:10:19] RECOVERY - Puppet errors on deployment-redis05 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:10:26] took its bloody time
[17:10:35] RECOVERY - Puppet errors on deployment-logstash2 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:11:25] Krenair lol
[17:12:28] Beta-Cluster-Infrastructure, Operations, Wikidata, wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (Reedy) >>! In T125976#4608070, @Krenair wrote: >>>! In T125976#4607399, @Dzahn wrote: >> It seems like adding the m...
[17:13:10] PROBLEM - Puppet errors on deployment-deploy01 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0]
[17:14:08] Beta-Cluster-Infrastructure, Operations, Wikidata, wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (Krenair) >>! In T125976#4608137, @Reedy wrote: >>>! In T125976#4608070, @Krenair wrote: >>>>! In T125976#4607399, @...
[17:16:01] Beta-Cluster-Infrastructure, Operations, Wikidata, wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (Reedy) Sure, but it's more effort to do so. Plus then storing it somewhere, chances of it not being noticed by some...
[17:20:45] RECOVERY - Puppet errors on deployment-aqs02 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:22:43] RECOVERY - Puppet errors on deployment-imagescaler01 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:39:01] PROBLEM - Puppet errors on deployment-poolcounter04 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0]
[17:40:21] PROBLEM - Puppet errors on deployment-maps04 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[17:42:11] PROBLEM - Puppet errors on deployment-certcentral-testdns is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[17:42:17] PROBLEM - Puppet errors on deployment-dumps-puppetmaster02 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[17:42:33] PROBLEM - Puppet errors on deployment-prometheus01 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[17:43:09] RECOVERY - Puppet errors on deployment-deploy01 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:49:04] Beta-Cluster-Infrastructure, Operations, Wikidata, wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (Krenair) It doesn't particularly matter how much effort it takes, it is possible.
[17:50:21] RECOVERY - Puppet errors on deployment-maps04 is OK: OK: Less than 1.00% above the threshold [0.0]
[17:57:10] RECOVERY - Puppet errors on deployment-certcentral-testdns is OK: OK: Less than 1.00% above the threshold [0.0]
[17:57:16] RECOVERY - Puppet errors on deployment-dumps-puppetmaster02 is OK: OK: Less than 1.00% above the threshold [0.0]
[18:00:20] !log rm deployment-snapshot01:/etc/ferm/conf.d/10_prometheus-nutcracker-exporter as it was breaking ferm from starting (T153468), puppet has not re-created it so I assume it was historical (shouldn't puppet be purging such files?)
[18:00:27] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[18:00:28] T153468: Ferm/DNS library weirdness causing puppet errors on some deployment-prep instances - https://phabricator.wikimedia.org/T153468
[18:01:57] !log rm deployment-maps03:/etc/ferm/conf.d/10_redis_exporter_6379 as it was breaking ferm from starting (T153468), puppet has not re-created it so I assume it was historical (shouldn't puppet be purging such files?)
[18:02:02] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[18:07:29] Project beta-scap-eqiad build #222736: FAILURE in 19 min: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/222736/
[18:07:30] RECOVERY - Puppet errors on deployment-prometheus01 is OK: OK: Less than 1.00% above the threshold [0.0]
[18:08:28] uh, that's interesting
[18:08:30] can't ssh to snapshot01??
[18:08:38] is this because I just fixed ferm there possibly?
[18:09:02] RECOVERY - Puppet errors on deployment-poolcounter04 is OK: OK: Less than 1.00% above the threshold [0.0]
[18:10:19] This is very interesting
[18:10:30] ferm on deployment-snapshot01 has a very strange idea of what our deployment hosts are
[18:10:39] root@deployment-snapshot01:~# grep DEPLOYMENT /etc/ferm/* -r
[18:10:39] root@deployment-snapshot01:~# host 10.68.21.205
[18:10:39] Host 205.21.68.10.in-addr.arpa. not found: 3(NXDOMAIN)
[18:10:40] root@deployment-snapshot01:~# host 10.68.20.135
[18:10:41] 135.20.68.10.in-addr.arpa domain name pointer ci-jessie-wikimedia-1097330.contintcloud.eqiad.wmflabs.
[18:10:49] okay thanks for that hexchat
[18:10:58] /etc/ferm/conf.d/10_deployment-ssh: proto tcp dport ssh saddr $DEPLOYMENT_HOSTS ACCEPT;
[18:10:58] /etc/ferm/conf.d/00_defs:@def $DEPLOYMENT_HOSTS = (10.68.21.205 10.68.20.135 );
[18:13:04] It looks like ferm is entirely unmanaged by puppet there :|
[18:20:48] PROBLEM - Host deployment-maps03 is DOWN: PING CRITICAL - Packet loss = 100%
[18:20:51] ****
[18:20:58] !log removed ferm package from deployment-snapshot01 as it appeared unmanaged by puppet and was causing problems with SSH access from the current deployment hosts (previous logs referenced T153468, this just explains why puppet hadn't purged stuff)
[18:21:04] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[18:21:05] T153468: Ferm/DNS library weirdness causing puppet errors on some deployment-prep instances - https://phabricator.wikimedia.org/T153468
[18:21:17] Is there a way to tell how much ie8 (and other old browsers) view gerrit.wikimedia.org?
[18:21:43] !log went to do the same with deployment-maps03 and accidentally broke SSH access to the server
[18:21:46] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[18:21:47] Yippee, build fixed!
[18:21:48] Project beta-scap-eqiad build #222737: FIXED in 13 min: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/222737/
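The ferm breakage above comes down to two files: /etc/ferm/conf.d/00_defs pins $DEPLOYMENT_HOSTS to a hard-coded pair of IPs, and /etc/ferm/conf.d/10_deployment-ssh only ACCEPTs SSH from that list, so once the list goes stale (one IP no longer reverse-resolves, the other now belongs to a contintcloud CI instance) the real deployment hosts are locked out. A small shell sketch of the kind of audit done by hand above, assuming the 00_defs layout quoted in the log (illustrative only):

    # Reverse-resolve every IP in $DEPLOYMENT_HOSTS and flag entries that
    # no longer look like deployment hosts.
    grep -oE '10\.[0-9]+\.[0-9]+\.[0-9]+' /etc/ferm/conf.d/00_defs | while read -r ip; do
        name=$(host "$ip" | awk '/domain name pointer/ {print $NF}')
        case "$name" in
            deployment-*) echo "$ip -> $name (ok)" ;;
            "")           echo "$ip -> no PTR record (stale?)" ;;
            *)            echo "$ip -> $name (not a deployment host?)" ;;
        esac
    done

As for "shouldn't puppet be purging such files?": that normally requires the conf.d directory to be managed with purging and recursion enabled, and since ferm turned out to be entirely unmanaged by puppet on this instance, nothing was ever going to clean the stale entries up.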
[18:22:32] Because ie8 support is deprecated because gwtui is deprecated now.
[18:26:19] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[18:28:04] Beta-Cluster-Infrastructure, Cloud-VPS: Please fix my screw-up - unbreak SSH access to deployment-maps03 VM - https://phabricator.wikimedia.org/T205195 (Krenair)
[18:31:18] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[18:34:26] Beta-Cluster-Infrastructure, Cloud-VPS: Please fix my screw-up - unbreak SSH access to deployment-maps03 VM - https://phabricator.wikimedia.org/T205195 (Krenair)
[18:52:35] polymer 3.0 looks exciting being able to define html in js safely! https://www.polymer-project.org/3.0/docs/about_30
[19:07:20] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[19:17:17] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[20:05:12] !log github: deleted mirror wikimedia/mediawiki-extensions-Collection-OfflineContentGenerator-zim_renderer | T183891; moving to the next one
[20:05:19] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[20:05:20] T183891: Archive mediawiki/extensions/Collection/OfflineContentGenerator and all OCG-related repos - https://phabricator.wikimedia.org/T183891
[20:18:18] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[20:21:55] there's nothing like archiving gerrit repos with a bit of music
[20:28:16] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[20:49:18] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[20:51:01] !log github: deleting several wikimedia/mediawiki-extensions-Collection-.* mirror repos for T183891
[20:51:07] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[20:51:08] T183891: Archive mediawiki/extensions/Collection/OfflineContentGenerator and all OCG-related repos - https://phabricator.wikimedia.org/T183891
[20:54:19] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[21:12:00] (CR) MarcoAurelio: [C: 1] "I've added Tyler for his advice on this one." [tools/release] - https://gerrit.wikimedia.org/r/461733 (https://phabricator.wikimedia.org/T106067) (owner: Reedy)
[21:12:25] thcipriani: ^
[21:12:44] is it possible to stop branching dissableaccount while still not undeployed? maybe not?
[21:15:19] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<11.11%)
[21:17:43] git reset --hard
[21:17:49] hmm
[21:17:52] nothere
[21:53:27] PROBLEM - SSH on integration-slave-docker-1021 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:03:14] RECOVERY - SSH on integration-slave-docker-1021 is OK: SSH OK - OpenSSH_6.7p1 Debian-5+deb8u7 (protocol 2.0)
[22:04:06] Beta-Cluster-Infrastructure, Operations, Wikidata, wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (Reedy) >>! In T125976#4608196, @Krenair wrote: > It doesn't particularly matter how much effort it takes, it is pos...
[22:05:30] (CR) Reedy: [C: -2] "Don't really need any advice ;)" [tools/release] - https://gerrit.wikimedia.org/r/461733 (https://phabricator.wikimedia.org/T106067) (owner: Reedy)
[22:09:23] PROBLEM - SSH on integration-slave-docker-1021 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
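The recurring "SSH on integration-slave-docker-1021" check evidently just probes the SSH port with a 10-second timeout and reports the server banner, which is where the OpenSSH version string in the OK lines comes from. A hedged stand-in for that probe (not the actual monitoring plugin; the short hostname is used here purely for illustration):

    # Try to read the SSH banner within 10 seconds; for simplicity any
    # failure is reported as the socket-timeout case.
    host=integration-slave-docker-1021
    if banner=$(timeout 10 bash -c "exec 3<>/dev/tcp/$host/22 && head -n1 <&3"); then
        echo "SSH OK - $banner"
    else
        echo "CRITICAL - Socket timeout after 10 seconds"
    fi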