[00:00:53] Project beta-scap-eqiad build #237676: 04FAILURE in 19 sec: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/237676/ [00:01:18] (03PS8) 10Dduvall: Provide a Jenkinsfile that runs basic functional tests [integration/pipelinelib] - 10https://gerrit.wikimedia.org/r/489760 [00:01:20] (03PS6) 10Dduvall: Use blubberoid.wikimedia.org to process blubber.yaml [integration/pipelinelib] - 10https://gerrit.wikimedia.org/r/480689 (https://phabricator.wikimedia.org/T212247) [00:03:34] (03CR) 10Dduvall: "check experimental" [integration/pipelinelib] - 10https://gerrit.wikimedia.org/r/480689 (https://phabricator.wikimedia.org/T212247) (owner: 10Dduvall) [00:09:17] 10Phabricator (Search), 10Release-Engineering-Team (Kanban): make phabricator search match projects based on alterenate hashtags - https://phabricator.wikimedia.org/T215085 (10mmodell) I'm not entirely sure, it needs more investigation. [00:10:16] maintenance-disconnect-full-disks build 46212 integration-slave-docker-1033 (/var/lib/docker: 96%): OFFLINE due to disk space [00:10:25] 10Phabricator (Search), 10Release-Engineering-Team (Kanban): make phabricator search match projects based on alterenate hashtags - https://phabricator.wikimedia.org/T215085 (10mmodell) @Legoktm what was the example that didn't work for you which we were investigating at all-hands? [00:12:34] 10Phabricator (Search), 10Release-Engineering-Team (Kanban): make phabricator search match projects based on alterenate hashtags - https://phabricator.wikimedia.org/T215085 (10mmodell) It still works for some specific examples, like #logspam matches ( `logspam` ) [00:14:38] Yippee, build fixed! [00:14:39] Project beta-scap-eqiad build #237677: 09FIXED in 10 min: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/237677/ [00:14:51] 10Phabricator, 10Security-Team: Create a security issue task type with additional attributes - https://phabricator.wikimedia.org/T204160 (10mmodell) [00:15:57] 10Phabricator (Search), 10Release-Engineering-Team (Kanban): make phabricator search match projects based on alterenate hashtags - https://phabricator.wikimedia.org/T215085 (10Legoktm) >>! In T215085#4949680, @mmodell wrote: > @Legoktm what was the example that didn't work for you which we were investigating a... [00:17:08] 10Gerrit: gerrit sometimes giving "internal server error: Error inserting change/patchset" - https://phabricator.wikimedia.org/T205503 (10matmarex) Same as T214656. [00:20:17] 10Project-Admins, 10Security-Team: Create maybe-public tag - https://phabricator.wikimedia.org/T215981 (10Tgr) [00:20:31] 10Phabricator (Search), 10Release-Engineering-Team (Kanban): make phabricator search match projects based on alterenate hashtags - https://phabricator.wikimedia.org/T215085 (10mmodell) This one must really be some edge case. The algorithm seems to be heavily biased towards prefix matching which is probably cor... 
[00:25:17] maintenance-disconnect-full-disks build 46215 integration-slave-docker-1033: OFFLINE due to disk space [00:27:08] https://gerrit.wikimedia.org/r/#/c/operations/software/gerrit/plugins/wikimedia/+/490225/ is going to be fun to test :P [00:27:51] (03PS7) 10Dduvall: Use blubberoid.wikimedia.org to process blubber.yaml [integration/pipelinelib] - 10https://gerrit.wikimedia.org/r/480689 (https://phabricator.wikimedia.org/T212247) [00:28:36] legoktm https://gerrit.wikimedia.org/r/#/c/operations/software/gerrit/plugins/wikimedia/+/489482/ :) [00:30:34] *maybe* i can create it so that you do so that we can at least add additional buttons later without duplicate code :) (though not sure at the moment if that's possible as i need access to the "plugin" variable. [00:30:39] (03CR) 10Dduvall: "Refactored to specify content-type header directly." (031 comment) [integration/pipelinelib] - 10https://gerrit.wikimedia.org/r/480689 (https://phabricator.wikimedia.org/T212247) (owner: 10Dduvall) [00:30:41] that's future work though :) [00:30:59] (03CR) 10Dduvall: "check experimental" [integration/pipelinelib] - 10https://gerrit.wikimedia.org/r/480689 (https://phabricator.wikimedia.org/T212247) (owner: 10Dduvall) [00:35:35] Project mediawiki-core-doxygen-docker build #4691: 04FAILURE in 31 min: https://integration.wikimedia.org/ci/job/mediawiki-core-doxygen-docker/4691/ [00:46:13] (03PS8) 10Dduvall: Use blubberoid.wikimedia.org to process blubber.yaml [integration/pipelinelib] - 10https://gerrit.wikimedia.org/r/480689 (https://phabricator.wikimedia.org/T212247) [00:46:38] (03CR) 10jerkins-bot: [V: 04-1] Use blubberoid.wikimedia.org to process blubber.yaml [integration/pipelinelib] - 10https://gerrit.wikimedia.org/r/480689 (https://phabricator.wikimedia.org/T212247) (owner: 10Dduvall) [00:47:33] (03PS9) 10Dduvall: Use blubberoid.wikimedia.org to process blubber.yaml [integration/pipelinelib] - 10https://gerrit.wikimedia.org/r/480689 (https://phabricator.wikimedia.org/T212247) [00:48:20] (03CR) 10Dduvall: "check experimental" [integration/pipelinelib] - 10https://gerrit.wikimedia.org/r/480689 (https://phabricator.wikimedia.org/T212247) (owner: 10Dduvall) [00:50:14] maintenance-disconnect-full-disks build 46220 integration-slave-docker-1033: OFFLINE due to disk space [01:15:16] maintenance-disconnect-full-disks build 46225 integration-slave-docker-1033: OFFLINE due to disk space [01:18:31] Yippee, build fixed! [01:18:32] Project mediawiki-core-doxygen-docker build #4692: 09FIXED in 14 min: https://integration.wikimedia.org/ci/job/mediawiki-core-doxygen-docker/4692/ [01:40:17] maintenance-disconnect-full-disks build 46230 integration-slave-docker-1033: OFFLINE due to disk space [01:44:49] 10Release-Engineering-Team (Watching / External), 10MW-1.32-release, 10Patch-For-Review: Release 1.32.1 as a maintenance release - https://phabricator.wikimedia.org/T213595 (10mmodell) https://gerrit.wikimedia.org/r/#/c/mediawiki/core/+/490265 [02:05:13] maintenance-disconnect-full-disks build 46235 integration-slave-docker-1033: OFFLINE due to disk space [02:21:31] 10Release-Engineering-Team (Watching / External), 10MW-1.32-release, 10Patch-For-Review: Release 1.32.1 as a maintenance release - https://phabricator.wikimedia.org/T213595 (10Legoktm) Can we bundle the planned security release with this? 
[02:30:31] maintenance-disconnect-full-disks build 46240 integration-slave-docker-1033: OFFLINE due to disk space [02:47:30] Something is causing higher cpu than normal on cobalt (gerrit) [02:47:31] https://gerrit.wikimedia.org/r/monitoring?part=graph&graph=cpu [02:51:32] gerrit [02:51:34] probably :P [02:52:17] Lol [02:52:42] paladox: thcipriani was looking into it a little bit earlier [02:53:11] Ah ok [02:53:35] Hopefully that will not turn into the same problem that happened last night [02:55:17] maintenance-disconnect-full-disks build 46245 integration-slave-docker-1033: OFFLINE due to disk space [02:57:17] Otherwise we could blame Reedy for all his patches he uploaded :D (jokes) [02:57:33] 19:41:51 rsync: failed to set times on "/cache/.": Operation not permitted (1) [02:57:33] 19:41:51 rsync: recv_generator: mkdir "/cache/JSV" failed: Permission denied (13) [02:57:37] from integration-slave-docker-1041 [02:58:01] That happens sometimes, though that shouldn't cause the test to fail [03:00:42] Ignore me per link reedy pasted in the other channel [03:01:08] More tests are failing now [03:02:07] Anyways /me goes as it's 3am :) [03:06:03] 10Beta-Cluster-Infrastructure: ApiFeatureUsage data is not being populated in the Beta Cluster - https://phabricator.wikimedia.org/T183156 (10Anomie) In production I'd probably tag with #wikimedia-logstash and/or #ElasticSearch. Team-wise, probably SRE (for logstash) and Search (for ES), assuming the component... [03:07:06] 10Beta-Cluster-Infrastructure: ApiFeatureUsage data is not being populated in the Beta Cluster - https://phabricator.wikimedia.org/T183156 (10Anomie) [03:20:12] maintenance-disconnect-full-disks build 46250 integration-slave-docker-1033: OFFLINE due to disk space [03:22:06] 10Beta-Cluster-Infrastructure, 10Discovery-Search, 10Elasticsearch, 10Wikimedia-Logstash: ApiFeatureUsage data is not being populated in the Beta Cluster - https://phabricator.wikimedia.org/T183156 (10greg) [03:25:50] 10Continuous-Integration-Infrastructure: Many tests are currently failing :( - https://phabricator.wikimedia.org/T215992 (10Reedy) [03:26:06] 10Continuous-Integration-Infrastructure: Many tests are currently failing :( - https://phabricator.wikimedia.org/T215992 (10Reedy) p:05Triage→03High [03:30:01] 10Release-Engineering-Team (Watching / External), 10MW-1.32-release, 10Patch-For-Review: Release 1.32.1 as a maintenance release - https://phabricator.wikimedia.org/T213595 (10RazeSoldier) I think we should wait for T215566. [03:45:19] maintenance-disconnect-full-disks build 46255 integration-slave-docker-1033: OFFLINE due to disk space [04:10:31] maintenance-disconnect-full-disks build 46260 integration-slave-docker-1033: OFFLINE due to disk space [04:12:11] is something stuck in zuul? there are jobs queued in gate-submit queue for 1.5 hrs... [04:29:11] SMalyshev: stuff is just broken [04:29:16] https://phabricator.wikimedia.org/T215992 [04:29:38] oh :) [04:29:47] 10Continuous-Integration-Infrastructure: CI permission errors causing tests to fial - https://phabricator.wikimedia.org/T215992 (10Legoktm) p:05High→03Unbreak! 
[04:30:13] kay then, will wait until Appropriate People come and Fix Things [04:32:23] I have no idea what's going on [04:32:29] I assume hashar will come online soon [04:32:46] 10Continuous-Integration-Infrastructure: CI permission errors causing tests to fail - https://phabricator.wikimedia.org/T215992 (10Legoktm) [04:35:17] maintenance-disconnect-full-disks build 46265 integration-slave-docker-1033: OFFLINE due to disk space [05:00:17] maintenance-disconnect-full-disks build 46270 integration-slave-docker-1033: OFFLINE due to disk space [05:25:12] maintenance-disconnect-full-disks build 46275 integration-slave-docker-1033: OFFLINE due to disk space [05:50:41] maintenance-disconnect-full-disks build 46280 integration-slave-docker-1033: OFFLINE due to disk space [06:15:13] maintenance-disconnect-full-disks build 46285 integration-slave-docker-1033: OFFLINE due to disk space [06:40:13] maintenance-disconnect-full-disks build 46290 integration-slave-docker-1033: OFFLINE due to disk space [07:00:48] Project beta-scap-eqiad build #237713: 04FAILURE in 5.4 sec: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/237713/ [07:05:13] maintenance-disconnect-full-disks build 46295 integration-slave-docker-1033: OFFLINE due to disk space [07:14:11] Yippee, build fixed! [07:14:11] Project beta-scap-eqiad build #237714: 09FIXED in 9 min 59 sec: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/237714/ [07:30:22] maintenance-disconnect-full-disks build 46300 integration-slave-docker-1033: OFFLINE due to disk space [07:55:29] maintenance-disconnect-full-disks build 46305 integration-slave-docker-1033: OFFLINE due to disk space [08:20:13] maintenance-disconnect-full-disks build 46310 integration-slave-docker-1033: OFFLINE due to disk space [08:45:29] maintenance-disconnect-full-disks build 46315 integration-slave-docker-1033: OFFLINE due to disk space [09:10:13] maintenance-disconnect-full-disks build 46320 integration-slave-docker-1033: OFFLINE due to disk space [09:35:12] maintenance-disconnect-full-disks build 46325 integration-slave-docker-1033: OFFLINE due to disk space [10:00:16] maintenance-disconnect-full-disks build 46330 integration-slave-docker-1033: OFFLINE due to disk space [10:12:17] 10Release-Engineering-Team (Kanban), 10Scap, 10User-MModell: Document scap swat command - https://phabricator.wikimedia.org/T196411 (10zeljkofilipin) [10:13:32] 10Phabricator: Allow video embeds in formats other than OGV (e.g. WEBM) - https://phabricator.wikimedia.org/T215360 (10matmarex) People are using GIFs instead, because no one wants to spend 15 minutes every time re-encoding an OGV until you get one that is <4 MB and not utterly messed up by the encoder, and it m... 
[10:25:16] maintenance-disconnect-full-disks build 46335 integration-slave-docker-1033: OFFLINE due to disk space [10:50:14] maintenance-disconnect-full-disks build 46340 integration-slave-docker-1033: OFFLINE due to disk space [11:15:15] maintenance-disconnect-full-disks build 46345 integration-slave-docker-1033: OFFLINE due to disk space [11:40:16] maintenance-disconnect-full-disks build 46350 integration-slave-docker-1033: OFFLINE due to disk space [12:05:16] maintenance-disconnect-full-disks build 46355 integration-slave-docker-1033: OFFLINE due to disk space [12:08:00] Project mediawiki-core-doxygen-docker build #4703: 04FAILURE in 3 min 54 sec: https://integration.wikimedia.org/ci/job/mediawiki-core-doxygen-docker/4703/ [12:10:32] PROBLEM - English Wikipedia Main page on beta-cluster is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 2788 bytes in 0.065 second response time [12:13:12] PROBLEM - English Wikipedia Mobile Main page on beta-cluster is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 2770 bytes in 0.107 second response time [12:20:02] Project beta-update-databases-eqiad build #31931: 04FAILURE in 1.9 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/31931/ [12:30:13] maintenance-disconnect-full-disks build 46360 integration-slave-docker-1033: OFFLINE due to disk space [12:59:08] meow [12:59:09] Waiting for the completion of castor-save-workspace-cache [13:20:10] Project beta-update-databases-eqiad build #31932: 04STILL FAILING in 9.4 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/31932/ [13:57:13] zuul test-prio seems to be backed up waiting for https://integration.wikimedia.org/ci/job/operations-mw-config-composer-test-docker/12236/ btw [14:08:19] ah now I get it, integration-castor03 is offline, hence mw jenkins jobs wait forever [14:08:46] ugh [14:11:00] * godog filing a task [14:20:03] Project beta-update-databases-eqiad build #31933: 04STILL FAILING in 2.2 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/31933/ [14:20:45] 10Continuous-Integration-Infrastructure, 10Operations: jenkins / zuul backing up due to jenkins slaves down - https://phabricator.wikimedia.org/T216039 (10fgiunchedi) [14:21:38] ^ [14:26:07] !log deployement-prep: upgrading to elastic 5.6.14 [14:26:08] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [14:29:47] 10Continuous-Integration-Infrastructure, 10Operations: jenkins / zuul backing up due to jenkins slaves down - https://phabricator.wikimedia.org/T216039 (10fgiunchedi) p:05Triage→03High [14:34:58] !log modified castor-save-workspace-cache to exit 0 and run on blubber nodes while integration-castor03 is down [14:35:00] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [14:36:41] thcipriani: cheers [14:41:18] 10Continuous-Integration-Infrastructure, 10Operations: jenkins / zuul backing up due to jenkins slaves down - https://phabricator.wikimedia.org/T216039 (10thcipriani) Ugh. For the moment (while integration-castor03 is down) I've modified castor-save-workspace-cache to be a no-op (exit 0) and run on nodes label... [14:42:23] Yippee, build fixed! [14:42:23] Project mediawiki-core-doxygen-docker build #4704: 09FIXED in 8 min 17 sec: https://integration.wikimedia.org/ci/job/mediawiki-core-doxygen-docker/4704/ [14:42:51] godog: sorry for the chaos :( [14:45:13] thcipriani: no worries, thanks for the quick fix! 
[15:11:27] 10Beta-Cluster-Infrastructure: Beta cluster is broken: No working replica DB server: Unknown error (172.16.5.5:3306)) - https://phabricator.wikimedia.org/T216045 (10dcausse) [15:12:04] 10Beta-Cluster-Infrastructure: Beta cluster is broken: No working replica DB server: Unknown error (172.16.5.5:3306)) - https://phabricator.wikimedia.org/T216045 (10dcausse) p:05Triage→03High [15:14:51] ~/buffer 12 [15:20:05] Project beta-update-databases-eqiad build #31934: 04STILL FAILING in 4.9 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/31934/ [15:21:02] 10Beta-Cluster-Infrastructure: Beta cluster is broken: No working replica DB server: Unknown error (172.16.5.5:3306)) - https://phabricator.wikimedia.org/T216045 (10jcrespo) That is T216030 [15:26:22] 10Continuous-Integration-Infrastructure, 10Operations: jenkins / zuul backing up due to jenkins slaves down - https://phabricator.wikimedia.org/T216039 (10jcrespo) This is T216030 https://lists.wikimedia.org/pipermail/cloud/2019-February/000538.html [16:15:12] maintenance-disconnect-full-disks build 46405 integration-slave-docker-1033: OFFLINE due to disk space [16:20:05] Project beta-update-databases-eqiad build #31935: 04STILL FAILING in 5 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/31935/ [16:25:49] greg-g: thanks for forwarding the email for those of us not on cloud-l [16:28:01] (03CR) 10Effie Mouzeli: [V: 03+1 C: 03+1] "No outstanding errors when testing scap pull" [tools/scap] - 10https://gerrit.wikimedia.org/r/489668 (owner: 10Giuseppe Lavagetto) [16:38:14] thcipriani: if you have time today, could you glance at https://gerrit.wikimedia.org/r/c/integration/config/+/487880 please? [16:39:19] kostajh: yep, it's on my list, I'm on train duty this week (among other things) so I am running a bit behind on code review :( [16:39:37] sure, I understand. thanks! 
[16:40:11] maintenance-disconnect-full-disks build 46410 integration-slave-docker-1033: OFFLINE due to disk space [16:42:17] kostajh: oh, I was thinking of other patches of yours I (IIRC) was asked to look at, this one is a pretty easy review, will deploy shortly :) [16:43:05] the other more dubious one (b/c shell substitution in arguments) is https://gerrit.wikimedia.org/r/c/integration/config/+/487877 [16:43:59] ah right, that's the one I wanted to fiddle with a bit, not sure if it'll work or not [16:44:55] also might be another way to do it that'd be cleaner, i.e., job-template...somehow [16:45:09] (03CR) 10Thcipriani: [C: 03+2] Sonar: Enable experimental for core, skins, and extensions [integration/config] - 10https://gerrit.wikimedia.org/r/487880 (https://phabricator.wikimedia.org/T215177) (owner: 10Kosta Harlan) [16:45:12] I just tested it and it didn't work, but I think I have a fix for it [16:46:04] (03PS4) 10Kosta Harlan: Sonar: Specify branch name and target [integration/config] - 10https://gerrit.wikimedia.org/r/487877 (https://phabricator.wikimedia.org/T215175) [16:47:04] (03Merged) 10jenkins-bot: Sonar: Enable experimental for core, skins, and extensions [integration/config] - 10https://gerrit.wikimedia.org/r/487880 (https://phabricator.wikimedia.org/T215177) (owner: 10Kosta Harlan) [16:48:36] !log reloading zuul to deploy https://gerrit.wikimedia.org/r/#/c/integration/config/+/487880/ [16:48:37] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [16:49:24] ^ kostajh wmf-sonar-scanner is an experimental job for core, skins, and extensions, FYI [16:49:35] (or should be) [16:50:05] thcipriani: cool, will test [16:50:50] o/ hello! it seems that all of our browser tests are failing quite quickly this morning. is there a problem with CI? [16:50:58] here's an example: https://integration.wikimedia.org/ci/view/Reading-Web/job/selenium-MinervaNeue/842/console [16:51:40] this one actually reports a 500 server error: https://integration.wikimedia.org/ci/view/Reading-Web/job/selenium-MobileFrontend/1148/BROWSER=chrome,MEDIAWIKI_ENVIRONMENT=beta,PLATFORM=Linux,label=BrowserTests/console [16:52:07] i guess it's just that the beta cluster is down? [16:53:14] niedzielski: beta cluster is having problems due to https://lists.wikimedia.org/pipermail/cloud/2019-February/000538.html [16:53:57] apergos: yup! [16:54:09] thanks thcipriani ! [16:55:53] 10Gerrit, 10GitHub-Mirrors, 10Wikidata: Set up the Github mirror for Gerrit repository wikibase/termbox - https://phabricator.wikimedia.org/T216050 (10WMDE-leszek) [16:59:52] (sorry for futzing with it) [16:59:58] > rsync: readlink_stat("/castor-mw-ext-and-skins/master/mwgate-npm-node-6-docker/@babel/helper-call-delegate/7.0.3" (in caches)) failed: Structure needs cleaning (117) [17:00:04] wonder what that could mean [17:00:26] something rotten in castor no doubt. 
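For reference: "Structure needs cleaning" is errno 117 (EUCLEAN), i.e. ext4 metadata corruption on the volume backing the castor cache. A rough check/repair sketch, assuming a hypothetical data device /dev/vdb mounted at /srv that can be unmounted first; in this incident the instance was rebuilt instead of repaired:

    umount /srv
    e2fsck -f /dev/vdb   # walk the filesystem and fix the corrupt inodes/extents it reports
    mount /srv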
[17:05:12] maintenance-disconnect-full-disks build 46415 integration-slave-docker-1033: OFFLINE due to disk space [17:08:33] (03PS1) 10Thcipriani: Castor seems pretty borked at the moment [integration/config] - 10https://gerrit.wikimedia.org/r/490368 [17:09:21] (03CR) 10jerkins-bot: [V: 04-1] Castor seems pretty borked at the moment [integration/config] - 10https://gerrit.wikimedia.org/r/490368 (owner: 10Thcipriani) [17:16:50] !log disconnected castor03 from jenkins [17:16:51] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [17:20:05] Project beta-update-databases-eqiad build #31936: 04STILL FAILING in 4.4 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/31936/ [17:21:14] !log stopped rsync on castor03 [17:21:15] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [17:21:44] !log stopping rsync server on castor03 [17:21:45] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [17:29:08] 10Continuous-Integration-Infrastructure: Jenkins failing everything due to npm being screwed up - https://phabricator.wikimedia.org/T216053 (10Anomie) [17:30:14] maintenance-disconnect-full-disks build 46420 integration-slave-docker-1033: OFFLINE due to disk space [17:33:04] !log rebuilding integration-castor03 [17:33:05] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [17:47:14] 10Continuous-Integration-Infrastructure: Jenkins failing everything due to npm being screwed up - https://phabricator.wikimedia.org/T216053 (10Lucas_Werkmeister_WMDE) There are also similar errors in the unrelated change https://gerrit.wikimedia.org/r/c/mediawiki/extensions/Wikibase/+/490358. That said, the core... [17:50:18] !log bringing integration-slave-docker-1033 back online after clearing out old docker images [17:50:19] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [17:53:39] gerrit's having a hackathon in may https://groups.google.com/forum/#!topic/repo-discuss/LVtCQo086ps [17:57:38] thcipriani: integration-slave-docker-1038 is having issues with the /srv mount and jenkins can't start a slave process. if i destroy it, that should give us enough quota to spin up another [17:58:05] * marxarelli does [17:58:06] marxarelli: andrewbog.ott got us some head room in the integration project, so that should be fine now [17:58:19] oh good! [17:59:41] !log deleting integration-slave-docker-1038 node and deleting instance [17:59:42] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [18:02:14] !log launching new integration-slave-docker-1048 instance [18:02:14] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [18:10:04] 10Continuous-Integration-Infrastructure: Jenkins failing everything due to npm being screwed up - https://phabricator.wikimedia.org/T216053 (10Lucas_Werkmeister_WMDE) [18:15:12] !log adding new jenkins node integration-slave-docker-1048 [18:15:13] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [18:16:09] 10Continuous-Integration-Infrastructure: Jenkins failing everything due to npm being screwed up - https://phabricator.wikimedia.org/T216053 (10greg) Castor (our caching middlelayer for this stuff in CI) was one of the effected vms, we're removed it from service so nothing is using its cache but things should be... 
[18:17:05] 10Continuous-Integration-Infrastructure: Jenkins failing everything due to npm being screwed up - https://phabricator.wikimedia.org/T216053 (10Lucas_Werkmeister_WMDE) 05Open→03Resolved a:03Lucas_Werkmeister_WMDE The Wikibase build went through, I was just about to close this when I saw your comment :) let’... [18:20:09] Project beta-update-databases-eqiad build #31937: 04STILL FAILING in 9 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/31937/ [18:28:34] so deployment-prep [18:28:41] deployment-db03 and deployment-db04 are back up [18:28:47] (db03 being the master and db04 being the slave) [18:29:14] db04 has a corrupt enwiki.logging table [18:29:21] meaning mysql won't start there [18:29:38] this greatly upsets mediawiki which doesn't appear to like falling back to reading off master [18:30:49] https://phabricator.wikimedia.org/P8078 [18:31:10] uhhh. [18:31:16] who maintains those? [18:31:34] Krenair maybe rebuild the slave so it re pulls from the master? [18:31:38] or re sync it? [18:31:41] the same answer for everything in beta: whoever maintains it in production. [18:31:56] that's the SHOULD answer ^ reality is different of course [18:32:09] which means no action until tomorrow (and it's going to be lower priority than some things because reality). sigh [18:32:30] I mean when there are already only two people out of 4 dbas we should have... [18:32:40] !log bringing up new integration-castor03, re-enabling castor-save* jobs [18:32:41] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [18:33:24] yes [18:33:45] in the mean time I was hoping that I could find some way to recover it mysql [18:33:47] myself* [18:34:35] I so don't know enough about determining if it's corrupt (broken) from some transaction on and could be rolled back and binlogs replayed [18:34:52] or if cloning from another host (the master, since there is no other) is the realistic solution [18:35:46] tempted to just set up a new slave instance [18:36:26] why not? [18:36:40] in the end you want more than one slave there so that if one's bad the other one can just pick up the slack [18:37:13] How are we for space resources (cpu, ram, storage )? [18:37:37] I imagine we can get any extra quota necessary to recover from this within reason? [18:37:49] I'm looking at this command innobackupex-1.5.1 --stream=tar /srv/sqldata --user=root --slave-info | nc NEW-SERVER 9210 and wondering if it would do the trick [18:37:53] but honestly I have no idea [18:38:10] (source: https://wikitech.wikimedia.org/wiki/Setting_up_a_MySQL_replica which see the big fat warning on top, besides it might be old, etc etc) [18:39:13] I'd prefer not to do a whole new slave as it's 45G [18:39:26] or maybe 40G. still [18:41:03] in general unless folks know there is some specific transction after which things were broken, the procedure seems to always be 'clone from another host' [18:42:28] looks like we have the quota for it [18:45:11] I'm not confident I can restore db04 without making matters worse, and since we have a seemingly-working master to copy from... 
think I'll try that [18:45:21] !log integration-slave-jessie-1003 seems to be consitently unable to start jobs, marking as offline manually [18:45:22] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [18:48:16] wait [18:49:01] sounds like the dodgy hypervisor is about to be evacuated, I might have to wait to do that copying [18:49:38] will make a new instance regardless [18:49:42] ok, good luck [18:49:45] * apergos salutes [18:49:58] (going to be mostly away and hope food revives the brain) [18:55:44] created deployment-db05, it's assigned to cloudvirt1028 which doesn't host any other deployment-prep VMs yet, I've requested that db0{3,4} be sent elsewhere [18:55:55] 10Continuous-Integration-Infrastructure, 10Operations: jenkins / zuul backing up due to jenkins slaves down - https://phabricator.wikimedia.org/T216039 (10thcipriani) p:05High→03Normal Built a new integration-castor and undid my dirty hacks to the `castor-save-*` jobs. Lowering priority but leaving open: w... [18:59:02] 10Release-Engineering-Team (Kanban), 10Release Pipeline: Pipeline image build cleanup - https://phabricator.wikimedia.org/T177867 (10dduvall) a:03dduvall [19:00:07] there should be a runbook for 'cloning a slave in beta from master or another slave' ('should' = 'it would be nice if', not 'there likely is') [19:00:13] maintenance-disconnect-full-disks build 46438 integration-slave-jessie-1001 (/: 96%): OFFLINE due to disk space [19:00:48] I'll check the backlog later and see what's happening, maybe try to scrounge up some info from the dbas in their so-called free time... [19:02:51] ok [19:03:00] I'm getting deployment-db05 ready anyway [19:07:31] thcipriani: are you a good person to ask about cattle vs pets questions for integration-slave-* hosts? We are trying to figure out which instances to evacuate to new physical hosts vs which to just delete and let folks rebuild from scratch [19:08:18] bd808: sure, in general the integration-slave-docker-* hosts are easy to spin back up. The other ones I'm less sure about, but can look. [19:08:29] integration-slave-docker-10{17,33}, integration-slave-jessie-1003, and integration-slave-jessie-android are all on a physcial host we need to abandon [19:08:55] * thcipriani checks config for thost [19:08:57] *those [19:10:12] maintenance-disconnect-full-disks build 46440 integration-slave-jessie-1001: OFFLINE due to disk space [19:12:04] bd808: those all look like they can be recreated fairly easily, I think (judging from horizon setups anyway). Can I just delete and readd or are you going to obliterate them from the back end somehow? (if so I need to make some notes before that happens) [19:12:29] thcipriani: let me check with the smart people ;) [19:14:46] marxarelli: do you want to get the docker machines (1017 and 1033) rebuilt and I can try to get the other two done? [19:15:02] i can do them all if you want [19:15:09] you're on train :) [19:15:27] play the train card! [19:16:19] * thcipriani plays train card [19:16:21] marxarelli: I can't rebuild those two instances, I've got train to do! [19:16:29] hio, I just created a new instance in deployment-prep [19:16:34] deployment-eventgate-analytics [19:16:38] marxarelli: thank you [19:16:39] running puppet, I get [19:16:45] thcipriani: np! 
:) [19:16:48] certificate verify failed: [self signed certificate in certificate chain for /CN=Puppet CA: deployment-puppetmaster03.deployment-prep.eqiad.wmflabs] [19:18:20] ottomata, yeah there's some magic incantations you need to run at first boot in deployment-prep [19:18:24] to get it hooked up to puppet [19:18:43] deployment-db05: had to set "mariadb::config::basedir: /opt/wmf-mariadb101" in hiera because it's stretch instead of jessie, moving on to next problem [19:19:47] hm? that seems unrelated, no? hm. [19:19:55] this won't even eval the catalog because of cert problems [19:20:06] Project beta-update-databases-eqiad build #31938: 04STILL FAILING in 5.4 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/31938/ [19:20:15] ottomata, it is, I needed to send it to this channel regardless [19:20:31] I was going to post it here but when I got here I saw you had left messages too [19:20:42] ottomata, your instance should now have working puppet certs [19:20:42] hm ok [19:20:57] ah ok it does [19:20:59] weird, ok thanks [19:21:00] it's got something about a missing class instead but you can figure that bit out [19:21:17] for the record I just ran some version of the process block at https://phabricator.wikimedia.org/T195686 [19:22:08] I should really write that somewhere more long-term than a random migration ticket [19:22:17] pretty sure the same type of thing was necessary under the old puppetmasters [19:22:36] or maybe I did document it, forgot, and keep referring back to the old ticket [19:22:45] huh is puppet repo out of date on puppetmaster03?...? it's missing this class file [19:22:49] possibly [19:23:02] that file was committed dec 10 though [19:23:02] oh god [19:23:09] root@deployment-puppetmaster03:/var/lib/git/operations/puppet(production u+25-144)# [19:23:12] that mess can wait till later [19:23:21] 144 commits behind [19:23:56] yeah [19:23:56] error: could not apply 4e5d85ec1b... prometheus: upgrade to node-exporter 0.17 [19:23:58] ugh is sync broken again? [19:23:59] sigh [19:24:37] i'm going to try to fix sync............ [19:25:01] !log deleting integration-slave-docker-1017 jenkins node and instance [19:25:02] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [19:27:07] hm ok the offending change has been merged, removing it from rebase list [19:28:02] okay more deployment-db05 for the record [19:28:18] [ERROR] Could not open mysql.plugin table. 
Some plugins may be not loaded [19:28:27] [ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist [19:28:33] following https://stackoverflow.com/questions/34198735/could-not-open-mysql-plugin-table-some-plugins-may-be-not-loaded [19:28:37] moved /srv/sqldata out the way [19:28:56] /opt/wmf-mariadb101/scripts/mysql_install_db --user=mysql --basedir=/opt/wmf-mariadb101 --datadir=/srv/sqldata [19:29:32] ran puppet again [19:29:37] Notice: /Stage[main]/Mariadb::Config/File[/srv/sqldata]/group: group changed 'root' to 'mysql' [19:29:38] Notice: /Stage[main]/Mariadb::Config/File[/srv/sqldata]/mode: mode changed '0700' to '0755' [19:29:38] Notice: /Stage[main]/Mariadb::Service/Service[mariadb]/ensure: ensure changed 'stopped' to 'running' [19:29:50] mariadb is running [19:31:19] (03Abandoned) 10Thcipriani: Castor seems pretty borked at the moment [integration/config] - 10https://gerrit.wikimedia.org/r/490368 (owner: 10Thcipriani) [19:32:02] !log deleting integration-slave-docker-1033 jenkins node and instance [19:32:02] think that's probably all I can do until deployment-db03 is back up on a trust-worthy host [19:32:03] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [19:33:22] !log deleting integration-slave-jessie-1003 jenkins node and instance [19:33:22] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [19:34:55] !log deleting integration-slave-jessie-android jenkins node and instance [19:34:56] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [19:35:07] Project beta-scap-eqiad build #237778: 04FAILURE in 15 min: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/237778/ [19:35:15] maintenance-disconnect-full-disks build 46445 integration-slave-jessie-1001: OFFLINE due to disk space [19:41:08] Krenair: i think i fixed the rebases [19:41:18] mostly they were caused by merged patches not being removed from rebase list [19:41:21] but now sync is giving [19:41:22] error: could not apply c7262be... LOCAL HACK: Add puppetdb passwords [19:41:22] cool [19:41:26] and ihave no idea where that commit is [19:41:26] hm [19:41:29] * Krenair looks [19:41:36] sounds like one of mine [19:42:19] ottomata, this is operations/puppet ? [19:42:33] or labs/private? [19:42:52] ops/puppet [19:43:52] root@deployment-puppetmaster03:/var/lib/git/operations/puppet(production u+19)# git show c7262be [19:43:52] fatal: ambiguous argument 'c7262be': unknown revision or path not in the working tree. [19:44:47] yeah [19:44:53] dunno where puppet sync is getting that [19:45:06] that's why its failing [19:45:08] its trying to apply that [19:45:08] oh [19:45:12] maybe it is in private... 
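Consolidated, the deployment-db05 bring-up narrated above amounts to roughly the following sketch. The paths and the wmf-mariadb101 basedir come from the messages themselves; the seeding step is the wikitech recipe quoted earlier (its receiving half is an assumption) and had not been run at this point; host names are the ones from this log:

    # re-create an empty datadir after moving the broken one aside
    mv /srv/sqldata /srv/sqldata.old   # "moved /srv/sqldata out the way"; destination name arbitrary
    /opt/wmf-mariadb101/scripts/mysql_install_db --user=mysql \
        --basedir=/opt/wmf-mariadb101 --datadir=/srv/sqldata
    puppet agent -tv   # re-applies ownership/mode on /srv/sqldata and starts mariadb

    # later, seed the replica from the master before pointing replication at it:
    # on deployment-db05:  nc -l -p 9210 | tar -xi -C /srv/sqldata
    # on deployment-db03:  innobackupex-1.5.1 --stream=tar /srv/sqldata --user=root --slave-info | nc deployment-db05 9210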
[19:45:16] i'm just running sync upstream [19:45:19] dunno how that works with private [19:45:25] oh that probably does both [19:45:29] ok [19:45:43] ah yes, puppet is synced now [19:45:56] conflict is trivial too, fixing [19:46:04] fixed [19:46:11] root@deployment-puppetmaster03:/var/lib/git/labs/private(master u+25)# [19:46:20] root@deployment-puppetmaster03:/var/lib/git/operations/puppet(production u+19)# [19:48:37] !log pooling replacement jenkins node integration-slave-docker-1049 [19:48:38] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [19:57:32] !log seeing jenkins agent connection failures for integration-slave-docker-{1044,1046,1047} [19:57:33] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [20:00:13] maintenance-disconnect-full-disks build 46450 integration-slave-jessie-1001: OFFLINE due to disk space [20:12:10] 10Beta-Cluster-Infrastructure: Recover from corrupted beta MySQL slave (deployment-db04) - https://phabricator.wikimedia.org/T216067 (10Krenair) [20:13:45] 10Beta-Cluster-Infrastructure: Recover from corrupted beta MySQL slave (deployment-db04) - https://phabricator.wikimedia.org/T216067 (10Krenair) Along with the usual deployment-prep setup steps around puppet certs, I've applied the usual MySQL role and set `mariadb::config::basedir: /opt/wmf-mariadb101` in hiera... [20:13:57] I've turned the work around the beta MySQL slave into a ticket: https://phabricator.wikimedia.org/T216067 [20:14:59] thank you for adding me as a subscriber, I went to do so and it was already done :-) [20:15:57] !log integration-slave-docker-{1044,1046,1047} unresponsiveness due to cloudvirt failure. 1046 is being moved already by CS. deleting 1044 and 1047 [20:16:02] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [20:19:20] 10Beta-Cluster-Infrastructure: Recover from corrupted beta MySQL slave (deployment-db04) - https://phabricator.wikimedia.org/T216067 (10Krenair) [20:20:13] marxarelli: jenkins validates host-keys now, might be what you're seeing as connection failures. You have to accept them to validate them in the computer interface now. [20:20:32] oh wow https://docs.google.com/document/d/e/2PACX-1vRW41XdoY-HlKdgSzAZ4Dwm-OeAyi1zEkDvwIfw3FsZ5H4yhtPNaZPs-50lgG_BHsWLKixxlrHpske1/pub [20:20:44] thcipriani: their cloudvirt is having trouble :( [20:20:45] "First-class CI integration" is coming to gerrit [20:21:03] marxarelli: oh, bummer :( [20:22:00] thcipriani: andrewbo.gott is moving integration-slave-docker-1046. i deleted the other two and am about to spin up replacements [20:22:52] regarding T215511, is jenkins posting dupe comments expected behaviour? [20:22:53] T215511: Jenkins is posting duplicate comments when a depended on change merge fails - https://phabricator.wikimedia.org/T215511 [20:25:09] maintenance-disconnect-full-disks build 46455 integration-slave-jessie-1001: OFFLINE due to disk space [20:27:04] hmmmm [20:27:22] Evaluation Error: Unknown function: 'directory_ensure'. at /etc/puppet/modules/service/manifests/docker.pp:43:19 [20:27:23] ??? [20:27:38] oh maybe that is actually wrong... [20:27:58] it is supposed to be ensure_directory() [20:28:05] nevermind sorry this is an actual typo! 
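For the record, the sync fix ottomata describes above, dropping already-merged local patches during the rebase, looks roughly like this on the puppetmaster (a sketch only; the actual sync script and its rebase list aren't shown in the log):

    cd /var/lib/git/operations/puppet
    git fetch origin
    git rebase origin/production   # stops on a local cherry-pick that has since been merged upstream
    git rebase --skip              # drop the now-redundant commit and reapply the remaining local patches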
[20:29:43] 10Release-Engineering-Team (Kanban), 10Release, 10Train Deployments: 1.33.0-wmf.2 deployment blockers - https://phabricator.wikimedia.org/T206656 (10thcipriani) [20:29:47] 10Release-Engineering-Team, 10Operations, 10Wikimedia-production-error: HHVM CPU usage when deploying MediaWiki - https://phabricator.wikimedia.org/T208549 (10thcipriani) [20:31:07] 10Release-Engineering-Team, 10Operations, 10Wikimedia-production-error: HHVM CPU usage when deploying MediaWiki - https://phabricator.wikimedia.org/T208549 (10thcipriani) 05Resolved→03Open p:05Unbreak!→03High I see this still happening. Last night it happened when deploying l10n for an extension. Ha... [20:32:31] !log pooling jenkins node for integration-slave-docker-1050 [20:32:32] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [20:35:05] !log launching replacement instance integration-slave-docker-1051 [20:35:06] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [20:36:34] 10Release-Engineering-Team, 10Analytics, 10EventBus, 10MediaWiki-Core-Testing, and 4 others: Flaky quibble-vendor-mysql-hhvm-docker test in Jenkins - https://phabricator.wikimedia.org/T216069 (10mobrovac) p:05Triage→03High [20:40:30] ah just seen greg-g's email, so this ^ is probably just a consequence [20:41:49] mobrovac: total carnage today :) [20:42:00] :/ [20:42:10] perfect time to take a day off then, isn't it? :D [20:42:20] if only it were not as rainy ... [20:42:21] :/ [20:42:34] a great day to work on some thorny problem on your laptop :-P [20:43:19] or fire up Smash TV on the snes emulator [20:45:57] !log launching replacement instance integration-slave-docker-1052 [20:45:58] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [20:46:23] !log pooling jenkins node for integration-slave-docker-1051 [20:46:24] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [20:48:15] Krenair: had to make a new host instance [20:48:18] in https://phabricator.wikimedia.org/T195686 [20:48:25] what do you do at this step? [20:48:31] nano /usr/local/share/ca-certificates/Puppet_Internal_CA.crt [20:49:15] log onto another host (the puppetmaster will do if you have a connection open there) and cat the file on that host [20:49:18] copy+paste [20:49:22] ah k [20:49:33] just ensure it comes from another deployment-prep host [20:49:51] that isn't new/broken enough to also be lacking the right one [20:50:13] maintenance-disconnect-full-disks build 46460 integration-slave-jessie-1001: OFFLINE due to disk space [20:51:27] got it! thanks. [21:01:31] !log pooling new jenkins node for integration-slave-docker-1052 [21:01:33] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [21:10:52] !log starting migrated integration-slave-docker-1046 instance [21:10:54] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [21:14:33] is beta cluster still down (checking wrt greg's mail from this morn). 
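The cert-copy step discussed above can also be done without the nano/copy+paste dance. A sketch, assuming a working shell on both instances; the commands are standard Debian/Puppet tooling rather than anything quoted from the log:

    # on the new instance: pull the CA cert from an existing deployment-prep host
    ssh deployment-puppetmaster03.deployment-prep.eqiad.wmflabs \
        cat /usr/local/share/ca-certificates/Puppet_Internal_CA.crt \
        | sudo tee /usr/local/share/ca-certificates/Puppet_Internal_CA.crt >/dev/null
    sudo update-ca-certificates   # regenerate /etc/ssl/certs so the deployment-prep CA is trusted
    sudo puppet agent -tv         # re-run against deployment-puppetmaster03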
[21:15:12] maintenance-disconnect-full-disks build 46465 integration-slave-jessie-1001: OFFLINE due to disk space [21:15:27] !log removing old docker images on integration-slave-docker-1046 [21:15:28] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [21:20:05] !log dduvall@integration-slave-jessie-1001:/srv/jenkins-workspace/workspace$ `sudo rm -rf *` due to full disk [21:20:06] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [21:21:00] !log bringing integration-slave-docker-1046 and integration-slave-jessie-1001 back online [21:21:01] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [21:32:53] !log dduvall@integration-slave-jessie-1001:/mnt/home/jenkins-deploy$ `rm -rf .gradle/ .m2/` due to full disk [21:32:54] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [21:40:41] "integration-slave-jessie-1001 Disk space is too low. Only 0.893GB left on /tmp" [21:40:43] jenkins you lie [21:42:57] *jerkings :P [22:09:41] mdholloway: fyi we had to delete the integration-slave-jessie-android instance today due to cloudvirt disk failures. i'm rebuilding now but i'm unaware of anything special setup for it in ops/puppet or integration/config [22:09:53] pinging you since i saw you in the SAL btw :) [22:10:31] marxarelli: thanks for the ping [22:10:41] i'm not sure that one is actually used anymore [22:11:07] except for a screenshot test suite that no one seems to be keeping up... [22:11:12] let me do a little checking around. [22:11:49] mdholloway: hmm. k. grepping integration/config i see https://gerrit.wikimedia.org/r/plugins/gitiles/integration/config/+/master/jjb/mobile.yaml#59 [22:11:58] (one job pinned to it) [22:12:15] thanks for looking! [22:12:51] i mean if we can get rid of it and that job, that's a great resolution too! :) [22:13:06] https://integration.wikimedia.org/ci/job/apps-android-wikipedia-periodic-test/ [22:13:15] lol, last successful build 1 yr 5 mos ago [22:13:23] as i suspected [22:13:37] lemme ping the android folks and see if they have any objection to just clearing that stuff out [22:13:39] haha, cool [22:13:47] awesome. thanks! [22:15:03] mdholloway: and if anyone wants to resurrect the job, we can always assist in docker-izing it [22:16:08] marxarelli: oh, cool! i'm over in slack asking them now. i suspect they actually don't want to resurrect it, but will report back. [22:16:22] what's 'slack'? :P [22:16:29] * marxarelli trolls [22:17:11] * marxarelli searches ietf for the ' [22:17:39] aw, my chubby finger ruined my slack/rfc joke [22:17:54] lol [22:18:21] marxarelli: Was it going to be a joke about finding the original lack:// protocol and the secure variant, slack://? ;-) [22:18:45] James_F: no. your joke is superior! :) [22:19:14] * James_F grins. I give you the credit for inspiring it with an IETF reference. [22:19:23] ha! [22:20:26] just a delirium kind of day :D [22:21:33] marxarelli: i define slack as the thing that when i closed the tab after leaving my last job got me back like 15% of CPU and a couple gigs of RAM. :) [22:21:54] * brennen pokes stick in cage. [22:23:16] brennen: your cpu is now slacking! 
(wah wah wah waaaaaah) [22:23:41] hahaha [22:25:23] i resisted joining for like a year after i learned it existed, then caved because i like talking to people in my department [22:25:44] 10Project-Admins: Create Phabricator project for Status tool - https://phabricator.wikimedia.org/T216082 (10abian) [22:26:03] marxarelli: anyway, i think there's no need to rebuild that instance, and we can just clear out the job config that uses it [22:26:24] mdholloway: wahoo! thanks for checking into that [22:26:40] i was the only one maintaining that testing job even after i moved over to reading infrastructure, and, well, it steadily slipped down my priorities list [22:30:25] hey no worries [22:30:27] i don't think there's anything else it references that can be cleared out, but we could remove the android emulator plugin in jenkins, i believe [22:31:15] i don't think the other CI jobs rely on it [22:32:15] oh even better! [22:34:18] mdholloway: would you mind making those jjb changes? [22:34:24] marxarelli: doing now! [22:34:27] brennen lol [22:34:34] great! [22:34:37] thank you! [22:34:44] no prob [22:37:01] marxarelli: actually, maybe wait a sec on disabling the android emulator plugin, it looks like there's something referencing it in apps-android-wikipedia-test [22:37:04] not sure if it's still valid [22:37:21] ah ok [22:44:18] hmm, actually it looks like that's relying on the android lint plugin, which i guess (?) is distinct from the android emulator plugin [23:02:50] (03PS1) 10Mholloway: Remove apps-android-wikipedia-periodic-test job and related bits [integration/config] - 10https://gerrit.wikimedia.org/r/490492 (https://phabricator.wikimedia.org/T198495) [23:07:06] 10Continuous-Integration-Infrastructure, 10Release-Engineering-Team (Kanban), 10Discovery-Search, 10Elasticsearch: Set up data storage to collect loosely structured data from CI - https://phabricator.wikimedia.org/T211904 (10Jrbranaa) [23:07:59] 10Continuous-Integration-Infrastructure, 10Release-Engineering-Team: Store CI builds output outside Jenkins (e.g. static storage) - https://phabricator.wikimedia.org/T53447 (10Jrbranaa) [23:46:16] 10Release-Engineering-Team (Kanban), 10Code-Stewardship-Reviews, 10Graphoid, 10Operations, and 2 others: graphoid: Code stewardship request - https://phabricator.wikimedia.org/T211881 (10Jrbranaa) > In any case, and with the risk of repeating myself, the service is under a code stewardship request cause it...