[00:01:40] RoanKattouw with that change, do you see it affecting any other things? [00:01:48] No it looks fine otherwise [00:01:55] It's fixed at the cost of ugliness, yes [00:01:55] Ok thanks [00:02:01] yep [00:02:01] Thanks [00:02:05] you're welcome :) [00:14:25] The 'new' SWAT hours will be 06:00/11:00/16:00 SF, right? [00:14:42] 06Release-Engineering-Team, 10Gerrit, 10Wikimedia-Logstash, 07Technical-Debt: Look into shoving gerrit logs into logstash - https://phabricator.wikimedia.org/T141324#2494353 (10demon) [00:14:51] bd808: Should be fun ^ :) [00:15:12] ostriches i fixed the problem [00:16:20] ostriches: we get logs from cassandra [00:16:37] I think their config should have log4j magic in it [00:17:17] ostriches https://gerrit.wikimedia.org/r/#/c/301027/2 [00:17:20] paladox: I'll review and get it merged tomorrow. [00:20:14] bd808: Hadoop or cassandra? [00:20:18] I see something in the former. [00:20:26] ok thanks [00:21:15] ostriches: I'm not sure that hadoop is still wired up. cassandra has https://github.com/wikimedia/operations-puppet/blob/production/modules/cassandra/manifests/logging.pp and https://github.com/wikimedia/operations-puppet/blob/production/modules/cassandra/templates/logback.xml-2.2.erb [00:21:33] looks like it uses logback rather than plain log4j [00:21:49] something similar should be possible [00:21:51] Ah I was looking at puppet/modules/cdh/templates/hadoop/log4j.properties.erb [00:22:10] yah, maybe hadoop is still wired up too [00:23:12] Either way I'll probably have to bring in net.logstash.logback.* lib or log4j.appender.gelf.* to get it to work [00:23:17] Anyway, I got enough to noodle now, thx [00:23:56] twentyafterfour: I see that the patch to drop 1.25 tests was merged, but the job is still running--maybe there's some cache purge or restarting left to do?
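Wiring a logback-based service into logstash, as discussed at 00:23:12, typically means adding an appender from the net.logstash.logback library. An illustrative fragment only; the destination host/port are placeholders, not WMF's actual logstash endpoint, and the real cassandra template linked above may differ:

```xml
<!-- Illustrative logback config using net.logstash.logback.
     Host/port below are placeholders, not real endpoints. -->
<configuration>
  <appender name="logstash"
            class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>logstash.example.org:11514</destination>
    <!-- Emits JSON events that logstash can ingest directly -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="logstash"/>
  </root>
</configuration>
```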
[00:25:18] awight: Going afk, but zuul needs reload to pick up config changes :) [00:25:26] (although it might not affect an in-process job) [00:25:40] (might have to hard stop/start) [00:25:45] (anyway, afk for realz now) [00:26:22] ostriches mutante merged it [00:26:42] RoanKattouw mutante merged https://gerrit.wikimedia.org/r/301027 :) [00:30:35] Cool [00:34:43] (03Draft2) 10Paladox: Testing [integration/zuul] - 10https://gerrit.wikimedia.org/r/301033 [00:34:47] (03Abandoned) 10Paladox: Testing [integration/zuul] - 10https://gerrit.wikimedia.org/r/301033 (owner: 10Paladox) [00:34:57] It has been deployed now [00:35:05] RoanKattouw ^^ [00:38:01] RoanKattouw: i noticed the line wrap issue too. and merged the fix from paladox. looks better to me [00:38:10] :) [00:39:20] 06Release-Engineering-Team, 10Gerrit, 10Wikimedia-Logstash, 07Technical-Debt: Look into shoving gerrit logs into logstash - https://phabricator.wikimedia.org/T141324#2494353 (10bd808) See also {T64505}. Note I'm not sure if that is really useful or not. I think it was from the early Logstash days when I wa... [01:08:45] !log deployed ores a291da1 in sca03, ores-beta.wmflabs.org works as expected [01:08:48] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [01:46:08] ostriches: FYI https://phabricator.wikimedia.org/T141329 is probably caused by the new Gerrit version [01:55:33] PROBLEM - Puppet run on deployment-mediawiki02 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [02:26:53] RoanKattouw: Well yes, you couldn't create patch sets from the web before ;-) [02:27:04] Yes exactly :) [02:27:45] Ah ok, so it kept the original author as author and made you committer. [02:28:03] Actually no you could, the old commit message editor is gone so you have to use the new editor for that too now [02:28:05] I can see that happening. 
Similar would happen if I cherry-picked you, amended, and re-pushed (unless I reset the author) [02:28:14] And rebase patchsets still work [02:28:22] ostriches: No, I don't think that's true [02:28:36] Pretty sure it is :) [02:28:41] It uses the user submitting the patch to Gerrit (the committer typically), not the author [02:28:59] Then why do we have "Forge Author" granted to everyone? [02:29:09] because that is totally how it happens 90% of the time. [02:29:11] Because you need to be able to amend others' patches [02:29:17] --amend does not change the author, only the committer [02:29:28] That's my point. [02:29:34] It's treating the change as an --amend [02:29:39] Rather than a brand new patch [02:29:50] If I rebase your patch from the web, or edit its commit message in the pre-upgrade Gerrit, or download your patch and amend it and push it back up, all of those have you as the author and me as the committer, and all of those had grrrit-wm report (PS2) Catrope: blah blah (owner: Demon) [02:30:40] What's a bit odd to me is that this behavior didn't change for rebases but it did for commit msg changes. I'm guessing the way Gerrit outputs this in its event stream must have changed [02:33:46] That wouldn't surprise me re: event stream. [02:33:55] Commit message change logic is probably 100% different now [02:34:01] Since it's integrated with the inline editor [02:35:35] RECOVERY - Puppet run on deployment-mediawiki02 is OK: OK: Less than 1.00% above the threshold [0.0] [02:40:51] RoanKattouw: True story, upstream is experimenting with *a new* UI. https://gerrit-review.googlesource.com/c/79298/?polygerrit=0 [02:40:59] (whoops, swap for =1 to enable) [02:41:41] Oh looks like upstream fixed the silliness where the unified diff screen is still the old code [02:42:27] (We can't pull that in yet.
Not feature complete...tons of things just Aren't There Yet, also only in master) [02:42:29] Wow that's actually better IMO [02:42:33] Much clearer [02:42:36] :) [02:42:51] And less of a difference between the old UI and this one in terms of moving ALL OF THE CHEESE [02:44:48] As soon as it's more feature-complete we can bring it in :) [03:28:54] reading https://wikitech.wikimedia.org/wiki/Icinga , it's a bit disappointing [03:29:19] some parts of that page were written by me in 2006 [03:31:56] I think half of it must have been written by me, altogether [03:35:50] I'm guessing spence was a pmtpa host [03:37:46] wtf is the section about the USB dongle [03:44:39] for paging [03:45:06] it was so unreliable that eventually mark made his own [03:45:11] out of an arduino or something [03:47:35] 2015-08-25 [03:47:35] 09:40 YuviPanda: restarted ircecho on shinken-01 [03:47:35] ==='"`UNIQ--h-4--QIN... (more) [03:48:12] says https://wikitech.wikimedia.org/wiki/Nova_Resource:Shinken [03:57:42] it's {{#invoke:String|clip|s={{Nova_Resource:{{{Project Name}}}/SAL}}|max=600|trail=... ([[Nova_Resource:{{{Project Name}}}/SAL|more]])}} (from Template:Nova_Project) [04:02:09] which essentially does mw.ustring.sub( s, 1, max ) .. trail [04:18:31] Yippee, build fixed! 
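The clip logic quoted at 03:57:42 / 04:02:09 (`mw.ustring.sub( s, 1, max ) .. trail`) can be roughly approximated in shell. A sketch only: `cut -c` counts characters, without the Unicode handling mw.ustring provides:

```shell
# Rough shell approximation of Template:Nova_Project's clip: keep the
# first MAX characters of a string and append a trailer.
clip() {
    s=$1 max=$2 trail=$3
    printf '%s%s\n' "$(printf '%s' "$s" | cut -c1-"$max")" "$trail"
}

clip 'some long server admin log entry' 10 '...'
# prints: some long ...
```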
[04:18:32] Project selenium-MultimediaViewer » firefox,beta,Linux,contintLabsSlave && UbuntuTrusty build #85: 09FIXED in 22 min: https://integration.wikimedia.org/ci/job/selenium-MultimediaViewer/BROWSER=firefox,MEDIAWIKI_ENVIRONMENT=beta,PLATFORM=Linux,label=contintLabsSlave%20&&%20UbuntuTrusty/85/ [04:26:32] PROBLEM - Puppet run on deployment-mediawiki02 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [04:35:21] PROBLEM - Puppet run on deployment-ms-be02 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [04:38:07] PROBLEM - Puppet run on integration-slave-trusty-1018 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [04:42:31] PROBLEM - Puppet run on integration-slave-trusty-1003 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [05:01:33] RECOVERY - Puppet run on deployment-mediawiki02 is OK: OK: Less than 1.00% above the threshold [0.0] [05:13:08] RECOVERY - Puppet run on integration-slave-trusty-1018 is OK: OK: Less than 1.00% above the threshold [0.0] [05:15:20] RECOVERY - Puppet run on deployment-ms-be02 is OK: OK: Less than 1.00% above the threshold [0.0] [05:22:33] RECOVERY - Puppet run on integration-slave-trusty-1003 is OK: OK: Less than 1.00% above the threshold [0.0] [05:36:22] PROBLEM - Puppet run on deployment-ms-be02 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [06:13:45] 06Release-Engineering-Team, 10Gerrit, 06Operations, 13Patch-For-Review: replace gerrit server (ytterbium) with jessie server (lead) - https://phabricator.wikimedia.org/T125018#2494697 (10Dzahn) decom: https://gerrit.wikimedia.org/r/#/c/300806/ https://gerrit.wikimedia.org/r/#/c/300812/ 01:36 ostriches:... 
[06:16:22] RECOVERY - Puppet run on deployment-ms-be02 is OK: OK: Less than 1.00% above the threshold [0.0] [08:07:43] PROBLEM - Puppet run on deployment-ms-fe01 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [08:20:11] (03CR) 10Hashar: [C: 04-1] "This change is no longer needed. I talked with Paladox about it on Monday and it has a few issues. The "proper" fix has been to inject the " [integration/config] - 10https://gerrit.wikimedia.org/r/300790 (owner: 10Paladox) [08:22:08] (03CR) 10Hashar: "I have hacked the job over the weekend and it seems to work under Jessie + Java 8. I have extended details on T139137 and a .plan to pol" [integration/config] - 10https://gerrit.wikimedia.org/r/299674 (https://phabricator.wikimedia.org/T139137) (owner: 10Mholloway) [08:28:21] 10Continuous-Integration-Config, 06Wikipedia-Android-App-Backlog, 13Patch-For-Review: [Dev] Fix periodic tests - https://phabricator.wikimedia.org/T139137#2494952 (10hashar) The hacky job I have created does poll the git repository and it is now passing just fine! https://integration.wikimedia.org/ci/job/app...
[08:37:30] PROBLEM - Puppet run on deployment-tmh01 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [08:42:10] !log T141269 On integration-slave-trusty-1018 , deleting workspace that has a corrupt git: rm -fR /mnt/jenkins-workspace/workspace/mediawiki-extensions-hhvm* [08:42:11] T141269: zuul-cloner fails mediawiki-extensions-hhvm job with "error: object file .git/objects/30 is empty" - https://phabricator.wikimedia.org/T141269 [08:42:13] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [08:42:45] RECOVERY - Puppet run on deployment-ms-fe01 is OK: OK: Less than 1.00% above the threshold [0.0] [08:43:00] 10Continuous-Integration-Infrastructure: zuul-cloner fails mediawiki-extensions-hhvm job with "error: object file .git/objects/30 is empty" - https://phabricator.wikimedia.org/T141269#2494976 (10hashar) 05Open>03Resolved a:03hashar Both ran on slave integration-slave-trusty-1018 that had a corrupt git repo... [08:44:04] 10Continuous-Integration-Infrastructure, 10Zuul, 07Upstream: Circular dependencies break Zuul - https://phabricator.wikimedia.org/T129938#2494980 (10hashar) 05Resolved>03Open If it happened again, then it is not fixed! 
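The cleanup !logged at 08:42:10 (wiping a workspace whose git objects are corrupt) can be generalised with `git fsck`. A sketch under assumptions: a temporary directory stands in for a real Jenkins workspace such as /mnt/jenkins-workspace/workspace/<job>:

```shell
# If a workspace's git clone fails `git fsck`, remove the workspace so the
# next build re-clones from scratch. The mktemp dir is a stand-in: being
# empty, it is not a valid repo, so fsck fails just like a corrupt clone.
workspace=$(mktemp -d)
if ! git -C "$workspace" fsck --no-progress >/dev/null 2>&1; then
    echo "corrupt or missing repo, removing $workspace"
    rm -rf "$workspace"
fi
```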
[09:12:30] RECOVERY - Puppet run on deployment-tmh01 is OK: OK: Less than 1.00% above the threshold [0.0] [09:42:19] PROBLEM - SSH on integration-slave-trusty-1011 is CRITICAL: Server answer [09:47:25] (03PS1) 10Hashar: (WIP) overhaul / document debian-glue jobs (WIP) [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) [09:47:42] (03PS2) 10Hashar: Overhaul / document debian-glue jobs [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) [09:48:12] (03CR) 10jenkins-bot: [V: 04-1] Overhaul / document debian-glue jobs [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) (owner: 10Hashar) [09:50:15] (03PS3) 10Hashar: Overhaul / document debian-glue jobs [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) [09:50:47] (03CR) 10jenkins-bot: [V: 04-1] Overhaul / document debian-glue jobs [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) (owner: 10Hashar) [09:54:33] (03PS4) 10Hashar: Overhaul / document debian-glue jobs [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) [09:58:30] (03CR) 10Hashar: "That is a large overhaul of the jobs building the debian packages. I have sprinted a working 'debian-glue' job over the last few days/wee" [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) (owner: 10Hashar) [10:18:11] 10Continuous-Integration-Infrastructure, 10Zuul, 07Upstream: Circular dependencies break Zuul - https://phabricator.wikimedia.org/T129938#2495180 (10Paladox) What I meant is, legoktm said it wasn't this, it was something to do with Depends-On:, not circular dependencies @Legoktm would you be able to explain t...
[10:21:27] 06Release-Engineering-Team, 10MediaWiki-History-or-Diffs, 07Beta-Cluster-reproducible, 05MW-1.28-release-notes, 05WMF-deploy-2016-07-26_(1.28.0-wmf.12): Viewing diffs throws 503@beta - https://phabricator.wikimedia.org/T141272#2495201 (10Danny_B) [10:23:16] hashar hi, and :), i see you overhauled the debian-glue test :) [10:26:29] (03PS1) 10Hashar: Add debian-glue job to all operations/debs repos [integration/config] - 10https://gerrit.wikimedia.org/r/301093 [10:41:57] (03PS20) 10Zfilipin: WIP Run language screenshots script for VisualEditor in Jenkins [integration/config] - 10https://gerrit.wikimedia.org/r/300035 (https://phabricator.wikimedia.org/T139613) [10:52:45] (03PS21) 10Zfilipin: WIP Run language screenshots script for VisualEditor in Jenkins [integration/config] - 10https://gerrit.wikimedia.org/r/300035 (https://phabricator.wikimedia.org/T139613) [10:57:28] PROBLEM - Puppet run on deployment-sca02 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [10:58:13] PROBLEM - Puppet run on deployment-sca01 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [11:01:05] PROBLEM - Puppet run on deployment-ms-be01 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [11:03:01] PROBLEM - Puppet run on deployment-sentry01 is CRITICAL: CRITICAL: 70.00% of data above the critical threshold [0.0] [11:03:11] PROBLEM - Puppet run on deployment-ircd is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [11:31:36] !log enable puppet agent on integration-puppetmaster .
Had it disabled while hacking on https://gerrit.wikimedia.org/r/#/c/300830/ [11:31:40] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [11:37:15] !log rebased integration puppetmaster git repo [11:37:18] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [11:37:29] RECOVERY - Puppet run on deployment-sca02 is OK: OK: Less than 1.00% above the threshold [0.0] [11:38:01] RECOVERY - Puppet run on deployment-sentry01 is OK: OK: Less than 1.00% above the threshold [0.0] [11:38:11] RECOVERY - Puppet run on deployment-ircd is OK: OK: Less than 1.00% above the threshold [0.0] [11:38:13] RECOVERY - Puppet run on deployment-sca01 is OK: OK: Less than 1.00% above the threshold [0.0] [11:41:05] RECOVERY - Puppet run on deployment-ms-be01 is OK: OK: Less than 1.00% above the threshold [0.0] [11:43:32] PROBLEM - Puppet run on integration-slave-trusty-1003 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [11:45:10] PROBLEM - Puppet run on integration-aptly01 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [11:45:34] PROBLEM - Puppet run on zuul-dev-jessie is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [11:45:38] RECOVERY - Puppet staleness on integration-puppetmaster is OK: OK: Less than 1.00% above the threshold [3600.0] [11:47:02] PROBLEM - Puppet run on integration-slave-jessie-1001 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [11:47:34] PROBLEM - Puppet run on integration-raita is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [11:50:20] PROBLEM - Puppet run on castor is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [11:50:52] PROBLEM - Puppet run on integration-slave-jessie-1002 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [11:52:32] PROBLEM - Puppet run on integration-slave-trusty-1001 is CRITICAL: CRITICAL: 44.44% of 
data above the critical threshold [0.0] [11:53:12] PROBLEM - Puppet run on integration-slave-trusty-1016 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [11:55:11] PROBLEM - Puppet run on integration-slave-trusty-1012 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [11:55:45] PROBLEM - Puppet run on integration-slave-jessie-android is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [11:57:40] (03PS1) 10Legoktm: Run tests for ParserMigration [integration/config] - 10https://gerrit.wikimedia.org/r/301109 [11:57:51] PROBLEM - Puppet run on integration-slave-trusty-1004 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [11:58:20] (03CR) 10Legoktm: [C: 032] Run tests for ParserMigration [integration/config] - 10https://gerrit.wikimedia.org/r/301109 (owner: 10Legoktm) [11:59:14] (03Merged) 10jenkins-bot: Run tests for ParserMigration [integration/config] - 10https://gerrit.wikimedia.org/r/301109 (owner: 10Legoktm) [11:59:18] !log deploying https://gerrit.wikimedia.org/r/301109 [11:59:22] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [11:59:53] !log also pulled in I73f01f87b06b995bdd855628006225879a17fee5 [11:59:56] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [12:00:21] PROBLEM - Puppet run on integration-slave-trusty-1006 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [12:00:33] PROBLEM - Puppet run on integration-publisher is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [12:00:54] (03CR) 10Legoktm: "@twentyafterfour: Please deploy zuul changes when merging...were the jjb jobs deleted too?" 
[integration/config] - 10https://gerrit.wikimedia.org/r/299940 (owner: 10Ejegg) [12:03:11] PROBLEM - Puppet run on integration-slave-trusty-1023 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [12:05:12] PROBLEM - Puppet run on integration-jessie-lego-test01 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [12:05:24] PROBLEM - Puppet run on integration-saltmaster is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [12:06:50] PROBLEM - Puppet run on integration-slave-trusty-1014 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [12:09:10] PROBLEM - Puppet run on integration-slave-trusty-1018 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [12:20:57] 10Continuous-Integration-Config, 06Wikipedia-Android-App-Backlog: Jenkins should build release Android APKs - https://phabricator.wikimedia.org/T104207#1409982 (10hashar) Is that essentially the same as T136662 ? [12:25:45] 10Continuous-Integration-Infrastructure, 10Zuul, 07Upstream: Circular dependencies break Zuul - https://phabricator.wikimedia.org/T129938#2495362 (10Reedy) It was https://gerrit.wikimedia.org/r/#/c/300804 >[23:43:56] yes that was the problem >[23:44:03] the vendor repo is not controlled... [12:26:21] hashar: ^ It's a different bug AFAIK [12:39:20] 10Continuous-Integration-Infrastructure, 10Zuul, 07Upstream: Circular dependencies break Zuul - https://phabricator.wikimedia.org/T129938#2495367 (10Paladox) Maybe that needs to be filed upstream so that zuul can pull it in even if it is not controlled by zuul. [12:46:09] 10Continuous-Integration-Infrastructure, 10Zuul, 07Upstream: Circular dependencies break Zuul - https://phabricator.wikimedia.org/T129938#2495377 (10hashar) Zuul deadlocking due to unknown project was T128569. It is supposedly fixed by rCIZU0deaaadac7143692961b9d28abee8cea570ff3ce which I have deployed end o...
[12:48:10] 10Continuous-Integration-Infrastructure, 10Zuul, 07Upstream: Circular dependencies break Zuul - https://phabricator.wikimedia.org/T129938#2495394 (10hashar) 05Open>03Resolved /var/log/zuul/error.log has a spam of: ``` 2016-07-25 22:19:30,252 ERROR zuul.Scheduler: Exception in run handler: Traceback (most... [12:49:23] !log cherry-pick https://gerrit.wikimedia.org/r/#/c/300827/ on deployment-puppetmaster [12:49:27] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [12:49:48] how often do cherry-picks get wiped out from deployment-puppetmaster? [12:50:05] 10Continuous-Integration-Infrastructure, 10Zuul, 07Upstream, 07WorkType-Maintenance: Zuul deadlocks if unknown repo has activity in Gerrit - https://phabricator.wikimedia.org/T128569#2495396 (10hashar) 05Resolved>03Open Happened again with Zuul version: 2.1.0-151-g30a433b-wmf4precise1 >>! In T129938#2... [12:56:50] godog: they are not wiped out [12:56:57] godog: at least not intentionally [12:57:13] there is a cronjob that attempts to git remote update && git rebase origin/production every so often [12:57:18] which sometimes bails out on rebase conflict [12:57:30] some of us manually handle the rebase conflict [12:57:36] so it should stick :] [12:57:39] I am off [12:57:45] be back this evening [13:00:50] RECOVERY - Puppet run on deployment-imagescaler01 is OK: OK: Less than 1.00% above the threshold [0.0] [13:25:57] faster than expected [13:31:02] hashar: I see, thanks, I asked because I had to apply 300827 again [13:45:05] (03CR) 10Hashar: "recheck" [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) (owner: 10Hashar) [13:45:11] (03PS5) 10Hashar: Overhaul / document debian-glue jobs [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) [13:53:34] (03Abandoned) 10Hashar: Revert "Pin Zuul version used for testing" [integration/config] -
10https://gerrit.wikimedia.org/r/292346 (https://phabricator.wikimedia.org/T136610) (owner: 10Hashar) [14:06:41] (03PS1) 10Hashar: Fix up zuul layoutdiff [integration/config] - 10https://gerrit.wikimedia.org/r/301120 [14:06:52] (03CR) 10Hashar: [C: 032] Fix up zuul layoutdiff [integration/config] - 10https://gerrit.wikimedia.org/r/301120 (owner: 10Hashar) [14:07:07] (03PS6) 10Hashar: Overhaul / document debian-glue jobs [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) [14:07:19] (03PS3) 10Hashar: Add debian-glue job to all operations/debs repos [integration/config] - 10https://gerrit.wikimedia.org/r/301093 [14:07:58] (03Merged) 10jenkins-bot: Fix up zuul layoutdiff [integration/config] - 10https://gerrit.wikimedia.org/r/301120 (owner: 10Hashar) [14:09:12] (03CR) 10Hashar: [C: 032] Overhaul / document debian-glue jobs [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) (owner: 10Hashar) [14:10:12] (03Merged) 10jenkins-bot: Overhaul / document debian-glue jobs [integration/config] - 10https://gerrit.wikimedia.org/r/301084 (https://phabricator.wikimedia.org/T117869) (owner: 10Hashar) [14:12:33] 10Continuous-Integration-Config: Make debian-glue to use Gerrit as the upstream repository - https://phabricator.wikimedia.org/T117869#2495655 (10hashar) 05Open>03Resolved Repositories now solely use the `debian-glue` job which first clones from Gerrit and then fetches/checks out the proposed patch from Zuul.
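The "clone from Gerrit, then fetch/checkout the proposed patch from Zuul" flow mentioned in T117869 can be mimicked with throwaway local repos standing in for Gerrit and the Zuul merger; the `refs/zuul/master/Z1` ref name is made up for the sketch:

```shell
# Origin repo holds a merged state, plus a proposed patch published
# under a Zuul-style ref (ref name here is hypothetical).
origin=$(mktemp -d)
git -C "$origin" init -q
git -C "$origin" -c user.name=ci -c user.email=ci@example.org \
    commit -q --allow-empty -m 'merged state'
git -C "$origin" -c user.name=ci -c user.email=ci@example.org \
    commit -q --allow-empty -m 'proposed patch'
git -C "$origin" update-ref refs/zuul/master/Z1 HEAD
git -C "$origin" reset -q --hard HEAD~1   # branch tip back to merged state

work=$(mktemp -d)
git clone -q "$origin" "$work/src"                        # 1: clone canonical repo
git -C "$work/src" fetch -q origin refs/zuul/master/Z1    # 2: fetch the patch ref
git -C "$work/src" checkout -q FETCH_HEAD                 # 3: build against the patch
git -C "$work/src" log -1 --format=%s
# prints: proposed patch
```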
[14:12:36] 10Continuous-Integration-Config, 10MediaWiki-Debian: Set up CI auto-building for mediawiki/debian repository - https://phabricator.wikimedia.org/T122978#2495657 (10hashar) [14:14:05] (03PS1) 10Hashar: Make debian-glue concurrent [integration/config] - 10https://gerrit.wikimedia.org/r/301121 [14:22:01] 06Release-Engineering-Team (Deployment-Blockers), 05Release: MW-1.28.0-wmf.12 deployment blockers - https://phabricator.wikimedia.org/T139214#2495691 (10Jdforrester-WMF) [14:24:01] (03CR) 10Hashar: [C: 032] Make debian-glue concurrent [integration/config] - 10https://gerrit.wikimedia.org/r/301121 (owner: 10Hashar) [14:26:04] (03Merged) 10jenkins-bot: Make debian-glue concurrent [integration/config] - 10https://gerrit.wikimedia.org/r/301121 (owner: 10Hashar) [14:27:46] 10Continuous-Integration-Config, 10MediaWiki-Debian: Set up CI auto-building for mediawiki/debian repository - https://phabricator.wikimedia.org/T122978#2495702 (10hashar) 05Open>03Resolved a:03hashar CI is configured. The repository triggers the job `debian-glue` which looks up the distribution in `deb...
[14:47:50] PROBLEM - Puppet run on deployment-zookeeper01 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [14:48:08] PROBLEM - Puppet run on deployment-urldownloader is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [14:48:18] PROBLEM - Puppet run on deployment-salt02 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [14:50:46] PROBLEM - Puppet run on deployment-sca03 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [14:50:48] PROBLEM - Puppet run on deployment-changeprop is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [14:52:10] PROBLEM - Puppet run on deployment-elastic06 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [14:52:21] PROBLEM - Puppet run on deployment-eventlogging04 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [14:52:29] PROBLEM - Puppet run on mira is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [14:52:30] 06Release-Engineering-Team, 10Gerrit, 10Wikimedia-Logstash, 07Technical-Debt: Look into shoving gerrit logs into logstash - https://phabricator.wikimedia.org/T141324#2495773 (10demon) >>! In T141324#2494439, @bd808 wrote: > See also {T64505}. Note I'm not sure if that is really useful or not. I think it wa... 
[14:52:35] PROBLEM - Puppet run on deployment-mediawiki02 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [14:53:15] PROBLEM - Puppet run on deployment-ores-redis is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [14:53:21] PROBLEM - Puppet run on deployment-mediawiki03 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [14:53:21] PROBLEM - Puppet run on deployment-parsoid05 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [14:53:25] PROBLEM - Puppet run on deployment-parsoid07 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [14:55:21] PROBLEM - Puppet run on deployment-stream is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [14:55:45] PROBLEM - Puppet run on deployment-mx is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [14:55:51] PROBLEM - Puppet run on deployment-parsoid08 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [14:55:59] PROBLEM - Puppet run on deployment-eventlogging03 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [14:57:24] PROBLEM - Puppet run on deployment-conftool is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [14:58:09] PROBLEM - Puppet run on deployment-db2 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [14:58:19] PROBLEM - Puppet run on deployment-restbase02 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [14:58:31] PROBLEM - Puppet run on deployment-sca02 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [14:59:07] PROBLEM - Puppet run on deployment-tin is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [14:59:13] PROBLEM - Puppet run on deployment-ircd is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [14:59:17] PROBLEM - Puppet run on deployment-sca01 is CRITICAL: CRITICAL: 44.44% of 
data above the critical threshold [0.0] [15:00:11] PROBLEM - Puppet run on deployment-pdf02 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [15:00:53] PROBLEM - Puppet run on deployment-fluorine is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [15:00:55] PROBLEM - Puppet run on deployment-pdf01 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [15:01:49] PROBLEM - Puppet run on deployment-imagescaler01 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [15:02:01] PROBLEM - Puppet run on deployment-kafka01 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [15:02:03] PROBLEM - Puppet run on deployment-ms-be01 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [15:02:09] <|---|> o.O [15:02:25] PROBLEM - Puppet run on deployment-kafka03 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [15:03:18] PROBLEM - Puppet run on deployment-memc04 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [15:03:32] PROBLEM - Puppet run on deployment-tmh01 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [15:04:02] PROBLEM - Puppet run on deployment-sentry01 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [15:05:42] PROBLEM - Puppet run on deployment-cache-upload04 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [15:05:48] PROBLEM - Puppet run on deployment-poolcounter01 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [15:05:54] PROBLEM - Puppet run on deployment-redis01 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [15:07:02] PROBLEM - Puppet run on deployment-elastic07 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [15:07:20] PROBLEM - Puppet run on deployment-ms-be02 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [15:07:25] 
hasharAway it seems there are a lot of failures ^^ [15:07:52] PROBLEM - Puppet run on deployment-elastic05 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [15:08:16] PROBLEM - Puppet run on deployment-sentry2 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [15:08:43] PROBLEM - Puppet run on deployment-ms-fe01 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [15:08:55] PROBLEM - Puppet run on deployment-memc05 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [15:10:27] PROBLEM - Puppet run on deployment-jobrunner01 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [15:10:31] PROBLEM - Puppet run on deployment-restbase01 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [15:10:57] PROBLEM - Puppet run on deployment-parsoid06 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [15:14:05] PROBLEM - Puppet run on deployment-db1 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [15:15:08] PROBLEM - Puppet run on deployment-mediawiki01 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [15:15:10] ugh [15:15:16] g'morning to you, too, shinken-wm [15:15:35] PROBLEM - Puppet run on deployment-redis02 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [15:15:36] PROBLEM - Puppet run on deployment-elastic08 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [15:15:54] PROBLEM - Puppet run on deployment-logstash2 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [15:16:50] PROBLEM - Puppet run on deployment-kafka04 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [15:17:30] PROBLEM - Puppet run on deployment-cache-text04 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [15:17:44] PROBLEM - Puppet run on deployment-mathoid is CRITICAL: CRITICAL: 66.67% of data
above the critical threshold [0.0] [15:18:06] PROBLEM - Puppet run on deployment-conf03 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [15:19:21] huh, seems to be the same error everywhere (from a quick spot check) Could not evaluate: No such file or directory - /etc/default/puppetmaster [15:19:48] thcipriani, issue on the server side? [15:19:57] which seems to be false. deployment-puppetmaster:/etc/default/puppetmaster is a file. [15:20:06] seems like it. Might just need a kick. [15:24:58] hm, nope, issue on the clients. tried restarting puppetmaster [15:25:29] hmm [15:33:32] looks to be line 109 in modules/puppet/manifests/self/config.pp [15:33:58] cherry picked somewhere on the puppetmaster... [15:34:35] Oh could it be godog> !log cherry-pick https://gerrit.wikimedia.org/r/#/c/300827/ on deployment-puppetmaster [15:34:58] paladox: no, that's thumbor [15:35:02] Oh [15:35:05] sorry [15:35:27] np, anything I can do to help? [15:35:39] oh, nvmd, not cherry picked. Looks like 2742291fd54bd05a6597607ae96c7736a71b665a [15:36:13] godog: making our puppet in beta freak out a bit https://gerrit.wikimedia.org/r/#/c/301071/ :) [15:37:12] File:sad_trombone.wav [15:39:50] yeah probably needs a guard for $is_puppetmaster [15:40:05] _joe_: ^ what do you think? [15:43:59] ostriches do you mind having a look at https://gerrit.wikimedia.org/r/301001 please? Since i think the classes are correct but mutante needs some other people from releng to have a look at it please. [15:44:11] I will later. [15:44:14] ok [15:44:16] thanks [15:52:34] 06Release-Engineering-Team, 03releng-201617-q1, 15User-greg, 15User-zeljkofilipin: Perform a technical debt analysis of software and services maintained by WMF Release Engineering - https://phabricator.wikimedia.org/T138225#2495964 (10greg) Quick status update: Upon @dduvall's return and digging into this...
[15:58:06] <_joe_> godog: uhm, yes [15:58:08] <_joe_> I'll fix it [15:58:44] _joe_: made a patch https://gerrit.wikimedia.org/r/#/c/301144/ seems to have fixed the issue, FWIW [15:58:45] nice, thanks _joe_ [15:58:56] and thcipriani ! [15:59:20] <_joe_> thcipriani: yeah that should work [15:59:37] <_joe_> merging [16:00:11] RECOVERY - Puppet run on deployment-mediawiki01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:00:43] RECOVERY - Puppet run on deployment-mx is OK: OK: Less than 1.00% above the threshold [0.0] [16:00:57] RECOVERY - Puppet run on deployment-eventlogging03 is OK: OK: Less than 1.00% above the threshold [0.0] [16:02:33] RECOVERY - Puppet run on deployment-mediawiki02 is OK: OK: Less than 1.00% above the threshold [0.0] [16:03:07] RECOVERY - Puppet run on deployment-db2 is OK: OK: Less than 1.00% above the threshold [0.0] [16:03:15] RECOVERY - Puppet run on deployment-ores-redis is OK: OK: Less than 1.00% above the threshold [0.0] [16:05:08] thanks all ( godog _joe_ thcipriani ) [16:05:09] RECOVERY - Puppet run on deployment-pdf02 is OK: OK: Less than 1.00% above the threshold [0.0] [16:05:22] RECOVERY - Puppet run on deployment-stream is OK: OK: Less than 1.00% above the threshold [0.0] [16:05:52] RECOVERY - Puppet run on deployment-parsoid08 is OK: OK: Less than 1.00% above the threshold [0.0] [16:05:58] RECOVERY - Puppet run on deployment-pdf01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:06:50] RECOVERY - Puppet run on deployment-imagescaler01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:07:04] RECOVERY - Puppet run on deployment-ms-be01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:07:24] RECOVERY - Puppet run on deployment-conftool is OK: OK: Less than 1.00% above the threshold [0.0] [16:07:24] RECOVERY - Puppet run on deployment-kafka03 is OK: OK: Less than 1.00% above the threshold [0.0] [16:07:54] 06Release-Engineering-Team (Deployment-Blockers), 05Release: MW-1.28.0-wmf.12 deployment blockers - 
https://phabricator.wikimedia.org/T139214#2495997 (10Jdforrester-WMF) [16:08:18] RECOVERY - Puppet run on deployment-restbase02 is OK: OK: Less than 1.00% above the threshold [0.0] [16:08:20] RECOVERY - Puppet run on deployment-memc04 is OK: OK: Less than 1.00% above the threshold [0.0] [16:09:02] RECOVERY - Puppet run on deployment-sentry01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:09:08] RECOVERY - Puppet run on deployment-tin is OK: OK: Less than 1.00% above the threshold [0.0] [16:09:12] RECOVERY - Puppet run on deployment-ircd is OK: OK: Less than 1.00% above the threshold [0.0] [16:09:48] 06Release-Engineering-Team, 15User-greg: Identify "first responders" for "all" "components" deployed on Wikimedia servers - https://phabricator.wikimedia.org/T141066#2496002 (10GWicke) [16:10:54] RECOVERY - Puppet run on deployment-fluorine is OK: OK: Less than 1.00% above the threshold [0.0] [16:12:01] RECOVERY - Puppet run on deployment-kafka01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:12:01] RECOVERY - Puppet run on deployment-elastic07 is OK: OK: Less than 1.00% above the threshold [0.0] [16:12:21] RECOVERY - Puppet run on deployment-eventlogging04 is OK: OK: Less than 1.00% above the threshold [0.0] [16:13:19] RECOVERY - Puppet run on deployment-sentry2 is OK: OK: Less than 1.00% above the threshold [0.0] [16:13:33] RECOVERY - Puppet run on deployment-tmh01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:13:45] RECOVERY - Puppet run on deployment-ms-fe01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:13:55] RECOVERY - Puppet run on deployment-memc05 is OK: OK: Less than 1.00% above the threshold [0.0] [16:15:27] RECOVERY - Puppet run on deployment-jobrunner01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:15:43] RECOVERY - Puppet run on deployment-cache-upload04 is OK: OK: Less than 1.00% above the threshold [0.0] [16:15:49] RECOVERY - Puppet run on deployment-poolcounter01 is OK: OK: Less than 1.00% above the 
threshold [0.0] [16:15:53] RECOVERY - Puppet run on deployment-redis01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:17:22] RECOVERY - Puppet run on deployment-ms-be02 is OK: OK: Less than 1.00% above the threshold [0.0] [16:17:54] RECOVERY - Puppet run on deployment-elastic05 is OK: OK: Less than 1.00% above the threshold [0.0] [16:20:32] RECOVERY - Puppet run on deployment-restbase01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:20:38] RECOVERY - Puppet run on deployment-elastic08 is OK: OK: Less than 1.00% above the threshold [0.0] [16:20:56] RECOVERY - Puppet run on deployment-parsoid06 is OK: OK: Less than 1.00% above the threshold [0.0] [16:21:48] RECOVERY - Puppet run on deployment-kafka04 is OK: OK: Less than 1.00% above the threshold [0.0] [16:22:04] RECOVERY - Puppet run on integration-slave-jessie-1001 is OK: OK: Less than 1.00% above the threshold [0.0] [16:22:30] RECOVERY - Puppet run on deployment-cache-text04 is OK: OK: Less than 1.00% above the threshold [0.0] [16:22:34] RECOVERY - Puppet run on integration-raita is OK: OK: Less than 1.00% above the threshold [0.0] [16:22:44] RECOVERY - Puppet run on deployment-mathoid is OK: OK: Less than 1.00% above the threshold [0.0] [16:23:07] RECOVERY - Puppet run on deployment-conf03 is OK: OK: Less than 1.00% above the threshold [0.0] [16:23:28] anyone here who would want to create a new wiki? 
[16:23:38] as in the actual create_wiki script [16:24:07] RECOVERY - Puppet run on deployment-db1 is OK: OK: Less than 1.00% above the threshold [0.0] [16:25:09] RECOVERY - Puppet run on integration-aptly01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:25:35] RECOVERY - Puppet run on deployment-redis02 is OK: OK: Less than 1.00% above the threshold [0.0] [16:25:35] RECOVERY - Puppet run on zuul-dev-jessie is OK: OK: Less than 1.00% above the threshold [0.0] [16:25:47] RECOVERY - Puppet run on deployment-sca03 is OK: OK: Less than 1.00% above the threshold [0.0] [16:25:53] RECOVERY - Puppet run on deployment-logstash2 is OK: OK: Less than 1.00% above the threshold [0.0] [16:25:53] RECOVERY - Puppet run on integration-slave-jessie-1002 is OK: OK: Less than 1.00% above the threshold [0.0] [16:27:13] RECOVERY - Puppet run on deployment-elastic06 is OK: OK: Less than 1.00% above the threshold [0.0] [16:27:29] RECOVERY - Puppet run on mira is OK: OK: Less than 1.00% above the threshold [0.0] [16:27:31] RECOVERY - Puppet run on integration-slave-trusty-1001 is OK: OK: Less than 1.00% above the threshold [0.0] [16:27:42] mutante, what create_wiki script? [16:27:49] RECOVERY - Puppet run on deployment-zookeeper01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:27:55] you mean addWiki.php? 
[16:28:12] RECOVERY - Puppet run on deployment-urldownloader is OK: OK: Less than 1.00% above the threshold [0.0] [16:28:13] RECOVERY - Puppet run on integration-slave-trusty-1016 is OK: OK: Less than 1.00% above the threshold [0.0] [16:28:17] RECOVERY - Puppet run on deployment-salt02 is OK: OK: Less than 1.00% above the threshold [0.0] [16:28:19] RECOVERY - Puppet run on deployment-parsoid05 is OK: OK: Less than 1.00% above the threshold [0.0] [16:28:20] RECOVERY - Puppet run on deployment-mediawiki03 is OK: OK: Less than 1.00% above the threshold [0.0] [16:28:25] RECOVERY - Puppet run on deployment-parsoid07 is OK: OK: Less than 1.00% above the threshold [0.0] [16:29:09] Krenair: yes, sorry, i mean addWiki [16:29:18] what was the order of things again [16:29:27] we need to get the mw-config changed landed first? [16:29:30] wikitech:Add_a_wiki [16:30:05] No, addWiki.php gets run before the deployment [16:30:12] RECOVERY - Puppet run on integration-slave-trusty-1012 is OK: OK: Less than 1.00% above the threshold [0.0] [16:30:22] RECOVERY - Puppet run on castor is OK: OK: Less than 1.00% above the threshold [0.0] [16:30:48] RECOVERY - Puppet run on deployment-changeprop is OK: OK: Less than 1.00% above the threshold [0.0] [16:31:16] Krenair: ok, thanks [16:31:33] Krenair: just one more question, you think that kind of mw-config change is swat-able? 
[16:32:26] Arguably maybe, I wouldn't personally [16:32:35] I'd add a separate deployment window [16:32:44] fair enough, ok yep [16:32:50] RECOVERY - Puppet run on integration-slave-trusty-1004 is OK: OK: Less than 1.00% above the threshold [0.0] [16:35:46] RECOVERY - Puppet run on integration-slave-jessie-android is OK: OK: Less than 1.00% above the threshold [0.0] [16:38:10] RECOVERY - Puppet run on integration-slave-trusty-1023 is OK: OK: Less than 1.00% above the threshold [0.0] [16:40:11] RECOVERY - Puppet run on integration-jessie-lego-test01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:40:21] RECOVERY - Puppet run on integration-slave-trusty-1006 is OK: OK: Less than 1.00% above the threshold [0.0] [16:40:33] RECOVERY - Puppet run on integration-publisher is OK: OK: Less than 1.00% above the threshold [0.0] [16:41:49] RECOVERY - Puppet run on integration-slave-trusty-1014 is OK: OK: Less than 1.00% above the threshold [0.0] [16:44:11] RECOVERY - Puppet run on integration-slave-trusty-1018 is OK: OK: Less than 1.00% above the threshold [0.0] [16:45:25] RECOVERY - Puppet run on integration-saltmaster is OK: OK: Less than 1.00% above the threshold [0.0] [16:49:45] is this channel trying to rival -operations for puppet spam? ;) [16:53:32] RECOVERY - Puppet run on integration-slave-trusty-1003 is OK: OK: Less than 1.00% above the threshold [0.0] [16:59:31] bd808: we're trying to step up our game [17:03:05] nothing better than puppet fail showers [17:04:34] RECOVERY - Puppet run on integration-* is OK: OK: Less than 1.00% of instances failed [0.0] YAY! 
[17:06:13] hehe I'm also turning bot notifications into NOTICEs or I'll go crazy trying to tell humans and bots apart [17:23:20] (03PS3) 10Awight: Use composer in DonationInterface hhvm tests [integration/config] - 10https://gerrit.wikimedia.org/r/301025 (https://phabricator.wikimedia.org/T141309) [17:36:33] 06Release-Engineering-Team, 15User-greg: Identify "first responders" for "all" "components" deployed on Wikimedia servers - https://phabricator.wikimedia.org/T141066#2496346 (10faidon) I'm still taking this proposal in, but a few preliminary notes: "First responders" or "fixers" confuses me a little bit. A fe... [17:50:01] Hmm, could https://integration.wikimedia.org/ci/job/mwext-VisualEditor-sync-gerrit/3690/console be an indication that we no longer need this job? Did Gerrit fix their bug? [17:50:23] RoanKattouw: I'm seeing the old entries in the DB. [17:50:32] I'm wondering if we swap them all, it'll work and not revert itself. [17:50:43] Old entries? I don't follow [17:50:52] One moment [17:51:59] Yeah, it looks right for master. [17:52:02] And the new wmf branch [17:52:05] But old branches no [17:52:06] pasting [17:52:32] https://phabricator.wikimedia.org/P3576 [17:52:39] I think we can fix those too and move forward :) [17:52:47] :) [17:53:57] ostriches, hi would you be able to review another css change https://gerrit.wikimedia.org/r/#/c/301172/ please ? [17:54:07] It makes commit-msg easy to read [17:54:18] without needing a horizontal scroll [17:54:20] RoanKattouw: You want I can gerrit> update submodule_subscriptions set submodule_project_name = 'mediawiki/extensions/VisualEditor' where submodule_project_name = 'VisualEditor'; [17:55:21] ostriches: OK that seems like it should work, and then we can remove the sync-gerrit job, right? [17:55:26] s/we/somebody who knows how/ [17:55:28] :) [17:55:29] Think so! 
[17:55:43] UPDATE 13; 4 ms [17:55:46] and finally finally wmf branch cherry-picks for VE will just work [17:55:50] (hopefully) [17:57:49] RoanKattouw I can submit a patch that removes that job? [17:57:56] I'm wondering, should I? [17:58:32] Doing it [17:58:45] Oh :) [17:59:01] (03PS1) 10Chad: zuul/jjb: Remove mwext-VisualEditor-sync-gerrit job [integration/config] - 10https://gerrit.wikimedia.org/r/301179 [17:59:25] ostriches you need to remove the job from postmerge [17:59:36] Where is that? [17:59:37] eg in - name: mediawiki/extensions/VisualEditor [17:59:48] postmerge: [17:59:48] - mwext-VisualEditor-publish [17:59:48] - mwext-VisualEditor-sync-gerrit [18:00:00] you need to remove - mwext-VisualEditor-sync-gerrit from there [18:00:00] (03CR) 10jenkins-bot: [V: 04-1] zuul/jjb: Remove mwext-VisualEditor-sync-gerrit job [integration/config] - 10https://gerrit.wikimedia.org/r/301179 (owner: 10Chad) [18:00:44] that is in layout.yaml [18:00:47] ostriches ^^ [18:00:51] Ah I did miss that [18:00:53] Amending [18:00:56] thanks [18:01:16] 06Release-Engineering-Team, 15User-greg: Identify "first responders" for "all" "components" deployed on Wikimedia servers - https://phabricator.wikimedia.org/T141066#2496456 (10RobLa-WMF) [18:01:53] (03PS2) 10Chad: zuul/jjb: Remove mwext-VisualEditor-sync-gerrit job [integration/config] - 10https://gerrit.wikimedia.org/r/301179 [18:02:49] (03CR) 10Paladox: [C: 031] "Thanks :)" [integration/config] - 10https://gerrit.wikimedia.org/r/301179 (owner: 10Chad) [18:04:29] omg, new feature of gerrit I hadn't noticed. 
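The one-line DB fix quoted above can be sanity-checked offline. This is a toy sqlite3 sketch; Gerrit's real submodule_subscriptions table has more columns and lives in Gerrit's own database, so the schema and rows here are only illustrative:

```python
import sqlite3

# Toy stand-in for Gerrit's submodule_subscriptions table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE submodule_subscriptions (submodule_project_name TEXT)")
db.executemany(
    "INSERT INTO submodule_subscriptions VALUES (?)",
    [("VisualEditor",), ("mediawiki/extensions/Cite",)],
)

# The fix quoted in the log: rewrite stale short names to the full path.
cur = db.execute(
    "UPDATE submodule_subscriptions "
    "SET submodule_project_name = 'mediawiki/extensions/VisualEditor' "
    "WHERE submodule_project_name = 'VisualEditor'"
)
# cur.rowcount is the number of rows rewritten (the real run reported "UPDATE 13").
print(cur.rowcount)
```

Rows that already use the full path are left untouched by the WHERE clause, which is why the query was safe to run in place.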
[18:04:38] "Update from Chad, Paladox, jenkins-bot [Show] [Ignore]" [18:04:47] So I can notice without having to refresh :D [18:04:47] Yay [18:04:51] Yep [18:04:56] I noticed that [18:04:59] That's neat [18:05:08] :) [18:07:22] (03CR) 10Chad: [C: 032] zuul/jjb: Remove mwext-VisualEditor-sync-gerrit job [integration/config] - 10https://gerrit.wikimedia.org/r/301179 (owner: 10Chad) [18:08:16] ostriches I am wondering whether we can remove MaxClients 50 from the gerrit apache file [18:08:24] (03Merged) 10jenkins-bot: zuul/jjb: Remove mwext-VisualEditor-sync-gerrit job [integration/config] - 10https://gerrit.wikimedia.org/r/301179 (owner: 10Chad) [18:08:32] If anything I'll raise it, but I'm not removing it :) [18:08:36] Ok [18:08:37] It has Reasons :) [18:08:42] What should we raise it to [18:08:43] ? [18:09:05] I'm wondering what it does, and whether it really only allows 50 people to load the website at the same time [18:09:06] ? [18:09:32] ostriches ^^ [18:10:18] !log zuul: reloading to pick up config change [18:10:21] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [18:11:05] RoanKattouw: Things look fine, lemme know if it breaks (again) and we can bring the job back. [18:11:10] I'll close the bug out in a min with a similar comment [18:11:13] OK [18:11:16] James_F: FYI ---^^ [18:11:47] ostriches: Gosh. [18:11:49] paladox: Yes, it does limit to 50 concurrent users. The point is that it needs to be lower than Jetty's max connection pool because jetty tends to blow up when its queue gets full. [18:12:00] Oh [18:12:02] There's an old bug from 2013 about it [18:12:04] it shows here https://gerrit.googlesource.com/homepage/+/md-pages/docs/Scaling.md#Jetty [18:12:05] One sec. [18:12:08] a different workaround [18:12:29] I don't want to set the max queue to unlimited either ;-) [18:12:30] well raising it would definitely make things really fast lol [18:12:33] We'd still need a limit. 
[18:12:36] Oh [18:12:39] lol why would it make things fast? [18:12:52] Well the more people who can load it at the same time [18:12:57] the faster it will be [18:13:03] That makes absolutely no sense. [18:13:21] Oh [18:14:16] We can probably raise those limits, sure, but I'm not removing them :) [18:14:56] Ok [18:15:00] What do we raise it to [18:15:04] ostriches ^^ [18:15:14] and they should really merge this https://git.eclipse.org/r/#/c/24295/ [18:15:33] if it is true that it will give it a significant performance boost for large refs [18:16:09] Well not raising it to anything right now. Is there any evidence we're having problems with the existing setting? [18:16:40] No [18:17:09] Then it's fine :) [18:17:22] Ok [18:17:23] :) [18:17:31] ostriches I found this https://git.eclipse.org/r/#/c/24295/ lol [18:19:50] ostriches, would tomcat be better than jetty [18:19:51] ? [18:20:06] Tomcat is a pain. [18:20:18] oh [18:20:26] What's wrong with Jetty? We're doing a sane workaround for the known scaling issue :) [18:20:33] Oh, nothing [18:20:38] just wondering [18:21:01] Yeah I looked into Tomcat about 3 or 4 years ago. [18:21:04] It's super confusing :p [18:21:12] Oh [18:21:20] There's a guide now [18:21:21] https://gerrit-review.googlesource.com/#/c/35010/6/Documentation/install-tomcat.txt [18:21:24] ostriches ^^ [18:21:33] There was a guide years ago ;-) [18:21:35] It's still hard [18:21:55] That was from 2012 ;-) [18:22:02] Oh [18:22:54] What about https://gerrit-review.googlesource.com/Documentation/install-j2ee.html#tomcat [18:23:34] LOL those don't even tell you anything [18:23:58] I don't see why gerrit promotes something it doesn't clearly explain; jetty is clearly explained but tomcat isn't [18:24:29] What version of jetty do we use anyway? 
[18:24:32] ostriches ^^ [18:27:16] Whatever version is bundled into gerrit [18:27:17] I dunno [18:27:24] Oh [18:39:04] (03Restored) 10Paladox: Testing [integration/config] - 10https://gerrit.wikimedia.org/r/300859 (owner: 10Paladox) [18:39:21] (03PS3) 10Paladox: Hi this is a long messssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [integration/config] - 10https://gerrit.wikimedia.org/r/300859 [18:39:32] (03Abandoned) 10Paladox: Hi this is a long messssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [integration/config] - 10https://gerrit.wikimedia.org/r/300859 (owner: 10Paladox) [18:39:41] Just testing ^^ css change [18:40:15] Yay and works now [19:00:52] 18:56:50 [RuntimeException] [19:00:52] 18:56:50 Error Output: PHP Deprecated: Comments starting with '#' are deprecated in /etc/php5/cli/conf.d/20-xhprof.ini on line 2 in Unknown on line 0 [19:02:05] Yeh [19:02:08] Known problem [19:08:17] ostriches im not sure if the magic mw page that adds reviewers to changes automatically is working [19:08:39] I know nothing about that bot. [19:08:43] Talk to its maintainer? [19:08:44] Since your change seemed to not add hashar who is added to all integration/config [19:08:45] changes [19:08:46] oh [19:08:49] what is that bot [19:08:50] ? [19:08:55] probably broken by api changes [19:09:00] Is it not listed on the mw.org page? [19:09:07] Yeh maybe and not sure [19:09:59] Reedy: paladox the reviewer bot ? [19:10:04] Yeh [19:10:12] the one that adds you as a reviewer [19:10:18] I guess it has to be restarted. Ask in #wikimedia-labs I guess [19:10:39] Ok [19:17:56] hashar done [19:18:01] ive asked in -labs [19:22:59] bd808: I want something like mwv's php::ini in production puppet. 
So much nicer :) [19:23:37] the magic wiki page to be added is cool [19:23:54] but I had no idea what the actual bot part is [19:26:36] Oh [19:26:37] found it [19:26:46] https://github.com/valhallasw/gerrit-reviewer-bot/ [19:26:49] mutante ^^ [19:26:57] it may need to re-accept the gerrit host key [19:27:04] since its IP was changed [19:27:10] and I did get that message [19:27:21] ostriches ^^ [19:27:33] gerrit host key did not change. [19:27:37] But the IP did, yes. [19:27:41] * ostriches shrugs [19:27:44] Yeh [19:27:44] Still need bot owner. [19:28:00] but the known_hosts file failed for me until I removed the key and re-accepted it [19:30:22] paladox: ok, but the instructions for that include [19:30:24] using svn :) [19:30:29] svn co http://svn.wikimedia.org/svnroot/pywikipedia/branches/rewrite/pywikibot [19:30:31] Oh lol [19:30:44] Needs updating [19:30:46] for the gerrit bot [19:30:50] Oh [19:30:51] of course you wouldn't use git [19:30:55] :p [19:30:56] LOL [19:31:19] oh, that is just because it uses pywikibot [19:31:26] which was the last project to move away from svn [19:31:45] oh lol [19:31:49] do we know where it runs? [19:31:55] Nope [19:32:40] let's make a ticket for it and add valhalla [19:32:53] Ok, are you creating the ticket? [19:34:08] yes [19:35:04] Thank you [19:35:05] :) [19:35:30] pywikibot is in gerrit :) [19:36:03] :) [19:36:04] from gerrit.rpc import Client; g=Client('https://gerrit.wikimedia.org/r/', 'gerrit_ui/rpc'); [19:36:12] ^^ I'm not sure what that is all about [19:36:55] Heh, the rpc api is dead afaik. [19:36:59] The old crap api [19:37:07] Bot's probably totally fubar'd :) [19:37:16] Oh [19:37:28] Was that removed in the new gerrit releases [19:37:35] and was used in gerrit 2.8 [19:37:38] ostriches ^^ [19:37:39] Likely. 
They were already working towards that in 2.8 [19:37:48] Replaced the unsupported and undocumented rpc api [19:37:52] Oh, well maybe that is the cause [19:37:52] With a proper restful one [19:38:03] I didn't know it'd been totally ripped out, but it didn't surprise me [19:38:09] Oh [19:38:38] what tag is this stuff in phab?:) [19:38:38] https://pypi.python.org/pypi/gerrit/ [19:38:41] ostriches ^^ [19:38:48] mutante not sure [19:38:49] bots-others-misc-labs-infra :) [19:39:09] Lol, maybe, add gerrit too please. [19:39:41] ostriches maybe https://pypi.python.org/pypi/gerrit/ but looks really old [19:39:58] Oh well :p [19:40:07] paladox: https://phabricator.wikimedia.org/T141390 [19:40:17] thanks [19:40:18] :) [19:55:10] 06Release-Engineering-Team, 15User-greg: Identify "first responders" for "all" "components" deployed on Wikimedia servers - https://phabricator.wikimedia.org/T141066#2496794 (10RobLa-WMF) >>! In T141066#2496346, @faidon wrote: > In any case and whichever way we go, I don't believe that this discussion should a... [19:55:40] so I can become gerrit-reviewer-bot on toollabs [19:55:44] but what then ? [19:55:52] is there any documentation? [19:56:36] I think so [19:56:37] https://github.com/valhallasw/gerrit-reviewer-bot [19:56:56] It is an old one [19:57:01] and the last commit was in 2014 [19:58:16] I found an error.log. the timestamp it has is from May [19:58:20] Oh [19:58:27] So not current errors [19:58:28] ? [19:58:33] nope, not there [19:58:36] oh [19:58:42] maybe another file [19:59:13] oh, found something [19:59:20] gerrit_reviewer_bot.err [19:59:26] problem seems SSL related [19:59:39] Oh [19:59:49] InsecurePlatformWarning: A true SSLContext object is not available [19:59:53] Oh [19:59:55] ostriches ^^ [20:00:06] we have ssl problems with the gerrit reviewer bot [20:00:15] Sounds like the bot's problem ;-) [20:00:46] Seems maybe it doesn't work with https://letsencrypt.org/ [20:01:07] mutante what server is the bot on, is it on precise? 
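The InsecurePlatformWarning being chased here comes from urllib3 and indicates an interpreter whose ssl module predates SSLContext and SNI support, rather than anything wrong with the server's certificate. A quick diagnostic sketch for checking a host (this is illustrative, not part of the bot):

```python
import ssl
import sys

# urllib3 emits InsecurePlatformWarning when ssl.SSLContext is missing, and
# SNIMissingWarning when the ssl module lacks SNI support; both point at an
# outdated Python build rather than at the TLS endpoint being contacted.
print(sys.version.split()[0])
print(hasattr(ssl, "SSLContext"))  # False on very old Pythons (pre-2.7.9)
print(ssl.HAS_SNI)                 # False when Python/OpenSSL lack SNI
```

On any current interpreter both checks print True; on the old interpreters discussed in the log they did not, which is consistent with the warnings found in gerrit_reviewer_bot.err.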
[20:01:09] Maybe it's a crappy bot then! :) [20:01:12] LOL [20:01:29] I am making a guess now that upgrading python on the bot host fixes it [20:01:33] still looking [20:01:48] paladox: tool-labs [20:01:50] ostriches https://letsencrypt.org/2016/07/26/full-ipv6-support.html [20:02:01] tools-bastion-03 [20:02:05] mutante ah, is that running precise? maybe precise is broken [20:02:19] no, but trusty [20:02:46] An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. [20:03:10] This may cause the server to present an incorrect TLS certificate, which can cause validation failures. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning. [20:03:14] SNIMissingWarning [20:03:21] ok [20:03:30] Oh [20:03:47] Related [20:03:47] https://github.com/donnemartin/haxor-news/issues/9 [20:03:52] mutante ^^ [20:04:02] the error message is related, not the actual project [20:04:02] A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning. [20:04:14] Old versions of python [20:04:26] oh [20:04:49] paladox: I will paste that stuff on the ticket, let's continue there [20:04:51] (03PS4) 10Hashar: Add debian-glue job to all operations/debs repos [integration/config] - 10https://gerrit.wikimedia.org/r/301093 [20:04:58] let's be mad [20:05:01] Ok [20:05:03] LOL [20:05:13] I will brb [20:05:15] ? [20:05:16] mutante hashar ^^ [20:05:24] that is a lot of repositories being added https://gerrit.wikimedia.org/r/#/c/301093/4/zuul/layout.yaml [20:05:45] I ended up using a lame yaml formatting that looks like json [20:09:33] hashar: Why be mad? 
json is fine :p [20:10:08] (03CR) 10Hashar: [C: 032] Add debian-glue job to all operations/debs repos [integration/config] - 10https://gerrit.wikimedia.org/r/301093 (owner: 10Hashar) [20:10:31] ostriches: it lacks comments :D [20:10:44] I found out a few months ago that YAML is a superset of json [20:10:53] Yep! [20:10:55] so a json doc is parseable by yaml [20:11:17] I should trick Google / Microsoft / Apple into adding support for yaml in their browser [20:11:25] and start the http://nojson.com/ initiative [20:11:34] so [20:11:54] for the audience. The change above adds a job that builds .deb for all operations/debs/ repos [20:11:55] (03Merged) 10jenkins-bot: Add debian-glue job to all operations/debs repos [integration/config] - 10https://gerrit.wikimedia.org/r/301093 (owner: 10Hashar) [20:12:03] and I am wondering on which list I can announce it? [20:12:08] wikitech-l sounds too broad [20:12:43] hashar: just spam people \o/ [20:13:27] !log Zuul deployed https://gerrit.wikimedia.org/r/301093 which adds 'debian-glue' job on all of operations/debs/ repos [20:13:31] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [20:13:53] hashar: wmfall j/k .. eh there is still the engineering list [20:17:45] I am gonna spam wikitech-l [20:18:19] it is trying to build varnish4 ... 
https://integration.wikimedia.org/ci/job/debian-glue/252/console [20:19:11] 10Continuous-Integration-Config, 10Fundraising-Backlog, 10MediaWiki-extensions-DonationInterface, 03Fundraising Sprint Nitpicking, 07Unplanned-Sprint-Work: Continuous integration mw-ext composer behavior is not predictable - https://phabricator.wikimedia.org/T141309#2496849 (10DStrine) a:03awight [20:20:12] paladox: looks like the bot fails to get the list of reviewers from mediawiki.org so it's not related to the gerrit upgrade [20:28:15] * hashar has sent spam [20:30:38] hashar: I could merge the change that uses tmp-reaper [20:30:55] it would delete a bunch on gallium afaict [20:32:31] mutante: the question is do we trust my interpretation of tmpreaper ? :( [20:32:37] I haven't tested it at all :( [20:32:45] or maybe I did with --dry-run [20:33:14] is there something that find . -exec would not do? [20:33:31] I mean: [20:34:11] find . -mtime +7 -exec rm {} \; [20:34:43] I wonder what the tmpreaper stuff does in addition [20:35:59] ok, maybe -1 then [20:43:02] hashar mutante hi, I'm back, sorry it took me so long to get back [20:45:54] 07Browser-Tests, 10MobileFrontend, 06Reading-Web-Backlog, 03Reading-Web-Sprint-77-Segmentation-fault, and 4 others: Spike [2hrs] Wikidata description browser tests do not run anywhere - https://phabricator.wikimedia.org/T137756#2377830 (10MBinder_WMF) @Jhernandez Is this in Needs Analysis because we're wai... 
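The `find . -mtime +7 -exec rm {} \;` one-liner quoted above can be written in Python for comparison. This sketch only checks mtime, whereas tmpreaper also looks at atime and supports protection patterns, so treat it as a rough equivalent rather than a replacement:

```python
import time
from pathlib import Path

def reap(root, max_age_days=7):
    """Delete regular files under root whose mtime is older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in Path(root).rglob("*"):
        # Only regular files, matching what `find ... -exec rm` would delete;
        # directories and symlink targets are left alone.
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return removed
```

A `--dry-run` mode, as mentioned in the log, would simply skip the `unlink()` call and report what it would have removed.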
[20:47:23] hashar this error https://phabricator.wikimedia.org/T141390#2496818 looks similar to the one in https://phabricator.wikimedia.org/T128569#2495396 [20:47:30] it says something to do with name [20:50:37] paladox: that is the key in a mapping [20:50:42] eg { name: someone } [20:51:00] Yep [20:51:16] so that is absolutely unrelated [20:51:19] But it seems openstack are using gerrit 2.11 and we're using gerrit 2.12 [20:51:37] it is like pointing at a bicycle wheel and a rolls-royce wheel [20:51:47] they are similar but totally different :] [20:51:59] Oh [20:52:00] lol [20:53:35] paladox: yeah extended user details [20:53:41] Yep [20:54:08] Maybe something like https://github.com/valhallasw/gerrit-reviewer-bot/pull/4/files [20:54:11] hashar ^^ [20:54:17] for zuul [20:54:42] mutante: https://phabricator.wikimedia.org/T141390#2496985 [20:54:57] hashar could you apply https://github.com/valhallasw/gerrit-reviewer-bot/pull/4/files please [20:56:12] (03PS1) 10Hashar: debian-glue: pass proper distro to piupart_wrappers [integration/config] - 10https://gerrit.wikimedia.org/r/301273 [20:56:40] paladox: https://tools.wmflabs.org/?tool=gerrit-reviewer-bot [20:56:44] there's the ACL [20:56:55] What's ACL [20:56:55] ? [20:57:01] Access Control List [20:57:11] Oh [20:57:12] thanks [20:57:26] I don't think any of those are on IRC? 
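Earlier in the log, the bot's use of the removed `gerrit.rpc` client came up; its modern replacement is Gerrit's REST API. One wrinkle of that API is that every JSON response is prefixed with a `)]}'` guard line that clients must strip before parsing. A minimal sketch (the canned response below is made up; a live query would hit the `/changes/` endpoint on gerrit.wikimedia.org):

```python
import json

def parse_gerrit_json(raw):
    """Strip Gerrit's )]}' XSSI guard and decode the JSON payload."""
    prefix = ")]}'"
    if raw.startswith(prefix):
        raw = raw[len(prefix):]
    return json.loads(raw)

# Canned response standing in for e.g.
# GET https://gerrit.wikimedia.org/r/changes/?q=status:merged&n=1
canned = ")]}'\n[{\"_number\": 301179, \"status\": \"MERGED\"}]"
changes = parse_gerrit_json(canned)
print(changes[0]["_number"])  # → 301179
```

The guard line exists so the response cannot be executed as JavaScript if loaded cross-site; any client built on the REST API, including a rewritten reviewer bot, has to account for it.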
[20:57:41] (03CR) 10Hashar: [C: 032] debian-glue: pass proper distro to piupart_wrappers [integration/config] - 10https://gerrit.wikimedia.org/r/301273 (owner: 10Hashar) [20:58:36] (03PS2) 10Hashar: debian-glue: pass proper distro to piupart_wrappers [integration/config] - 10https://gerrit.wikimedia.org/r/301273 [20:59:13] (03CR) 10Hashar: debian-glue: pass proper distro to piupart_wrappers [integration/config] - 10https://gerrit.wikimedia.org/r/301273 (owner: 10Hashar) [20:59:17] (03CR) 10Hashar: [C: 032] debian-glue: pass proper distro to piupart_wrappers [integration/config] - 10https://gerrit.wikimedia.org/r/301273 (owner: 10Hashar) [21:00:24] :) [21:00:30] (03Merged) 10jenkins-bot: debian-glue: pass proper distro to piupart_wrappers [integration/config] - 10https://gerrit.wikimedia.org/r/301273 (owner: 10Hashar) [21:01:57] (03CR) 10Paladox: "check experimental" [integration/zuul] (debian/jessie-wikimedia) - 10https://gerrit.wikimedia.org/r/299869 (owner: 10Hashar) [21:03:00] (03CR) 10Paladox: "recheck" [integration/zuul] (debian/jessie-wikimedia) - 10https://gerrit.wikimedia.org/r/299869 (owner: 10Hashar) [21:03:50] thcipriani: is it on anyone's roadmap to make it reasonably possible to install scap3 on a Labs host? [21:04:14] ::role::deployment::server is a giant pile of WTF to setup [21:04:44] "let me help by installing a full MW env!" [21:05:09] yeah, it's pretty horrific and it's come up a few times. Talked about it in our tech debt meeting. There's no ETA on it. [21:05:15] "Oh, you forgot to define $::mediawiki::redis_servers::eqiad in heira!" [21:05:41] yeah, that role brings down WAY too much right now. [21:05:45] making prod puppet for new scap3 things without it is a bit painful [21:06:20] I'm trying to avoid shaving this yak ;) [21:06:41] heh, I don't blame you. I have a start on it somewhere. It got out of hand quickly. [21:07:08] what are you trying to do with scap3? [21:07:18] just setup a new host outside of an existing project? 
[21:07:22] deploy Striker into prod [21:07:49] but to do that I need to figure out how the crazy really new python service stuff works [21:08:01] or get in line for weeks/months for ops help [21:08:19] so I need a test project with all the scap3 goodies [21:09:15] I'll figure out how to hack it into a working state. I at least got the puppet manifest to compile so far [21:09:28] we'll see how busted it is after it applies [21:09:53] gotcha. I can lend a hand setting up a deployment host where possible. I think I know most of the dark corners from the deployment-tin setup. [21:10:34] sweet. The next place I'm likely to stumble is keyholder bits I imagine [21:10:57] ah, yeah, I'm pretty familiar with most of the ways in which that can fail :) [21:12:58] Lol http://wsl-forum.qztc.io/viewtopic.php?f=6&t=10 [21:13:22] so much work to setup openssh. [21:14:38] I can now setup a linux openssh on windows [21:14:40] yay [21:15:14] hashar ^^ [21:23:55] paladox: apt-get install ssh-server ? [21:24:04] Oh [21:24:14] I mean is currently [21:24:16] also, doesn't Windows come out of the box with an ssh service? [21:24:21] ssh is broken in linux on windows [21:24:25] hashar I think it does [21:24:26] now [21:24:33] I managed to ssh into my windows device [21:24:44] thinking I would be ssh'ing into ubuntu [21:25:41] http://packages.ubuntu.com/search?suite=xenial&section=all&arch=any&keywords=ssh-server&searchon=names [21:25:49] hashar there is no ssh-server for xenial [21:25:51] LOL [21:26:37] yeah [21:26:39] just 'ssh' [21:26:52] which depends on openssh-client and openssh-server [21:26:57] http://packages.ubuntu.com/en/xenial/ssh [21:27:08] Oh [21:27:43] bd808: extra point if you reply on https://github.com/valhallasw/gerrit-reviewer-bot/pull/4 mimicking a Jenkins SUCCESS [21:27:44] :D [21:27:46] Ah Run "sudo nano /etc/ssh/sshd_config"; edit the "UsePrivilegeSeparation yes" line to read "UsePrivilegeSeparation no". 
(This is necessary because "UsePrivilegeSeparation" uses the "chroot()" syscall, which WSL doesn't currently support.) [21:28:08] Yeah, that's probably why it didn't work; Microsoft needs to support the chroot syscall [21:28:31] hashar but they have awesome stuff as they said planned for the next windows 10 update [21:31:47] PROBLEM - Puppet run on deployment-aqs01 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [21:32:12] (03Draft2) 10Paladox: DO NOT MERGE: Testing gerrit review bot, fixing up something random [integration/config] - 10https://gerrit.wikimedia.org/r/301289 [21:32:32] (03PS3) 10Paladox: DO NOT MERGE: Testing gerrit review bot, fixing up something random [integration/config] - 10https://gerrit.wikimedia.org/r/301289 [21:32:33] PROBLEM - Puppet run on deployment-apertium01 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [21:33:23] PROBLEM - Puppet run on integration-slave-precise-1002 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [21:34:39] PROBLEM - Puppet run on integration-slave-precise-1012 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [21:34:48] PROBLEM - Free space - all mounts on deployment-sentry2 is CRITICAL: CRITICAL: deployment-prep.deployment-sentry2.diskspace._var.byte_percentfree (<100.00%) [21:35:43] PROBLEM - Puppet run on integration-slave-precise-1011 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [21:36:03] PROBLEM - Puppet run on deployment-zotero01 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [21:48:07] (03Abandoned) 10Paladox: DO NOT MERGE: Testing gerrit review bot, fixing up something random [integration/config] - 10https://gerrit.wikimedia.org/r/301289 (owner: 10Paladox) [21:48:57] thcipriani: doh!
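The WSL workaround spelled out above amounts to one line in the OpenSSH server config; privilege separation relies on the `chroot()` syscall, which WSL didn't support at the time:

```
# /etc/ssh/sshd_config (inside WSL)
# Privilege separation needs chroot(), which WSL doesn't support,
# so it has to be turned off for sshd to start.
UsePrivilegeSeparation no
```

After editing, restart the service (e.g. `sudo service ssh restart`) for the change to take effect.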
"Error: /Stage[main]/Keyholder/Service[keyholder-agent]: Provider upstart is not functional on this host" -- keyholder doesn't work on jessie [21:49:30] * bd808 will nuke the instance and try again on trusty [21:49:41] bd808: blerg. yeah, there's an upstart script that runs keyholder-agent/-proxy [21:50:15] yeah. I knew that too [22:11:04] (03CR) 10Paladox: "recheck" [integration/zuul] (debian/jessie-wikimedia) - 10https://gerrit.wikimedia.org/r/299869 (owner: 10Hashar) [22:11:49] ostriches: the bug that most puzzled me in Gerrit was the broken email threading https://phabricator.wikimedia.org/T38288 [22:12:11] ostriches: and apparently it is fixed, albeit I have no idea about the exact root cause [22:12:47] ostriches: I have subscribed you again to the task for you to find out whether it is worth more investigation. On my side it is all good [22:13:20] It's not fixed. It can be worked around. [22:13:21] * hashar loves digging in 4+ years old bugs [22:13:30] With a config variable. [22:13:35] hashar it seems https://integration.wikimedia.org/ci/job/debian-glue/256/console is still failing [22:13:41] so you have worked around it in our gerrit config ? [22:13:46] nope. [22:14:09] hashar: Basically, gmail won't thread e-mail when subject lines change, irrespective of various X- headers. [22:14:12] * paladox is going to watch tv yay :) [22:14:22] at least on my side the message-id and in-reply-to are consistent between the initial mail and the replies [22:14:27] bah [22:14:31] screw gmail ... really [22:14:35] lol [22:14:43] I herd yahoo were brought out today [22:14:50] deffintly not being forced into aol [22:14:50] pretty sure Outlook at the same issue [22:15:00] yahoo works [22:15:10] But seems to put changes into other changes [22:15:15] Yahoo! is essentially dead [22:15:19] which are not the same change [22:15:26] hashar: Yeah, basically it ignores the message-id or whatever. [22:15:35] Yeh, im not moving from yahoo mail to aol, i really hate aol. 
[22:15:41] So you can trick it by using $originalMessage in the velocity template or something [22:15:45] Which...works.... [22:15:50] BT also has a contract with them, [22:15:56] so if they close it like [22:15:57] that [22:16:03] But it makes me feel icky. If the subject changes I think the subject line in email should change :) [22:16:06] they will have a lot of customers on their hands [22:16:20] ostriches: then since you have changed the Velocity template, the Subjects are consistent [22:16:26] so gmail should thread properly [22:16:28] luckily bt are migrating to their own email server but 3 years on, they still haven't migrated me. [22:16:31] lol [22:16:34] hashar ^^ [22:17:03] paladox: stop pinging me constantly please. That is annoying [22:17:13] Sorry [22:17:14] especially when I am right in the channel and actively chatting ! :D [22:17:21] sorry [22:17:29] 06Release-Engineering-Team, 10Parsoid: debian signing keyid E84AFDD2 has expired - https://phabricator.wikimedia.org/T141400#2497119 (10cscott) [22:17:34] hashar: So, I think *some* clients will get it right [22:17:39] But I think GMail is a notable exception [22:17:40] ostriches: so even with your template patch, someone changing the subject would cause the thread to break in gmail? [22:17:50] honestly [22:17:54] Oh yeah, my template patch was completely unrelated. [22:18:00] I would wontfix that with reason: use a better client [22:18:12] :D [22:18:18] I don't really like gmail; outlook and yahoo are much better alternatives. [22:18:26] I also use icloud email [22:18:34] but not for anything gerrit related. [22:19:15] I've been using gmail since it came out in '04. I like it :p [22:19:25] Oh [22:19:37] I have a google account but not gmail [22:19:46] I use bt yahoo email, lol [22:19:59] I want them to switch me already before they force me onto aol.
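The threading behavior being debated above comes down to raw headers: standards-respecting clients thread a reply when its In-Reply-To/References headers point at the parent's Message-ID, while Gmail additionally wants the Subject to stay recognizably the same. A small illustration of those headers (generic example addresses, not tied to Gerrit's actual templates):

```python
from email.message import EmailMessage

# Original notification, as a mail client would first see it.
parent = EmailMessage()
parent["Message-ID"] = "<change-301273.1@gerrit.example.org>"
parent["Subject"] = "[PS1] debian-glue: pass proper distro"
parent.set_content("New patch set uploaded.")

# A reply that standards-following clients will thread under the parent,
# because In-Reply-To/References point at the parent's Message-ID.
reply = EmailMessage()
reply["Message-ID"] = "<change-301273.2@gerrit.example.org>"
reply["In-Reply-To"] = parent["Message-ID"]
reply["References"] = parent["Message-ID"]
# Gmail largely ignores these headers when the subject diverges,
# so keeping "Re: " + the original subject is what preserves its thread.
reply["Subject"] = "Re: " + parent["Subject"]
reply.set_content("Looks good to me.")
```

Clients like Gmail that key threading partly off the subject will start a new thread if a later message renames the change, even with identical References headers.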
[22:20:10] oh wait it is against the law for bt to allow that lol [22:20:32] ostriches: well at least it is no longer broken for other mail clients :] [22:21:09] Heh, #237 is like 6 years old [22:21:11] Oh wow https://www.productsandservices.bt.com/products/broadband-packages?mboxSession=1469571564989-42099#unlimited bt offering £150 when you go with them [22:21:36] https://blogs.fsfe.org/stargrave/archives/80 "Why Gmail webmail totally sucks" that sounds NPOV :D [22:21:46] LOL [22:22:25] ostriches: #237 is definitely fixed. It was complaining about the lack of In-Reply-To which is now present (and was in 2.8) [22:22:36] the other one is the inconsistent timestamp [22:22:41] :) [22:22:44] which is somehow fixed with 2.12 / new db [22:22:50] so that fixes the issue for me :] [22:23:36] :) [22:23:55] paladox: for that debian-glue build you pointed at: it comes from https://gerrit.wikimedia.org/r/#/c/299869/ and they always failed on that change [22:24:12] oh [22:24:20] but wouldn't it test it since there is [22:24:22] debian/ [22:24:23] folder [22:24:45] dh_virtualenv is probably broken [22:24:50] oh [22:24:53] trying to run the tests outside of the crafted virtualenv [22:24:56] and it misses dependencies [22:24:57] Oh [22:27:05] * paladox gets a new movie every day with sky cinema all at no extra cost LOL plus exclusive deals I get movies 8 months after cinema release before anyone else. [22:28:40] Star Wars in August [22:30:02] paladox: and I might have found the fix :)] [22:30:44] oh [22:30:45] yay [22:30:47] thanks [22:31:21] 10MediaWiki-Releasing, 06Release-Engineering-Team, 10Parsoid: debian signing keyid E84AFDD2 has expired - https://phabricator.wikimedia.org/T141400#2497197 (10greg) [22:31:48] :) [22:34:25] hashar: Good times.
https://phabricator.wikimedia.org/T38288#2497201 [22:36:34] (03PS1) 10Hashar: Switch to dh_virtualenv as a buildsystem [integration/zuul] (debian/precise-wikimedia) - 10https://gerrit.wikimedia.org/r/301306 [22:36:49] 10MediaWiki-Releasing, 06Release-Engineering-Team, 10Parsoid: debian signing keyid E84AFDD2 has expired - https://phabricator.wikimedia.org/T141400#2497207 (10greg) Adding Filippo because it looks like he created it: ``` Date: Tue Jul 22 16:22:56 2014 +0000 From: git@palladium.eqiad.wmnet Subject: [Ops] [p... [22:37:37] 10MediaWiki-Releasing, 06Release-Engineering-Team, 06Operations, 10Parsoid: debian signing keyid E84AFDD2 has expired - https://phabricator.wikimedia.org/T141400#2497211 (10greg) [22:37:51] ostriches: neat [22:38:04] ostriches: and if a change subject is modified, that is a new thread on gmail right ? [22:38:10] Yep [22:38:16] Which I think is the right behavior. [22:38:16] might be a good indication that the change has been repurposed somehow [22:38:20] Yeah [22:38:29] sounds good [22:38:42] Like I said, at best keeping it the same is misleading. At worst you're outright lying :p [22:38:43] oh, how many #Gerrit bugs have you marked as resolved this week ? :] [22:38:49] Like 40 or so [22:38:54] nice [22:41:23] :) [22:54:33] 10MediaWiki-Releasing, 06Release-Engineering-Team, 06Operations, 10Parsoid: debian signing keyid E84AFDD2 has expired - https://phabricator.wikimedia.org/T141400#2497264 (10greg) We should probably update https://wikitech.wikimedia.org/wiki/Releases.wikimedia.org (or add a new page and link to it from [[Re... [22:55:53] * paladox it's 23:55 lol. [23:11:56] thcipriani: for your reading pleasure -- https://wikitech.wikimedia.org/wiki/User:BryanDavis/Scap3_in_a_Labs_project [23:12:09] * thcipriani looks [23:14:43] the /Stage[main]/Scap::L10nupdate/File[/home/l10nupdate/.gitconfig]/ensure is the one that I couldn't seem to shake the last time I tried this.
[23:15:38] even with code that I thought would correct the resource ordering :\ [23:49:22] 06Release-Engineering-Team (Deployment-Blockers), 05Release: MW-1.28.0-wmf.12 deployment blockers - https://phabricator.wikimedia.org/T139214#2497402 (10Jdforrester-WMF)
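For reference on the dh_virtualenv switch hashar pushed above (https://gerrit.wikimedia.org/r/301306): the usual wiring for a dh-virtualenv build lives in `debian/rules`. This is the generic pattern from dh-virtualenv's documentation, not necessarily what that zuul change does:

```make
#!/usr/bin/make -f
# Generic dh-virtualenv wiring: debhelper builds the package inside
# a virtualenv so the Python dependencies ship inside the .deb,
# instead of the tests running against (and missing) system packages.
%:
	dh $@ --with python-virtualenv
```

As with any makefile, the `dh` line must be indented with a tab, and the package needs a build-dependency on `dh-virtualenv`.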