[02:34:34] Project mwext-phpunit-coverage-publish build #3804: FAILURE in 39 sec: https://integration.wikimedia.org/ci/job/mwext-phpunit-coverage-publish/3804/
[02:56:02] Project mwext-phpunit-coverage-publish build #3805: STILL FAILING in 35 sec: https://integration.wikimedia.org/ci/job/mwext-phpunit-coverage-publish/3805/
[02:56:32] Project mwext-phpunit-coverage-publish build #3806: STILL FAILING in 29 sec: https://integration.wikimedia.org/ci/job/mwext-phpunit-coverage-publish/3806/
[05:33:12] MediaWiki-Releasing, Collaboration-Team-Triage, Notifications, MW-1.31-release: Bundle Echo extension with MW 1.31 - https://phabricator.wikimedia.org/T191738#4160125 (Legoktm)
[05:39:38] Release-Engineering-Team (Kanban), Scap, Operations: mwscript rebuildLocalisationCache.php takes 40 minutes - https://phabricator.wikimedia.org/T191921#4121650 (Legoktm) >>! In T191921#4122327, @Joe wrote: > What are the blockers for the use of PHP7? > > All I see on the ticket mentioned is the memc...
[05:56:14] Yippee, build fixed!
[05:56:15] Project mwext-phpunit-coverage-publish build #3807: FIXED in 1 min 30 sec: https://integration.wikimedia.org/ci/job/mwext-phpunit-coverage-publish/3807/
[10:10:29] (PS1) Hashar: Pass PATH when launching chromedriver [integration/quibble] - https://gerrit.wikimedia.org/r/429159
[10:25:14] eddiegp: logstash-beta has not been recording anything since April 20th, 2018. Normal, or is something going wrong?
[10:25:41] I don't think total silence is good. There are a lot of 'info' messages daily and none appear in the default '*' view.
[10:34:31] Beta-Cluster-Infrastructure, Puppet: Puppet broken on deployment-cpjobqueue - https://phabricator.wikimedia.org/T193127#4160536 (MarcoAurelio)
[10:36:28] (PS1) Hashar: Fix README git clone example [integration/quibble] - https://gerrit.wikimedia.org/r/429161 (https://phabricator.wikimedia.org/T192239)
[10:37:16] (CR) Zfilipin: [C: +2] Fix README git clone example [integration/quibble] - https://gerrit.wikimedia.org/r/429161 (https://phabricator.wikimedia.org/T192239) (owner: Hashar)
[10:37:42] (Merged) jenkins-bot: Fix README git clone example [integration/quibble] - https://gerrit.wikimedia.org/r/429161 (https://phabricator.wikimedia.org/T192239) (owner: Hashar)
[10:41:10] Beta-Cluster-Infrastructure, Puppet: Puppet broken on deployment-cpjobqueue - https://phabricator.wikimedia.org/T193127#4160592 (MarcoAurelio) p: Triage→High
[10:43:03] Beta-Cluster-Infrastructure, Wikimedia-Logstash: logstash-beta not recording since April, 20th 2018 - https://phabricator.wikimedia.org/T193133#4160614 (MarcoAurelio)
[10:45:30] Beta-Cluster-Infrastructure, Wikimedia-Logstash: logstash-beta not recording since April, 20th 2018 - https://phabricator.wikimedia.org/T193133#4160627 (MarcoAurelio)
[10:55:44] * TabbyCat wonders how bad deployment-cpjobrunner is that even trying to cd errors with 'no space'
[10:59:02] Beta-Cluster-Infrastructure, Puppet: Puppet broken on deployment-cpjobqueue - https://phabricator.wikimedia.org/T193127#4160845 (MarcoAurelio) ``` maurelio@deployment-cpjobqueue:/$ sudo du -sh / du: cannot access ‘/proc/32319/task/32319/fd/3’: No such file or directory du: cannot access ‘/proc/32319/task...
[10:59:23] (CR) Zfilipin: [C: +2] Pass PATH when launching chromedriver [integration/quibble] - https://gerrit.wikimedia.org/r/429159 (owner: Hashar)
[10:59:56] (Merged) jenkins-bot: Pass PATH when launching chromedriver [integration/quibble] - https://gerrit.wikimedia.org/r/429159 (owner: Hashar)
[11:14:32] Release-Engineering-Team, MediaWiki-Core-Tests, User-zeljkofilipin: Q4 Selenium framework improvements - https://phabricator.wikimedia.org/T190994#4160877 (zeljkofilipin)
[11:14:44] Beta-Cluster-Infrastructure, Puppet: Puppet broken on deployment-cpjobqueue - https://phabricator.wikimedia.org/T193127#4160890 (MarcoAurelio) @Joe Any idea how to fix this? Delete `/var/vda3`? Maybe some work is also needed on `tmpfs`?
[11:17:49] (PS1) Hashar: Pass PATH when launching npm selenium-test [integration/quibble] - https://gerrit.wikimedia.org/r/429168 (https://phabricator.wikimedia.org/T193131)
[11:18:49] (CR) Hashar: [C: +2] Pass PATH when launching npm selenium-test [integration/quibble] - https://gerrit.wikimedia.org/r/429168 (https://phabricator.wikimedia.org/T193131) (owner: Hashar)
[11:19:16] (Merged) jenkins-bot: Pass PATH when launching npm selenium-test [integration/quibble] - https://gerrit.wikimedia.org/r/429168 (https://phabricator.wikimedia.org/T193131) (owner: Hashar)
[11:27:56] Beta-Cluster-Infrastructure, Puppet: Puppet broken on deployment-cpjobqueue - https://phabricator.wikimedia.org/T193127#4160939 (MarcoAurelio) File's plenty of: ``` {"name":"cpjobqueue","hostname":"deployment-cpjobqueue","pid":136,"level":50,"err":{"message":"KafkaConsumer is not connected","name":"cpj...
[11:28:39] Project-Admins, wikiba.se: permit a phabricator page for the FactGrid project - https://phabricator.wikimedia.org/T193071#4160941 (Aklapper) >>! In T193071#4159248, @Olaf_Simons wrote: > I think I need a project to create workboards - should be a project then. In short: One [sub]project has one workboard...
[11:54:23] Project-Admins, wikiba.se: permit a phabricator page for the FactGrid project - https://phabricator.wikimedia.org/T193071#4160955 (Olaf_Simons) cool, I will try to find out how to do things properly.
[13:54:19] (PS1) Hashar: Allow more recent GitPython version [integration/quibble] - https://gerrit.wikimedia.org/r/429195 (https://phabricator.wikimedia.org/T193057)
[13:54:39] Release-Engineering-Team (Kanban), Quibble, Patch-For-Review: `quibble` fails with git 2.15: Type of packed-Refs not understood: '# pack-refs with: peeled fully-peeled sorted' - https://phabricator.wikimedia.org/T193057#4161230 (hashar) a: hashar
[13:56:37] (CR) Zfilipin: [C: +2] Allow more recent GitPython version [integration/quibble] - https://gerrit.wikimedia.org/r/429195 (https://phabricator.wikimedia.org/T193057) (owner: Hashar)
[13:57:06] (Merged) jenkins-bot: Allow more recent GitPython version [integration/quibble] - https://gerrit.wikimedia.org/r/429195 (https://phabricator.wikimedia.org/T193057) (owner: Hashar)
[15:49:36] (PS3) Hashar: Convert doc to Sphinx [integration/quibble] - https://gerrit.wikimedia.org/r/428628
[16:00:10] (CR) Zfilipin: Convert doc to Sphinx (1 comment) [integration/quibble] - https://gerrit.wikimedia.org/r/428628 (owner: Hashar)
[16:23:42] Release-Engineering-Team (Kanban), Scap, Operations: mwscript rebuildLocalisationCache.php takes 40 minutes - https://phabricator.wikimedia.org/T191921#4161858 (Krinkle) I've put a straw-man up at T176370#4161855.
[16:44:09] Release-Engineering-Team (Kanban), Release, Train Deployments: 1.32.0-wmf.2 deployment blockers - https://phabricator.wikimedia.org/T191048#4161956 (greg) FYI, T189552 is happening on Wednesday May 2nd with overlap with the train but the complications were not deemed blocking worthy (iow: no need to...
[16:45:12] <3 jenkins: fatal: write error: No space left on device
[16:45:18] https://integration.wikimedia.org/ci/job/mwext-CirrusSearch-whitespaces/3096/console
[16:45:25] * greg-g looks
[16:47:54] yup... thcipriani which workspaces shouldn't we delete?
[16:48:34] * thcipriani marks as offline
[16:48:38] https://phabricator.wikimedia.org/P7044
[16:49:08] !log tyler marked integration-slave-jessie-1001 as offline
[16:49:10] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[16:49:39] Continuous-Integration-Infrastructure, Release-Engineering-Team (Kanban), Patch-For-Review: mwgate-php55lint workspaces are getting huge - https://phabricator.wikimedia.org/T179963#4161988 (greg) And again... ``` gjg@integration-slave-jessie-1001:/srv/jenkins-workspace/workspace$ du -sh * 511M analy...
[16:51:12] !log thcipriani@integration-slave-jessie-1001:/srv/jenkins-workspace/workspace$ sudo rm -rf *
[16:51:14] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[16:51:33] * ebernhardson wonders if a stack of 4TB spinning rust would be cheaper than figuring out where the disk has gone :)
[16:51:50] usually :)
[16:51:54] !log mark integration-slave-jessie-1001 online
[16:51:56] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[16:52:10] thcipriani: am I remembering incorrectly that you said not to delete some of the workspaces recently?
[16:52:32] I don't remember saying anything about that
[16:52:49] well fine then :)
[16:53:00] I just looked in the SAL logs to see what we'd done in the past: https://tools.wmflabs.org/sal/releng?p=0&q=1001&d=
[16:53:05] ack
[17:13:13] Question: does scap allow for different service names in different environments?
[17:13:49] E.g., we have a utility that has a CLI component and an API component. They've been deployed to the same host in the past, but that will soon no longer be the case.
[17:14:16] Both are run as services
[17:15:10] I'm wondering if it's possible to have scap do `sudo service CLISERVICE restart` on one host/set of hosts, and `sudo service APISERVICE restart` on a different host/set of hosts.
[17:15:44] so you don't want the apiservice running on the cliservice hosts and vice versa? Or do you just not want scap to restart them?
[17:17:21] we have one other thing that's kind of like this. If you mask the service on the hosts where scap shouldn't restart it and then add "require_valid_service: True" in scap.cfg, scap won't try to restart the host that has the masked service.
[17:18:41] with require_valid_service scap will do `systemctl show --property LoadState [service]` to ensure it's not "masked" or "not-found". So if you mask it on hosts where it shouldn't start, then scap won't start it.
[17:22:31] example: https://github.com/wikimedia/mediawiki-services-jobrunner/blob/master/scap/scap.cfg#L10-L14
[17:22:50] thcipriani: You've got the use case.
[17:22:55] That seems like it should work fine, thanks!
[17:23:12] sure thing :)
[17:36:29] Beta-Cluster-Infrastructure, Puppet: Puppet broken on deployment-cpjobqueue - https://phabricator.wikimedia.org/T193127#4162142 (MarcoAurelio) Logs are also quite heavy: ``` maurelio@deployment-cpjobqueue:/srv/log/cpjobqueue$ sudo ls -lash * 11G -rw-r--r-- 1 cpjobqueue cpjobqueue 11G Apr 25 00:57 main...
[17:37:29] greg-g / Reedy: hi, the deployment-cpjobqueue disk is full and something needs to be done, not sure what to delete though.
[17:37:38] everything
[17:37:45] orly?
[17:37:52] Gerrit, ORES, Operations, Patch-For-Review, Scoring-platform-team (Current): Plan migration of ORES repos to git-lfs - https://phabricator.wikimedia.org/T181678#4162148 (awight)
[17:38:00] sudo -u root rm -Rf * ? :P
[17:38:35] mobrovac: ^ what's up with the jobqueue changes, who should we get to work on a full disk on deployment-cpjobqueue?
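[Editor's note] The require_valid_service behaviour described in the scap exchange above can be sketched as a small shell check. On a real host the state string comes from `systemctl show --property LoadState <service>`; here the function takes that output as an argument so the decision logic stands on its own (service names are illustrative, and scap's actual implementation is in its Python codebase, not this script):

```shell
# Sketch of the check scap performs when require_valid_service is set:
# skip the restart when systemd reports the unit as masked or missing.
should_restart() {
    case "$1" in
        LoadState=masked|LoadState=not-found) return 1 ;;  # masked/missing: skip on this host
        *) return 0 ;;                                     # otherwise restart as usual
    esac
}

should_restart "LoadState=loaded" && echo "apiservice: would restart"
should_restart "LoadState=masked" || echo "cliservice host: skipped (masked)"
```

So masking APISERVICE on the CLI hosts (and vice versa) lets one scap config drive both sets of hosts without restarting the wrong service anywhere.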
[17:39:07] see sudo du details at https://phabricator.wikimedia.org/T193127
[17:39:13] -rw-r--r-- 1 cpjobqueue cpjobqueue 11036110848 Apr 25 00:57 main.log
[17:39:14] -rw-r--r-- 1 cpjobqueue cpjobqueue 5408579584 Apr 25 06:36 main.log.1
[17:39:17] 10G log file
[17:39:21] sigh
[17:39:23] ok, looking
[17:39:25] Do we know if we need the old ones?
[17:39:32] Cause might as well rm them, gzip the 5G one..
[17:39:34] and var/vcd3 is 19G
[17:39:54] I think this also caused T193133
[17:39:55] T193133: logstash-beta not recording since April, 20th 2018 - https://phabricator.wikimedia.org/T193133
[17:40:27] /dev/vda3 19G 19G 0 100% /
[17:40:53] waaat main.log 11G? logrotate seems to be at fault here
[17:40:53] Purged some old kernels
[17:40:55] ok, fixing
[17:41:09] -rw-r--r-- 1 cpjobqueue cpjobqueue 11036110848 Apr 25 00:57 main.log
[17:41:10] -rw-r--r-- 1 cpjobqueue cpjobqueue 5408579584 Apr 25 06:36 main.log.1
[17:41:15] It looks like it rotated today
[17:41:19] But today is spammy AF
[17:41:37] !log stopped cpjobqueue and purging logs
[17:41:39] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[17:42:55] ok all good now, sorry for the inconvenience
[17:43:04] we are doing some changes to kafka there
[17:43:18] the logs are not spammy any more
[17:43:25] running puppet now
[17:43:36] /dev/vda3 19G 2.4G 16G 14% /
[17:43:47] mobrovac: and puppet fails :|
[17:43:56] looking
[17:44:04] (CR) Chad: [C: +2] Resurrect ArticleCreationWorkflow [tools/release] - https://gerrit.wikimedia.org/r/429102 (https://phabricator.wikimedia.org/T192455) (owner: MaxSem)
[17:44:04] Error: Could not update: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install ldap-utils' returned 100: Reading package lists...
[17:44:20] (PS2) Chad: Resurrect ArticleCreationWorkflow [tools/release] - https://gerrit.wikimedia.org/r/429102 (https://phabricator.wikimedia.org/T192455) (owner: MaxSem)
[17:44:40] ldap-utils?
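[Editor's note] The "logrotate seems to be at fault" diagnosis above points at purely time-based rotation: a daily schedule cannot protect against a single spammy day. A size trigger would have capped main.log long before 11G. A hypothetical logrotate stanza (the path is taken from the `ls -lash` output above; the thresholds are illustrative, not the actual puppet-managed config):

```
/srv/log/cpjobqueue/*.log {
    daily
    maxsize 100M     # also rotate mid-interval once a log passes 100M
    rotate 5
    compress
    missingok
    copytruncate     # truncate in place instead of restarting the service
}
```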
[17:44:49] what's up with that?
[17:44:53] lemme run apt-get manually
[17:45:06] sure, puppet is your expertise :)
[17:48:19] ok puppet succeeds now after some apt-get foo
[17:48:36] mobrovac: thanks :)
[17:48:45] np Hauskatze, sorry for the trouble
[17:48:54] no probs
[17:51:35] Beta-Cluster-Infrastructure, Puppet: Puppet broken on deployment-cpjobqueue - https://phabricator.wikimedia.org/T193127#4162202 (MarcoAurelio) Open→Resolved a: mobrovac Fixed by @mobrovac. Thanks.
[17:52:05] any idea why logstash is silent?
[17:52:14] I thought full disk here might be the cause
[17:55:00] greg-g: Can't wait for this Developer Productivity program. Every single engineer has had Vagrant completely break on them in the past week :(
[17:55:27] (on my team)
[17:56:26] Beta-Cluster-Infrastructure, Wikimedia-Logstash: logstash-beta not recording since April, 20th 2018 - https://phabricator.wikimedia.org/T193133#4162209 (MarcoAurelio) p: Triage→Unbreak! Without logstash we cannot really see if there are errors on the latest releases of MediaWiki and its extension...
[17:57:05] Beta-Cluster-Infrastructure, Release-Engineering-Team, Wikimedia-Logstash: logstash-beta not recording since April, 20th 2018 - https://phabricator.wikimedia.org/T193133#4162214 (MarcoAurelio)
[17:57:23] RoanKattouw: Have you seen some of the shit vagrant has been doing upstream recently?
[17:57:40] No?
[17:58:43] mooeypoo and I switched to vagrant-lxc recently, which is amazingly fast, but now she's run into bugs where vagrant doesn't find the lxc VM it starts and breaks in various places with Ruby stack traces because the VM ID is nil (one symptom is it trying to cat /var/lib/lxc/config rather than /var/lib/lxc/VMID/config)
[17:59:22] https://github.com/hashicorp/vagrant/issues/9442#issuecomment-374785457
[17:59:31] Beta-Cluster-Infrastructure, Analytics, ChangeProp, EventBus, and 3 others: Puppet broken on deployment-cpjobqueue - https://phabricator.wikimedia.org/T193127#4162216 (mobrovac)
[17:59:50] I keep getting pings on GitHub where my hack is getting propagated around the internet
[17:59:50] fwiw I've been using vagrant-lxc for months and it was also a *huge* improvement. And similarly sometimes it mucks up and I have to use lxc commands to delete the old instance and build a new one
[18:04:03] And permission issues now, and failures of systemd in Vagrant. This is frustrating.
[18:05:21] oh yes the file permissions ... the /vagrant dir keeps whatever uid/gid you have on your machine. Better hope you have account 1000 on your laptop :P
[18:08:06] I don't know what that means -.-
[18:08:06] But I can probably fix permission issues... Now the entire process seems to just not work. I'll destroy again and try to see if that helps
[18:09:35] * bd808 worries that mooeypoo's computer is cursed
[18:10:15] mooeypoo: for permissions, I just mean that whatever uid owns the files in your regular OS keeps ownership inside LXC. So the usernames are different inside LXC but all the ids are still the same. Typically the first account on a machine is 1000, so if you create the first account on your laptop and `vagrant` is the first user created in lxc, things just magically work
[18:10:19] I reinstalled Ubuntu from scratch yesterday. This really shouldn't happen.
[18:10:36] My next recourse is dancing around it with incense and chants
[18:10:39] but if you create a second user on your laptop and try to make a mwv with lxc ... /vagrant will all be owned by the wrong user
[18:11:30] I don't have any users other than the first one I created for me, so it should work, it sounds like
[18:11:36] And yet...
[18:16:10] One of the issues I saw on your machine was that /var/lib/apt/cache ends up being owned by root and then it can't write there anymore somehow
[18:19:47] ebernhardson, have you seen this, and do you have a solution for it:
[18:19:52] https://www.irccloud.com/pastebin/Fz23sDNE/
[18:21:11] Other than that, I'm getting these errors now:
[18:21:15] https://www.irccloud.com/pastebin/DrD2BLeq/
[18:21:17] mooeypoo: the owner/group thing I haven't found a solution for, but that's the permissions issue I mentioned
[18:21:58] ebernhardson: I would ignore it, and redo the permissions for /cache thing every time I need to reprovision, if THAT was the only problem. The second snippet is the killer :\ I was wondering if it is related, but it doesn't seem to be
[18:22:24] I'm going to try and destroy again, see if that helps.
[18:23:04] Should I be using NFS btw? I had some initial issues with it, so I canceled it in the config, but maybe that's the issue? :\
[18:23:31] mooeypoo: it does seem to be about permissions, but something different. The owner/group thing should only affect /vagrant. The other failures I dunno, it looks like it's certainly missing things
[18:23:47] :\
[18:23:58] * mooeypoo reinstalled Ubuntu yesterday. The machine is fresh.
[18:24:04] I didn't even have time to tweak it yet.
[18:24:20] Also, ironically, Ubuntu 18.04 is coming out today
[18:24:43] mooeypoo: just double checked and I'm not using nfs, looks like the only custom change
[18:24:55] sorry can't help more :S
[18:24:58] And I can't wait... it replaces Unity with GNOME again w00t. I'm eagerly waiting to upgrade
[18:25:12] ebernhardson: thanks though!
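[Editor's note] The uid-1000 point ebernhardson makes above is easy to check on the host. The value 1000 is the conventional first-user uid on Ubuntu, and (per the discussion, not vagrant-lxc documentation) the uid the guest's `vagrant` user typically gets, so /vagrant ownership only lines up when the host account matches it; this is a diagnostic sketch, not an official vagrant-lxc tool:

```shell
# vagrant-lxc shares /vagrant with host uid/gid preserved inside the
# container, so compare the host account's uid against the usual 1000.
host_uid=$(id -u)
if [ "$host_uid" -eq 1000 ]; then
    echo "uid 1000: /vagrant ownership should line up inside LXC"
else
    echo "uid $host_uid: files in /vagrant may show up owned by another user"
fi
```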
[18:25:31] ok, knock-on-wood, vagrant destroy worked, and now vagrant up, so far, knock on wood x3, is working
[18:26:12] It may have been something specific with that machine -- though, I have to say, having to periodically (often) destroy the machine is REALLY a hassle. But if that makes it work, I guess it's a better outcome than having nothing
[18:26:23] I spoke too soon
[18:26:39] I also try not to upgrade vagrant :P I'm using an mwv from September
[18:26:45] ==> default: Error: /Stage[main]/Mwv::Cachefilesd/Service[cachefilesd]: Failed to call refresh: Systemd restart for cachefilesd failed!
[18:26:45] ==> default: journalctl log for cachefilesd:
[18:26:48] ^ same issue
[18:27:14] Well, I pulled it yesterday so...
[18:27:33] ebernhardson: what was it you said about running the lxc command to recreate the machine?
[18:27:43] mooeypoo: oh lxc was just to delete, I create with `vagrant up`
[18:27:53] mooeypoo: after vagrant somehow couldn't find the lxc image on `vagrant up`
[18:27:55] At this point, I'll try anything
[18:28:32] ok after those errors above, the provisioning is continuing ... maybe... it will still... work....? :\
[18:29:25] mooeypoo: basically `lxc-ls` will list images, and you can delete them with `lxc-destroy --name `. But that should be no different from `vagrant destroy` as long as it finds it
[18:29:35] you might have to sudo for the lxc commands to show anything
[18:31:31] Release-Engineering-Team (Kanban), Quibble, Patch-For-Review: `quibble` fails with git 2.15: Type of packed-Refs not understood: '# pack-refs with: peeled fully-peeled sorted' - https://phabricator.wikimedia.org/T193057#4162318 (hashar) Open→Resolved Fixed by https://gerrit.wikimedia.org/r/#/...
[18:31:55] ok so the provisioning ended with an SSH error (probably because of the above error) BUT the wiki works
[18:32:13] ... there's probably some internal issues I will have to deal with though at some point
[18:32:15] good enough I suppose :)
[18:32:31] Seeing as I have actual work to do that requires bug fixing, that will have to do
[18:33:46] and btw, the only roles I have are echo, scribunto, and kartographer. Those are relatively simple extensions (not relying on services or anything like that running) so I can't **wait** to try enabling VisualEditor or ORES :\
[18:34:02] lol
[18:34:19] Something has to be done about making a better dev environment experience
[18:34:39] If nothing else, any volunteer going through what I am is giving up, for sure.
[18:35:47] mooeypoo: I have evil_plans.txt!
[18:36:00] And by evil I mean totally awesome
[18:36:08] fwiw, I usually run “vagrant provision” a few times after creating a new mw-vagrant VM. Flush twice.
[18:39:10] mooeypoo: a better dev env is more or less in the pipe
[18:52:06] hashar: you have no idea how happy this makes me
[18:52:24] (PS1) Hashar: docker: quibble 0.0.11 [integration/config] - https://gerrit.wikimedia.org/r/429255
[18:52:27] if there's anything I can do to help test it, please let me know.
[18:53:25] no_justification: My plans are getting to the stage of planning an exorcism on my laptop, so I think any evil_plans.txt option is welcome ...
[18:54:18] mooeypoo: I hope to have an early demo available for people to screw around with soon :)
[18:57:47] (CR) Hashar: [C: +2] docker: quibble 0.0.11 [integration/config] - https://gerrit.wikimedia.org/r/429255 (owner: Hashar)
[18:59:11] (Merged) jenkins-bot: docker: quibble 0.0.11 [integration/config] - https://gerrit.wikimedia.org/r/429255 (owner: Hashar)
[19:00:23] !log Building releng/quibble 0.0.11 docker images
[19:00:25] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[19:01:37] no_justification: upstream seems to have finally fixed gitiles links (easier to find them now). I think they may have done that by mistake while updating the UI.
[19:03:27] (PS1) Hashar: Bump quibble to 0.0.11 [integration/config] - https://gerrit.wikimedia.org/r/429259
[19:04:00] oh
[19:04:16] nvm, I guess if the link is called gitiles, that's when it will show it in the weird place
[19:12:12] Release-Engineering-Team (Kanban), Release, Train Deployments: 1.32.0-wmf.1 deployment blockers - https://phabricator.wikimedia.org/T191047#4162443 (Reedy)
[19:30:51] pfff Error: 1071 Specified key was too long; max key length is 767 bytes
[19:30:58] and I swear I encountered that one a few weeks ago
[19:31:17] https://gerrit.wikimedia.org/r/#/c/419858/ !
[20:19:00] (CR) Hashar: [C: +2] Bump quibble to 0.0.11 [integration/config] - https://gerrit.wikimedia.org/r/429259 (owner: Hashar)
[20:19:21] !log bumped quibble Jenkins jobs to 0.0.11
[20:19:23] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[20:21:45] (Merged) jenkins-bot: Bump quibble to 0.0.11 [integration/config] - https://gerrit.wikimedia.org/r/429259 (owner: Hashar)
[20:40:45] (PS1) Hashar: Drop mediawiki-extensions-* jobs from experimental [integration/config] - https://gerrit.wikimedia.org/r/429340
[20:42:08] (PS1) Hashar: Drop mwext-qunit-composer-jessie from experimental [integration/config] - https://gerrit.wikimedia.org/r/429341
[20:43:04] (PS2) Hashar: Drop mwext-qunit-composer-jessie from experimental [integration/config] - https://gerrit.wikimedia.org/r/429341
[20:43:16] (CR) Hashar: [C: +2] Drop mediawiki-extensions-* jobs from experimental [integration/config] - https://gerrit.wikimedia.org/r/429340 (owner: Hashar)
[20:43:19] (CR) Hashar: [C: +2] Drop mwext-qunit-composer-jessie from experimental [integration/config] - https://gerrit.wikimedia.org/r/429341 (owner: Hashar)
[20:44:52] (Merged) jenkins-bot: Drop mediawiki-extensions-* jobs from experimental [integration/config] - https://gerrit.wikimedia.org/r/429340 (owner: Hashar)
[20:44:54] (Merged) jenkins-bot: Drop mwext-qunit-composer-jessie from experimental [integration/config] - https://gerrit.wikimedia.org/r/429341 (owner: Hashar)
[20:50:19] Continuous-Integration-Config, Security-Team, phan-taint-check-plugin, MW-1.31-release, and 2 others: Make jenkins run phan-taint-check-plugin non-voting and then voting - https://phabricator.wikimedia.org/T182599#3828329 (demon) >>! In T182599#4098733, @Legoktm wrote: > I'd like to aim to get th...
[20:50:54] MediaWiki-Releasing, Release-Engineering-Team, MW-1.31-release: Upgrade patches for tarball releases don't apply cleanly to tarball installation - https://phabricator.wikimedia.org/T73379#748261 (demon) >>! In T73379#4138204, @MaxSem wrote: > Are patches even useful these days when download speed is...
[20:53:25] MediaWiki-Releasing, Release-Engineering-Team, MW-1.31-release: Upgrade patches for tarball releases don't apply cleanly to tarball installation - https://phabricator.wikimedia.org/T73379#4162693 (demon) >>! In T73379#4113574, @Legoktm wrote: > I think the ideal long term fix is to generate the patch...
[21:04:29] Release-Engineering-Team (Kanban): Request access for deployment-prep - https://phabricator.wikimedia.org/T193202#4162729 (NHarateh_WMF)
[21:12:25] Project mwext-phpunit-coverage-publish build #3841: FAILURE in 2 min 22 sec: https://integration.wikimedia.org/ci/job/mwext-phpunit-coverage-publish/3841/
[21:15:30] Project mwext-phpunit-coverage-publish build #3842: STILL FAILING in 3.4 sec: https://integration.wikimedia.org/ci/job/mwext-phpunit-coverage-publish/3842/
[21:16:17] Project mwext-phpunit-coverage-publish build #3843: STILL FAILING in 30 sec: https://integration.wikimedia.org/ci/job/mwext-phpunit-coverage-publish/3843/
[21:22:58] mobrovac: hi, I added https://gerrit.wikimedia.org/r/#/c/429343/ for restbase/deploy regards :)
[21:37:09] Yippee, build fixed!
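[Editor's note] The "Error: 1071 Specified key was too long; max key length is 767 bytes" hashar hit earlier is InnoDB's single-column index-key limit on the older (REDUNDANT/COMPACT) row formats. Since utf8mb4 reserves 4 bytes per character, the longest fully indexable VARCHAR under that limit is floor(767 / 4) = 191 characters, which is why MySQL schemas so often use VARCHAR(191) for indexed columns. This is a general back-of-the-envelope check, not a claim about the specific gerrit patch linked above:

```shell
# InnoDB index keys are capped at 767 bytes on REDUNDANT/COMPACT rows;
# utf8mb4 uses up to 4 bytes per character, so compute the VARCHAR cap.
echo $(( 767 / 4 ))    # prints 191
```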
[21:37:09] Project mwext-phpunit-coverage-publish build #3844: FIXED in 1 min 46 sec: https://integration.wikimedia.org/ci/job/mwext-phpunit-coverage-publish/3844/
[21:43:34] Beta-Cluster-Infrastructure, MW-1.32-release-notes (WMF-deploy-2018-04-24 (1.32.0-wmf.1)), Patch-For-Review, Puppet: deployment-prep has jobqueue issues - https://phabricator.wikimedia.org/T192473#4162818 (MarcoAurelio) @EddieGP deployment-cpjobqueue puppet was broken due to disk full; this was f...
[21:51:42] Beta-Cluster-Infrastructure, MW-1.32-release-notes (WMF-deploy-2018-04-24 (1.32.0-wmf.1)), Patch-For-Review, Puppet: deployment-prep has jobqueue issues - https://phabricator.wikimedia.org/T192473#4162850 (MarcoAurelio) Also, there are stalled jobs in the `job` table: ``` wikiadmin@deployment-db...
[22:15:54] Release-Engineering-Team (Kanban), MediaWiki-SWAT-deployments, User-greg, User-zeljkofilipin: Proposal: Effective immediately, disallow multi-sync patch deployment - https://phabricator.wikimedia.org/T187761#4162895 (greg) Open→Resolved a: greg https://lists.wikimedia.org/pipermail/wik...
[22:16:33] Release-Engineering-Team (Kanban), User-greg: Create q3 quarterly check in slides - https://phabricator.wikimedia.org/T192935#4162900 (greg) Work in progress. TP1 done. TP3 and TP6 TODO.
[23:36:15] Continuous-Integration-Config: Don't run mwext-php70-phan-docker on older extension branches - https://phabricator.wikimedia.org/T193212#4163026 (Reedy)
[23:40:50] Continuous-Integration-Infrastructure, Release-Engineering-Team (Kanban), Patch-For-Review: mwgate-php55lint workspaces are getting huge - https://phabricator.wikimedia.org/T179963#3742118 (Krinkle) Could this possibly be the cause of fetch/lock failures I'm seeing?
(Draft2) Reedy: Don't run PdfHandler phan jobs on older branches [integration/config] - https://gerrit.wikimedia.org/r/429365 (https://phabricator.wikimedia.org/T193212)
[23:43:16] (CR) jerkins-bot: [V: -1] Don't run PdfHandler phan jobs on older branches [integration/config] - https://gerrit.wikimedia.org/r/429365 (https://phabricator.wikimedia.org/T193212) (owner: Reedy)
[23:43:23] Continuous-Integration-Config, Patch-For-Review: Don't run mwext-php70-phan-docker on older extension branches - https://phabricator.wikimedia.org/T193212#4163052 (Reedy) I would suspect this is a wider issue; any extension running phan tests... Patches to older branches will fail :(
[23:45:23] (PS3) Reedy: Don't run PdfHandler phan jobs on older branches [integration/config] - https://gerrit.wikimedia.org/r/429365 (https://phabricator.wikimedia.org/T193212)
[23:46:21] (PS4) Reedy: Don't run PdfHandler phan jobs on older branches [integration/config] - https://gerrit.wikimedia.org/r/429365 (https://phabricator.wikimedia.org/T193212)
[23:47:39] (CR) jerkins-bot: [V: -1] Don't run PdfHandler phan jobs on older branches [integration/config] - https://gerrit.wikimedia.org/r/429365 (https://phabricator.wikimedia.org/T193212) (owner: Reedy)
[23:48:27] Continuous-Integration-Config, Patch-For-Review: Don't run mwext-php70-phan-docker on older extension branches - https://phabricator.wikimedia.org/T193212#4163053 (Reedy) ``` reedy@ubuntu64-web-esxi:~/integration-config/zuul$ grep "name: extension-phan-generic" layout.yaml -B 5 | grep mediawiki/extension...
[23:54:35] (Abandoned) Reedy: Don't run PdfHandler phan jobs on older branches [integration/config] - https://gerrit.wikimedia.org/r/429365 (https://phabricator.wikimedia.org/T193212) (owner: Reedy)
[23:58:24] (Draft2) Reedy: Don't run mwext-php70-phan-docker if no phan.php exists on the branch [integration/config] - https://gerrit.wikimedia.org/r/429367 (https://phabricator.wikimedia.org/T193212)