[04:19:03] Project selenium-MultimediaViewer » firefox,beta,Linux,contintLabsSlave && UbuntuTrusty build #296: 04FAILURE in 23 min: https://integration.wikimedia.org/ci/job/selenium-MultimediaViewer/BROWSER=firefox,MEDIAWIKI_ENVIRONMENT=beta,PLATFORM=Linux,label=contintLabsSlave%20&&%20UbuntuTrusty/296/ [09:25:47] !log Changing Jenkins slave contint1001 working dir from /srv/ssd/jenkins-slave to /srv/jenkins-slave ( https://gerrit.wikimedia.org/r/#/c/337286/ ) [09:25:49] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [09:46:16] 10Continuous-Integration-Infrastructure, 06Release-Engineering-Team, 10Wikimedia-Logstash, 07Jenkins: Send Jenkins daemon logs to logstash - https://phabricator.wikimedia.org/T143733#3021152 (10hashar) In Jenkins service default file, we can set `JENKINS_LOG` to a syslog facility eg: `daemon.info`. The ini... [09:54:17] Yippee, build fixed! [09:54:17] Project selenium-Core » firefox,beta,Linux,contintLabsSlave && UbuntuTrusty build #313: 09FIXED in 6 min 22 sec: https://integration.wikimedia.org/ci/job/selenium-Core/BROWSER=firefox,MEDIAWIKI_ENVIRONMENT=beta,PLATFORM=Linux,label=contintLabsSlave%20&&%20UbuntuTrusty/313/ [11:02:00] (03CR) 10Hashar: [C: 032] .gitreview: swap defaultbranch for track [integration/uprightdiff] (debian) - 10https://gerrit.wikimedia.org/r/326821 (owner: 10Hashar) [11:11:43] (03Merged) 10jenkins-bot: .gitreview: swap defaultbranch for track [integration/uprightdiff] (debian) - 10https://gerrit.wikimedia.org/r/326821 (owner: 10Hashar) [11:57:34] (03CR) 10WMDE-Fisch: [C: 031] Add extension-unittests-generic to FileImporter [integration/config] - 10https://gerrit.wikimedia.org/r/337050 (owner: 10Addshore) [11:57:46] hashar: ^^ would be cool to get that one in now :) [12:52:45] 10Gerrit, 07Upstream: Gerrit should allow a published patch to be set to "draft" again. 
- https://phabricator.wikimedia.org/T63124#3021600 (10Paladox) I have created this https://gerrit-review.googlesource.com/#/c/97062/ to allow us to delete changes, so really you could do it from scratch if we allow users to... [12:53:47] (03PS2) 10Hashar: Add extension-unittests-generic to FileImporter [integration/config] - 10https://gerrit.wikimedia.org/r/337050 (owner: 10Addshore) [12:53:59] (03CR) 10Hashar: [C: 032] Add extension-unittests-generic to FileImporter [integration/config] - 10https://gerrit.wikimedia.org/r/337050 (owner: 10Addshore) [12:55:42] (03Merged) 10jenkins-bot: Add extension-unittests-generic to FileImporter [integration/config] - 10https://gerrit.wikimedia.org/r/337050 (owner: 10Addshore) [12:55:48] epic! ;0 [12:55:48] ty [12:57:48] addshore: I should migrate the deployment to scap / tin.eqiad.wmnet :-} [12:57:55] yeeeehhh! [12:57:56] I think the blocker was that scap can't reload a service [12:57:57] addshore: I have deployed that one [12:58:08] thanks! and you totally should! then I could do it ;) [13:43:00] hashar: sorry to bug you again, but any idea about the failure here? https://gerrit.wikimedia.org/r/#/c/337227/7 everything looks good to me... [13:43:13] looking [13:43:19] ah yeah [13:43:36] there is now a hard limit to the number of files you can change in a single patch [13:43:43] whut O_o [13:43:50] * hashar giggles [13:43:53] xD [13:44:14] ohhhh [13:44:19] addshore: I would guess sort order [13:44:34] https://integration.wikimedia.org/ci/job/mwext-testextension-hhvm/37079/testReport/junit/(root)/AutoLoaderTest/testAutoLoadConfig/ [13:44:37] but the other looks right O_o or is there something I am missing?
[13:44:41] both expected and actual have the class (with the path mangled) [13:45:01] the thing listed as Actual is where it actually is, I have no idea where expected came from O_o [13:45:04] but expected seems to use an array sorted by key [13:45:11] tis auto generated I guess [13:45:23] I can't remember how that AutoLoaderTest::testAutoLoadConfig works [13:45:33] but most probably it just finds ^class in all php files [13:45:46] generates an array of class name => file [13:46:04] the array being sorted by key ( probably ksort() or something like that) [13:46:04] then we query the autoloader [13:46:11] and do a diff [13:46:44] * hashar awards a "ultimate pedanticness linting badge" to Jenkins [13:47:11] okay, I'll shuffle the order around a bit after a cup of tea and swat :P [13:49:17] addshore: $expected = $wgAutoloadLocalClasses + $wgAutoloadClasses; [13:49:22] and it is passed via array_unique() [13:49:48] > Note that keys are preserved. array_unique() sorts the values treated as string at first, [13:49:51] so yeah they are sorted [14:23:23] 10Deployment-Systems, 06Release-Engineering-Team, 10Scap, 06Operations, 15User-Addshore: cannot delete non-empty directory: php-1.29.0-wmf.3 messages on 'scap sync' on mwdebug1002 - https://phabricator.wikimedia.org/T157030#3021831 (10Addshore) 05Resolved>03Open I just saw this again today in EU swat.
[14:34:08] 06Release-Engineering-Team, 10Scap, 10scap2: Cleanup old cache dirs if still around - https://phabricator.wikimedia.org/T157743#3021899 (10Reedy) [14:34:11] 10Deployment-Systems, 06Release-Engineering-Team, 10Scap, 06Operations, 15User-Addshore: cannot delete non-empty directory: php-1.29.0-wmf.3 messages on 'scap sync' on mwdebug1002 - https://phabricator.wikimedia.org/T157030#3021897 (10Reedy) [15:19:45] 10Deployment-Systems, 06Release-Engineering-Team, 10Scap, 06Operations, 15User-Addshore: cannot delete non-empty directory: php-1.29.0-wmf.3 messages on 'scap sync' on mwdebug1002 - https://phabricator.wikimedia.org/T157030#3021971 (10mmodell) these are generally caused by some files owned by l10nupdate... [16:52:03] hashar: you around? [16:55:42] arlolra: almost. Have a team call in 4 minutes [16:55:57] when you get a chance, can you take a look at https://gerrit.wikimedia.org/r/#/c/336873/ [16:56:35] and why that extension isn't registering [16:56:40] ahh I saw that last week [16:56:52] forgot to look at it today sorry. I was swamped with puppet patches [16:57:18] no problem, just making sure it's on your radar [17:01:01] guess I will have to reproduce locally [17:01:18] can't tell what " This test suite requires the 'ref' hook extension, skipping." is [17:03:57] i think it means the cite extension isn't registered [17:04:07] if i comment out, // wfLoadExtension( 'Cite' ); [17:04:25] then run [17:04:25] php tests/parser/parserTests.php --file extensions/Cite/tests/parser/citeParserTests.txt [17:04:33] i get [17:04:34] Running parser tests from "extensions/Cite/tests/parser/citeParserTests.txt"... [17:04:34] This test suite requires the 'ref' hook extension, skipping.
[17:04:51] so, i'm probably missing something w/ "extensions_load.txt" [17:06:24] mv deps.txt src/mediawiki/core/extensions_load.txt [17:06:39] that extensions_load.txt file is read by a LocalSettings.php snippet [17:07:02] which is in integration/jenkins.git mediawiki/conf.d/50_mw_ext_loader.php [17:07:24] which supposedly uses $loadFile = $IP . '/extensions_load.txt'; [17:07:44] export MW_INSTALL_PATH='src/mediawiki/core' [17:07:45] with $IP supposedly coming from MW_INSTALL_PATH [17:08:10] ah [17:08:31] but the commands that run parserTests.php are in a different shell env (line 42: -shell: | [17:08:37] which lacks the MW_INSTALL_PATH [17:08:48] so maybe try exporting it just after the find commands? [17:08:53] and before the php parserTests.php [17:09:02] then hopefully $IP will be set properly [17:09:14] sounds plausible, ok [17:09:17] that might be the corner case you are hitting [17:09:23] do you know how to push jjb jobs? [17:09:24] i tend to do MW_INSTALL_PATH=/var/www/wiki/mediawiki/core php maintenance/update.php [17:09:26] or similar [17:09:33] or that yeah [17:10:09] no, lego was pushing for me when i was testing before ...
he pointed to the docs though [17:11:40] https://www.mediawiki.org/wiki/Continuous_integration/Jenkins_job_builder should have everything [17:12:05] arlolra: the alternative hack is to do it directly via the Jenkins web interface :} [17:12:07] (03PS6) 10Arlolra: Run parsoid's cite parser tests as well [integration/config] - 10https://gerrit.wikimedia.org/r/336873 (https://phabricator.wikimedia.org/T114256) [17:13:01] arlolra: https://integration.wikimedia.org/ci/job/parsoidsvc-hhvm-parsertests-jessie/ [17:13:05] login with your wmflabs account [17:13:08] use Configure [17:13:20] then somewhere at the bottom there should be the shell commands [17:15:00] i see [17:16:54] though that will take effect immediately for patches being triggered from Gerrit :-} [17:18:36] arlolra: since you cd src/mediawiki/core , you probably want an absolute path for MW_INSTALL_PATH [17:18:45] MW_INSTALL_PATH="$WORKSPACE/src/mediawiki/core" [17:19:01] else I guess it is trying to include .../src/mediawiki/core/src/mediawiki/core/extensions_load.txt [17:19:26] good point [17:20:50] Fatal error: File not found: src/mediawiki/core/includes/AutoLoader.php in [17:20:50] :( [17:21:27] (03PS7) 10Arlolra: Run parsoid's cite parser tests as well [integration/config] - 10https://gerrit.wikimedia.org/r/336873 (https://phabricator.wikimedia.org/T114256) [17:22:12] arlolra: would need double quotes to allow expansion [17:22:23] '$FOO/bar' would be the literal string $FOO/bar [17:23:01] $ echo '$FOO/bar' [17:23:03] $FOO/bar [17:23:05] $ FOO=really bash -c 'echo "$FOO/bar"' [17:23:05] really/bar [17:24:01] (03PS8) 10Arlolra: Run parsoid's cite parser tests as well [integration/config] - 10https://gerrit.wikimedia.org/r/336873 (https://phabricator.wikimedia.org/T114256) [17:25:33] what was this all about?
https://integration.wikimedia.org/ci/job/parsoidsvc-hhvm-parsertests-jessie/lastBuild/artifact/log/mw-debug-cli.log [17:26:46] no clue [17:26:50] l [17:26:52] ok [17:26:52] that is probably just the debug log for update.php [17:27:13] but last build on https://integration.wikimedia.org/ci/job/parsoidsvc-hhvm-parsertests-jessie/1347/console [17:27:20] has the fatal: Fatal error: File not found: src/mediawiki/core/includes/AutoLoader.php in [17:32:26] hmm, well that didn't seem to have made any difference [17:32:27] https://integration.wikimedia.org/ci/job/parsoidsvc-hhvm-parsertests-jessie/lastBuild/console [17:32:47] :( [17:33:16] so I was wrong sorry :(((( [17:34:10] no problem ... i'm sure it's something like that [17:35:01] 17:31:07 Proceeding '50_mw_ext_loader.php'... [17:35:09] so that is running at least [17:35:11] yeah that injects the code snippet into LocalSettings.php [17:35:55] I am pretty sure the extension is loaded [17:36:01] oh no sorry [17:36:05] I am not sure [17:36:36] maybe i should cat LocalSettings.php to see what it looks like [17:37:17] maybe the hook is not properly registered. The test script seems to run $wgParser->firstCallInit(); [17:37:42] ahh [17:37:57] you can even ask Jenkins to attach it to the build [17:38:43] just copy it to $WORKSPACE/log [17:38:54] eg after mw-apply-settings.php : [17:39:23] cp "$MW_INSTALL_PATH/LocalSettings.php" "$WORKSPACE/log/LocalSettings" [17:39:27] cp "$MW_INSTALL_PATH/LocalSettings.php" "$WORKSPACE/log/LocalSettings.php" [17:39:39] Jenkins basically grabs whatever is under /log/ [17:40:01] but it is probably easier to try to reproduce locally. Will try tonight after my last meeting [17:40:04] else that will be for tomorrow :/ [17:41:24] right, it's late in the day for you. sure, again, whenever you have time.
no rush [17:41:34] thanks for helping [17:44:25] from a quick local test [17:44:32] they pass just fine :-} [17:44:55] hmm [17:45:11] https://integration.wikimedia.org/ci/job/parsoidsvc-hhvm-parsertests-jessie/lastSuccessfulBuild/artifact/log/LocalSettings.php/*view*/ [17:45:31] neat [17:45:54] I also have no idea where the extension is actually cloned [17:48:23] arlolra: yeah it is a huge mess really [17:48:26] I gotta debug it properly [17:48:39] ha, ok [17:48:46] the zuul-parsoid-cloner macro has a map of Gerrit repo name ---> some path on file system [17:49:02] which is defined by etc/zuul-parsoid-clonemap.yaml in integration/jenkins.git [17:49:11] and that does not route mediawiki/extensions/* repo [17:49:34] so it probably ends up at mediawiki/extensions/Cite [17:49:57] or god knows where. Maybe a debug statement such as : find -name Cite [17:50:01] would help finding it :} [17:50:16] arlolra: focusing on end of audio, commuting back home, dinner, meetings and I will see what I can do after :} [17:50:43] 17:44:39 + xargs -0 md5sum -b [17:50:43] 17:44:39 a9e3701f5e5c6d68d92b42f803b1c0c1 *./mediawiki/extensions/Cite/tests/parser/citeParserTests.txt [17:50:43] 17:44:39 0c6d29758cd08278827d354a3dd9f687 *./parsoidsvc/tests/citeParserTests.txt [17:51:13] oh [17:51:26] so maybe it clones them at the proper place bah [17:51:29] looks like it is indeed in $WORKSPACE/mediawiki/extensions/Cite [17:51:31] :/ [17:52:03] and then [17:52:03] MW_INSTALL_PATH="$WORKSPACE/src/mediawiki/core" [17:52:11] ahhh [17:52:16] we're off by /src [17:52:18] so gotta tweak the clone map [17:52:30] or so ...
yeah [17:52:42] ok, at least we know some things [17:52:49] have a good meeting / end of day [17:55:12] (03PS1) 10Hashar: Parsoid clone map need extensions routing [integration/jenkins] - 10https://gerrit.wikimedia.org/r/337430 [17:55:20] arlolra: https://gerrit.wikimedia.org/r/337430 Parsoid clone map need extensions routing [17:55:27] but I can't deploy it right now [17:55:37] gotta refresh the CI images to include that new file [17:55:53] which is rather easy to do but might require a rollback if something screws up [17:56:58] ok, see you tomorrow, or later [17:57:28] (03CR) 10Hashar: "Once merged we need to update the Nodepool Jessie image based on the doc at https://wikitech.wikimedia.org/wiki/Nodepool#Manually_generate" [integration/jenkins] - 10https://gerrit.wikimedia.org/r/337430 (owner: 10Hashar) [17:59:30] 06Release-Engineering-Team, 10Elasticsearch, 10Phabricator (Search): phab+elasticsearch: support multiple elasticsearch clusters / datacenters - https://phabricator.wikimedia.org/T157156#3022469 (10mmodell) [18:17:59] 10Continuous-Integration-Infrastructure, 06Release-Engineering-Team, 06Operations, 10hardware-requests, 13Patch-For-Review: Phase out scandium.eqiad.wmnet - https://phabricator.wikimedia.org/T150936#3022502 (10Dzahn) - 10:07 < mutante> !log scandium - ex-zuul merger - removing from puppet, revoking puppe...
[18:32:36] 10Continuous-Integration-Infrastructure, 06Release-Engineering-Team, 06Operations, 10hardware-requests, 13Patch-For-Review: Phase out scandium.eqiad.wmnet - https://phabricator.wikimedia.org/T150936#3022562 (10RobH) [18:38:43] 10Continuous-Integration-Infrastructure, 06Release-Engineering-Team, 06Operations, 10hardware-requests, 13Patch-For-Review: Phase out scandium.eqiad.wmnet - https://phabricator.wikimedia.org/T150936#3022594 (10Dzahn) [18:41:03] 10Continuous-Integration-Infrastructure, 06Release-Engineering-Team, 06Operations, 10hardware-requests: Phase out scandium.eqiad.wmnet - https://phabricator.wikimedia.org/T150936#3022600 (10Dzahn) [18:44:57] RainbowSprinkles: Hey! I had a (very short) look into the gerrit GC logs (as suggested by Erik). [18:45:14] Ah, yes :) [18:46:12] only very basic observation atm... I'm surprised to see a heap of 28 GB and a used heap after GC between 2 and 6 GB. It looks like the heap is very much oversized... [18:46:35] any idea if there is a reason for that? Do we have specific operations where we know they require a lot more heap? [18:47:15] So, the idea was that when we got a larger machine with more ram, we bumped the heap to take more use of the memory we had. [18:47:31] Ideally, we wanted to bump up more of our config to make use of it (higher git cache data, etc etc) [18:48:07] Granted, gerrit's internal stats report higher usage... eg from just now: [18:48:09] Mem: 25.34g total = 18.50g used + 6.16g free + 702.10m buffers [18:48:28] yeah, but that's most probably non GCed memory [18:48:48] memory stats matter only after GC... [18:50:47] quick analysis with gceasy: http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTcvMDIvMTMvLS1nYy50YXIuZ3otLTE4LTQ5LTUx [18:51:55] that's just from after the latest restart, but some older logs show a similar behaviour [18:52:43] the Interactive Graphs / Heap after GC seem to indicate a fairly low actual heap usage...
[18:54:58] RainbowSprinkles: is there an open phab ticket on which I should add a few notes? [18:55:25] Lemme find it :) [18:55:51] T148478 ? [18:55:51] T148478: Investigate seemingly random Gerrit slow-downs - https://phabricator.wikimedia.org/T148478 [18:56:15] https://phabricator.wikimedia.org/T148478 [18:56:16] Yep that's it [18:56:57] Our working theory has been that the slowdowns we see are pauses during GC, but the underlying cause of why we're doing such a crappy GC is unknown [18:56:59] Ok, I'll add a few lines, not sure it will bring much more than what already is there... [18:57:11] (also: curious about moving to G1 since we're on java8 now) [18:59:31] I have limited (and fairly good) experience with G1, but I know that elastic is strongly against it atm, they have tests showing data corruption in Lucene, related to G1... [19:00:11] Ok, taking a break, back soon... [19:00:14] Ugh, we use lucene :( [19:02:18] ElasticSearch is now supported in gerrit :) [19:10:55] Yes but not until the new version, which I'm in no rush to upgrade to [19:24:52] 10Continuous-Integration-Infrastructure: Investigate installing php5.3 on trusty and/or debian instance - https://phabricator.wikimedia.org/T103786#3022794 (10Legoktm) >>! In T103786#3022786, @Stashbot wrote: > {nav icon=file, name=Mentioned in SAL (#wikimedia-operations), href=https://tools.wmflabs.org/sal/log/... [19:25:44] 10Gerrit, 06Operations, 13Patch-For-Review: Investigate seemingly random Gerrit slow-downs - https://phabricator.wikimedia.org/T148478#2739193 (10Gehel) 2 observations after doing a short dive into GC logs: * it looks like most of the time (looking at logs over the past several days) the heap after GC is be... [19:56:50] https://integration.wikimedia.org/ci/job/mediawiki-releases/ !!! 
[19:56:54] I got one green bubble :] [19:58:42] (03CR) 10Hashar: [C: 032] Parsoid clone map need extensions routing [integration/jenkins] - 10https://gerrit.wikimedia.org/r/337430 (owner: 10Hashar) [19:59:00] (03CR) 10Hashar: "Need to update the zuul-cloner map for paroisd https://gerrit.wikimedia.org/r/#/c/337430/" [integration/config] - 10https://gerrit.wikimedia.org/r/336873 (https://phabricator.wikimedia.org/T114256) (owner: 10Arlolra) [19:59:43] (03Merged) 10jenkins-bot: Parsoid clone map need extensions routing [integration/jenkins] - 10https://gerrit.wikimedia.org/r/337430 (owner: 10Hashar) [20:01:03] !log Updating Nodepool Jessie snapshot to update the Parsoid zuul-cloner map ( https://gerrit.wikimedia.org/r/#/c/337430/ ) [20:01:07] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [20:01:07] arlolra: ^^ :] [20:01:10] it is in progress [20:02:24] :D [20:10:50] 10Continuous-Integration-Infrastructure, 06Release-Engineering-Team, 06Operations, 10hardware-requests: Phase out scandium.eqiad.wmnet - https://phabricator.wikimedia.org/T150936#3023070 (10hashar) [20:10:52] 10Continuous-Integration-Infrastructure, 06Release-Engineering-Team, 13Patch-For-Review: zuul-merger git-daemon process is not start properly by systemd ? - https://phabricator.wikimedia.org/T157785#3023071 (10hashar) [20:13:58] PROBLEM - Puppet run on repository is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [20:15:54] !log Image snapshot-ci-jessie-1487016035 in wmflabs-eqiad is ready [20:15:55] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [20:24:23] hashar: you did it! [20:24:23] 20:24:02 Running parser tests from "/home/jenkins/workspace/parsoidsvc-hhvm-parsertests-jessie/parsoidsvc/tests/citeParserTests.txt"... [20:24:24] 20:24:03 [20:24:24] 20:24:03 Passed 33 of 33 tests (100%)... ALL TESTS PASSED! [20:25:10] oh my god! [20:25:21] teammmm work! 
[20:25:34] o/ [20:25:36] there are some other extensions with parsertests [20:26:27] yeah, i'm going to do TimedMediaHandler next [20:26:27] https://phabricator.wikimedia.org/T64270#2998083 [20:26:52] so from the change you did, in theory it should be all about adding the repo to the list [20:26:55] and maybe that will just work [20:27:02] with the delta that some extensions have dependencies :/ [20:27:48] right [20:28:39] can we merge https://gerrit.wikimedia.org/r/#/c/336873/ [20:28:57] 10Gerrit, 06Operations, 13Patch-For-Review: Investigate seemingly random Gerrit slow-downs - https://phabricator.wikimedia.org/T148478#3023122 (10Gehel) Next step of investigation might be to collect regular thread dumps and see if we can see something strange during the next issue. [20:42:07] PROBLEM - Puppet run on integration-slave-precise-1002 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:44:17] arlolra: I guess? :] [20:44:26] (03CR) 10Hashar: [C: 032] Run parsoid's cite parser tests as well [integration/config] - 10https://gerrit.wikimedia.org/r/336873 (https://phabricator.wikimedia.org/T114256) (owner: 10Arlolra) [20:44:51] arlolra: it is a bit of spaghetti code but it works so that is good enough [20:46:00] (03Merged) 10jenkins-bot: Run parsoid's cite parser tests as well [integration/config] - 10https://gerrit.wikimedia.org/r/336873 (https://phabricator.wikimedia.org/T114256) (owner: 10Arlolra) [20:46:13] yeah i think we should switch to making that job use all the regular mediawiki stuff. 
the only thing it needs from parsoid is the parserTests.txt file, so we should optimize for the ease of mediawiki reuse [20:47:11] potentially we could use the magic macro "prepare-mediawiki-zuul-project-no-vendor" [20:47:14] which more or less do the same [20:47:27] and change to use the same file hierarchy as other jobs [20:47:33] eg src/ <-- mediawiki [20:47:37] src/extensions [20:47:41] and parsoid somewhere [20:47:50] yes, exactly [20:47:56] maybe I will clean up tomorrow [20:48:04] ultimately [20:48:06] k, thanks :) [20:48:12] I want all of that cruft to be in a standalone repo [20:48:30] that developers can clone locally and thus reproduce an installation of mediawiki + configuration as CI does it [20:48:41] btw, not sure if you saw T157730 [20:48:41] T157730: parsoidsvc-hhvm-parsertests-jessie doesn't run tidy tests - https://phabricator.wikimedia.org/T157730 [20:48:41] so one would one day eventually git clone mediawikitestrunner [20:48:48] ./mediawikitestrunner/doit.sh --please [20:49:08] arhrgh tidy [20:49:24] yeah, that's kind of important for us [20:49:27] to run the tidy test [20:49:28] s [20:49:48] well ... 
tidy is going away, but til then [20:50:10] maybe because the job runs on hhvm [20:50:37] I wonder whether the PHPUnit job that runs on the Cite extension or mediawiki has tidy [20:50:41] or whether it is just skipped [20:51:13] (my english is crap) [20:52:02] it's fine [20:53:00] i haven't looked into tidy yet [20:53:14] just noticed it as i was doing the cite stuff [20:53:54] RECOVERY - Puppet run on repository is OK: OK: Less than 1.00% above the threshold [0.0] [20:54:09] a run of tests on MediaWiki core shows 118 tests being skipped https://integration.wikimedia.org/ci/job/mediawiki-phpunit-hhvm-jessie/3235/console :( [20:54:18] and they are not shown [20:55:33] :( [20:59:21] 10Continuous-Integration-Config, 10Parsoid: parsoidsvc-hhvm-parsertests-jessie doesn't run tidy tests - https://phabricator.wikimedia.org/T157730#3023152 (10hashar) The CI instances are provisioned by the same Puppet manifests Wikimedia uses in production. I checked on the Nodepool Jessie instances: they have... [20:59:34] arlolra: maybe mediawiki somehow fails to detect hhvm-tidy [21:00:08] $wgUseTidy = false; [21:00:09] $wgTidyConfig = null; [21:00:14] they are the default in mediawiki [21:00:43] though the test suite supposedly uses it regardless of the setting [21:00:47] maybe that is no more the case [21:01:41] the test suite was refactored lately [21:02:11] production hhvm definitely is using tidy [21:02:40] so, if it's available in CI, it's probably a config issue [21:02:45] on prod there is $wgUseTidy [21:02:52] on CI we have the default from Mediawiki/core [21:03:11] most probably the parser tests suite does not enable tidy when the extension is available [21:03:17] or whatever else [21:03:26] guess one can easily reproduce with a fresh mw install [21:06:50] oh [21:07:18] arlolra: on my local machine I have the same warning about tidy [21:08:07] If we have tidy on the test images (we should?), then we should enable it for tests :D [21:08:25] Granted, tidy's going away?
;-) [21:08:30] well [21:09:06] on my setup tidy is not enabled in mw [21:09:20] running tests/parser/parserTests.php with Zend it find them [21:09:25] 10Deployment-Systems, 06Release-Engineering-Team, 10Scap, 10scap2, and 2 others: Error after "Finished deploy": xrange() arg 3 must not be zero - https://phabricator.wikimedia.org/T157136#3023174 (10dduvall) [21:09:28] with HHVM (and having hhvm-tidy installed) it does not [21:09:38] interesting [21:09:40] * RainbowSprinkles shrugs [21:09:43] I haven't looked at it in some time [21:10:13] it is a whole new world! [21:10:22] Tim went through them and nicely refactored the decade of legacy code [21:10:28] more than doubling their speed [21:13:09] what happens when you give it --use-tidy-config [21:20:29] arlolra: same [21:20:59] did hhvm -d Eval.Jit=false parserTests.php --quiet --use-tidy-config [21:21:43] 10Continuous-Integration-Config, 10Parsoid: parsoidsvc-hhvm-parsertests-jessie doesn't run tidy tests - https://phabricator.wikimedia.org/T157730#3023209 (10hashar) Luckily I reproduce on my machine. As far as I can tell $wgTidy and $wgConfTidy are the default values (false and null). I have php-tidy and hhvm... [21:22:04] RECOVERY - Puppet run on integration-slave-precise-1002 is OK: OK: Less than 1.00% above the threshold [0.0] [21:24:54] arlolra: `class_exists( 'tidy' ) ` yields false. :( [21:25:34] hmm [21:25:58] TimStarling: any thoughts on https://phabricator.wikimedia.org/T157730#3023209 [21:25:58] !log Update mobileapps to 3af473f [21:26:02] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [21:26:04] 10Continuous-Integration-Config, 10Parsoid: parsoidsvc-hhvm-parsertests-jessie doesn't run tidy tests - https://phabricator.wikimedia.org/T157730#3023233 (10hashar) In TidySupport.php we have: ``` if ( extension_loaded( 'tidy' ) && class_exists( 'tidy' ) ) { $this->config['driver']... 
[21:26:14] maybe hhvm-tidy is broken [21:26:55] $ hhvm --modules [21:26:55] Segmentation fault [21:26:55] :D [21:26:56] hehe [21:26:59] my install is borked [21:27:08] :) [21:29:00] hmm [21:31:10] guess that is the best I can do for now [21:31:39] it's ok, i can dig into it later and bug tim about it when he's around [21:33:15] Louis_XIV a little taste of the guillotine, like your descendants? :D [21:34:01] no, thanks [21:34:05] ;D [21:34:15] I would be a good King [21:34:42] arlolra: same on tin.eqiad.wmnet [21:36:20] 10Continuous-Integration-Config, 10Parsoid, 07HHVM: parsoidsvc-hhvm-parsertests-jessie doesn't run tidy tests - https://phabricator.wikimedia.org/T157730#3023253 (10hashar) [21:39:16] what happens if you remove that "class_exists( 'tidy' )" from the conditional [21:40:20] cause there's that check here [21:40:21] https://github.com/wikimedia/mediawiki/blob/master/includes/tidy/RaggettInternalPHP.php [21:40:23] but not here [21:40:28] https://github.com/wikimedia/mediawiki/blob/master/includes/tidy/RaggettInternalHHVM.php [21:40:37] maybe it's unnecessary [21:41:59] damn I am tired [21:42:09] I was trying to figure out how I could use the HHVM debugger to remove the debugger [21:42:15] to remove the conditional [21:42:51] if you're tired, you should stop for the day [21:43:12] if I remove class_exists( 'tidy' ) [21:43:15] that passes :] [21:43:20] Passed 1372 of 1372 tests (100%)... ALL TESTS PASSED! [21:43:28] guess you can report on the task [21:43:46] propose a patch to TidySupport and grab review? [21:43:54] then maybe CI will show that more tests are run [21:44:05] maybe it needs to be && (class_exists( 'tidy' ) && !wfIsHHVM()) or something [21:44:14] maybe [21:44:51] yeah, i'll put a patch up in a bit [21:45:08] Given the Raggett file had Tim/Ori/Ebernardshon as contributors [21:45:14] it is not in my league for sure [21:46:53] have a good evening!
[21:48:06] you too, thanks for all the help today [21:52:49] 10Gerrit, 06Release-Engineering-Team: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023302 (10Paladox) [21:53:07] 10Gerrit, 06Release-Engineering-Team: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023314 (10Paladox) p:05Triage>03Low [21:53:19] 10Gerrit, 06Release-Engineering-Team: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023302 (10Paladox) p:05Low>03Lowest [21:53:33] 10Gerrit, 06Release-Engineering-Team: Gerrit: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023302 (10Paladox) [21:54:11] 10Gerrit, 06Release-Engineering-Team: Gerrit: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023302 (10Paladox) [21:59:03] Project selenium-Core » firefox,beta,Linux,contintLabsSlave && UbuntuTrusty build #314: 04FAILURE in 7 min 1 sec: https://integration.wikimedia.org/ci/job/selenium-Core/BROWSER=firefox,MEDIAWIKI_ENVIRONMENT=beta,PLATFORM=Linux,label=contintLabsSlave%20&&%20UbuntuTrusty/314/ [22:09:27] 10Gerrit, 06Release-Engineering-Team: Gerrit: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023412 (10demon) Here's the 3 velocity templates we use: demon@cobalt /var/lib/gerrit2/review_site/etc$ find . -name '*.vm' ./its/templates/DraftPublished.vm ./its/tem... [22:11:46] 10Gerrit: Gerrit: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023450 (10greg) [22:12:38] 10Gerrit: Gerrit: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023452 (10Paladox) >>! In T158008#3023412, @demon wrote: > Here's the 3 velocity templates we use: > demon@cobalt /var/lib/gerrit2/review_site/etc$ find . -name '*.vm' > ./its/templates/DraftPublishe... 
[22:15:06] 10Gerrit: Gerrit: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023463 (10Paladox) its-phabricator will need to be updated for the new templates https://github.com/GerritCodeReview/plugins_its-base/search?utf8=✓&q=vm&type=Code [22:17:14] 06Release-Engineering-Team, 06Wikipedia-Android-App-Backlog, 07Spike, 07Technical-Debt: Investigate how to improve CI performance and stability - https://phabricator.wikimedia.org/T158014#3023483 (10Fjalapeno) Also adding @greg to the ticket since I spoke to him earlier about this issue. [22:18:22] 06Release-Engineering-Team, 06Operations, 06Wikipedia-Android-App-Backlog, 07Jenkins, and 2 others: Investigate how to improve CI performance and stability - https://phabricator.wikimedia.org/T158014#3023491 (10Fjalapeno) [22:19:01] 06Release-Engineering-Team, 06Operations, 06Wikipedia-Android-App-Backlog, 07Jenkins, and 2 others: Investigate how to improve Android CI performance and stability - https://phabricator.wikimedia.org/T158014#3023493 (10greg) [22:19:50] 10Gerrit: Gerrit: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023498 (10demon) > Here is a patch converting some of them https://github.com/gerrit-review/gerrit/commit/42639e43149197bc3a2e31e973ec53fee5ac93dc and https://github.com/gerrit-review/gerrit/commit/93ccc... [22:21:44] 10Gerrit: Gerrit: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023501 (10demon) >>! In T158008#3023498, @demon wrote: > For the ITS ones, the structure will probably be similar. But do any of its-* plugins use Soy yet? Answering my own question: no, they don't [22:22:58] 06Release-Engineering-Team, 06Operations, 07HHVM, 07Wikimedia-Incident: 2016-10-17 API cluster overload - https://phabricator.wikimedia.org/T148652#3023511 (10greg) So, this never happened (this == the investigation). It is now almost 4 months later, with a new year in between. 
There was no incident report...
[22:24:01] 10Gerrit: Gerrit: Covert Velocity templates to Closure Templates - https://phabricator.wikimedia.org/T158008#3023515 (10Paladox) its-base doesn't look like it uses any of the templates. So its-phabricator will need to be the only one updated to support this :).
[22:24:56] RainbowSprinkles they are removing drafts in https://gerrit-review.googlesource.com/#/c/97230/
[22:26:58] So long and good riddance
[22:27:18] lol, they are only replacing it with the same thing, though less confusing.
[22:27:31] Yeah, the newer workflows make far more sense.
[22:27:44] refs/drafts/* was a terrible anti-pattern
[22:27:50] And gave a horrible sense of security
[22:27:52] And a performance hit
[22:27:54] Yep + you can now delete your own changes as long as you're an admin. But soon to be all users
[22:28:22] in https://gerrit-review.googlesource.com/#/c/97062/
[22:28:24] I don't think deleting is super important, but the rest of it is an improvement :)
[22:28:27] Deleting things breaks urls ;-)
[22:29:07] Yep, well if we can clean up some patches, like patches that have made no edits, just a blank patch. That will speed up reindexing.
[22:29:19] Oh
[22:29:50] They are also adding support to delete comments too, but same, only for admins. But that is still going through review.
[22:29:53] The # of deleted items I think will be outweighed by the stuff we don't delete.
[22:29:59] The improvement to indexing speed probably won't be much ;-)
[22:30:38] oh
[22:30:52] Actually, deleting is nice for cleaning up after idiot spammers
[22:31:02] Right now it requires ugly DB wrangling
[22:31:19] They also have an assignee field now. Though I have no idea why someone wants to create a patch only to assign it to someone else.
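For context on the Velocity-to-Closure conversion discussed above: Velocity interpolates `$variable` references directly into the text, while Closure Templates (Soy) declare their parameters and wrap the body in a named `{template}` block. A rough sketch of what one of the its-* notification templates might look like in each syntax (the template and variable names here are hypothetical, not taken from the actual plugins):

```
## Velocity (.vm) -- variables are interpolated with $name
Change $changeUrl was published by $userName.

// Closure Templates (.soy) -- parameters are declared up front and the
// body lives in a named {template} block inside a namespace
{namespace com.googlesource.gerrit.plugins.its.templates}

/**
 * @param changeUrl
 * @param userName
 */
{template .draftPublished kind="text"}
  Change {$changeUrl} was published by {$userName}.
{/template}
```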
[22:31:26] yep
[22:31:45] I have a live test at the moment from master branch here, https://gerrit-new.wmflabs.org/
[22:32:35] I would really like it if my patch for adding a tag GUI in Gerrit, like branches, makes it into the gerrit 2.14 release :)
[22:39:57] Assigning to someone else is kind of nice if you're like "I'm definitely not working on this anymore, So-and-so is"
[22:40:07] So if you or I come along, we know who to talk to now
[22:41:57] Oh
[22:42:18] Ah I get it now. I was thinking short term not long term. :) :)
[22:51:47] 10Continuous-Integration-Config, 06Release-Engineering-Team, 06Operations, 06Wikipedia-Android-App-Backlog, and 2 others: Investigate how to improve Android CI performance and stability - https://phabricator.wikimedia.org/T158014#3023564 (10hashar) The history of recents builds on https://integration.wikim...
[23:41:58] 10Continuous-Integration-Infrastructure: Jenkins: Assert no PHP errors (notices, warnings) were raised or exceptions were thrown - https://phabricator.wikimedia.org/T50002#516358 (10greg) >>! In T50002#2681380, @Krinkle wrote: > It took three years, but we're error-log-free on MediaWiki core! > > `mw-dberror.lo...
[23:48:21] 10Continuous-Integration-Config, 06Release-Engineering-Team, 06Operations, 06Wikipedia-Android-App-Backlog, and 2 others: Investigate how to improve Android CI performance and stability - https://phabricator.wikimedia.org/T158014#3023688 (10Niedzielski) FWIW, here's the corresponding device log for [[ http...
[23:48:38] /15/8
[23:48:52] 10Deployment-Systems, 06Operations, 10Stashbot: [[wikitech:Server_admin_log]] should not rely on freenode irc for logmsgbot entries - https://phabricator.wikimedia.org/T46791#480032 (10demon) >>! In T46791#480052, @greg wrote: > There's two separate use cases here: > > 1) Someone in -operations !log'ing som...
[23:49:02] bd808, greg-g: ^
[23:50:12] RainbowSprinkles: "subsumed by wm-bot" ? Stashbot already does it
[23:50:46] Ah, misunderstanding that piece.
Anyway... otherwise an ok proposal?
[23:50:59] I thought there was still a special bot for it
[23:51:19] The "easy" thing to do would be to give stashbot a web api and move it to the prod kubernetes grid as soon as it exists :)
[23:51:44] That works too, and kinda does what I'm asking for the prod side
[23:51:51] there is some completely different bug where I outlined all the parts that exist today
[23:51:58] "Let scap et al. call some service instead of pretending to speak IRC"
[23:52:02] it was about scap !logs in beta
[23:52:43] And "irc and prod tools should call the same service instead of DIY so we protect against logstash/wikitech/whatever changes"
[23:52:45] they do call a service actually, but yeah it has goofy "must start with !log" requirements
[23:52:53] If that's via stashbot, fine by me :)
[23:55:07] If stashbot exposed a web entry (we could get away with simple http auth, probably), that'd make this problem trivially fixable in prod
[23:55:45] Guess the question is "is SAL a prod service if we're running all the infra on labs?"
[23:55:46] :)
[23:56:52] 10Deployment-Systems, 06Operations, 10Stashbot: [[wikitech:Server_admin_log]] should not rely on freenode irc for logmsgbot entries - https://phabricator.wikimedia.org/T46791#3023711 (10bd808) Some related thoughts/explanations on {T156079}. wm-bot already does too many things IMO, and Stashbot currently ow...
[23:56:54] Ooooh, easier for callers of web service. Stashbot just outputs the log entry to IRC (if it came from web), configurable by query param itself
[23:57:09] Then caller doesn't have to know IRC *at all* even if it wants to *duplicate* the message to IRC
[23:57:29] Web service for stashbot seems best!
[23:57:29] yeah. something like that shouldn't be too hard
[23:58:00] * bd808 has to go to a meeting IRL and outside his house now
[23:58:05] No worries. Thx for the insight
[23:58:42] sure :) be careful poking around this. I'll add you as a Stashbot maintainer
[23:58:54] and then remove myself ;)
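The web-entry idea sketched above (keep the "!log" convention, let the bot rather than the caller mirror entries to IRC, controlled per request) could be reduced to something like the following. This is a minimal illustration only, not Stashbot's actual code; the function name and entry fields are made up for the sketch:

```python
from datetime import datetime, timezone


def build_sal_entry(nick, message, echo_to_irc=False):
    """Turn an incoming web request into an SAL entry dict.

    Follows the '!log' convention from the discussion: the message must
    start with '!log ', which is stripped before storage. Returns None
    for messages that don't follow the convention.
    """
    if not message.startswith("!log "):
        return None
    return {
        "nick": nick,
        "message": message[len("!log "):].strip(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # When an entry arrives over HTTP, the bot (not the caller)
        # would mirror it to IRC if this flag is set -- the
        # "configurable by query param" idea above.
        "echo_to_irc": echo_to_irc,
    }


entry = build_sal_entry("demon", "!log restarting gerrit on cobalt",
                        echo_to_irc=True)
```

With this shape, scap (or any prod tool) would only need to POST the message over HTTP with simple auth; it never has to know IRC exists, even when the entry is duplicated there.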