[00:06:14] Reedy: "If an ideal marketing launch sequence is 0-100 (everyone gets the new features at once), and the ideal reliability engineer launch sequence is 0-0 (no change means no problems), the “right” launch sequence for an app is inevitably a matter of negotiation." :)
[00:20:06] PROBLEM - Puppet run on deployment-phab02 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0]
[00:21:04] RECOVERY - Puppet run on integration-slave-trusty-1003 is OK: OK: Less than 1.00% above the threshold [0.0]
[00:21:21] Project beta-update-databases-eqiad build #16120: FAILURE in 1 min 20 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/16120/
[00:22:55] PROBLEM - Puppet run on deployment-phab01 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0]
[00:25:07] eh?
[01:15:59] * Reedy makes jerkins do some work
[01:16:30] That's not fixed itself, has it?
[01:17:19] 00:21:17 Exception: ('command: ', '/usr/local/bin/mwscript update.php --wiki=zerowiki --quick', 'output: ', "#!/usr/bin/env php\nPHP Fatal error: Uncaught exception 'Exception' with message 'LoginNotify requires Echo to be installed.'
[01:17:20] Curious
[01:17:26] zerowiki does have Echo too
[01:21:21] Project beta-update-databases-eqiad build #16121: STILL FAILING in 1 min 21 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/16121/
[01:25:00] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Please pull and deploy the latest version of zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152836 (Arthur2e5)
[01:25:57] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Please pull and deploy the latest version of zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152849 (Arthur2e5)
[01:27:32] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Please pull and deploy the latest version of zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152836 (Arthur2e5) Subscribing shizhao as he participated i...
[01:33:55] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Please pull and deploy the latest version of zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152875 (Liuxinyu970226) @Ladsgroup It seems that your previ...
[01:36:00] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Deploy latest zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152887 (Arthur2e5)
[01:37:39] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Deploy latest zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152836 (Arthur2e5) I believe I am seeing simplified characters right now. I am currently u...
[01:40:33] Yippee, build fixed!
[01:40:33] Project beta-update-databases-eqiad build #16122: FIXED in 1 min 20 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/16122/
[01:40:42] sweet
[01:45:13] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Deploy latest zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152900 (Arthur2e5) Confirming that both https://github.com/wiki-ai/wikilabels-wmflabs-depl...
[01:51:12] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Deploy latest zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152906 (Arthur2e5) Submitted https://github.com/wiki-ai/wikilabels-wmflabs-deploy/pull/34.
[01:52:23] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Deploy latest zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152908 (Arthur2e5)
[02:17:20] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Deploy latest zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152836 (revi) It takes time for translations from translatewiki.net to the site, so you'll...
[03:04:54] Continuous-Integration-Config, MediaWiki-extensions-Scribunto, Wikidata: [Task] Add Scribunto to extension-gate in CI - https://phabricator.wikimedia.org/T125050#1973302 (Krinkle)
Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Deploy latest zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152940 (Arthur2e5) Open>Resolved a: Arthur2e5 Looks like fixed with that PR. Cl...
[04:00:22] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Deploy latest zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3152950 (Arthur2e5) Resolved>Open Patience patience... Deletion not yet in effect f...
[06:21:50] Project selenium-Wikibase » chrome,test,Linux,BrowserTests build #320: FAILURE in 1 hr 41 min: https://integration.wikimedia.org/ci/job/selenium-Wikibase/BROWSER=chrome,MEDIAWIKI_ENVIRONMENT=test,PLATFORM=Linux,label=BrowserTests/320/
[07:30:28] Scap (Scap3-Adoption-Phase1), RESTBase, Patch-For-Review, Services (doing), User-mobrovac: Deploy RESTBase with scap3 - https://phabricator.wikimedia.org/T116335#3153104 (mobrovac)
[07:54:35] hashar: o/ - qq for you.. what is the diff between the file that you changed yesterday and https://github.com/wikimedia/mediawiki/blob/wmf/1.29.0-wmf.12/includes/libs/redis/RedisConnectionPool.php ?
[07:55:01] elukey: good morning
[07:55:16] morning!!!
[07:55:31] if you don't have time today I'll just shut up, don't worry :)
[07:55:35] MediaWiki enqueues the async tasks / jobs in a redis queue
[07:55:39] and would use the file you linked above
[07:55:44] so once you get a bunch of tasks
[07:55:56] we have an entirely different system, mediawiki/services/jobrunner
[07:56:07] which is more or less a while loop that fetches tasks from redis
[07:56:17] and uses the Redis class from phpredis or hhvm
[07:56:47] and then that jobrunner invokes the MediaWiki rpc entry point
[07:59:47] ahhh okok..
because I can see an explicit close() in the MediaWiki enqueue code, and nothing in the jobrunner one
[08:00:08] let me draw something
[08:02:05] elukey: https://docs.google.com/drawings/d/1EgmdC-VsvbbVXZnUROR__YCDy-6KDEWC9hzwMb8RLLU/edit
[08:02:15] my terrible drawing of how I understand the beast
[08:02:56] iirc we used to have the jobs stored in mysql and a shell while loop that was running maintenance/runJobs.php
[08:03:13] the job store got moved to a complicated redis system (with, I think, multiple masters and multiple slaves)
[08:03:15] wow thanks! 4) is definitely new to me
[08:03:29] I thought that the jobrunners were executing the jobs
[08:03:34] and a daemon, the jobrunner, written in plain PHP, that consumes the entries from Redis
[08:03:36] but really
[08:03:36] Continuous-Integration-Config, MediaWiki-extensions-Scribunto, Wikidata: [Task] Add Scribunto to extension-gate in CI - https://phabricator.wikimedia.org/T125050#3153146 (hoo) >>! In T125050#3152305, @hashar wrote: > What this show is that at the point I took the capture, there were several Scribunto...
[08:04:19] hashar: so jobrunners are basically "dispatchers", they don't really do much heavy lifting
[08:04:20] all the jobrunner does is just curl out to invoke the MediaWiki rpc
[08:04:29] and it is the mediawiki app server that actually processes the jobs
[08:04:35] and I assume the MediaWiki api cluster
[08:04:40] most probably
[08:05:04] now I completely get why simply shutting off the jobrunner is enough to perform maintenance
[08:05:33] but note that is how **I** understand it
[08:05:38] and it might just be all wrong :-)
[08:05:56] nono that makes perfect sense
[08:05:58] oh
[08:06:03] and that is not the api cluster
[08:06:18] the jobrunner machines have both the jobrunner and a working MediaWiki install served by HHVM
[08:06:33] so the jobrunner grabs tasks, then runs a curl or something against a localhost MediaWiki install
[08:07:13] ahhhh okok
[08:07:18] now it makes even more sense ok
[08:07:19] :)
[08:07:27] so 4) is a localhost call
[08:07:32] okok better
[08:07:44] in puppet the jobrunner conf is ./modules/mediawiki/templates/jobrunner/jobrunner.conf.erb
[08:07:50] the command used to run jobs is in ./modules/mediawiki/templates/jobrunner/dispatcher.erb
[08:07:56] curl -XPOST -s -a 'http:\/\/127.0.0.1:<%= @port %>\/rpc\/RunJobs.php?wiki=%(db)u&type=%(type)u&maxtime=%(maxtime)u&maxmem=%(maxmem)u'
[08:08:00] and here you get the rpc entry point
[08:08:03] yes yes now it all comes to mind, coffee levels too low
[08:08:14] so the jobrunner grabs stuff from redis, then curls to the localhost mediawiki and invokes /rpc/RunJobs.php ...
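To make the flow hashar describes concrete, here is a minimal PHP sketch of the dispatch step. The loop shape and the localhost /rpc/RunJobs.php call come from the conversation above; the Redis key, port, and payload format are made-up placeholders, not the real mediawiki/services/jobrunner code.

```php
<?php
// Hypothetical sketch of the dispatcher loop described above; the real
// jobrunner differs, but the shape is the same. Assumes the phpredis
// extension; the queue key and payload encoding are invented.
$redis = new Redis();
$redis->connect( '127.0.0.1', 6379 );

while ( true ) {
	// 1-3) Pull a pending (wiki, job type) pair off the Redis queue.
	$item = $redis->blPop( [ 'jobqueue:pending' ], 30 );
	if ( !$item ) {
		continue; // timed out waiting, poll again
	}
	list( , $payload ) = $item;
	list( $db, $type ) = explode( '|', $payload );

	// 4) curl to the MediaWiki install served by HHVM on this same
	// host; /rpc/RunJobs.php is what actually executes the jobs.
	$url = sprintf(
		'http://127.0.0.1:9005/rpc/RunJobs.php?wiki=%s&type=%s&maxtime=%d&maxmem=%s',
		urlencode( $db ), urlencode( $type ), 30, '300M'
	);
	$ch = curl_init( $url );
	curl_setopt( $ch, CURLOPT_POST, true );
	curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
	curl_exec( $ch );
	curl_close( $ch );
}
```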
[08:08:50] then HHVM serves that and processes the task using the mediawiki stack, which might well rely on redis somehow
[08:08:58] hashar: I am wondering if the RSTs are due to what happens in https://github.com/wikimedia/mediawiki/blob/wmf/1.29.0-wmf.12/includes/libs/redis/RedisConnectionPool.php
[08:09:18] so if the patch would be needed in that class
[08:09:27] probably
[08:09:42] BUT I have no idea whether the /rpc/RunJobs.php endpoint ends up using that redis pool
[08:10:09] cause really I don't know what grabs the actual jobs
[08:10:14] is it the jobrunner listing the jobs to run
[08:10:30] or is it MediaWiki's RunJobs.php that actually hits the jobs redis to get the exact tasks to run
[08:10:37] I am tempted to say the latter
[08:11:06] since the dispatcher only asks for something like: run refreshLinks for enwiki for up to 30 seconds and/or 1GB memory (random numbers)
[08:11:43] * elukey nods
[08:11:47] so most probably rpc/RunJobs.php connects to the redis database to find out which jobs exactly are needed. Probably doing connections on multiple slaves (hence the redis connection pool)
[08:12:12] and you end up with those $conn->close()
[08:12:20] It would make sense, since RedisConnectionPool.php is the only one that calls Redis.php's close, which uses conn->close()
[08:12:20] https://github.com/wikimedia/mediawiki/blob/wmf/1.29.0-wmf.12/includes/libs/redis/RedisConnectionPool.php#L230 : $conn = new Redis();
[08:12:23] exactly :)
[08:12:29] good find, bravo!
[08:12:45] * hashar migrates everything to Gearman and Jenkins
[08:13:42] so with the monkey patch from yesterday, the RSTs from the jobrunner / jobchron should be gone
[08:14:08] what would be left are the ones you found via redisconnectionpool
[08:14:17] hashar: https://phabricator.wikimedia.org/T125050#3153146
[08:14:24] Could we try bumping the limit?
[08:14:28] both would be solved/fixed by patching HHVM to stop sending QUIT
[08:14:44] OR hack the Redis server to just close the socket and skip sending OK :-D
[08:14:56] hahahaah
[08:15:12] elukey: skipping OK would be a violation of the protocol though
[08:15:19] hoo: ah good morning. So I actually looked at the memory issue and really I don't get it
[08:15:39] :/
[08:15:42] hashar: whenever you have time (even later on this week), would you mind applying another monkey patch to RedisConnectionPool on jobrunner02 to remove the quit?
[08:15:58] or maybe letting me know how to do it without burning the world
[08:16:05] hashar: Don't get it in which regard?
[08:16:14] hoo: there is definitely some amount of free memory available on the instance. BUT the php process is already at 1.8GBytes of usage and I haven't looked at what the proc_open() invokes exactly
[08:16:41] proc_open is using lua standalone here (not luasandbox)
[08:17:08] elukey: I guess inside includes/libs/redis/RedisConnectionPool.php you could copy paste the RedisMonkeyPatched class I added for the jobrunner
[08:17:37] elukey: and replace new Redis() by new RedisMonkeyPatched(), then apply that directly on jobrunner02 in /srv/mediawiki/php-master/.... restart hhvm and see what happens
[08:18:28] hoo: though apparently it is Scribunto's proc_open() that fails to fork entirely
[08:18:34] hashar: ahh okok thanks :)
[08:19:21] hoo: or maybe it is the shell 'exec' that fails. Potentially we could create a patch for Scribunto that raises the memory limit to some level, then send a patch to Wikibase that Depends-On that and see what happens (by commenting 'check php5')
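The RedisMonkeyPatched class itself never appears in the log; a minimal sketch of what it could look like, assuming the goal is simply to drop the connection without the client ever writing QUIT to the socket (the class name comes from the conversation, the body is a guess):

```php
<?php
// Hypothetical sketch of the RedisMonkeyPatched idea discussed above.
// Goal: tear the connection down without the client sending QUIT, so
// the server just sees the socket close (avoiding the RST/QUIT dance).
class RedisMonkeyPatched extends Redis {
	public function close() {
		// Skipping parent::close() means the client never writes QUIT;
		// the socket is released when the object is destroyed instead.
		return true;
	}
}
// In RedisConnectionPool::getConnection(), 'new Redis()' would then be
// replaced by 'new RedisMonkeyPatched()' for the duration of the test.
```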
[08:19:36] hashar: Sounds good
[08:19:38] will do that
[08:19:59] my terrible assumption yesterday is that it is a 1.8GByte php process that does the proc_open()
[08:20:03] which invokes Unix fork() behind the scenes
[08:20:30] and on Linux that copies the whole memory, albeit using copy-on-write (so the child fork does not consume 1.8GBytes of memory)
[08:21:00] it's just that memory pages are marked as being used by both parent and child process; if one tries to write to one, the memory page is duplicated
[08:21:13] but maybe the fork fails because 1.8GBytes + 1.8GBytes => not enough memory
[08:21:22] though it is supposed to overcommit just fine
[08:21:51] hm
[08:21:53] I need someone that masters the art of system programming
[08:22:06] for me this really sounds like something hitting the ulimit
[08:22:31] so let's try bumping the mem limit
[08:27:41] Tests are running… let's see how this goes
[08:29:43] hoo: patches to Scribunto itself do not fail, do they?
[08:29:57] No, they don't have Wikibase
[08:31:56] https://integration.wikimedia.org/ci/job/mwext-testextension-php55-composer-trusty/278/
[08:31:59] holding that instance
[08:34:45] another thing I noticed is the bunch of lua standalone binaries that are left around
[08:36:26] hoo: lua_ulimit.sh' '30' '31' '68359'
[08:36:28] so you managed to bump it properly
[08:36:43] it worked!
[08:37:03] ah, wait
[08:37:13] php5 not in yet, just the hhvm ones
[08:37:18] they run at the end iirc
[08:37:26] I am watching https://integration.wikimedia.org/ci/job/mwext-testextension-php55-composer-trusty/278/consoleFull
[08:37:52] they have errored
[08:39:15] vmstat showed, in megabytes: swap: 0 free: 604 buff: 44 cache: 944
[08:39:47] and once php completed, free went from 604 to 2442, i.e. 1838 MBytes freed
[08:40:07] which is more or less consistent with the last line of the debug log:
[08:40:07] 98.6041 1803.5M Scribunto_LuaStandaloneInterpreter::terminate: terminating
[08:47:12] I have another idea
[08:49:44] Continuous-Integration-Config, MediaWiki-extensions-Scribunto, Wikidata: [Task] Add Scribunto to extension-gate in CI - https://phabricator.wikimedia.org/T125050#3153193 (hashar) Gave it a try directly on the instance running solely the Scribunto tests after the memory limit got raised to 70M which...
[08:49:53] commented on the task about the manual run I did
[08:50:03] hoo: from the instance that had the failing build I ran the scribunto tests
[08:50:06] and they pass all fine
[08:50:15] so most probably it is the PHP proc_open() that fails
[09:07:41] hoo: I am trying without ulimit -v
[09:17:30] and now trying without junit reporting, which should save a lot of memory
[09:30:28] Continuous-Integration-Config, MediaWiki-extensions-Scribunto, Wikidata: [Task] Add Scribunto to extension-gate in CI - https://phabricator.wikimedia.org/T125050#3153282 (hashar) Ran the whole suite again with the ulimit in place but without `--log-junit` which is memory heavy. The whole suite pass
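A small PHP sketch of the hypothesis being tested above: once the parent process holds ~1.8GB, fork() must be able to account for a comparable amount of address space, so under a strict virtual-memory ulimit even spawning /bin/true can fail. The allocation sizes and the ulimit value are illustrative, not taken from the CI config.

```php
<?php
// Illustration of the proc_open()/fork() failure mode discussed above.
// Run under a virtual memory cap, e.g. `ulimit -v 3000000` (~3GB).
ini_set( 'memory_limit', '-1' ); // let PHP itself allocate the ballast

$ballast = [];
for ( $i = 0; $i < 18; $i++ ) {
	// ~1.8GB total, mimicking the phpunit process hashar describes
	$ballast[] = str_repeat( 'x', 100 * 1024 * 1024 );
}

// fork()/clone() now has to reserve the parent's whole address space
// (copy-on-write or not); with the ulimit in place this fails with
// ENOMEM even though the child would only exec /bin/true.
$proc = proc_open( '/bin/true', [], $pipes );
if ( $proc === false ) {
	echo "proc_open failed: unable to fork\n";
} else {
	proc_close( $proc );
	echo "fork succeeded\n";
}
```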
[09:37:26] Nice to know
[09:37:54] is it running into some limit, or is it actually running out of memory?
[09:47:06] hoo: definitely out of memory
[09:47:26] my guess is that the proc_open() invokes fork()
[09:47:34] which somehow tries to allocate the php phpunit memory
[09:47:39] which is 1.8GBytes
[09:47:42] that does not fit
[09:48:18] or whatever else
[09:48:26] really I don't quite understand what happens
[09:48:35] but most probably lua_limit.sh is not the cause
[09:51:42] Ok, that makes sense… do you have a way forward?
[10:10:10] hoo: none :(
[10:17:03] PROBLEM - Puppet run on integration-slave-trusty-1003 is CRITICAL: CRITICAL: 77.78% of data above the critical threshold [0.0]
[10:52:04] RECOVERY - Puppet run on integration-slave-trusty-1003 is OK: OK: Less than 1.00% above the threshold [0.0]
[11:42:55] Beta-Cluster-Infrastructure: Request for "Flow"ing pages on zhwp Beta-cluster - https://phabricator.wikimedia.org/T162131#3153441 (vjudge404)
[11:47:26] Continuous-Integration-Config, MediaWiki-extensions-Scribunto, Wikidata: [Task] Add Scribunto to extension-gate in CI - https://phabricator.wikimedia.org/T125050#3153462 (hoo) @hashar If we can't allocate more resources, can we disable `--log-junit` for just this job for now?
[12:07:04] Beta-Cluster-Infrastructure, Chinese-Sites: Request for "Flow"ing pages on zhwp Beta-cluster - https://phabricator.wikimedia.org/T162131#3153525 (Liuxinyu970226)
[12:24:38] Continuous-Integration-Config, MediaWiki-extensions-Scribunto, Wikidata: [Task] Add Scribunto to extension-gate in CI - https://phabricator.wikimedia.org/T125050#3153574 (hashar) --log-junit is what fails the build so no. What fork() is doing is that it tries to allocate the whole virtual memory fro...
[12:43:45] hoo: so in short, MediaWiki phpunit tests take 2GB of RAM
[12:44:08] that prevents one from invoking proc_open() at that point, because that invokes the system fork() (really clone())
[12:44:24] which tries to allocate 2GB, and that is rejected by Linux (not enough memory left)
[12:44:34] even if the proc_open() just invokes /bin/true
[13:31:26] (PS1) Aude: Update Wikidata to wmf/1.29.0-wmf.19 [tools/release] - https://gerrit.wikimedia.org/r/346289
[13:38:53] hashar hi, it seems postmerge jobs are stuck
[13:38:54] https://integration.wikimedia.org/zuul/
[13:38:58] 23 hours
[13:39:12] mediawiki/services/parsoid
[13:41:26] Continuous-Integration-Config, BlueSpice, Patch-For-Review: Enable unit tests on BlueSpice* repos - https://phabricator.wikimedia.org/T130811#3153727 (Paladox) @Osnard hi, it seems the tests are failing on 13:40:21 Fatal error: Call to undefined method Interwiki::selectFields() in /home/jenkins/work...
[13:42:00] They've all been merged
[13:45:12] Continuous-Integration-Config, BlueSpice, Patch-For-Review: Enable unit tests on BlueSpice* repos - https://phabricator.wikimedia.org/T130811#3153743 (Paladox) Caused by https://github.com/wikimedia/mediawiki/commit/025f15a208a75de47a71d3d8515e4b2b975fae1d#diff-22968113713eff7aed0c8c3c7212241a but i...
[13:48:19] Reedy hi, how do i call selectFields in the Interwiki class, as it seems it was moved and changed to private here https://github.com/wikimedia/mediawiki/commit/025f15a208a75de47a71d3d8515e4b2b975fae1d#diff-22968113713eff7aed0c8c3c7212241a, please?
[13:48:28] https://phabricator.wikimedia.org/T130811#3153727
[13:49:12] It's gone in master
[13:50:00] paladox: It wasn't made private, it was removed in that commit
[13:50:08] no it wasn't
[13:50:23] it was added to includes/interwiki/ClassicInterwikiLookup.php
[13:50:36] private static function selectFields() {
[13:50:54] Well, it was removed from that class
[13:51:11] https://github.com/wikimedia/mediawiki/blob/9ac29c74edda2f457814a1ed634a19f9a44fd0a5/includes/interwiki/ClassicInterwikiLookup.php#L442
[13:51:28] i will just make it public again
[13:51:41] No
[13:52:06] I guess the point is you should be using the wrapper classes to make an interwiki object
[13:53:28] Reedy this one https://github.com/wikimedia/mediawiki/blob/master/includes/interwiki/InterwikiLookup.php ?
[13:55:44] Reedy i don't see how one could call that function using the wrapper class
[13:56:31] paladox: Well, you're looking too specifically
[13:56:43] The function in BlueSpice is
[13:56:43] protected function interWikiLinkExists( $sPrefix ) {
[13:56:49] yep
[13:57:44] Using ClassicInterwikiLookup's fetch function does the same thing
[13:57:50] With the added bonus of using the various caches that are available to it
[13:57:52] (CR) Aude: [C: 032] Update Wikidata to wmf/1.29.0-wmf.19 [tools/release] - https://gerrit.wikimedia.org/r/346289 (owner: Aude)
[13:58:27] oh
[13:59:37] Though, more specifically than that
[13:59:57] isValidInterwiki
[14:00:01] * @return bool Whether it exists
[14:00:21] MediaWikiServices::getInstance()->getInterwikiLookup()->isValidInterwiki( $prefix );
[14:00:29] oh thanks
[14:00:42] Noting that's since 1.28
[14:00:47] i change Interwiki::selectFields(), -> MediaWikiServices::getInstance()->getInterwikiLookup()->isValidInterwiki( $prefix );
[14:00:50] So if they're expecting back compat, you might need to make it conditional
[14:00:53] No
[14:01:12] It doesn't return a list of fields
[14:01:15] Read what the function does
[14:01:18] Read its name
[14:01:20] Break it into words
[14:01:33] Have a look at the function documentation comment
[14:01:45] ok
[14:01:57] It returns the interwiki prefix if it exists
[14:02:12] (CR) Hashar: Cache node_modules (1 comment) [integration/config] - https://gerrit.wikimedia.org/r/346152 (https://phabricator.wikimedia.org/T159591) (owner: Hashar)
[14:02:24] No, it just returns it *if* it exists
[14:02:34] We already know what the prefix is, as we told MW to tell us if it exists
[14:02:40] yep
[14:04:31] ah, so calling MediaWikiServices::getInstance()->getInterwikiLookup()->isValidInterwiki( $prefix ); will do what $row = $this->getDB()->selectRow( does
[14:05:33] Pretty much
[14:06:00] You probably don't need most of that function anymore
[14:06:18] Thanks :)
[14:06:26] $this->aIWLexists acts as a local cache. Which is only questionably useful still at that point
[14:06:50] hashar: Ok, we need a way forward though
[14:07:13] If we can't disable log-junit we either have to make the thing non-voting
[14:07:24] or drop Scribunto from it (which is fine for Wikibase)
[14:09:12] oh
[14:09:57] Reedy https://gerrit.wikimedia.org/r/#/c/346297/ :)
[14:12:46] Reedy something like https://phabricator.wikimedia.org/P5200
[14:13:09] Yes
[14:13:41] paladox: You probably need a use statement at the top of the file too
[14:13:58] Like the Interwiki.php file does now
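Pulling Reedy's hints together, the rewritten BlueSpice helper might end up looking roughly like this — a sketch only, with a hypothetical host class and the back-compat branch Reedy alludes to for pre-1.28 wikis:

```php
<?php
use MediaWiki\MediaWikiServices;

// Hypothetical sketch of the rewrite discussed above; the host class
// is a stand-in for the real BlueSpice code.
class InterWikiLinkHelper {
	/**
	 * Does an interwiki prefix exist? Replaces the old direct
	 * interwiki-table query built on Interwiki::selectFields().
	 */
	protected function interWikiLinkExists( $sPrefix ) {
		if ( class_exists( MediaWikiServices::class ) ) {
			// MediaWiki >= 1.28: use the lookup service, which also
			// benefits from its caches
			return MediaWikiServices::getInstance()
				->getInterwikiLookup()
				->isValidInterwiki( $sPrefix );
		}
		// Older MediaWiki: fall back to the old static Interwiki API
		return Interwiki::isValidInterwiki( $sPrefix );
	}
}
```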
[14:14:08] what is a statement at the top of the file?
[14:14:29] look at interwiki.php
[14:14:34] (Merged) jenkins-bot: Update Wikidata to wmf/1.29.0-wmf.19 [tools/release] - https://gerrit.wikimedia.org/r/346289 (owner: Aude)
[14:14:37] ok
[14:14:39] you should see a line at the top with something along the lines of use MediaWikiServices
[14:15:24] oh
[14:15:25] ah
[14:15:27] thanks
[14:15:28] use MediaWiki\MediaWikiServices;
[14:16:26] I wonder how backwards compatible that is
[14:16:56] Reedy done :)
[14:16:57] Probably not
[14:16:58] oh
[14:17:21] \MediaWiki\MediaWikiServices::getInstance()->getInterwikiLookup()
[14:17:27] And remove the use line
[14:17:37] ok
[14:17:37] I don't know how well PHP would work with the use at the top if it didn't exist
[14:17:39] thanks
[14:18:12] Done :)
[14:23:50] PROBLEM - Puppet run on integration-slave-jessie-1001 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[14:24:45] Continuous-Integration-Config, BlueSpice, Patch-For-Review: Enable unit tests on BlueSpice* repos - https://phabricator.wikimedia.org/T130811#3153899 (Paladox) @Osnard i've fixed that in https://gerrit.wikimedia.org/r/#/c/346297/ :)
[14:35:32] Reedy that fixes it :)
[14:35:35] now i get this error
[14:35:36] 14:35:16 Fatal error: Cannot access private property TestUser::$user in /home/jenkins/workspace/mwext-testextension-hhvm-composer-jessie-non-voting/src/extensions/BlueSpiceExtensions/UserManager/tests/phpunit/BSApiTasksUserManagerTest.php on line 39
[14:35:40] https://integration.wikimedia.org/ci/job/mwext-testextension-hhvm-composer-jessie-non-voting/131/console
[14:36:46] ah, that should be changed into TestUser::getUser()
[14:38:36] Looks like it, yup
[14:40:18] paladox: There's multiple ->user-> in that file that need replacing
[14:40:26] yep
[14:41:20] Reedy how do i convert self::$users[ 'uploader' ]->user->getId(), to TestUser::getUser()
[14:41:27] can i do TestUser::getUser[ 'uploader' ]->user->getId()
[14:41:31] No
[14:41:34] oh
[14:41:43] self::$users[ 'uploader' ]->getUser()->getId(),
[14:41:47] without the , at the end
[14:42:23] i thought $users was private?
[14:42:34] No, user is
[14:42:44] oh ok
[14:42:45] thanks
[14:44:31] Reedy https://gerrit.wikimedia.org/r/#/c/346300/ :)
[14:53:50] RECOVERY - Puppet run on integration-slave-jessie-1001 is OK: OK: Less than 1.00% above the threshold [0.0]
[14:56:27] Beta-Cluster-Infrastructure, Chinese-Sites, User-Luke081515: Request for "Flow"ing pages on zhwp Beta-cluster - https://phabricator.wikimedia.org/T162131#3153968 (Luke081515) a: Luke081515
[15:22:58] Beta-Cluster-Infrastructure, Chinese-Sites, User-Luke081515: Request for "Flow"ing pages on zhwp Beta-cluster - https://phabricator.wikimedia.org/T162131#3154073 (Luke081515) Open>Resolved
[16:18:37] Continuous-Integration-Config, MediaWiki-extensions-Scribunto, Wikidata: [Task] Add Scribunto to extension-gate in CI - https://phabricator.wikimedia.org/T125050#3154216 (hashar) Note to self, diskimage-builder wipes out /etc/fstab: ``` name=elements/debootstrap/install.d/15-cleanup-debootstrap,lang=s...
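For reference, the TestUser fix Reedy spells out above boils down to swapping the now-private property for its public accessor — a sketch, with an invented test body around the one-line change:

```php
<?php
// Sketch of the BSApiTasksUserManagerTest fix discussed above; the
// fixture name comes from the log, the test body is illustrative.
class BSApiTasksUserManagerTest extends ApiTestCase {
	public function testUploaderHasId() {
		// Fails on current MediaWiki: TestUser::$user is private now.
		// $id = self::$users['uploader']->user->getId();

		// Works: go through the public accessor instead.
		$id = self::$users['uploader']->getUser()->getId();
		$this->assertGreaterThan( 0, $id );
	}
}
```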
[16:40:48] Beta-Cluster-Infrastructure: Request for static config file on the beta cluster returns a 403 - https://phabricator.wikimedia.org/T162164#3154252 (Mholloway)
[17:04:09] Continuous-Integration-Config, Labs, MediaWiki-extensions-Scribunto, Wikidata: For contintcloud either add RAM or Swap to the instances - https://phabricator.wikimedia.org/T162166#3154301 (hashar)
[17:04:30] Continuous-Integration-Config, Labs, MediaWiki-extensions-Scribunto, Wikidata: For contintcloud either add RAM or Swap to the instances - https://phabricator.wikimedia.org/T162166#3154301 (hashar)
[17:06:21] Continuous-Integration-Config, Labs, MediaWiki-extensions-Scribunto, Wikidata: For contintcloud either add RAM or Swap to the instances - https://phabricator.wikimedia.org/T162166#3154316 (hashar)
[17:08:42] Continuous-Integration-Config, Labs, MediaWiki-extensions-Scribunto, Wikidata: For contintcloud either add RAM or Swap to the instances - https://phabricator.wikimedia.org/T162166#3154319 (hashar)
[17:36:04] Scap: scap clean not removing staging dirs - https://phabricator.wikimedia.org/T161643#3154383 (demon) > I suppose the branch was already removed so I got a bunch of `Failed to prune submodule branch for [whatever]` Nope, they were not pruned yet...when I just ran it for wmf.10, it worked fine. Permissions...
[17:37:45] RainbowSprinkles apparently the elasticsearch implementation is not complete, so they removed it from the release notes.
[17:38:02] * RainbowSprinkles shrugs
[17:38:05] But they are hoping to complete it in 2.14 at one of their upcoming hacking events
[17:39:48] Scap: scap clean not removing staging dirs - https://phabricator.wikimedia.org/T161643#3154389 (demon) >>! In T161643#3154383, @demon wrote: >> I suppose the branch was already removed so I got a bunch of `Failed to prune submodule branch for [whatever]` > > Nope, they were not pruned yet...when I just ran...
[17:40:18] RainbowSprinkles but i think we can still use it. It will fall back to lucene where elasticsearch isn't implemented, i think.
[17:41:33] I'm in no rush
[17:41:45] I'm not using half-baked features
[17:43:11] ok
[17:48:52] RainbowSprinkles good news though: upstream fixed support for prefixed urls for polygerrit in the background. They are doing a follow-up, i believe, or that's what they say, to fix the links :).
[18:13:04] PROBLEM - Puppet run on integration-slave-trusty-1003 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[18:53:04] RECOVERY - Puppet run on integration-slave-trusty-1003 is OK: OK: Less than 1.00% above the threshold [0.0]
[18:55:43] Scap, Patch-For-Review: Automatically clean up unused wmfXX versions - https://phabricator.wikimedia.org/T73313#3154689 (demon) `scap clean` does all this. New bugs should be opened if there's issues with it.
[18:55:50] Scap, Patch-For-Review: Automatically clean up unused wmfXX versions - https://phabricator.wikimedia.org/T73313#3154692 (demon) Open>Resolved a: demon
[18:56:23] Scap, Patch-For-Review: scap clean not removing staging dirs - https://phabricator.wikimedia.org/T161643#3154696 (demon) Open>Resolved a: demon Fixed this up. It's still a little wonky on repeated runs, but that's minor
[19:03:37] PROBLEM - Puppet run on deployment-db03 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[19:18:45] (PS2) EddieGP: Remove old USERINFO tool [tools/release] - https://gerrit.wikimedia.org/r/344418
[19:20:03] Project beta-update-databases-eqiad build #16140: FAILURE in 2 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/16140/
[19:20:34] ^ that should be transient
[19:20:45] just many merges
[19:21:10] Who's doing the train deploy today?
[19:21:25] Chad is
[19:21:58] RainbowSprinkles: Can I get https://gerrit.wikimedia.org/r/#/c/346341/ in before the train goes, or right after? It's a UBN fix for a bug that was reported right after the cut :/
[19:22:07] Train already went
[19:22:15] (is done, technically)
[19:22:27] Oh look at that
[19:22:35] OK, so shall I just deploy that patch myself then?
[19:23:00] If you'd like, I'm watching logstash and making lunch
[19:23:06] OK, on it
[19:43:34] RECOVERY - Puppet run on deployment-db03 is OK: OK: Less than 1.00% above the threshold [0.0]
[19:48:47] (PS1) Legoktm: Add tests for mediawiki/libs/etcd [integration/config] - https://gerrit.wikimedia.org/r/346348
[19:51:49] It seems i'm doing the follow-up to fix the polygerrit url (final fix, using the implementation they just added :)). Got feedback.
[19:59:06] Continuous-Integration-Config, Technical-Debt: test-requirements.txt in ci-config still points to precise deb - https://phabricator.wikimedia.org/T162191#3154967 (Legoktm)
[19:59:44] (CR) Legoktm: [C: 032] Add tests for mediawiki/libs/etcd [integration/config] - https://gerrit.wikimedia.org/r/346348 (owner: Legoktm)
[20:00:58] (Merged) jenkins-bot: Add tests for mediawiki/libs/etcd [integration/config] - https://gerrit.wikimedia.org/r/346348 (owner: Legoktm)
[20:02:02] !log deploying https://gerrit.wikimedia.org/r/346348
[20:02:05] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[20:08:16] (Draft1) Paladox: Update test-requirement to use jessie rather then precise [integration/config] - https://gerrit.wikimedia.org/r/346360 (https://phabricator.wikimedia.org/T162191)
[20:08:18] (PS2) Paladox: Update test-requirement to use jessie rather then precise [integration/config] - https://gerrit.wikimedia.org/r/346360 (https://phabricator.wikimedia.org/T162191)
[20:10:48] (Abandoned) Hashar: Update test-requirement to use jessie rather then precise [integration/config] - https://gerrit.wikimedia.org/r/346360 (https://phabricator.wikimedia.org/T162191) (owner: Paladox)
[20:11:37] (CR) Paladox: "but there is debian/jessie-wikimedia which i was about to do. Just noticed the patch-queue too." [integration/config] - https://gerrit.wikimedia.org/r/346360 (https://phabricator.wikimedia.org/T162191) (owner: Paladox)
[20:17:22] Continuous-Integration-Config, Technical-Debt, Zuul: test-requirements.txt in ci-config still points to precise deb - https://phabricator.wikimedia.org/T162191#3155011 (hashar) The integration/zuul.git repository is a mess right now and the Debian packaging needs documentation T140912 In short: | br...
[20:21:28] Yippee, build fixed!
[20:21:28] Project beta-update-databases-eqiad build #16141: FIXED in 1 min 27 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/16141/
[20:29:30] Beta-Cluster-Infrastructure, Collaboration-Team-Triage, Flow, Chinese-Sites, User-Luke081515: Request for "Flow"ing pages on zhwp Beta-cluster - https://phabricator.wikimedia.org/T162131#3155052 (hashar) @Trizek-WMF @Danny_B Chinese projects seem to have an interest in #flow :-]
[20:33:59] legoktm: paladox: sorry for the integration/zuul branches.. it is really in bad shape right now :(
[20:34:22] Oh, could we just do a copy-and-paste of the precise patch-*?
[20:34:24] legoktm: paladox: at least Precise is gone, so that is one less distribution to maintain. It is probably not so much effort to clean it out
[20:34:30] yep :)
[20:34:38] it is a bit more complicated unfortunately
[20:34:47] oh
[20:48:15] RainbowSprinkles: Sorry, forgot to actually deploy that patch once it merged because I went to lunch, syncing it now
[20:48:30] PROBLEM - Puppet run on deployment-urldownloader is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[20:48:38] RoanKattouw: okie dokie
[20:48:50] And synced
[20:48:59] Sorry for spacing out there
[20:50:05] It happens
[20:51:58] PROBLEM - Puppet run on deployment-mira is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0]
[20:54:03] ^ that one is me, fixing
[20:54:26] PROBLEM - Puppet run on deployment-tin is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0]
[20:58:22] !log rebased integration puppet master
[20:58:24] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[20:58:51] !log integration: purging precise cow images from integration-slave-jessie-1001 and integration-slave-jessie-1002 ( https://gerrit.wikimedia.org/r/#/c/345836/ )
[20:58:54] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[20:59:21] PROBLEM - Puppet run on integration-slave-trusty-1006 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[21:16:59] > Sorry, user root is not allowed to execute '/bin/echo hi' as trebuchet:wikidev on deployment-mira.deployment-prep.eqiad.wmflabs.
[21:17:05] seems...abnormal
[21:17:28] PROBLEM - Puppet run on deployment-mediawiki05 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0]
[21:17:59] thcipriani: maybe the wikitech sudo rules are gone
[21:18:03] or trebuchet lost :(
[21:18:20] try: scap say "hi"
[21:19:22] RECOVERY - Puppet run on integration-slave-trusty-1006 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:19:36] sudo -u trebuchet -- scap say hi works and sudo -u thcipriani -g wikidev -- scap say hi works, but not sudo -u trebuchet -g wikidev -- scap say hi
[21:21:01] hashar https://github.com/jenkinsci/trilead-ssh2/pull/14/ :)
[21:21:06] https://github.com/jenkinsci/trilead-ssh2/pull/14/
[21:21:30] hashar: quick q
[21:22:03] hashar: https://gerrit.wikimedia.org/r/#/c/346449/ - jenkins can't even start there even though I based the patchset off of master. any clues?
[21:23:06] thcipriani: maybe because the trebuchet gid is 604
[21:23:15] thcipriani: and there is no name for 604
[21:23:23] though the trebuchet user has wikidev(500) as a secondary group
[21:24:21] hrm. got rid of the trebuchet group a while ago since nothing was using it... doesn't explain why I can't execute something as root:wikidev tho
[21:24:25] mobrovac: the patch apparently has a conflict?
[21:24:48] hashar: no, it's based off of master and there's a one-liner change, no possibility of conflict
[21:24:53] mobrovac: let me check on the zuul-merger. there is probably a path conflict
[21:25:06] oh ok
[21:25:07] thnx hashar
[21:25:49] contint2001 has a clone of /srv/zuul/git/mediawiki/services/graphoid/deploy/
[21:26:01] and /srv/zuul/git/mediawiki/services/graphoid is an empty dir (besides the /deploy/ subdir)
[21:26:04] so yeah, git clone fails :(
[21:26:47] !log contint2001 : rm -fR /srv/zuul/git/mediawiki/services/graphoid/deploy due to T157818
[21:26:50] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[21:26:51] T157818: zuul-merger fails when repository names overlaps - https://phabricator.wikimedia.org/T157818
[21:27:15] 2017-04-04 21:27:03,229 DEBUG zuul.Repo: CreateZuulRef master/Z2b56634260ba47c39642d414be6e0f5b at 95e38d2dfbf452a4495da933b813b8baaf1254bf on
[21:27:24] hmmm
[21:27:36] still failing it seems
[21:28:38] hmm
[21:28:45] it definitely merged it
[21:29:17] !log contint1001 : rm -fR /srv/zuul/git/mediawiki/services/graphoid/deploy due to T157818
[21:29:20] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[21:29:54] damn round robin
[21:30:08] mobrovac: it is enqueued
[21:30:25] it is a corner-case issue in CI, sorry :-( :-(
[21:30:48] I tried to fix it once and for all but eventually hit a wall
[21:30:59] heh
[21:31:26] thcipriani: maybe tries strace ? :(
[21:31:28] err
[21:31:32] my english went craps
[21:32:00] RECOVERY - Puppet run on deployment-mira is OK: OK: Less than 1.00% above the threshold [0.0]
[21:38:22] Deployment-Systems, Revision-Scoring-As-A-Service-Backlog, Wikilabels, Chinese-Sites, I18n: Deploy latest zh-hans/hant translations for Wikilabels on wmflabs - https://phabricator.wikimedia.org/T162108#3155348 (Arthur2e5) Apparently OAuth is broken with `TypeError: mw.Uri is not a constructor...
[21:41:11] hashar: yay, now it passed! thnx!
[21:43:32] paladox: does your scroll wheel work on gerrit or is it just mine that won't work
[21:43:58] Zppix, my mac doesn't have a mouse wheel
[21:44:08] i use the touch mouse
[21:44:18] this is why folks we don't use macs when developing xD
[21:44:25] on my windows pc i use the mouse to touch the scroll bar
[21:46:09] on the mac my mouse is my scroll bar :)
[21:46:53] thcipriani: sendto(11, "<81>Apr 4 21:45:53 sudo: root : command not allowed ; TTY=pts/1 ; PWD=/root ; USER=trebuchet ; GROUP=wikidev ; COMMAND=/bin/echo hi", 136, MSG_NOSIGNAL, NULL, 0) = 136
[21:47:36] hashar: what's that from?
[21:49:07] trebuchet
[21:49:25] scap succeeds as trebuchet
[21:49:37] ah, that is just a message to /dev/log, bah
[21:49:40] hashar: I don't know what socket is open at fd 11, but that seems like a log message. I can tell it's connecting to ldap, and I can see it loads a bunch of pam stuff, and I can see it failing
[21:49:47] connect(11, {sa_family=AF_LOCAL, sun_path="/dev/log"}, 110) = 0
[21:49:59] ah yarp
[21:50:25] I see it connect to seaborgium
[21:50:36] then some pam shared objects get loaded
[21:50:54] and I guess the LDAP payload is encrypted
[21:52:12] i'd sure hope so
[21:52:26] RECOVERY - Puppet run on deployment-mediawiki05 is OK: OK: Less than 1.00% above the threshold [0.0]
[21:54:00] hashar i wonder, is java 8 installed on the jenkins server?
[21:54:17] If not, it should be, as they just changed the min java requirement to java 8 in jenkins
[21:54:18] https://github.com/jenkinsci/jenkins/commit/09cfe3bda60341edb07ade226e24196a3f875019
[21:54:52] thcipriani: I am really tempted to blame the lack of a user group in ldap for 604
[21:55:39] hashar https://jenkins.io/blog/2017/01/17/Jenkins-is-upgrading-to-Java-8/
[21:56:31] Release-Engineering-Team (Deployment-Blockers), Release: MW-1.29.0-wmf.19 deployment blockers - https://phabricator.wikimedia.org/T160551#3155385 (Krinkle)
[21:56:45] though I added trebuchet 604 to /etc/group and that does not change anything, bah :D
[21:58:41] ah, found it
[21:58:41] https://github.com/wikimedia/puppet/blob/30d5dce884a7e5480c319630e5260eb1f292547e/modules/jenkins/manifests/init.pp#L57
[21:58:48] nope, we run jenkins under 1.7.
[21:59:02] We will want to upgrade to 1.8 to receive any more jenkins updates
[21:59:03] paladox: yeah we do
[21:59:39] I will submit a patch to update to 1.8.
[21:59:51] paladox: please don't
[21:59:56] oh
[22:00:01] there are enough patches floating around :D
[22:00:15] we are far from being able to upgrade jenkins/java
[22:00:25] so whenever java 8 is actually required we will craft the puppet patch to bump it
[22:00:37] ok
[22:01:31] paladox: and the Gerrit patch to drop the sshd algorithms using md5 looks good CI-wise
[22:01:41] Yep :)
[22:01:56] hashar anyways there's movement upstream in adding newer MACs to trilead :)
[22:02:15] yeah, thanks for that
[22:02:23] and Jenkins will probably need to update for the next Debian release
[22:02:32] you're welcome :)
[22:02:33] oh
[22:02:37] I would assume that Debian stretch would drop a few of the older algos
[22:02:45] like we did on the wikimedia cluster
[22:02:48] yep, most likely would
[22:04:06] hashar though these java packages actually are separate from any OS
[22:04:26] RECOVERY - Puppet run on deployment-tin is OK: OK: Less than 1.00% above the threshold [0.0]
[22:04:44] for example, if openssh dropped sha1 and apache sshd still supported it, then it would still be supported in the application
[22:24:49] (PS2) Krinkle: qunit: Remove obsolete 'tac|tac' hack [integration/config] - https://gerrit.wikimedia.org/r/345392 (https://phabricator.wikimedia.org/T153597)
[22:24:52] I am escaping *wave*
[22:25:06] who knew nodejs is having an outage downloading from their site
[22:25:07] https://nodejs.org/dist/v4.0.0/node-v4.0.0-linux-x64.tar.gz
[22:28:42] (CR) Krinkle: [C: 032] "Recompiled and pushed:" [integration/config] - https://gerrit.wikimedia.org/r/345392 (https://phabricator.wikimedia.org/T153597) (owner: Krinkle)
[22:30:20] (Merged) jenkins-bot: qunit: Remove obsolete 'tac|tac' hack [integration/config] - https://gerrit.wikimedia.org/r/345392 (https://phabricator.wikimedia.org/T153597) (owner: Krinkle)
[22:31:51] Continuous-Integration-Config, Patch-For-Review: JJB qunit macro ignores curl error exit code - https://phabricator.wikimedia.org/T99854#3155490 (Krinkle) Open>Resolved a: Krinkle
[22:36:45] paladox: mirrors exist for a reason
[22:37:05] Zppix it's not that easy with jenkins
[22:37:15] Continuous-Integration-Config: some jjb inline bash snippets might miss set -eu - https://phabricator.wikimedia.org/T106384#1467039 (Krinkle) Convention is to use `bash -eu` or `bash -eux`. Most of our scripts do that. Here are the ones that don't, currently. ``` integration/$ ack 'bin/bash' config/ jenkins...
[22:37:16] jenkins uses pom (maven)
[22:37:35] ah i see, you could in theory put it on locally then ftp it over, no?
[22:37:38] anyways, others have reported it here https://github.com/nodejs/nodejs.org/issues/1191
[22:37:40] Continuous-Integration-Config, MediaWiki-Unit-tests, Wikidata: [Task] run phpunit ResourcesTest from core for wikibase - https://phabricator.wikimedia.org/T93404#3155525 (Krinkle) Open>Resolved a: Krinkle
[22:37:50] Zppix not that easy either
[22:38:15] that would probably be the hardest, as i have no idea how to do ftp over a local network, nor how to do it in maven.
[22:40:24] oh, i thought you meant the wmf instance of jenkins
[22:40:42] Continuous-Integration-Config, Technical-Debt: Move setting of DOC_* variables from Zuul function to inside the build scripts - https://phabricator.wikimedia.org/T113250#3155529 (Krinkle)
[22:42:04] Zppix i'm building it on a wmf instance.
[22:43:59] Continuous-Integration-Config, MediaWiki-General-or-Unknown, MediaWiki-extensions-General-or-Unknown: Add grunt-concurrent to mediawiki/core and decide whether to add it to extensions (Improve grunt performance) - https://phabricator.wikimedia.org/T116988#3155533 (Krinkle) Open>declined QUnit...
[22:48:52] Continuous-Integration-Config, Deployment-Systems, Release-Engineering-Team: Skip running `test` pipeline for wmf branches (gate-and-submit is enough) - https://phabricator.wikimedia.org/T129719#3155543 (Krinkle) p: Triage>Low (Going through some old tasks in the backlog) > That would cut the...
[23:01:04] (CR) Reedy: [C: 032] Remove old USERINFO tool [tools/release] - https://gerrit.wikimedia.org/r/344418 (owner: EddieGP)
[23:02:50] (Merged) jenkins-bot: Remove old USERINFO tool [tools/release] - https://gerrit.wikimedia.org/r/344418 (owner: EddieGP)
[23:04:11] Continuous-Integration-Config, Deployment-Systems, Release-Engineering-Team: Skip running `test` pipeline for wmf branches (gate-and-submit is enough) - https://phabricator.wikimedia.org/T129719#2113847 (Zppix) I don't agree with this at all. If the only time you'll get feedback from the change is whe...
[23:15:42] Scap: When "scap pull" does a (slow) CDB rebuild, it should tell me that that's what it's doing - https://phabricator.wikimedia.org/T162207#3155641 (Catrope)
[23:21:17] Scap: sync-* scripts should have ability to limit target nodes - https://phabricator.wikimedia.org/T162208#3155673 (demon)
[23:24:43] Scap: Figure out how node limitation interacts with proxies - https://phabricator.wikimedia.org/T162209#3155697 (demon)
[23:34:53] I need to deploy a change to ParserMigration but there is a change to the Quiz extension pending
[23:41:25] and the Quiz change was self-merged, both in master and the cherry-pick
[23:44:33] * TimStarling is just reviewing it
[23:46:20] It's still the SWAT window
[23:47:03] the cherry-pick wasn't self-merged https://gerrit.wikimedia.org/r/#/c/346274/
[23:47:37] RoanKattouw: ^
[23:47:47] right, sorry
[23:48:22] Not sure if he's forgotten about it though after CR+2'ing it
[23:49:14] He's been idle 20 minutes on tin, so I guess so
[23:50:19] * Reedy deploys the quiz revert
[23:50:41] Argh sorry
[23:50:42] I did forget
[23:50:57] No worries
[23:51:05] I got all tangled up in what was and wasn't merged, and in mwdebug with the config changes in the SWAT, and totally forgot about the Quiz change
[23:51:44] so can/should the ParserMigration change go out now too?
[23:52:25] Yeah. I can do it if you want, or you can
[23:52:44] I'll do it
[23:52:48] Looks like you will want to cherry-pick to .19 too
[23:52:59] I think it is there already
[23:53:31] Ah, yeah. Gerrit's "Updated 27 minutes ago" slightly confuses things
[23:55:16] gerrit's "updated" includes reviewers being added