[22:20:05] DanielK_WMDE_: I merged the Wikibase / inject RC bug fix (~ another refactor)
[22:20:31] DanielK_WMDE_: Can you fast-track forwarding it to the Wikidata extension so that we can test it on the Beta Cluster?
[22:20:55] I'd like to get this into a stable shape before next Tuesday and have maximum exposure until then.
[23:16:32] bd808: RainbowSprinkles jobrunners are still spewing ukwikimedia errors per https://phabricator.wikimedia.org/T171371
[23:17:03] did anyone try hacking at the redis queues?
[23:17:32] I haven't, no. I'd like to make sure the actions are documented this time.
[23:17:39] Especially with regard to a proper remove-wiki procedure.
[23:17:56] It seems the maintenance script approach is flawed, at least, given it requires the wiki to 'exist' according to multiversion.
[23:18:04] so the weird part of that is that that wiki has been gone for years
[23:18:09] So we probably need a generic WikimediaMaintenance script that can take the wiki as a run-time parameter.
[23:18:47] That also has the benefit of being runnable after decom, which is better, given that page views and logins can still create various jobs and such (as well as purging still being allowed on read-only wikis).
[23:19:02] So it should always happen after closing the wiki from public HTTP access.
[23:19:30] This became trickier after the job queue started using HHVM/HTTP to access runJobs.php.
[23:19:45] But as long as the solution can operate on Redis directly in some way, it's fine, I guess.
[23:22:27] *nod* to clear it from the redis side you need to find all the ukwikimedia:jobqueue things
[23:22:54] I haven't poked at that in a really long time, so I don't remember all the redis magic
[23:23:11] Trying the JobQueueGroup::singleton( $wiki ) parameter looked promising,
[23:23:19] but it still fails when it tries SiteConfiguration->getConfig( 'ukwikimedia', 'wgJobClasses' )
[23:23:31] (e.g. from aawiki's eval.php)
[23:24:11] yeah. I do remember that we had to go in with redis-cli to fix it before. there was no php-side magic that cleaned it up
[23:24:46] at least from jobqueue:aggregator:h-ready-queues:v2
[23:29:49] I think I found an ugly way.
[23:30:00] The only thing it needed from wmf-config is wgJobClasses, which is the same across all wikis, I think.
[23:30:06] $ mwscript eval.php --wiki=aawiki --group
[23:30:06] > $wgDBname = 'ukwikimedia';
[23:30:06] > $group = JobQueueGroup::singleton( 'ukwikimedia' );
[23:30:06] > foreach ( $group->getQueueTypes() as $type ) { $q = $group->get( $type ); $size = $q->getSize(); if ( $size ) { echo "$type: size=$size\n"; } }
[23:30:06] htmlCacheUpdate: size=7
[23:30:14] There is a $q->delete() method, also used by manageJobs.php,
[23:30:22] which should do it, but I won't run it yet.
[23:31:24] We could create a WMF-specific maintenance script, with the caveat that it assumes all wikis have the same job config, so you can run it from any wiki's context, but it will operate on the specified wiki ID for basic operations, such as (in our case) deleting all queues.
[23:31:45] Basically core's manageJobs.php, but in a way that works better for farms and also works when the wiki ID is no longer registered.
[23:36:41] *nod* seems like it could be useful once in a while
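
The deletion step that was deliberately not run in the 23:30 eval.php session would presumably be the same loop with $q->delete() added. A minimal sketch, assuming the same session state as above ($group already bound to 'ukwikimedia'); this was not executed in the discussion:

> foreach ( $group->getQueueTypes() as $type ) { $q = $group->get( $type ); if ( $q->getSize() ) { echo "deleting $type\n"; $q->delete(); } }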
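
The WMF-specific maintenance script proposed at 23:31 might look roughly like the sketch below. The name DeleteWikiJobQueues is made up (no such script exists in WikimediaMaintenance as of this log), and it bakes in the stated caveat: it trusts the current wiki's $wgJobClasses to match the target wiki's, and takes the possibly no-longer-registered wiki ID as a run-time argument, like core's manageJobs.php.

<?php
// Sketch only; assumes the WikimediaMaintenance repo layout.
require_once __DIR__ . '/WikimediaMaintenance.php';

class DeleteWikiJobQueues extends Maintenance {
	public function __construct() {
		parent::__construct();
		$this->addDescription( 'Delete all job queues of a (possibly removed) wiki' );
		$this->addArg( 'wiki', 'Database name of the target wiki, e.g. ukwikimedia' );
	}

	public function execute() {
		$wiki = $this->getArg( 0 );
		// Same "ugly way" as the eval.php session above: pose as the target
		// wiki so config lookups resolve even though it is unregistered.
		global $wgDBname;
		$wgDBname = $wiki;
		$group = JobQueueGroup::singleton( $wiki );
		foreach ( $group->getQueueTypes() as $type ) {
			$queue = $group->get( $type );
			$size = $queue->getSize();
			$this->output( "$type: size=$size\n" );
			if ( $size ) {
				$queue->delete(); // same method manageJobs.php uses
			}
		}
	}
}

$maintClass = DeleteWikiJobQueues::class;
require_once RUN_MAINTENANCE_IF_MAIN;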
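
And if the queues ever have to be cleared from the Redis side again, as with redis-cli before, the shape of it would be: scan for the wiki's jobqueue keys and drop the wiki's fields from the aggregator hash. A hedged sketch using the phpredis extension; the key patterns come from the discussion above, the host is a placeholder, and the exact field format inside the aggregator hash is an assumption:

<?php
// Sketch only: clear a removed wiki's job queue data straight from Redis.
$redis = new Redis();
$redis->connect( 'redis-jobqueue.example', 6379 ); // placeholder host
$redis->setOption( Redis::OPT_SCAN, Redis::SCAN_RETRY );

// Find and delete all 'ukwikimedia:jobqueue...' keys.
$it = null;
while ( ( $keys = $redis->scan( $it, 'ukwikimedia:jobqueue*' ) ) !== false ) {
	foreach ( $keys as $key ) {
		$redis->del( $key );
	}
}

// Drop the wiki's entries from the ready-queue aggregator hash.
$hash = 'jobqueue:aggregator:h-ready-queues:v2';
foreach ( $redis->hKeys( $hash ) as $field ) {
	// Assumption: fields embed the wiki ID.
	if ( strpos( $field, 'ukwikimedia' ) !== false ) {
		$redis->hDel( $hash, $field );
	}
}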