[00:00:21] Reedy: what made you look at msg resource tables by the way? Is that purge finished now? All wikis?
[00:02:06] All wikis, and yeah, it's done. I !logged it after
[00:02:10] Scap took about 10 minutes
[00:02:20] but we noticed localisationupdate took about 3.5 hours
[00:02:36] Most of it was doing refreshMessageBlobs.php
[00:03:11] bd808 made a joke about wfWaitForSlave() which I added in a stupid way (though, it's a while ago, and I can't remember if I looked at the number of rows)
[00:03:57] And then I was looking at what was in the database, as msg_resources was the main thing the script touches (plus a load of stat-ing etc)
[00:04:06] And then was like, yeah, it's full of shit
[00:04:48] * bd808 renames the table to msg_colon
[00:04:53] lol
[00:05:34] But the amount of crap languages, when we're iterating over a lot of the rows in the DB
[00:05:44] just seemed like easy low-hanging fruit to clean up
[00:05:51] Yeah
[00:05:58] I purged it last month
[00:06:04] really?
[00:06:09] Yep
[00:06:19] It's really easily filled up
[00:06:19] I knocked about 25% of the rows out of the enwiki table
[00:06:28] hey thanks for those reviews Reedy
[00:06:29] commons had even more
[00:06:52] np :)
[00:07:18] I wonder if it's worth getting that wfWaitForSlave fix to refreshMessageBlobs pushed through
[00:07:30] Or just see if we get any improvement
[00:08:07] I merged it. would be easy to backport
[00:08:34] * Reedy presses cherry pick
[00:11:35] Is it easy to deploy though? ;)
[00:12:01] I guess if we can kill the script completely when we remove the table too... that'd be pretty sweet
[00:13:16] Reedy: you'd need to sweet-talk greg-g to deploy it I guess
[00:14:45] heh
[00:38:06] hmm?
[00:55:26] TimStarling: I think it's restricted because we default to everything being restricted until it's agreed to be enabled.
[00:55:45] greg-g: not really a big deal, but Reedy has a backport to WikimediaMaintenance that might make the l10nupdate processes a bit faster -- https://gerrit.wikimedia.org/r/#/c/256864/
[00:56:31] * bd808 was guilty of a context-free ping
[01:01:05] Reedy: update the commit message to say you're chunking in batches of 1000 and it should be fine, but... it's Thursday at 5pm :)
[10:27:05] hashar: wonder what a workaround for https://gerrit.wikimedia.org/r/#/c/256875/2 is
[11:07:43] AaronSchulz: The error is that the table doesn't exist or Jenkins doesn't have permission
[11:07:50] AaronSchulz: May need to include it in the phpunit temporary tables
[11:07:56] it's in phpunit.php or MediaWikiTestCase.php
[11:08:05] (assuming the table does exist)
[11:08:48] Oh wait, this is not about the table, it's about an entire database
[11:09:02] How does that work for normal stock installs?
[11:09:13] Maybe we can prefix it in the main database?
[11:09:20] Seems like a problem we need to solve outside Jenkins
[11:17:32] Krinkle: yes, I can read jenkins errors ;)
[11:17:43] it works in my vagrant, but my user has lots of privs
[11:17:51] (the tests, that is)
[11:54:36] Krinkle: ugh. Commonswiki gets filled with shit easily
[11:54:56] Reedy: Don't worry, it's been that way for 5 years.
[11:55:11] haha
[11:55:12] Doesn't stop it annoying the hell out of me :)
[11:57:18] https://commons.wikimedia.org/wiki/Category:Commons_maintenance_content
[11:57:22] Backlogs of backlogs
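The refreshMessageBlobs.php slowness and the wfWaitForSlave() fix discussed above ([00:02:36]-[01:01:05]) come down to a standard maintenance pattern: touch rows in bounded chunks and wait for replication between chunks. Below is a minimal sketch of that pattern in isolation, assuming a hypothetical cleanup of empty msg_resource blobs; the '{}' selection condition is illustrative, and this is not the content of the Gerrit change linked above.

<?php
// Sketch only: chunk the work into batches of 1000 and call wfWaitForSlave()
// between batches so a long-running cleanup doesn't build up replica lag.
$dbw = wfGetDB( DB_MASTER );
$batchSize = 1000;
$fname = 'purgeEmptyMessageBlobs'; // hypothetical caller name for profiling

do {
	// Pick up to $batchSize (module, language) pairs to act on this round.
	$res = $dbw->select(
		'msg_resource',
		[ 'mr_resource', 'mr_lang' ],
		[ 'mr_blob' => '{}' ], // assumed "empty blob" condition
		$fname,
		[ 'LIMIT' => $batchSize ]
	);

	$deleted = 0;
	foreach ( $res as $row ) {
		$dbw->delete(
			'msg_resource',
			[ 'mr_resource' => $row->mr_resource, 'mr_lang' => $row->mr_lang ],
			$fname
		);
		$deleted++;
	}

	if ( $deleted ) {
		// Let the slaves catch up before starting the next chunk.
		wfWaitForSlave();
	}
} while ( $deleted === $batchSize );

Keeping each batch small is what the "chunking in batches of 1000" comment at [01:01:05] asks the commit message to spell out.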
[11:57:47] Reedy: https://github.com/wikimedia/mediawiki/commit/3b829b4385eda5b986baf9fb41e740e563193a0c
[11:58:01] aha
[11:58:11] Reedy: Are you seeing new entries that are not [a-Z-]
[11:58:12] ?
[11:58:13] That's gonna be in wmf.7?
[11:58:19] *8
[11:58:20] I've not checked
[11:58:22] I think this is live already
[11:58:43] Yep
[11:58:46] It's in w7
[11:59:12] nope, they all look like valid language codes on enwiki at least
[11:59:18] k
[11:59:28] it'll still allow things like frownwork and chickichicki
[11:59:28] but many blobs are {}
[11:59:36] Yeah
[11:59:45] It'll stop the ones that look like SQL injection ;)
[11:59:57] Modules without messages get an empty object and that fact is cached
[12:00:04] The new system detects this earlier at run time
[12:00:15] since the keys of the blob are available at runtime, it's just the message values we want from cache
[12:00:26] e.g. RLModule::getMessages() is statically available from PHP config
[12:00:30] so we know it's empty
[12:00:36] this is another improvement
[12:00:36] * Krinkle documents
[12:00:53] https://gerrit.wikimedia.org/r/#/c/253656/
[14:08:42] Reedy: Hm.. curious why the ResourceLoader cache refresh took so much longer than usual this week
[14:08:46] I guess it is because of the lack of a train
[14:08:49] so it has more time to build up
[14:08:59] Assuming that normally localisation cache updates cause it to invalidate
[14:11:17] Yeah, so was I
[14:11:20] Hence I started digging :)
[20:11:39] bd808: https://gerrit.wikimedia.org/r/#/c/256645/
[20:15:40] ori: interesting. I'll look at it (have an afternoon of meetings)
[20:15:49] k
[20:23:58] hey Reedy are you around? I've got a "how do you do this?" question -- what do you do to clone all the extensions? Do you use the mediawiki/extensions.git repo or do you do something else?
[20:25:24] It's a bit of a mess...
[20:25:49] I use the mediawiki/extensions.git repo
[20:25:51] If you want to guarantee you have ALL the extensions (sometimes the extensions.git doesn't have everything) you can use https://github.com/wikimedia/mediawiki-tools-code-utils/blob/master/clone-all.php
[20:25:52] ori: can this be correct? -- https://github.com/search?l=&q=MWHookException+user%3Awikimedia&ref=advsearch&type=Code
[20:25:54] (and skins)
[20:26:14] bd808: yes
[20:26:57] legoktm: my problem with the combined repo is that it updates hella slow and doesn't leave me with clones I can push from
[20:27:34] Yay, I got 3 commits merged to https://github.com/hhvm/oss-performance
[20:27:35] updates slow in what sense? it takes me ~2 min to git pull / git submodule update it every day
[20:27:43] and git submodule foreach "git review -s"
[20:27:51] I run that once a month or something
[20:28:03] or ostriches would tell you to use https :P
[20:28:18] hmmm... maybe I did something wrong somewhere along the line
[20:28:21] I use a modified version of clone-all.php from above
[20:28:22] http://p.defau.lt/?dcXtKmoOAtHEG36kBL1ahw
[20:28:42] (I'm also on SSD, new-ish computer, etc.)
[20:29:10] I only have an i7 with SSD ;)
[20:29:35] * legoktm bbl lunch
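For the clone-everything-individually approach mentioned just above, here is a minimal sketch in the spirit of clone-all.php (not a copy of it): read a list of extension names and clone or update each repository on its own, which also leaves per-extension clones you can push from. The extension-list file name and the anonymous Gerrit clone URL pattern are assumptions for illustration.

<?php
// Sketch only: clone or update every extension named in a plain-text list.
// Assumes one extension name per line in "extension-list" and that repos
// live at https://gerrit.wikimedia.org/r/mediawiki/extensions/<Name>.
$extensions = file( 'extension-list', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES );

foreach ( $extensions as $ext ) {
	$target = "extensions/$ext";
	if ( is_dir( $target ) ) {
		// Already cloned: just fast-forward it.
		passthru( 'git -C ' . escapeshellarg( $target ) . ' pull --ff-only' );
	} else {
		$url = "https://gerrit.wikimedia.org/r/mediawiki/extensions/$ext";
		passthru( 'git clone ' . escapeshellarg( $url ) . ' ' . escapeshellarg( $target ) );
	}
}

The trade-off discussed above is speed versus convenience: the combined extensions.git submodule repo updates everything in one pass, while individual clones take longer to set up but behave like normal repos for git review and pushing.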
[20:32:45] ori: I went splunking to find where that all came from -- https://github.com/wikimedia/mediawiki/commit/f479715fd13fdb116eb6c0ce95725d31a683acad
[20:33:28] looks like it was originally all for the call_user_func_array issue with pass-by-ref
[20:33:51] heh
[20:34:03] https://bugs.php.net/bug.php?id=47554
[20:35:12] it may not match the docs, but it's what we want
[20:35:21] 'false' is a special value for hook processing; 'null' is not
[20:40:45] we could follow your patch up with a change along the lines of https://secure.php.net/manual/en/function.call-user-func-array.php#91503 to work around this: "Passing by value when the function expects a parameter by reference results in a warning and having call_user_func() return FALSE"
[20:41:34] actually that should probably be in your patch
[20:41:45] because of the false
[20:41:55] which will abort the hook chain
[20:43:51] bd808: what should be in my patch?
[20:45:07] copying all of the $hook_args into an array by reference and using that as the input to call_user_func_array
[20:45:43] so that the hook handler is satisfied if it expects a ref
[20:45:56] even if that will be ignored on the calling side
[20:46:39] you don't think that will be abused?
[20:47:01] someone will come to depend on that sooner or later
[20:47:18] because since php 5.4 the behavior of call_user_func_array() has been to abort with a warning and false when the called method expects a ref and doesn't explicitly get one
[20:47:54] but if we drop turning that into an exception then it will happen mostly silently and one bad hook could mess up the whole hook call chain
[20:48:19] I agree that it *should* be seen in dev but it likely won't be
[20:48:29] (the warning)
[20:50:05] but we don't suppress warnings in prod
[20:50:14] so it wouldn't be mostly silent
[20:50:40] true
[20:51:16] but it will be broken until the hook handler signature gets fixed
[20:51:30] and how broken depends on the order hooks are registered
[20:51:50] assuming that we have some hooks that are used by multiple extensions
[20:51:55] broken things are usually broken until they are fixed; it's the attempt to make it otherwise that got us into this mess in the first place.
[20:52:56] suppose the hook function signature was wrong for a different reason -- it expected a parameter to have a different type
[20:52:59] that would just be an exception
[20:53:19] and that would halt further processing of the request, including any other extensions
[20:53:22] that's the norm
[20:53:37] yeah, you're right
[20:54:28] so this was all about protecting against one particular error breaking the hook stack when there are lots of other ways to break the hook stack
[20:55:44] yes, and the way it did it was not even correct, per the referenced bug
[20:59:25] ori: fyi, you'll get PM'd abuse
[21:00:50] oh, it's wil?
[21:01:19] just a guess
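What bd808 describes at [20:45:07] -- copying the hook arguments into an array of references before handing them to call_user_func_array() -- looks roughly like the sketch below. This is a generic illustration of the technique, not MediaWiki's actual Hooks::run() code; $callback and $args are placeholders for the registered handler and its arguments.

<?php
// Sketch only: satisfy handlers whose signature declares a by-reference
// parameter, even when the caller passed plain values (see PHP bug #47554).
function callHookHandler( callable $callback, array $args ) {
	$refArgs = [];
	foreach ( $args as $key => &$arg ) {
		// Each element of $refArgs is a reference to the matching element
		// of $args, which is what a by-ref parameter will bind to.
		$refArgs[$key] =& $arg;
	}
	unset( $arg ); // break the reference left over from the foreach

	// Without the references, a handler declared as e.g. function ( &$title )
	// triggers "expected to be a reference, value given" and the call does
	// not behave as intended.
	return call_user_func_array( $callback, $refArgs );
}

When the caller passed plain values, any by-reference writes the handler makes only reach the copies held inside this wrapper, which matches the point at [20:45:56] that the ref can simply be ignored on the calling side.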
[21:05:43] AaronSchulz: I really like that the aggregator redis is now a separate instance that doesn't share the same keyspace as the queues; it makes debugging easier.
[21:08:40] the aggregator only has a dozen or so keys (the jobqueue:aggregator ones and RedisChronJobService locks) so it's fine to run 'keys *' to remind yourself of the layout of data, and the ability to MONITOR aggregator activity is useful too
[21:20:38] [12:47:54] but if we drop turning that into an exception then it will happen mostly silently and one bad hook could mess up the whole hook call chain <-- do you mean like https://phabricator.wikimedia.org/T97384 ?
[21:41:34] wheee
[21:41:38] 4 patches into oss-performance
[21:42:19] https://github.com/hhvm/oss-performance/pull/57#issuecomment-162071366
[21:42:31] I suspect the time for them to test it would be quicker than for me to get set up the first time
[23:02:30] gwicke: If I want to scope out a CentralAuth password-authentication service with someone from your team, who would be the best person for that? You? Marco?
[23:03:44] csteipp: both, probably
[23:03:59] are you considering tackling this next quarter?
[23:04:43] gwicke: Yeah. I want to make sure it's reasonable before we commit. Probably just a sanity check over email at this point.
[23:04:57] you can send it to services@
[23:05:06] even better
[23:05:06] or cc us on a task (probably better)
[23:05:18] That works too. Will do.
[23:05:54] also, yay for moving in that direction!
[23:06:45] csteipp: you can't start working on that until you finish the security reviews for AuthManager ;)
[23:09:17] we just need to clone csteipp
[23:09:37] Yes please
[23:10:11] Or maybe I am a clone that the real csteipp is making do the work...
[23:10:23] there's dapatrick too
[23:11:05] that would be confusing, with all the csteipps supervising each other
[23:11:44] well if we used erlang we could have supervision trees but nooooooo, it has to be nodejs
[23:12:14] supervision trees imply a hierarchy, which means that the clones aren't perfect
[23:32:39] anomie: which do you think is the best interface for deleting credentials in AuthenticationProviders?
[23:33:04] 1) have a providerRemoveAuthenticationData($req) method and duplicate all the providerAllows... stuff
[23:33:49] 2) like 1) but don't duplicate the Allows methods and assume that a provider never needs to be able to prevent another provider from deleting its own credentials
[23:34:24] 3) try to keep the number of methods on the interface low and use some kind of flag on the request
[23:34:52] * bd808 doesn't expect Brad to answer at 18:30 on a Friday
[23:36:07] me neither but I expect he'll see it at some point
[23:37:23] some point between now and Monday I mean
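For reference, option 1 above might look something like the sketch below. Only providerRemoveAuthenticationData comes from the chat; the interface name, the "allows" counterpart, and the parameter/return types are invented for illustration, and none of this is presented as the interface AuthManager actually shipped with.

<?php
// Hypothetical sketch of option 1: a dedicated removal method plus its own
// "allows" check, mirroring the existing providerAllows... pattern so one
// provider can veto removal of another provider's credentials.
interface RemovesAuthenticationData {
	/**
	 * May the credentials described by $req be removed?
	 * @param AuthenticationRequest $req
	 * @return StatusValue Good status if removal may proceed
	 */
	public function providerAllowsAuthenticationDataRemoval( $req );

	/**
	 * Remove whatever credentials this provider stores for $req.
	 * @param AuthenticationRequest $req
	 */
	public function providerRemoveAuthenticationData( $req );
}

Option 2 from the chat would keep only the second method and trust providers not to need a veto; option 3 would fold removal into the existing change-data path via some kind of flag on the request object.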