[00:02:05] TimStarling: so I've been thinking about using extension registration in prod...I don't think it's going to be feasible to get every single WMF deployed extension converted all at once, and making that large of a change kinda scares me....do you think we'd see any negative performance if we converted like 10 at a time? We'd lose the O(1) array replacement for $wgMessageDirs and friends...
[00:02:54] I think that sounds fine as a migration technique
[00:03:12] presumably we would aim to have O(1) array replacement at some point in the future?
[00:05:10] yeah, once all extensions are loaded via registration the globals should be empty to begin with so we'd have O(1) replacement.
[00:06:18] I've also been thinking of having the registry not put it into globals and instead use it as an attribute so we wouldn't even have to do any array replacement
[00:07:08] that would work well for globals that are only accessed in limited places, but might break things that expect globals to be there...
[00:09:55] what would that look like? what sort of attribute?
[00:10:50] ExtensionRegistry::getInstance()->getAttribute('ResourceModules');
[00:11:43] (https://www.mediawiki.org/wiki/Manual:Extension_registration#Attributes)
[00:14:00] so these attributes would be merged before caching? sounds fine
[00:15:23] yeah
[00:17:54] TimStarling: is https://gerrit.wikimedia.org/r/#/c/187074/ still in review?
[00:18:21] not this instant, no
[00:19:06] AaronSchulz: regarding use case, would the parser cache use it?
[00:19:16] wouldn't need to
[00:20:27] it's only for stuff that needs explicit purge when objects change (this is noted in the RfC, as is parser cache)
[00:20:53] I see you gave LocalFile as an example
[00:21:06] MessageCache also has explicit purging
[00:21:23] ori: :)
[00:21:27] Krinkle: keep in mind that I could be wrong, because this is tricky. But here's how I understand it:
[00:22:13] scenario 1: , followed by , followed by
[00:22:50] browser has to evaluate the code in the script tag before it may fetch the image. but it doesn't have to wait for the stylesheet to be retrieved, parsed and applied.
[00:23:10] scenario 2: , followed by , followed by
[00:23:32] browser has to evaluate the script tag before it may fetch the image, but it may not evaluate the script tag before the stylesheet has been retrieved, parsed and applied
[00:23:56] AaronSchulz: I was mostly trying to imagine it from a performance perspective
[00:24:09] request rates, value sizes, etc.
[00:24:19] so the insertion of a script between the external CSS and the external resource made the stylesheet request, which would have otherwise not been blocking, blocking.
[00:25:32] does that make sense?
[00:26:07] AaronSchulz: I said I wasn't finished, but I meant in depth rather than just reading it, I have read it all a couple of times now
[00:26:58] if it's going to be used everywhere, that is a good reason to think carefully about interface usability
[00:27:03] TimStarling: I can remove the poolcounter stuff and tweak prefixes...but I wanted to wait for the first-pass of CR first
[00:27:48] yeah, go ahead and start tweaking
[00:28:09] Erik has been agitating about the HHVM output buffering thing so I have been working on that for the last couple of days
[00:28:23] plus I have this scap thing to finish
[00:28:32] TimStarling: have you considered just increasing the max mem limit?
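(A minimal sketch of the attribute-based lookup discussed at [00:06:18]–[00:10:50], assuming an extension.json that declares a "ResourceModules" attribute; the consuming loop is illustrative and not actual core code.)

```php
// Instead of populating a $wg* global at load time, a consumer asks the
// registry for the merged attribute data from all loaded extension.json files.
$registry = ExtensionRegistry::getInstance();

// getAttribute() returns an empty array when no extension declared it,
// so there is no isset()/global bookkeeping and no array replacement at runtime.
$modules = $registry->getAttribute( 'ResourceModules' );

foreach ( $modules as $name => $info ) {
	// ... hand each definition to whatever consumes it (e.g. ResourceLoader) ...
}
```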
[00:28:45] it wouldn't fix the problem, but it could make it a lot less urgent
[00:28:53] ori: we will want thumb.php to use hhvm sometime
[00:29:02] ori: I not only considered it, I suggested it on the bug report weeks ago
[00:29:20] also to allow 5.4 features
[00:29:28] I guess zend could just be upgraded for the latter though...
[00:29:45] anyway, it's too late for half-measures, the real thing is happening now
[00:29:47] TimStarling: shall we do it now? Why don't I submit a patch to increase it from 300 to 500.
[00:29:52] OK.
[00:29:55] if hhvm can't stream, that's pretty bad for scalars
[00:30:14] AaronSchulz: again, I wasn't suggesting that increasing the limit is tantamount to a fix.
[00:30:15] note that it will store the whole output at least twice
[00:30:15] *scalers
[00:30:39] ori: sure, but if tim wants to do it now, let's not second guess ;)
[00:30:50] so you probably need a memory limit that is double the output size
[00:30:55] yeah, agreed
[00:31:00] Krinkle: ping?
[00:31:25] ori: reading
[00:32:26] ori: Interesting.
[00:33:10] ori: And that's why external script is an exception.
[00:33:35] Krinkle: why's that? That's actually the bit I'm unclear about.
[00:33:50] ori: Because external script would have to be downloaded first
[00:33:54] whereas the inline is already there
[00:34:12] But I'm not seeing it yet in the network in Chrome
[00:34:22] Will check again in a minute
[00:35:07] ori: Wanna involve upstream maybe before we forge ahead?
[00:35:17] E.g. one of the Chrome Pauls
[00:35:43] I nagged him last time. Want to ping him this time? :P
[00:35:49] k
[00:35:51] which one :P
[00:36:11] Irish or..?
[00:36:13] who's the other?
[00:36:17] haha
[00:36:42] Paul Lewis https://www.youtube.com/watch?v=RCFQu0hK6bU
[00:36:52] the two Developer advocates
[00:36:59] the other being Irish
[00:37:00] ah
[00:37:03] TimStarling: oh one other thing. We had talked about using $wgExtensionInfoMTime = filemtime( 'CommonSettings.php' ); and having deployers touch that whenever they needed to invalidate the cache...but that seems like that could get annoying and confusing pretty fast.
[00:38:02] Krinkle: how about applying the patch as a live hack on mw1017 so we can compare test.wm.o?
[00:38:10] yeah, maybe it is premature optimisation
[00:38:18] we could deploy and see if we really need that
[00:38:52] TimStarling: shall we do it now? Why don't I submit a patch to increase it from 300 to 500.
[00:39:04] you may as well do that anyway
[00:39:28] * ori does
[00:39:29] when I said it's too late, I really just mean that we can't defer the development task
[00:39:33] alright
[00:48:27] ori: Harr
[00:48:29] Alrighty
[00:49:36] Krinkle: pushing out a delicate HHVM config change now, so I'll do mw1017 later
[00:49:42] k
[01:30:58] > $jobQueueGroup = JobQueueGroup::singleton( 'wikidatawiki' );
[01:30:58] > var_dump( $jobQueueGroup->get( 'UpdateRepoOnMove' )->delayedJobsEnabled() );
[01:30:58] bool(false)
[01:31:16] me... where's Aaron? :/
[01:31:19] * meh
[03:25:53] interesting: http://mujs.com/ <- Javascript interpreter designed for embedding in the Lua sense of embedding
[03:26:49] written by Artifex (the Ghostscript people), presumably for use in some of their other products
[06:47:46] MediaWiki-Core-Team, Datasets-General-or-Unknown, Services, Wikimedia-Hackathon-2015, Epic: Improve Wikimedia dumping infrastructure - https://phabricator.wikimedia.org/T88728#1124296 (Nemo_bis) > It would be nice for example to still be able to get the langlinks tables and such from the time be...
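(A sketch of the cache-invalidation idea from [00:37:03]; $wgExtensionInfoMTime and the CommonSettings.php filename come straight from the discussion above, the exact path is illustrative.)

```php
// In the wiki's settings file: key the cached, merged extension.json data to
// the settings file's mtime, so deployers can invalidate the cache simply by
// touching the file. The drawback raised above is that people have to
// remember to touch it at the right time.
$wgExtensionInfoMTime = filemtime( __DIR__ . '/CommonSettings.php' );
```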
[09:45:40] MediaWiki-Core-Team, MediaWiki-API, Patch-For-Review: API: Add jsconfigvars to action=parse - https://phabricator.wikimedia.org/T67015#1124518 (TheDJ)
[12:34:49] MediaWiki-Core-Team, OCG-General-or-Unknown, HHVM, Wikimedia-log-errors: OOM reported at SpecialPage.php:534 due to large output from Special:Book - https://phabricator.wikimedia.org/T89918#1124956 (Joe) Although the memory limit was raised, i still see memory exceeded errors on SpecialPage.php:534
[14:45:48] <_joe_> anomie: since I intend to merge https://gerrit.wikimedia.org/r/#/c/194830/ on tomorrow's morning swat, can I ask you a sanity check on it?
[14:46:05] <_joe_> Aaron will review it too again probably
[14:46:19] * anomie looks
[14:48:03] <_joe_> thanks :)
[15:02:53] _joe_: Does it matter that filebackend-codfw.php is pointing to eqiad swifts but using codfw redis (i.e. not shared with eqiad) for locking?
[15:07:43] <_joe_> anomie: it does, I have to correct that, thanks
[15:08:28] <_joe_> but I need an input from aaron about 'name' => 'shared-swift-eqiad',
[15:08:38] <_joe_> if that's what you are referring to
[15:11:02] <_joe_> btw, I am adding chaos over chaos there... I think I should include a -common file everywhere and include the realm specific file from it in each case
[15:12:24] <_joe_> anomie: what is docroot/noc/createTxtFileSymlinks.sh used for in mediawiki-config?
[15:14:08] _joe_: I'm guessing it makes symlinks so https://noc.wikimedia.org/ can refer to files outside the docroot.
[15:15:22] <_joe_> uhm ok
[15:15:31] <_joe_> so I'll need to update those too
[15:15:33] <_joe_> meh
[15:29:59] MediaWiki-Core-Team, Datasets-General-or-Unknown, Services, Wikimedia-Hackathon-2015, Epic: Improve Wikimedia dumping infrastructure - https://phabricator.wikimedia.org/T88728#1125443 (Lydia_Pintscher)
[16:00:20] does anyone know what "var _mwq = _mwq" is? that i see in script tags on test.wikidata?
[16:00:23] e.g. https://test.wikidata.org/wiki/Special:Search
[16:00:38] possibly something new since yesterday
[16:01:22] stuff is wrapped with "var _mwq = _mwq || []; _mwq.push( function ( mw ) {"
[16:02:19] so i do mw.config.get('wgPageContentLanguage'); on wikidata and get 'en'
[16:02:29] on test wikidata, null
[16:03:34] MatmaRex: Krenair ^
[16:05:03] aude: huh
[16:05:08] other stuff like wgPageName is also not set
[16:05:22] yeah
[16:05:27] basically, no js
[16:05:56] (the things set in OutputPage::getJSVars)
[16:05:59] and i am sure it worked yesterday
[16:06:04] Well mw.config.values shows some things
[16:07:01] looks like extension variables
[16:07:13] some core ones also
[16:07:29] wgVisualEditorConfig is set
[16:07:39] aude: It's a new JS thing that ori made
[16:07:45] oooo
[16:07:47] let me find the patch
[16:08:01] why is it on test.wikidata only?
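(A rough sketch of the "-common file" layout _joe_ floats at [15:11:02]; the filenames and the $wmfDatacenter variable are assumptions about mediawiki-config, not the actual change under review.)

```php
// filebackend.php: keep the settings shared by every site in one -common file
// and layer the site-specific overrides on top, instead of copying whole
// config blocks into filebackend-eqiad.php / filebackend-codfw.php.
require __DIR__ . '/filebackend-common.php';
require __DIR__ . "/filebackend-{$wmfDatacenter}.php"; // e.g. eqiad, codfw
```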
[16:08:15] test2 doesn't have it nor does mediawiki.org
[16:08:35] I bet it is there as a cherry-pick somehow
[16:08:37] https://gerrit.wikimedia.org/r/#/c/196989/
[16:08:40] not merged yet
[16:08:45] hm
[16:09:35] i searched @wikimedia on github and didn't really find anything for 'mwq'
[16:09:46] except some unrelated (i think) parsoid thing
[16:09:54] I wonder if he's doing some testing there using global js
[16:10:02] hm
[16:10:04] https://gerrit.wikimedia.org/r/#/c/196989/7/includes/resourceloader/ResourceLoader.php,unified
[16:10:05] I don't think it's cherry-picked
[16:10:23] thanks
[16:10:42] it's like it's pushed to a queue but never actually loaded
[16:11:01] it will be executed by ResourceLoader
[16:11:03] 1341 +» * only if the client has adequate support for MediaWiki JavaScript code.
[16:13:42] oh
[16:13:59] nevermind, thought maybe i was hitting mw1017
[16:14:13] i am
[16:16:24] test.wikidata is always mw1017?
[16:16:27] bd808:
[16:16:56] looks like it
[16:17:09] I thought only with the magic cookie?
[16:17:23] i tried in firefox
[16:17:30] i can't possibly have magic cookie there
[16:17:38] and think lydia had the issue
[16:17:48] bd808, testwiki is still mw1017 only, I think
[16:18:01] you can send any other wiki to mw1017 by using the header
[16:18:09] but yes
[16:18:13] that change is live on mw1017, somehow
[16:18:19] as long as this doesn't hit wikidata, i'm not as worried but we do rely on test.wikidata for testing things
[16:19:00] Despite not being on tin.
[16:19:03] Or approved code.
[16:19:18] ಠ_ಠ
[16:19:23] ooo
[16:23:40] from SAL: 02:03 ori: applied I98d383a629 locally on mw1017
[16:26:21] it seems not to break test.wikipedia
[16:26:40] i am looking at Special:Search which shouldn't have anything wikibase specific
[16:43:42] <_joe_> ^d: I reorganized things a little to be more consistent
[16:44:58] <^d> +736, -120
[16:45:03] <^d> This is gonna be a fun merge :)
[16:49:56] <_joe_> yes
[16:49:58] <_joe_> :/
[16:50:18] <_joe_> I can split it up in a series of smaller patches if you think it's gonna be easier to merge/revert
[16:50:33] <_joe_> I'm open to suggestions
[16:50:53] <_joe_> those are all supposed to be no-ops
[16:51:08] <_joe_> so I may want to split them up and push patches 1-by-1
[17:12:29] MediaWiki-Core-Team, SUL-Finalization, Wikimedia-Site-requests: Run migrateAccount.php --auto on all usernames with unattached accounts - https://phabricator.wikimedia.org/T89770#1125753 (Keegan) Open>Resolved
[17:23:50] ori: I've reached out to Paul Irish regarding style/scripts ordering. He's on it. To hear back soon.
[18:42:51] visual editor on mediawiki appears to be broken - e.g. inserts s in random places inside headers... Is it a known issue?
[18:43:22] legoktm: https://gerrit.wikimedia.org/r/#/c/197269/
[18:43:52] SMalyshev: ask in #mediawiki-visualeditor ?
[18:44:21] so many channels
[18:44:54] :P
[18:47:15] I know
[18:58:01] legoktm: I like how cloning MobileFrontend gives me conflicts...
[19:01:02] ah, I see
[19:01:34] aspiecat: what kind of conflicts?
[19:01:53] no, that was just my fault
[19:02:02] oh :P
[19:04:08] legoktm: https://gerrit.wikimedia.org/r/#/c/197392/1/includes/skins/SkinMinerva.php
[19:06:39] +2'd
[19:06:55] I have the mediawiki/extensions meta repo checked out, so I don't have to bother cloning individual extensions
[19:07:14] I use a script
[19:07:18] though sometimes it misses a few
[19:10:05] MediaWiki-Core-Team, Wikidata-Query-Service: Add bestRank to RDF export - https://phabricator.wikimedia.org/T92990#1126092 (Manybubbles) NEW
[19:10:13] MediaWiki-Core-Team, Wikidata-Query-Service: Add bestRank to RDF export - https://phabricator.wikimedia.org/T92990#1126100 (Manybubbles) p:Triage>High
[19:20:08] legoktm: are you the one flooding dberror log with trx notices? :)
[19:20:17] errr
[19:20:34] I'm a bit surprised that happens in CLI mode with autocommit...maybe a script wraps stuff in begin/commit or something
[19:20:43] MediaWiki-Core-Team, Wikidata-Query-Service: Add bestRank to RDF export - https://phabricator.wikimedia.org/T92990#1126126 (Smalyshev) p:High>Normal a:Smalyshev
[19:20:47] legoktm: probably not urgent though
[19:21:17] aspiecat: oh yeah, that was me. CentralAuth wraps stuff in explicit transactions after issues we had with accounts being broken
[19:22:08] well whatever the outer trx-level is, it's getting committed beforehand anyway
[19:22:40] that script stopped a few hours ago though, it's migrateAccount.php
[19:22:48] MediaWiki-Core-Team, Wikidata-Query-Service: Add bestRank to RDF export - https://phabricator.wikimedia.org/T92990#1126136 (Smalyshev) Not sure about usecase for notBestRank. I think FILTER NOT EXISTS (which is SPARQL expression of not having something) won't be faster than just checking for BestRank, but...
[19:23:20] specifically CentralAuthUser::storeAndMigrate()
[19:23:58] soo. apparently L10nupdate has been broken in at least three ways for the past month :/ I made two patches to fix two things: https://gerrit.wikimedia.org/r/#/c/197278/
[19:44:54] Nikerabbit: That patch looks good to me but I'm not sure I know how to set it up for testing before merge
[19:46:05] bd808: install extension, set configuration variables, run the script ;)
[19:46:17] oh! of course! ;)
[19:46:50] there are also phpunit tests (I didn't check why they didn't catch the extension mistake...)
[19:48:06] perhaps because they are not being run...?
[19:49:32] I don't see a UnitTestsList hook?
[19:51:19] yeah I guess it's missing
[19:58:47] I'll add it
[20:04:29] there you go
[20:05:13] bd808: anyone else I could ask for review besides you?
[20:10:33] Nikerabbit: I have to finish writing up an interview I did today and then your patch is next on my list. I want to see l10nupdate actually work again!
[20:12:11] bd808: sounds good. if you find issues in review I will look into them tomorrow
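(A generic sketch of the missing hook Nikerabbit and bd808 discuss at [19:49:32]–[19:58:47]; this is the stock UnitTestsList pattern, not the actual LocalisationUpdate patch, and the paths are illustrative.)

```php
// In the extension's setup file: register its PHPUnit tests with core's test
// suite. Without this hook the test files sit in tests/phpunit/ but are never
// picked up, so a broken extension can slip through unnoticed.
$wgHooks['UnitTestsList'][] = function ( array &$files ) {
	$files = array_merge( $files, glob( __DIR__ . '/tests/phpunit/*Test.php' ) );
	return true;
};
```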
[20:18:44] ori: There's a few relevant threads at chromium loading-dev lately https://groups.google.com/a/chromium.org/forum/#!forum/loading-dev
[20:18:51] https://groups.google.com/a/chromium.org/forum/#!msg/loading-dev/0Uqczae7S1g/pg4Rp-RhXcMJ
[20:33:23] MediaWiki-Core-Team, Epic: Figure out a strategy for the BloomFilter caches - https://phabricator.wikimedia.org/T93006#1126431 (aaron) NEW a:aaron
[20:33:38] MediaWiki-Core-Team: Figure out a strategy for the BloomFilter caches - https://phabricator.wikimedia.org/T93006#1126431 (aaron)
[20:43:23] MediaWiki-Core-Team, CirrusSearch: inefficient work of CirrusSearch in Russian Wikipedia - https://phabricator.wikimedia.org/T88724#1126480 (Jdouglas) a:Manybubbles>Jdouglas
[21:22:52] MediaWiki-Core-Team, CirrusSearch: Fix highlighting for phrase prefix queries - https://phabricator.wikimedia.org/T93014#1126597 (Jdouglas) NEW a:Jdouglas
[21:25:55] MediaWiki-Core-Team, CirrusSearch: Fix highlighting for phrase prefix queries - https://phabricator.wikimedia.org/T93014#1126617 (Jdouglas) Here's what I see with the current patch in https://gerrit.wikimedia.org/r/#/c/197397/. {F99597} Searching for `"programming is ref*"` highlights the phrase //**prog...
[21:28:27] KSmithTPG: your email has a 1x1 gif in it?
[21:48:32] MediaWiki-Core-Team, MediaWiki-Configuration: Support "ResourceModuleSkinStyles" in ExtensionProcessor - https://phabricator.wikimedia.org/T91566#1126666 (Legoktm)
[21:50:46] legoktm: I have no idea where a 1x1 gif would have come from. If you mean the meeting minutes email, there was an etherpad link, but otherwise it was pure simple text.
[21:51:03] Nikerabbit: does update.php run for a long time with no output?
[21:51:25] KSmithTPG: hmm, this is what I see at the bottom of your email:
[1x1 image]
Kevin
[21:51:27] weird.
[21:51:30] * legoktm blames the NSA
[21:52:22] that's a google tracking bug of some sort
[21:52:43] ok, yeah. I see it when i view the html source. ascii source is fine, of course
[21:52:52] it can't carry more than your google auth status though
[21:52:53] google or nsa. take your pick of who to blame.
[21:53:02] no unique message id attached
[21:53:30] referrer?
[21:57:26] Here's a non-answer from GOOG -- https://productforums.google.com/d/msg/gmail/dEgRYxLWDjA/X57nhJOZz58J
[22:00:31] MediaWiki-Core-Team, MediaWiki-Configuration: Support "ResourceModuleSkinStyles" in ExtensionProcessor - https://phabricator.wikimedia.org/T91566#1126683 (Yurik)
[22:30:36] legoktm: https://gerrit.wikimedia.org/r/#/c/195087/1 not sure who should look at that
[22:30:45] trying to get rid of ignoreErrors calls
[22:31:11] AaronSchulz: ashley I suppose, but since he hasn't I can review it in a bit
[22:36:49] the other user is SMW...which is always impossible to deal with
[22:37:06] bd808: Thanks for that link. Setting up a real signature is on my todo list, and it sounds like when I do that, the 1x1 will probably go away. It's weird that it shows up above my name, not below, but whatever.
[22:46:40] AaronSchulz: have I asked you about the RFC meeting tomorrow? I was going to schedule your RFC "Master & slave datacenter strategy"
[23:03:02] <_joe_> AaronSchulz: can you take a second look at my patch for codfw?
[23:03:18] <_joe_> i have doubts about the filebackend part
[23:15:35] ori: who is working on this now? https://www.mediawiki.org/wiki/Requests_for_comment/Support_for_user-specific_page_lists_in_core
[23:17:19] TimStarling: AFAIK, Jon Robson and Yurik
[23:18:14] MaxSem would know
[23:19:31] yup, Jon and Yuri (the latter not for long though)
[23:19:39] thanks
[23:19:48] TimStarling: no, but that's fine
[23:43:32] MediaWiki-Core-Team: [draft] MediaWiki Core Roadmap April - June 2015 (Q4 2014/2015) - https://phabricator.wikimedia.org/T93027#1126925 (ksmith) NEW
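(On the ignoreErrors() cleanup mentioned at [22:30:45]: one common replacement pattern, sketched here with a made-up table name and log group; it is not the content of the linked patch.)

```php
// Rather than wrapping a best-effort write in $dbw->ignoreErrors( true ),
// let the query throw and handle the specific failure explicitly.
try {
	$dbw->insert( 'my_tracking_table', $row, __METHOD__, [ 'IGNORE' ] );
} catch ( DBQueryError $e ) {
	// Best effort only: record the failure instead of silencing every error.
	wfDebugLog( 'mytracking', __METHOD__ . ': insert failed: ' . $e->getMessage() );
}
```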