[03:19:14] bd808: I've got profileinfo.php working again ($wgEnableProfileInfo = true; $wgProfiler 'ProfilerXhprof' / ProfilerOutputDb )
[03:19:32] bd808: But.. the nesting is almost entirely gone, except for a few manual ones
[03:31:46] Krinkle: xhprof doesn't work like our hand rolled profiler did. My Xhprof class tries to recreate some nesting by examining the parent ==> child pairs that xhprof provides but I don't remember how that ended up being mapped into the db output that Aaron worked on
[17:43:25] 10MediaWiki-Core-Team, 7Availability, 5MW-1.26-release, 5Patch-For-Review, 5WMF-deploy-2015-06-30_(1.26wmf12): Devise stashing strategy for multi-DC mediawiki - https://phabricator.wikimedia.org/T88493#1414507 (10aaron)
[18:01:33] 10MediaWiki-Core-Team, 7Availability, 5MW-1.26-release, 5Patch-For-Review, 5WMF-deploy-2015-06-30_(1.26wmf12): Devise stashing strategy for multi-DC mediawiki - https://phabricator.wikimedia.org/T88493#1414565 (10Ciencia_Al_Poder) New $wg variables need documentation in MediaWiki.org and Release Notes
[18:05:25] 10MediaWiki-Core-Team, 7Availability, 5MW-1.26-release, 5Patch-For-Review, 5WMF-deploy-2015-06-30_(1.26wmf12): Devise stashing strategy for multi-DC mediawiki - https://phabricator.wikimedia.org/T88493#1414572 (10aaron)
[18:13:18] legoktm: bd808: Can I wtf to one of you about composer for MediaWiki core? :P
[18:13:29] Our travis keeps failing and I have no clue why
[18:13:40] hoo: wtf away
[18:14:00] are you having the old version of composer on travis problem?
[18:14:14] No, we run self update
[18:14:23] I tested it locally with the very same composer version
[18:14:25] Eg. https://travis-ci.org/wikimedia/mediawiki-extensions-Wikibase/jobs/68953275
[18:14:36] It always fails to autoload the Psr Log stuff
[18:14:47] Composer for some reason doesn't pick it up
[18:15:03] if I run it with -o on travis it doesn't end up in the class map
[18:15:11] But that as well works like a charm locally
[18:16:01] hoo: that actually sounds a bit like https://github.com/wikimedia/composer-merge-plugin/issues/41
[18:16:30] eg classmap not built right on first install of plugin
[18:16:49] ewk
[18:17:06] So running another composer dump-autoload -o after the composer install would "fix" the problem for us?
[18:17:25] (as it's only a problem during the initial installation, right?)
[18:17:40] *nod* Yeah see if that helps
[18:46:10] anomie: Not much to go on here, but seen in prod...
[18:46:13] 2 LuaSandboxFunction::call(): unable to convert argument 1 to a lua value in /srv/mediawiki/php-1.26wmf11/extensions/Scribunto/engines/LuaSandbox/Engine.php on line 297
[18:46:13] 2 LuaSandboxFunction::call(): recursion detected in /srv/mediawiki/php-1.26wmf11/extensions/Scribunto/engines/LuaSandbox/Engine.php on line 297
[18:48:07] bd808: That actually worked
[18:48:08] Wow
[18:48:12] Thanks for the hint
[18:48:22] hoo: cool
[18:48:58] Unrelated, but also yucky and new...
[18:49:00] 6 fatal error: Argument 1 passed to OutputPage::setTitle() must be an instance of Title, null given in /srv/mediawiki/php-1.26wmf11/includes/OutputPage.php on line 1025
[18:49:00] 5 header() expects parameter 3 to be integer, string given in /srv/mediawiki/php-1.26wmf11/includes/WebResponse.php on line 37
[18:51:11] bd808: Want to +2 https://gerrit.wikimedia.org/r/#/c/221903/1 ?
[18:51:30] Maybe should mention the bug as a comment
[18:52:02] yeah that might be good
[18:52:09] hoo: cd phase3?
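Stepping back to the very top of the log: a minimal LocalSettings.php sketch of the ProfilerXhprof setup bd808 describes at 03:19. The array keys follow the MediaWiki 1.26-era profiling documentation, not a copy of what was actually deployed.

```php
// Sketch only, based on the 03:19 messages above; key names per the
// 1.26-era docs for $wgProfiler, which may differ from production.
$wgEnableProfileInfo = true;                    // expose profileinfo.php
$wgProfiler['class'] = 'ProfilerXhprof';        // collect data via the xhprof extension
$wgProfiler['output'] = [ 'ProfilerOutputDb' ]; // persist results to the profiling DB table
```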
[18:52:14] amended
[18:52:22] legoktm: Jeroen apparently likes that :P
[18:52:27] old school and stuff
[18:52:28] >.<
[18:55:04] anomie: around? i'm looking at https://gerrit.wikimedia.org/r/#/c/221740/ and would like to run some crazy ideas by you.
[18:56:55] anomie: Parser.php actually special cases strip markers for <pre> tags not to wrap them in <p></p>. we could trick it into thinking that our marker comes from <pre> and not <source> by adding 'marker' => … to the return array, where … is a magic value we can generate. how crazy is that?
[18:57:23] (the special casing is line 2629 of Parser.php, in that regex, and the recipe to generate the value is in line 4194)
[18:59:16] anomie: (we could also solve that in a number of slightly saner ways too, i guess)
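An illustration of the trick being floated: per the line-4194 recipe cited at 18:57, extension strip markers embed the tag name, so a "fake" `<pre>` marker would be built roughly as below. Property and constant names follow the 1.26-era Parser from memory, and the `'marker'` return key is the hypothetical part of the idea, not an existing hook feature.

```php
// Hypothetical sketch of MatmaRex's idea, not an actual patch: build a
// strip marker that embeds "pre" rather than "source", so the regex in
// Parser::doBlockLevels() (line 2629 in 1.26wmf11) treats it like a
// <pre> marker and skips the <p></p> wrapping.
$fakePreMarker = $parser->mUniqPrefix . '-pre-'
	. sprintf( '%08X', $parser->mMarkerIndex++ )
	. Parser::MARKER_SUFFIX;
// The tag hook would then return something along the lines of:
// return [ $highlightedHtml, 'marker' => $fakePreMarker ];
```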
[19:02:14] ostriches: No idea. Would be nice if we knew the caller, or had a reproduction.
[19:03:25] MatmaRex: That's pretty crazy.
[19:04:01] we're using an API that depends on extract(). we're pretty far in the crazy territory no matter what we do. :P
[19:07:39] MatmaRex: If we want to solve it, I'd suggest either always passing nowrap to pygments and adding the wrapping <div> ourself (if that makes sense, not sure if it does), or else use a regex like /^(<div[^>]*>)(.*)(<\/div>)$/s and paste together $m[1] . addNoWiki( $m[2] ) . $m[3] if it matched.
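A sketch of that second suggestion, assuming Pygments emits a single wrapping `<div>`. The `addNoWiki()` in anomie's message is shorthand, stood in for here by a callback, since whatever actually shields the body (e.g. a nowiki strip marker via the parser's StripState) depends on context; nothing below is verified against the real SyntaxHighlight patch.

```php
/**
 * If the highlighter output is a single <div>...</div>, reassemble it
 * with the body protected from further parsing. $wrapBody is a
 * placeholder for the real strip-marker call.
 */
function protectHighlightedHtml( $html, callable $wrapBody ) {
	if ( preg_match( '/^(<div[^>]*>)(.*)(<\/div>)$/s', $html, $m ) ) {
		return $m[1] . $wrapBody( $m[2] ) . $m[3];
	}
	return $html; // no outer <div>; leave the output untouched
}
```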
[19:09:36] anomie: passing nowrap to pygments disables stuff like line numbering and highlighting, so we can't do that unless we fix pygments
[19:09:55] I thought there might be something that would prevent it.
[19:10:56] anomie: hmm, i'd rather not try to parse pygments' output, eh. although we might end up having to do this for other reasons (we want to be able to add 'id', 'class', 'style', 'dir' attributes; there's a bug somewhere)
[19:27:39] legoktm: Re API and https://gerrit.wikimedia.org/r/#/c/207353/, I note you'll probably wind up refactoring the code you're adding in SpecialChangeContentModel to be more sanely usable from the API too. But if that's fine with you then ok.
[19:29:57] anomie: yeah, I don't mind doing that later
[19:33:11] legoktm: Should we care that http://localhost/wiki/Special:ChangeContentModel?pagetitle=TestChangeContentModel doesn't limit the "New content model" dropdown, or that if you hit Special:ChangeContentModel/Foo then change the title to Bar the dropdown won't match?
[19:34:26] (and will continue not to match no matter how many times you hit Submit)
[19:38:19] * anomie expands on that in code review
[19:38:38] anomie: yes for the first, probably not for the second since we don't directly link to a subpage parameter. There are other forms that will show wrong stuff if you pass in a parameter and then change it, not sure if any are fields though (Special:Block shows previous blocks, etc.)
[19:39:03] and I think we can fix it not matching by preferring the pagetitle parameter over $par
[19:40:16] legoktm: The difference being that Special:Block updates based on the supplied username if you manage to get the form re-displayed.
[19:40:42] And shows/hides the checkboxes if you change it to/from an IP
[19:49:06] anomie: right. and we don't have an API for ContentHandler::canBeUsedOn() :(
[19:57:01] anomie: updated, but I found a bug in OOUIHTMLForm :)
[19:57:06] er, :/*
[20:04:23] legoktm: You don't need to hop on the hangout, I just need to know when GUM is going into production, pretty please
[20:04:55] Keegan: I was just figuring that out with gre.g in -releng. Thursday :)
[20:05:05] Okay, thanks
[20:07:21] legoktm: The bug being that 'readonly' doesn't make the field readonly?
[20:07:54] anomie: yes, fix in https://gerrit.wikimedia.org/r/#/c/221973/1
[20:21:23] anomie: woo, thanks :D
[21:00:08] dapatrick: I'm on my way to a room... forgot I'm in the office today.
[21:00:34] k
[21:21:47] bd808: Any news on fatal logs? :(
[21:21:52] Just hit that again
[21:22:00] Seeing fatals and no idea why something is null
[21:22:23] hoo: needs an hhvm patch. I've thought about it but not looked into what it would take
[21:22:41] I bet upstream would take it if I figured it out though
[21:23:27] Can we push for that more?
[21:23:36] push who?
[21:23:39] me?
[21:23:55] Well, whoever would end up fixing that
[21:24:08] Think we have a few people that did HHVM upstream work
[21:24:11] including you and Tim
[21:24:44] I really didn't do any but yeah Tim and Ori and a couple others have
[21:27:09] hm?
[21:27:32] ori: backporting our zend magic to log stach traces on fatals
[21:27:41] *stack
[21:28:22] which we thought about last summer and then didn't do because Max thought that it was built into hhvm
[21:28:43] but it's not. what they have is some stuff to get an interpreter (c++) trace on fatal
[21:28:58] speaking of fatals, is anyone on ErrorException from line 264 of /srv/mediawiki/php-1.26wmf11/includes/exception/MWExceptionHandler.php: Fatal Error: Call to undefined method SearchResultSet::addInterwikiResults() ?
[21:29:40] hoo: if we need that, can you file an upstream bug?
[21:30:18] Yeah, can do that
[21:31:57] bd808: what about the user handler for fatals that aren't interpreter crashes?
[21:32:22] We have that but it doesn't get called before the stack is unwound
[21:32:31] Mh
[21:32:36] so it really gives no useful information
[21:32:39] It might be better if bd808 filed that task
[21:32:46] You have more clue here
[21:32:55] oh right, f583067ead6a4ef184c517b5aa23eb8d5bf47986
[21:33:31] well, *that* seems like a bona fide bug, rather than a missing feature (which isn't available in zend either)
[21:34:05] I guess it could be seen that way
[21:34:19] I think it has a better chance of being fixed upstream
[21:34:35] it doesn't work in zend either as I recall. That's why we had an extension that Tim wrote to do it
[21:34:51] zend doesn't support user handlers for fatals
[21:35:10] hhvm does, but doesn't give you the stack that triggered the fatal
[21:36:40] right. besides your stuff I tried it in https://gerrit.wikimedia.org/r/#/c/177724/ as well
[21:38:59] The stack that gives only has MWExceptionHandler::handleError() and {main}
[21:40:10] the reason I think it is a defect is that presumably the ability to bind handlers to fatal errors exists to allow PHP code to do some useful reporting about its internal state
[21:41:20] not unwinding the stack is probably not an option, but stashing a trace and returning it later if there is a call to debug_get_backtrace or whatever that function is called seems like the right idea
[21:42:12] oooo... RuntimeOption::CallUserHandlerOnFatals
[21:42:14] https://github.com/facebook/hhvm/blob/f9cc1f7102b8138e6b25dc8895368c02e2e5ee1d/hphp/runtime/base/execution-context.cpp#L826-L828
[21:42:21] yeah
[21:42:29] modules/mediawiki/manifests/hhvm.pp:31: call_user_handler_on_fatals => true,
[21:42:31] we set that
[21:42:45] and the handler *does* get called
[21:43:09] but the stack is empty?
[21:43:11] only too late to get anything useful out of debug_print_backtrace() and co.
[21:43:29] i think so. IIRC you were the one who realized it
[21:44:07] https://gerrit.wikimedia.org/r/#/c/206003/
[21:45:36] If ExecutionContext::recordLastError took a snapshot of the stack... maybe that would work
[21:46:11] * bd808 hunts around some more
[21:50:40] nope. All of this is about c++ exceptions and handled outside of anything that knows about the PHP scope AFAICT
[21:54:17] well there is a php scope but it is the post-send scope
[21:55:00] basically we'd want something to call createBacktrace() and stash it somewhere
[21:55:06] or pass it to the handler
[21:55:47] passing to the handler would be nicest really
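A minimal demonstration of the limitation discussed above, assuming HHVM with call_user_handler_on_fatals enabled as in the puppet line quoted at 21:42: the handler does fire on a fatal, but the frames that triggered it are already gone by then.

```php
// By the time a fatal reaches this handler under HHVM, the PHP stack
// has been unwound, so the trace shows only the handler and {main},
// which is exactly the behaviour bd808 and ori describe above.
set_error_handler( function ( $errno, $errstr, $errfile, $errline ) {
	error_log( "caught: $errstr at $errfile:$errline" );
	error_log( print_r( debug_backtrace(), true ) ); // effectively empty on fatals
	return false; // fall through to normal error handling
} );
```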
[22:01:54] bd808: (Trying to find my problem by logging it myself) What's the nicest way to log w/ backtrace now (I don't want to throw an exception)
[22:04:39] so yeah. call_user_handler_on_fatals is what raises a signal that can be caught with set_error_handler() but at the point that it is raised the context that failed is gone and can't be traced
[22:05:38] hoo: right, no there's no magic better than debug_backtrace()
[22:06:01] mh... might end up throwing an exception now
[22:06:21] If it slaps us in the face too hard, I can take that out before we hit the real Wikidata tomorrow
[22:06:53] There is some new magic in https://gerrit.wikimedia.org/r/#/c/213348/ but I can't seem to find reviewers who care
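In practice, per bd808's answer above, logging with a trace but without throwing comes down to debug_backtrace() or MediaWiki's wrappers around it. A sketch; the channel name is invented for the example.

```php
// One way to log a backtrace without throwing, per the exchange above.
// wfDebugLog() and wfGetAllCallers() are standard 1.26-era MediaWiki
// helpers; 'WikidataDebug' is an illustrative channel name.
wfDebugLog( 'WikidataDebug',
	'unexpected null value; callers: ' . wfGetAllCallers( 10 )
);
```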
[22:11:27] bd808: stupid random question: do we have an idea, order of magnitude, of how long it would take to generate a RA HHVM squashfs... thing ("binary", I guess)? If l10n cache complicates your answer, feel free to ignore it or me :)
[22:11:38] fatals sometimes are logged: https://phabricator.wikimedia.org/T103716#1397193
[22:13:45] greg-g: runtime? Time to make l10n cdbs (~5-8m) + time to generate hhbc cache (which was ~5 minutes per branch as I recall on the old test server)
[22:14:06] plus time to copy the static files I guess
[22:14:24] call it ~20 minutes?
[22:15:04] word, awesome, thank you sir
[22:15:14] if we have enough ram+cpu+disk on the build box we might be able to speed that up
[22:15:36] disk io is a big factor in l10n build time
[22:15:47] lots and lots of stat calls
[22:15:49] 20 is a good number to work from. For what I'm doing now, 5-30 minutes is all the same, really
[22:16:09] *nod* 2030 should be easily doable
[22:16:13] 20-30
[22:16:22] 5 is probably not
[22:16:27] :) right
[22:16:43] alright, done interrupting :)
[22:18:29] greg-g: I have a scap-build-hhbc shell script laying around somewhere just itching to be ported to python and added to the build chain
[22:19:26] throw that shit up in gerrit!
[22:19:55] I know, that means you have to respond-ish to feedback/do things
[22:21:47] bd808: there's a new tool for repoauth in the latest hhvm release too iirc
[22:22:03] greg-g: https://github.com/bd808/bug-67168
[22:22:37] ori: yeah? I remember seeing somewhere that they were going to work on something to make it easier
[22:25:32] hmm, right, two versions basically killing it...
[22:25:51] alright, we've gotta get our cadence down to daily
[22:29:50] greg-g: we might not see all the gains that we would with a single branch but it is still likely to be measurably faster than we are today
[22:31:05] where "it" == RA on both branches?
[22:31:59] yes
[22:31:59] https://static-bugzilla.wikimedia.org/show_bug.cgi?id=67168#c4
[22:32:52] you're one of those static-BZ users...
[22:33:15] I only had the bz number
[22:33:32] https://phabricator.wikimedia.org/T69168#717357
[22:33:39] "The current plan is to go for RepoAuthoritative ASAP"
[22:33:42] :)
[22:33:50] It totally was
[22:34:05] but then the last 10% took up ... still going actually
[22:34:17] stupid last 10%
[22:34:26] imagescalers are still php5
[22:35:31] what's delaying them?
[22:35:47] legoktm: Well, but not in any case that is relevant to me
[22:35:56] :(
[22:35:58] staffing? testing?
[22:36:23] there's an hhvm scaler in prod
[22:36:29] depooled in pybal
[22:37:14] I actually care more not about HHVM but about killing PHP 5.3 compatibility with fire
[22:37:16] https://phabricator.wikimedia.org/T91468
[22:37:27] OuKB: amen
[22:38:25] also, even hhvmified hosts have php5 installed and set as default php
[22:38:44] 5.5
[22:39:04] hhvm is slow for one off scripts
[22:39:12] like cron jobs
[22:39:23] legoktm: are extensions expected to check the content type before using it or is that too weird for explicit error handling to be worth it?
[22:39:35] not horribly so but noticeable
[22:40:20] tgr: I think they should yes.
[22:40:54] legoktm: should I run the script or are you already working on it?
[22:41:26] tgr: I'm not. The script needs adapting for UW's content type though
[22:41:50] ok, will do that then
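For tgr's question at 22:39, a guard along these lines is presumably what "checking the content type before using it" amounts to. The core constants and methods are real 1.26-era API; the surrounding flow is invented for the example.

```php
// Illustrative guard: verify a page's content model before treating it
// as wikitext, rather than assuming every page extract()s cleanly.
$content = WikiPage::factory( $title )->getContent();
if ( $content instanceof TextContent
	&& $content->getModel() === CONTENT_MODEL_WIKITEXT
) {
	$text = $content->getNativeData(); // raw wikitext, safe to parse
} else {
	// bail out, or handle the foreign content model explicitly
}
```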