[00:14:12] ^d: In case you hadn't seen this -- http://www.oscon.com/open-source-2015/public/cfp/360 (OSCON 2015 Call for Speakers) [00:16:16] ori: ^ you might be interested too. One of this year's tracks is "Performance" [00:21:20] <^d> July, right when I plan to be moving. [00:21:47] moving where? [00:22:24] doh. It also starts the day after wikimania [00:23:00] <^d> the_nobodies: Out of my apartment and into a better one :) [00:23:11] <^d> Preferably one with a less terrible mgmt company running things :p [00:23:11] haha [00:24:41] <^d> bd808: I'm skipping out on WM2015. [00:24:56] <^d> I've been to south of the border. Awful tourist trap. ;-) [00:25:15] ^d: Are you going to go to the hackathon in the spring then? [00:26:06] If I have to choose one or the other I think I'll pick the hackathon [00:26:46] <^d> Yeah [00:26:55] <^d> I'd take Lyon over Mexico City :) [00:27:09] Seems like a reasonable choice [00:27:43] I felt like I got more done in Zürich than in London [00:27:54] <^d> I'm thinking of taking a week in Lyon to just eat everything. [00:28:00] +1 for that [00:29:45] <^d> Aww, no manybubbles. [00:43:08] bd808: cool tip, thanks [00:44:53] 3Wikimedia-General-or-Unknown, MediaWiki-Core-Team: History of Utilisateur:Binabik@frwiki missing after global rename - https://phabricator.wikimedia.org/T76979#839655 (10RobLa-WMF) a:3Legoktm [00:48:41] James_F: got a few minutes free to talk about ve metrics? [00:51:06] 3MediaWiki-Core-Team, wikidata-query-service: Investigate Titan for WDQ - https://phabricator.wikimedia.org/T1095#839752 (10RobLa-WMF) [00:55:33] csteipp: you don't have a Phab project for security reviews, do you? [00:55:45] No [00:55:58] I've been meaning to [00:56:13] phile a phab task [00:57:18] <^d> bd808: I amended the xhprof change for vagrant to keep the "here's how you use xhprof gui shits" in the role file. [00:57:36] oh cool. 
I should merge that then [00:57:54] * bd808 is going an interview task code review right now [00:59:23] csteipp: I'm happy to file on your behalf....I've got the form window open now [00:59:33] Go for it! [01:00:04] "Security-reviews" presumably [01:01:23] (er...."Security-Reviews") [01:10:08] csteipp: this can be a normal public project, right? (i.e. fully visible, joinable by everyone, wiki page editable) [01:10:33] Yeah, fully visible. Uhm... "Security-Reviews" works [01:11:32] https://phabricator.wikimedia.org/T78221 [01:15:49] 3MediaWiki-Core-Team, wikidata-query-service: Look into WDQ tool API language - https://phabricator.wikimedia.org/T78203#839885 (10RobLa-WMF) [01:21:33] 3MediaWiki-Core-Team: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#839910 (10RobLa-WMF) I think a "MediaWiki Core" project distinct from the "MediaWiki Core Team" project is fine. It might be a little confusing, but survivable. [01:23:06] 3Librarization, MediaWiki-Core-Team: xhprof for MW - https://phabricator.wikimedia.org/T759#839918 (10bd808) [01:24:17] 3Librarization, MediaWiki-Core-Team: Get support for xhprof_frame_begin & xhprof_frame_end functions added to XHProf PECL package - https://phabricator.wikimedia.org/T1325#839920 (10bd808) No longer a blocker for {T759} as @aaron has implemented a pure PHP solution after finding bugs in the HHVM implementation. [01:25:09] "Server is shutdowning" [01:25:11] lol [01:47:22] ori: did you see the comments on https://gerrit.wikimedia.org/r/#/c/178768/ ? [02:05:50] AaronS: you mean Krinkle's comment about only needing to check e.which? He's right, so I'll just amend. [03:08:26] 3MediaWiki-Core-Team, MediaWiki-extensions-Flow: Use rc_source and drop RC_TYPE - https://phabricator.wikimedia.org/T74157#840171 (10EBernhardson) p:5Triage>3Volunteer? 
[03:10:38] 3MediaWiki-Core-Team, MediaWiki-extensions-Echo: Allow "article-linked" notifications for pages in a user defined list - https://phabricator.wikimedia.org/T66090#695988 (10EBernhardson) [04:27:38] TimStarling: do you mean to have https://gerrit.wikimedia.org/r/#/c/179064/ being dependent on a seemingly unrelated change? [04:28:47] I'll rebase it [06:56:03] 3Phabricator.org, MediaWiki-Core-Team: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#840576 (10Nemo_bis) Please stop removing phabricator-related projects from phabricator-related reports. Searching tickets on this site is hard enough. [08:41:26] 3Project-Creators, Phabricator.org, MediaWiki-Core-Team: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#840836 (10Qgil) For what is worth... "Phabricator.org" project is for upstream requests, and this is clearly not one. "Phabricator" project is for specifi... [08:57:47] 3Phabricator.org, MediaWiki-Core-Team, Project-Creators: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#840912 (10Nemo_bis) > "Phabricator.org" project is for upstream requests, and this is clearly not one. Debatable, but that debate is happening on other rep... 
[09:57:06] 3Wikidata, MediaWiki-Core-Team, wikidata-query-service: Look into WDQ tool API language - https://phabricator.wikimedia.org/T78203#841071 (10Lydia_Pintscher) [09:58:37] 3Wikidata, MediaWiki-Core-Team, wikidata-query-service: Investigate Titan for WDQ - https://phabricator.wikimedia.org/T1095#841074 (10Lydia_Pintscher) [11:55:53] 3Phabricator.org, Project-Creators, MediaWiki-Core-Team: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#841347 (10Qgil) p:5Triage>3Normal [12:54:43] 3Phabricator, Project-Creators, MediaWiki-Core-Team: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#841429 (10Aklapper) [12:55:45] 3Phabricator, Project-Creators, MediaWiki-Core-Team: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#823264 (10Aklapper) [Project organization needs to primarily work for the project owners and Quim explained above the difference. Feel free to search for items... [14:48:52] Hmm. Our phab board is timing out. [14:56:18] <_joe_> bd808|BUFFER: whenever you're here, ping [15:03:53] anomie: https://phabricator.wikimedia.org/T78208 [15:21:42] _joe_: I could use an hhvm upstream patch on our version to fix a failing test. Not sure what is the process to push a new hhvm version though [15:22:15] <_joe_> hashar: say that again? [15:22:31] <_joe_> you need an upstream patch to fix a production problem? [15:22:35] _joe_: rah sorry. 
The mediawiki core test suite has a single failing test which is a bug in hhvm [15:22:48] _joe_: our bug is https://phabricator.wikimedia.org/T75531 and their patch is https://github.com/facebook/hhvm/commit/324701c9fd31beb4f070f1b7ef78b115fbdfec34 [15:22:53] <_joe_> because if the test isn't showing issues in prod I couldn't care less :) [15:23:03] I am wondering how to get that commit added to our wmf Debian package so it fixes the failing test [15:23:23] <_joe_> hashar: I will take care of that [15:23:39] <_joe_> but [15:23:42] <_joe_> I ask again [15:24:01] <_joe_> is this just a test failure or do we have evidence of production failures? [15:24:08] prod impact is: Jenkins will -1 every single patch proposed to any mediawiki/* git repositories [15:24:26] <_joe_> oh a stability boost! I like it! [15:24:27] <_joe_> :P [15:24:31] :-D [15:24:57] <_joe_> hashar: my point is, jokes aside, if this is a test failure and not causing a prod impact... well... [15:25:06] <_joe_> ... why do we perform that test? [15:25:32] I guess prod is impacted whenever someone uses the API wddx format and the payload has some ampersand in it [15:25:37] so a corner case issue I guess [15:25:51] <_joe_> yeah, it would be low-priority for me tbh [15:26:04] <_joe_> can't we temporarily disable that test? [15:26:19] yeah we have that hide_mess_under_carpet() decorator [15:26:32] <_joe_> low-priority meaning "sometime this week or the next" [15:27:40] how often do you refresh the hhvm package? I guess you are piling up cherry-picks in Gerrit and updating our hhvm package once per month or so? 
[15:27:58] <_joe_> hashar: on a "we need that" basis [15:29:41] _joe_: well I guess sometime early next week will be nice [15:30:07] ideally, I would love us to have a different hhvm version on CI [15:30:10] and beta [15:30:23] so we could use CI / Beta as a staging area before pushing the new hhvm package to prod [15:30:34] <_joe_> hashar: we already do that on beta [15:30:58] <_joe_> hashar: while we're at it, what do you need, re: hhvm and CI? [15:32:13] _joe_: proper settings for HHVM :) Found out they have some documentation and we have a nice puppet class to easily craft configs [15:32:17] so I have come up with https://gerrit.wikimedia.org/r/#/c/178806/7/modules/contint/manifests/hhvm.pp [15:32:22] which seems to be working [15:32:38] <_joe_> ok, I'll take a look [15:33:03] I will need to vary Repo.Local.Path and Repo.Central.Path for each build though [15:33:24] <_joe_> meh, ok our hhvm module sucks badly [15:33:51] <_joe_> https://gerrit.wikimedia.org/r/#/c/179108/ this should help [15:34:00] Too bad we still support PostgreSQL 8.1. Fixing T78276 would be easier if we bumped that up to 9.1. [15:35:22] <_joe_> hashar: can I ask what you need to do? [15:35:53] I am thinking of a good way to inject in Jenkins builds: [15:35:54] HHVM_REPO_LOCAL_PATH="$WORKSPACE/cli.hhbc.local.sq3" [15:35:54] HHVM_REPO_CENTRAL_PATH="$WORKSPACE/cli.hhbc.central.sq3" [15:36:31] and then add those jobs to be triggered by Gerrit [15:37:30] <_joe_> hashar: you can surely do that on the command line [15:37:48] <_joe_> hashar: how do you execute hhvm? from the cli? fastcgi? [15:38:11] both [15:38:27] some jobs are running the PHPUnit test suite by invoking `php` [15:38:36] so I used Debian alternatives to point php to hhvm [15:38:43] gonna need to inject some env variable globally [15:38:53] <_joe_> which is already done by the package and the modules [15:39:08] then we are going to use fcgi. 
We have jobs that install mediawiki and make it available behind an Apache vhost; we then point a browser to it to run some qunit tests [15:39:46] for fcgi / apache I haven't looked at it yet [15:39:47] <_joe_> hashar: how does fastcgi work? [15:39:49] <_joe_> (also, you don't really need to change the repo path if you don't need concurrency) [15:40:07] we [15:40:15] <_joe_> ok so for now let's stick to the cli part [15:42:01] <_joe_> hashar: I don't particularly like your PS to be honest [15:42:15] <_joe_> can I elaborate on that? [15:42:22] sure [15:42:51] <_joe_> hashar: the repo path change can be done via the command line maybe [15:43:27] unlikely, the Jenkins jobs are all hardcoded with things like: php somecommand.php [15:43:49] I thought about using a php shell script wrapper that would let us switch between Zend and HHVM [15:43:52] <_joe_> mmmh then I have no solution for you [15:44:05] <_joe_> do we still need zend? [15:44:27] yeah since MediaWiki core still supports PHP Zend 5.3.x [15:44:49] <_joe_> hashar: we need puppet-rspec for unit tests btw [15:44:59] <_joe_> or our tests will keep failing on puppet [15:45:07] yeah I wrote a patch for rspec-puppet during some night [15:45:29] https://gerrit.wikimedia.org/r/#/c/178810/ tests os_version() from the wmf lib module [15:46:25] <_joe_> hashar: I added tests to wmflib btw [15:46:30] <_joe_> I'm about to merge those [15:46:36] <_joe_> they work on my computer [15:49:06] _joe_: hey. what's up? [15:49:35] <_joe_> bd808: so, what do you use the mediawiki-installation dsh group for? [15:49:41] <_joe_> does scap still use dsh at all? [15:50:06] scap does not use dsh directly, but it uses that group file and one other to know which hosts to talk to [15:50:42] <_joe_> ok [15:50:44] scap-proxies is the other group we use [15:50:47] <_joe_> which other? 
[15:50:49] <_joe_> oh ok [15:50:55] <_joe_> and I kinda fixed that [15:51:04] back [15:51:23] <_joe_> so bd808 - https://gerrit.wikimedia.org/r/#/c/179121/ [15:51:46] <_joe_> this should fix the long-standing problem of the dsh group being out-of-sync [15:52:18] will that be easy to update when you are taking a host down for maintenance? [15:52:29] <_joe_> mh, no [15:52:43] having deployers see a list of failed hosts on every sync kind of sucks [15:52:44] <_joe_> do you need that? [15:53:06] it leads to error fatigue and ignoring real problems [15:53:28] I'd kind of like it to be tied to pybal really [15:53:34] <_joe_> well, we could clean the puppet facts in that case [15:53:40] <_joe_> mh, I think that's wrong [15:54:29] <_joe_> I can have a host out of pybal for some reason [15:54:30] we sync to things that aren't in pybal either so I guess that really wouldn't work [15:54:38] <_joe_> but still want it to be synced [15:55:16] <_joe_> please comment on the PS anyway [15:55:20] but the problem of known broken/down hosts in the list remains. If we could query that from somewhere we could subtract them from the master list [15:55:39] *nod* will do [15:55:43] <_joe_> (we should not warn if the host is down in icinga, for instance. Something we could easily test for in scap) [15:56:33] <_joe_> bd808: ok so, I'll take a look at scap and try to come up with a patch :) [16:01:04] _joe_: I rambled on the PS. Hopefully you can find some inspiration in there. And thanks for caring about this. [16:01:18] <_joe_> eheh thanks [16:04:08] I am doomed [16:07:11] guess who does not honor env variables? hhvm! [16:07:14] $ HHVM_REPO_CENTRAL_PATH="/tmp/hashar.central.sq3" hhvm <( echo ' hashar_: https://github.com/facebook/hhvm/blob/a43dd0c8ac9513a5ea45ea2010dccbeed5d75a5b/hphp/doc/repo#L66-L70 [16:09:16] ini overrides env [16:09:20] bd808: yeah I have been reading that yesterday [16:09:32] you'll need to pass it on the cli I guess. 
:/ [16:09:42] I would have expected the order to be command line parameter > env variable > ini file > internal defaults [16:09:48] -vRepo.Central.Path=foo [16:10:04] yeah but then all the jobs are hardcoded with commands such as: php command.php [16:10:14] yeah... not fun [16:10:50] which comes back to the idea of some yucky /usr/bin/php wrapper script [16:14:47] 3MediaWiki-Core-Team, MediaWiki-API: allpages filterlanglinks DBQueryError - https://phabricator.wikimedia.org/T78276#841942 (10Anomie) a:3Anomie [16:17:59] bd808: if I remove hhvm.repo.central.path from the php.ini hhvm fallback to the env variable \O/ [16:18:11] sweet [16:18:37] That may be a pain to do with puppet... [16:18:43] but can be handled somehow [16:24:57] <_joe_> it should be [16:27:26] bd808: well just need to pass an empty string apparently [16:27:52] we already have hhvm.pid_file = [16:27:54] \O/ [16:28:58] <_joe_> hashar_: not that easy I guess :) [16:37:29] ^d: updated blog thing. it now has an end and a transition into the second part. ready to have someone poke at it I think. probably [16:37:41] gonna go get some lunch [16:37:56] <^d> okie dokie, I'll have a read. [16:38:28] https://phabricator.wikimedia.org/T75462 (users unable to login, 503) I grepped hhvm.log and fatal.log but don't see any mention of "submitlogin". Is there another place I should be looking? [16:40:07] <_joe_> legoktm: I guess we should try to replay their requests on the backends directly [16:40:24] <_joe_> legoktm: watch out for any setting of headers that may be wrong [16:40:28] we don't know their passwords though. 
[16:40:57] <_joe_> we should try with an arbitrary one to see if some error is returned [16:41:34] one of the people who is affected said that if they type in a wrong password they get the normal wrong password message [16:42:22] <_joe_> ok so it's probably something in the response that is wrong for varnish [16:43:39] the response should be a redirect [16:43:48] <^d> I just tried to replicate: log out of phab & sul, press login on phab, do login dance for mw.org, authorize and get back to phab. [16:43:51] <^d> no errors for me. [16:43:53] <_joe_> maybe with a wrong content-length? [16:45:53] it should just be calling $this->getOutput()->redirect() as these are non-SUL users [16:46:09] <_joe_> legoktm: I do see a bunch of 503s on submitlogin actually [16:46:15] oh, where? [16:46:27] <_joe_> in the varnish logs [16:46:32] <_joe_> but they don't say much [16:46:40] <_joe_> the response was returned though [16:47:01] <_joe_> so the server returned a response that varnish found to contain errors [16:47:18] <_joe_> (99% of the time this happens because of a bad content-length) [16:47:18] does it say what errors? :) [16:47:25] <_joe_> nope [16:47:33] <_joe_> we should catch people in the act [16:48:13] <_joe_> if only one of the affected users accepted to give us his password, that would be much easier [16:48:27] <_joe_> or if we can reset it to a value we know with his/her approval [16:48:43] <_joe_> and let them change it back afterwards [16:50:24] <_joe_> legoktm: where is the relevant code? [16:51:15] login code is an absolute mess, most of it is in includes/specials/SpecialUserlogin.php, redirection stuff is in includes/OutputPage.php [16:51:45] and CentralAuth hooks heavily into the login flow (wherever you see $wgAuth), especially for non-CA accounts [16:53:04] <^d> That code hasn't really changed much in years. It's only really been poked at around the edges. [16:53:07] <^d> Band-aids here and there. [16:53:47] <_joe_> legoktm: does it set headers from PHP? 
[16:53:57] * anomie worries slightly about gerrit:149293, seeing as it adds 7 classes ending in "TitleFactory.php". [16:54:24] _joe_: yes [16:54:41] <_joe_> legoktm: mmmh ok [16:55:33] _joe_: https://github.com/wikimedia/mediawiki/blob/master/includes/OutputPage.php#L2190 and that's just a wrapper around `header()` [16:56:52] ^d: a lot of the CA code has changed recently (SUL2, auto-globalizing on login) [16:56:58] <^d> That yeah [16:57:04] <^d> I'm talking about the core stuff. [16:57:39] and because this is only happening to non-attached accounts, I think it might be CA-related [16:57:47] <_joe_> legoktm: nah sorry no clear outliers there - we should really ask one user to allow us to change his password [16:58:19] <_joe_> legoktm: is it happening for just a few users? if so, I'd look at the database as a starting point maybe [16:59:25] I think there have been 5 or 6 reported cases so far [17:01:58] Looking in the database, they have pbkdf passwords meaning they've logged in since that was rolled out, but they're all unattached [17:07:25] https://phabricator.wikimedia.org/T75462#842075 [17:09:54] <^d> legoktm: Good theory. [17:12:49] <_joe_> legoktm: I'm getting a pause, will be back later for a meeting [17:12:59] <_joe_> if you need something from me, just lemme know [17:13:17] ok, will do :) [17:14:29] csteipp: we were just talking about the bug in here. According to https://phabricator.wikimedia.org/T75462#781885 Paju has had the issue for a few weeks now [17:48:24] legoktm: So Paju@fiwiki is the clear winner of the automerge, so maybe a failure in the migration attempt? That wouldn't explain why using firefox is a problem though. [17:50:11] csteipp: I looked through the logs, and there's a "Safe auto-migration for '$user' failed" line for every timestamp on the bug [17:50:29] so it at least got that far [17:53:26] And you haven't been able to find the actual error message in the hhvm logs? 
[17:54:09] I grepped for "submitlogin" in hhvm.log and fatal.log with no hits [17:54:25] _joe_ can see the 503s in the varnish logs, but with no useful error messages [17:54:37] [08:47:01] <_joe_> so the server returned a response, that varnish found to contain errors [17:54:37] [08:47:18] does it say what errors? :) [17:54:37] [08:47:18] <_joe_> (99% of the times this happens because of a bad content-length) [17:55:23] <_joe_> legoktm: every pass/503 with php=hhvm is a response from hhvm that has been marked as invalid by varnish, typically [17:55:23] <^d> manybubbles: e-mailed about blog. cc'd you. [17:57:31] 3MediaWiki-Core-Team, CirrusSearch: Investigate if anyone is still using lsearchd - https://phabricator.wikimedia.org/T77921#842237 (10Chad) Looks like a fair bit of API traffic from a fairly small group of consumers are using srbackend=LuceneSearch on the API too. [17:58:59] <^d> manybubbles: Also, you said we had a task for shutting down lsearchd or was I imagining? I know T77921 for investigation but I couldn't find an actual "shutdown". RT? [18:07:32] ori: what version of hhvm are you running? [18:07:35] _joe_: ^ [18:07:57] <_joe_> swtaarrs: some ultrapatched 3.3.0 version [18:08:02] ok [18:08:06] 3.3.1 [18:08:06] https://en.wikipedia.org/wiki/Special:Version says 3.3.1 o.O [18:08:17] <_joe_> swtaarrs: 3.3.1 sorry [18:08:18] ooh right I forgot about that page [18:08:25] you should probably avoid 3.4 for now [18:08:30] there are some memory issues we're trying to debug [18:08:38] I don't know if/when you were planning on upgrading [18:08:51] <_joe_> we won't move soon, 3.3.x is LTS right? [18:09:08] yeah [18:32:08] MaxSem: Any reason not to turn on $wgExtractsExtendOpenSearchXml on WMF wikis? [18:32:39] potential performance regression due to parser cache misses? 
even though it tries to mitigate this for lead-only extracts [18:33:22] so maybe not lethal [18:33:34] anomie, we could experiment with it [18:33:52] We killed the weird hacky code that was providing descriptions previously, so right now we're not getting any extracts in API action=opensearch. [18:34:08] yup, I saw the bug today [18:34:26] There's a bug? [18:34:42] that opensearch misses extracts [18:35:09] ah, ML post, bot bug:P [18:35:28] Want to make a bug for it? [18:35:50] mmm, let's just flip the switch and see what happens? [18:36:30] someone did file a bug for it I think [18:36:50] https://phabricator.wikimedia.org/T78313 [18:38:12] Huh. Why did my watching of MediaWiki-API not email me about that? [18:38:24] I got an email when the project was added [18:46:13] <^d> bd808: I think https://gerrit.wikimedia.org/r/#/c/176688/ makes more sense now that it has the followup showing how you'd use it as a dev wanting the role. [18:51:23] ^d: I haven't tested it yet, but I like the idea [18:51:55] Did we talk ori out of hating it being shared with the host system? [18:55:45] <^d> Maybe it should only be installed on the shared drive if you have the role. [18:55:50] <^d> Otherwise put it in /srv 
[19:42:49] 3MediaWiki-Core-Team, Librarization: Rename composer package from cdb/cdb to wikimedia/cdb - https://phabricator.wikimedia.org/T77934#842500 (10Legoktm) operations/mediawiki-config: https://gerrit.wikimedia.org/r/#/c/178340/ mw.org docs: https://www.mediawiki.org/w/index.php?title=CDB&diff=1307758&oldid=1267819... [19:48:33] _joe_: afaik [19:49:01] <_joe_> AaronSchulz: I'll take a look [19:49:31] <^d> I wonder if MW will behave with strict_all_tables enabled. [19:49:50] <_joe_> AaronSchulz: it is, great! Thanks [20:22:43] 3Wikimedia-General-or-Unknown, MediaWiki-Core-Team: History of Utilisateur:Binabik@frwiki missing after global rename - https://phabricator.wikimedia.org/T76979#842638 (10Binabik) I confirm that all histories are back :) Thank you! [20:23:13] 3Wikimedia-General-or-Unknown, MediaWiki-Core-Team: History of Utilisateur:Binabik@frwiki missing after global rename - https://phabricator.wikimedia.org/T76979#842639 (10Legoktm) 5Open>3Resolved [20:26:57] 3MediaWiki-Core-Team, MediaWiki-extensions-CentralAuth: No valid null revision produced during global rename - https://phabricator.wikimedia.org/T76975#842651 (10Legoktm) We currently batch 25 page moves in one LocalPageMoveJob. This means that if the job fails, when it gets retried, it will move all the success... [21:08:59] 3MediaWiki-Core-Team, Librarization: Deploy Monolog logging configuration for WMF production - https://phabricator.wikimedia.org/T76759#842730 (10bd808) The one part of the patch that I had not tested in beta was the use of a NullHandler to ignore events. And it turns out that this is the part that broke in prod... [22:03:32] <^demon|lunch> robla: T78343 filed. [22:15:23] 3MediaWiki-Core-Team: Replace MediaWiki:Common.css with MediaWiki:Common.less - https://phabricator.wikimedia.org/T78345#842936 (10Jdlrobson) 3NEW [22:17:29] where do you all think bugs like ^ T78385 (Replace MediaWiki:Common.css with MediaWiki:Common.less) should be filed? 
[22:18:00] <^demon|lunch> Not our team, some project under MediaWiki-* [22:18:13] <^demon|lunch> Our team is unfortunately named. We're getting lots of random mediawiki bugs assigned to us [22:18:16] -ResourceLoader [22:18:24] yeah RL [22:19:06] <^demon|lunch> I wonder if we could drop the "MW" from core team. [22:19:53] duped anyways. [22:19:55] It's actually hard to get the auto completer to show MediaWiki-General-or-Unknown [22:20:08] <^demon|lunch> Those tags are awful and need to be fixed. [22:20:15] agreed [22:20:25] <^demon|lunch> It should be [MediaWiki] + [Database] not [MediaWiki-Database] [22:20:35] <^demon|lunch> So general/unknown just is [MediaWiki] [22:20:38] <^demon|lunch> With no subgroup [22:21:35] bd808: hey :) I have yet to follow up on your email to describe the composer entry points :-/ [22:21:58] hashar: well get on that! :) [22:22:21] * bd808 has so many half done things that he is kind of lost [22:22:26] bd808: as for our discussion sometime earlier about splitting mediawiki/core unit tests to not depend on mw being installed, I got a basic patch for it. Was wondering whether it could be lead by the mediawiki-core team [22:22:54] * hashar throws his half done stuff with bryan half done stuff, hoping that will make whole stuff [22:23:03] We could probably start pulling tests over slowly [22:23:15] or not so slowly maybe too [22:23:36] basically we want to find all the tests that don't need a full wiki right? Things that are actually unit tests? [22:23:51] yeah [22:24:04] Which means looking at all the tests and deciding manually probably :( [22:24:05] having an empty LocalSettings.php is usually a good way to find them [22:24:06] here's a request (I think) for a "MediaWIki-Core" project separate from our team project: https://phabricator.wikimedia.org/T76942 [22:24:07] all of includes/libs/ ? [22:24:14] find . 
--realunittest unittests/ [22:24:23] legoktm: yes [22:24:33] and other places hopefully [22:24:34] that's where I would start at least :) [22:24:36] robla: https://phabricator.wikimedia.org/tag/mediawiki-general-or-unknown/ ? :] [22:25:08] all tests that require a database should be tagged with @group Database [22:25:58] legoktm: yeah that was enforced by Jenkins at one point. We had a job with no database setup [22:26:03] 3MediaWiki-General-or-Unknown, Project-Creators, Phabricator, MediaWiki-Core-Team: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#842969 (10bd808) Can we just rename #mediawiki-general-or-unknown to #MediaWiki? [22:26:15] legoktm: but nowadays there is a single job running everything. So maybe @database can be dropped. Unless some folks still rely on it [22:26:58] I think the test runner still checks for the tag and clones the db if it's present [22:28:17] 3Phabricator, MediaWiki-Core-Team, Project-Creators, MediaWiki-General-or-Unknown: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#842982 (10Chad) Here's what I've been saying to do (in a couple of places on IRC already). The current MediaWiki-* tags are awful.... [22:28:41] 3Phabricator, MediaWiki-Core-Team, Project-Creators, MediaWiki-General-or-Unknown: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#842986 (10hashar) Once upon a time in Bugzilla we had a catch all component for Mediawiki: MediaWiki -> General-or-Unknown. #Med... [22:29:31] legoktm: yeah we need to drop that eventually. Cloning the DB is slowing down the tests :-/ [22:30:05] 3Phabricator, MediaWiki-Core-Team, Project-Creators, MediaWiki-General-or-Unknown: Allow to search tasks about MediaWiki core and core only - https://phabricator.wikimedia.org/T76942#842988 (10Chad) >>! In T76942#842969, @bd808 wrote: > Can we just rename #mediawiki-general-or-unknown to #MediaWiki? 
In my plan... [22:31:36] <^demon|lunch> I started on a patch to convert some tests to not subclass MWTestCase :p [22:31:39] <^demon|lunch> Didn't finish. [22:34:51] Can I get some eyes on https://gerrit.wikimedia.org/r/#/c/179217/ ? Config changes (try 2) to use monolog in prod [22:35:49] d'oh. it's "recheck" not "retest" right? [22:37:31] * hashar stares [22:37:41] yeah "recheck" [22:37:52] bd808: that is on beta already right? [22:37:58] yeah [22:38:13] can you add some tests to this mediawiki-config change? [22:38:20] maybe? [22:38:24] :-D [22:38:32] how does one test mw-config? [22:38:43] I looked at testing mediawiki-config once upon a time. But that is tooooo coupled with mediawiki/core [22:40:01] and extensions [22:40:39] config is a quagmire. very easy to mess something up and as far as I know no real way to verify most of it [22:41:05] legoktm: I had a look at your batch load interface yesterday, it's looking good [22:41:16] did you do a benchmark of it? [22:41:23] bd808: beta gives you some indication though [22:42:25] hashar: yeah. One of the two problems with the patch I tried this morning was something I had not tested explicitly in beta. That was totally my fault. [22:42:53] The other one (global not being seen as defined) is perplexing [22:43:37] bd808: well that is what beta is for [22:43:39] hashar: https://integration.wikimedia.org/zuul/ -- the test queue looks stuck [22:43:50] bd808: yeah on it [22:44:26] some things are moving apparently... [22:44:31] yeah it is back [22:44:33] zuul is still magic to me [22:44:47] when I deploy jobs there is some race condition which causes an issue in Zuul's internal Gearman daemon [22:45:03] upstream is hit by it as well but not as much as us and nobody has found out the reason :-( [22:45:19] I don't know why they spawn gearman that way. It seems like it should be separate [22:46:27] James E. 
Blair (the Zuul guru) wrote a python async implementation of gearman [22:46:51] when Zuul starts, it forks early on and spawns the gearman server. Potentially we could replace it with a different gearman server [22:46:54] I think it is supported [22:51:12] TimStarling: I did some basic stuff, and have been testing it using 300 extensions with 30 hooks each, and cache hits on hhvm (for a normal page view) are about 1% of the total page load time. Cache misses are pretty slow though, around 20% of page load time. [22:52:06] how many milliseconds? [22:53:35] 371 for a miss, 20 for a hit [22:53:55] And this is with the mtime cache invalidation stuff commented out [22:54:02] right [22:54:07] <^demon|lunch> bd808: You sync to prod. If it breaks, you fix it. [22:54:12] <^demon|lunch> So you be extra careful :) [22:54:20] ^demon|lunch: heh. yeah [22:54:56] so around 1500-2000 for the rest of the page view [22:55:46] ^demon|lunch: Poor marktraceur synced the borked one this morning. As soon as he did hoo and I started screaming for the revert. :/ [22:56:15] 20 would be pretty slow on bare metal, but maybe it is not so bad for beta [22:57:13] I'm especially interested in startup time from the perspective of API performance [22:59:55] I wasn't able to figure out how to get the profiler to display output on requests that weren't of format=*fm and didn't spend much time looking into it. [23:00:49] maybe something like http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=&format=json&continue= [23:01:07] i.e. action=query with an empty title list, should return nothing [23:02:03] as for the profiler, have you seen my StartProfiler.php hack? [23:02:16] no, link? 
:) [23:02:31] I just have the basic xhprof config in mine [23:03:26] http://paste.tstarling.com/p/fWwEgF.html [23:04:46] that REMOTE_ADDR condition is a bit broken, you should probably leave that out [23:05:25] but basically it is a complete xhprof implementation, independent of MediaWiki [23:06:49] thanks [23:26:42] <^demon|lunch> Also can do similar in Vagrant now, xhprof is unconditionally installed. [23:26:44] <^demon|lunch> https://gerrit.wikimedia.org/r/#/c/177479/5/puppet/modules/role/manifests/xhprofgui.pp [23:26:59] <^demon|lunch> (startprofiler bit is if you also use the role to install the gui bits) [23:30:33] ASCII art logos in file comments [23:32:29] good or bad? [23:34:06] good [23:34:52] http://paste.tstarling.com/p/xGufDm.html [23:36:20] AaronSchulz: https://graphite.wikimedia.org/render/?width=586&height=308&_salt=1418340966.575&target=mw.performance.save.HHVM.median&from=-3days [23:36:27] i think this is edit stashing [23:37:10] bd808|depressed: ? [23:37:53] monolog is not working right (or rather my config for it in prod) [23:42:11] at least the wiki is not down :-] [23:43:09] I am sure you will eventually land monolog in prod. And that day we will all rejoice [23:43:51] ori: too bad there isn't a non-bot figure [23:44:18] * hashar waves [23:44:27] AaronSchulz: it is a non-bot figure [23:44:45] so it's JS based I take it [23:45:03] yep [23:45:24] performance.timing.responseStart - performance.timing.navigationStart [23:45:33] for the post-edit landing page [23:46:15] navigationStart is the form submission, responseStart is the first byte from the server [23:48:49] ^demon|lunch: https://gerrit.wikimedia.org/r/#/c/178747/ [23:50:22] scary or just log spam? "Warning: Parser cannot be freed while it is parsing. in /srv/mediawiki/php-1.25wmf11/includes/media/XMP.php on line 156" [23:50:47] the hhvm log is so full of spam... 
[23:53:14] http://graphite.wikimedia.org/render/?width=1110&height=570&_salt=1418341982.047&target=MediaWiki.WikiPage.doEditContent.tavg&from=-7days [23:53:17] ori: grrr, no data [23:53:43] bd808, http://www.uncaught-exception.com/php/warning-parser-cannot-be-freed-while-it-is-parsing/ [23:53:52] not saying it clarifies anything:P [23:53:53] AaronSchulz: http://graphite.wikimedia.org/render/?width=1110&height=570&_salt=1418341982.047&target=HHVM.WikiPage.doEditContent.tavg&from=-7days [23:53:59] s/MediaWiki/HHVM [23:54:30] i wanted to do side-by-side comparisons for a while, so all the metrics from the HHVM app servers replace 'MediaWiki' with 'HHVM'. [23:54:37] might mean a resource leak. or not:P [23:55:42] ori: so it went from ~500 -> ~300 [23:56:20] * AaronSchulz wonders how much the preview patch will help, probably just a bit [23:56:20] AaronSchulz: yes. That's not bad. Is preview stashing in WMF11/12? [23:56:27] 12 [23:56:38] shall we cherry-pick? [23:57:24] ori: 17431af1544ca4d5869656296576b3018b57d8f6 I suppose if you want [23:59:19] http://graphite.wikimedia.org/render/?width=1110&height=570&_salt=1418342299.082&target=MediaWiki.WikiPage.doEditContent.tavg&from=00%3A00_20141201&until=23%3A59_20141210 [23:59:27] ori: the zend numbers looked better in general [23:59:32] is that some glitch or what?
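The MediaWiki-vs-HHVM prefix swap ori describes lends itself to scripting. A sketch, assuming only the graphite render API's standard target/from/format parameters (the host and metric path are taken from the URLs in the chat):

```shell
# Build a graphite render URL for one metric prefix; format=csv makes the
# series easy to diff offline.
render_url() {
    printf 'http://graphite.wikimedia.org/render/?target=%s.WikiPage.doEditContent.tavg&from=-7days&format=csv' "$1"
}

# Fetch both prefixes for a side-by-side comparison, e.g.:
# for prefix in MediaWiki HHVM; do
#     curl -s "$(render_url "$prefix")" -o "doEditContent-$prefix.csv"
# done
```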