[00:23:43] 3MediaWiki-Core-Team: Create a local jobqueue class that enqueues jobs to final location - https://phabricator.wikimedia.org/T89308#1033081 (10aaron) 3NEW a:3aaron
[00:25:04] TimStarling: I suppose the DC caching stuff can be looked at now
[00:25:16] * AaronSchulz tried to attack some closely related problems while at it
[00:40:13] TimStarling, btw re hhvm memory leaks - I discovered that a thread that runs out of physical memory just hangs forever, not releasing anything :P
[00:42:21] https://github.com/facebook/hhvm/issues/4821
[00:44:10] 3MediaWiki-Core-Team, MobileFrontend, Mobile-Web: "Last modified..." text differs between desktop and mobile - https://phabricator.wikimedia.org/T89219#1033164 (10kaldari)
[00:55:23] legoktm: doh. good catch on the monolog handler issue. I haven't been running locally with the legacy filter bits turned on. I'll see if there is an easy way to fix it.
[00:58:47] hmm... this is going to be tricky
[00:59:29] I guess I'll just have to move that logic into handle()
[01:01:23] I'm not super excited that they made a breaking change without bumping the version number
[01:04:04] The only part I can't fake is the 'private' context flag actually
[01:04:42] Er nope. I need the channel too
[01:04:44] poo
[01:05:19] yeah I can just move it into the handle() method. It's a bit less efficient but not too horrible
[01:09:21] 3MediaWiki-Vendor, MediaWiki-Core-Team, Librarization: Fix MWLoggerMonologHandler::isHandling() to work with Monolog 1.12.0 - https://phabricator.wikimedia.org/T89313#1033231 (10bd808) 3NEW a:3bd808
[01:10:12] robla: can I change my mind about working on Monday still? I just found out that my wife has the day off too.
[01:38:51] bd808: no prob
[01:38:57] sweet
[02:16:00] MaxSem: interesting
[02:57:42] TimStarling: how interruptible are you at the moment? I could really use your help with something. Roan and I have devised a means of testing VE changes for impact on VE initialization CPU time, but we must be failing to control for something, because we don't get reproducible results.
[02:58:31] moderately interruptible, just chatting with gwicke in another channel
[02:59:26] OK, I'll dump some notes here on the channel and then go offline for an hour or two; if you're around and not busy when I'm back I'll ping you.
[03:05:28] So, osmium has the standard app server configuration, with the exception of $wgResourceLoaderStorageEnabled being set to false, to reduce the possibility that localStorage cache serialization/unserialization costs affect our measurements
[03:06:35] additionally, it has xvfb running on display :99
[03:08:42] we have a Python tool, vbench, for running benchmarks. We invoke it via a shell script wrapper, /usr/local/bin/vb, which launches Chrome in the background, passing it a bunch of command-line arguments that help with testing
[03:09:40] the significant ones are --host-rules="MAP * localhost, EXCLUDE upload.wikimedia.org", which maps all requests to localhost (except upload.wikimedia.org), and --remote-debugging-port=9222, which is what allows vbench to drive.
[03:12:20] vbench takes the URL of a VE-editable MediaWiki page as a required argument. It repeatedly navigates to the page, waits for it to load, and then injects a bit of JavaScript that (1) starts the CPU profiler, (2) activates visual editor, and (3) stops the profiler.
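An aside on the Monolog breakage bd808 works through above (T89313): a minimal sketch of moving the channel/'private' filtering out of isHandling() into handle(). The class name and the write step are hypothetical, not the actual MWLoggerMonologHandler code; it assumes Monolog 1.12's behaviour of only guaranteeing the level when isHandling() is called.

    <?php
    use Monolog\Handler\AbstractHandler;

    // Hypothetical sketch, not the real MWLoggerMonologHandler.
    class LegacyFilterHandler extends AbstractHandler {
        public function isHandling( array $record ) {
            // As of Monolog 1.12.0 only 'level' is reliably present
            // here, so defer the real filtering to handle().
            return $record['level'] >= $this->level;
        }

        public function handle( array $record ) {
            // The channel and the 'private' context flag are only
            // available once the full record has been built.
            if ( !empty( $record['context']['private'] ) ) {
                return false; // skipped; let other handlers see it
            }
            // ... write $record to the legacy log sink here ...
            return $this->bubble === false;
        }
    }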
[03:16:31] if you invoke vbench with "--write-profile-data", it will write .cpuprofile files to cwd, which you can copy to your local machine and load and analyze in Chromium's dev tools (View -> Developer -> Developer Tools -> Profiles -> Load).
[03:22:49] for each patch that we think will have a performance impact, Roan has been doing a run of vbench with the patch applied and another without (with --repeat=30 each) and logging the min of each run in this spreadsheet: https://docs.google.com/spreadsheets/d/1FwRpDGDCmRKyqewbcSsxY7pJAH4KWja5NB0sxlcXEfs/
[03:25:12] (vbench's output is colorized but if you redirect standard out to a file the codes will be stripped)
[03:26:39] the code for vbench itself is here:
[03:28:09] the articles we've been testing are Geganii, Bassin minier de Ronchamp et Champagney, and États-Unis on frwiki, and Barack Obama on enwiki
[03:30:49] That's basically it. The data is a lot more variable than we were expecting, and we don't know what the source of the variance.
[03:30:54] *is
[03:32:04] it could be garbage collection, I guess.
[04:05:08] you are hitting the main mysql cluster?
[04:05:27] mysql will give a lot of variability, RL probably needs to load messages from it
[04:35:36] we are, but it's a JavaScript CPU profile, so that shouldn't matter
[05:08:25] TimStarling: here now
[05:09:50] I guess garbage collection would be at the top of the list, then
[05:21:56] yeah, could be. i'm poking at the profiling data (it's not really documented) and you can get GC data from it
[05:22:05] it's ~200ms for en:Barack Obama on my machine
[05:23:43] not really enough to explain it
[05:36:18] TimStarling: Hm.. I'm curious about our sqlite handling. I've taken some deep looks, but not sure I get it. Specifically, whether or not current MediaWiki is supposed to work with sqlite and concurrent http requests. Albeit a very small number, but it should work, right?
[05:37:12] yeah, it should work
[05:37:23] IIRC sqlite uses POSIX file locks
[05:37:40] Reading http://www.sqlite.org/lockingv3.html to get a basic idea.
[05:38:18] TimStarling: So how does it deal with a lock in our case. Gives up, or tries again at some point?
[05:40:04] it should wait for a lock for a while
[05:40:21] not in MW, just in the library
[05:40:23] we have this:
[05:40:28] function wasDeadlock() {
[05:40:29] return $this->lastErrno() == 5; // SQLITE_BUSY
[05:40:29] }
[05:40:52] which I think means that code that can recover from MySQL deadlocks can also recover from SQLite lock failures
[05:41:06] i.e. virtually none of our code
[05:41:23] usually an exception will be thrown
[05:41:39] I'm finding that our unit tests on Jenkins pretty much always hit between a dozen and two dozen database locks during a build.
[05:41:50] Significantly worse on labs than on prod slaves.
[05:42:24] TimStarling: Reduced debug log at https://phabricator.wikimedia.org/T89180#1033533
[05:43:06] Raw log https://integration.wikimedia.org/ci/job/mediawiki-core-qunit-karma/361/artifact/log/mw-debug-www.log/*view*/
[05:44:39] I'll look at the library
[05:46:55] os_unix.c has a big rant about how awful file locking is and how much work it is to support it
[05:47:23] 92 lines of rant
[05:50:22] TimStarling: Wow. Seeing it now.
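A sketch of the recovery pattern that wasDeadlock() enables, per Tim's point above that code able to recover from MySQL deadlocks can also recover from SQLite lock failures. The retry wrapper below is hypothetical, not MediaWiki's actual API; DatabaseBase and DBQueryError are the 1.25-era class names.

    <?php
    // Hypothetical retry wrapper around a write that may hit
    // SQLITE_BUSY (errno 5) under concurrent requests.
    function withBusyRetry( DatabaseBase $db, callable $write, $attempts = 4 ) {
        for ( $i = 1; $i <= $attempts; $i++ ) {
            try {
                return $write( $db );
            } catch ( DBQueryError $e ) {
                if ( !$db->wasDeadlock() || $i === $attempts ) {
                    throw $e; // not a lock failure, or out of retries
                }
                usleep( mt_rand( 1000, 50000 ) ); // brief random backoff
            }
        }
    }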
[05:51:10] Posix Advisory Locking
[05:55:53] well, I guess there is no waiting
[05:57:09] hmm, there is an API for waiting
[05:58:54] sqlite3InvokeBusyHandler()
[06:02:41] it is used by SQLite3::busyTimeout()
[06:03:42] but it's apparently not used at all by PDO_SQLite
[06:04:31] oh, no, spoke too soon
[06:06:01] in PDO it is PDO::setAttribute(PDO::ATTR_TIMEOUT, ...)
[06:07:20] or you can use new PDO(..., ..., ..., array( PDO::ATTR_TIMEOUT => $timeout ) )
[06:09:33] without doing that, it looks like there will be no busy handler and thus the timeout is effectively zero
[06:11:08] TimStarling: Aye, scary.
[06:11:11] That explains a few things.
[06:11:47] I'll put a note on the bug
[06:12:00] I don't have a hook into the html responses, but I'm curious as to whether those locks cause page disruption or degrade gracefully. E.g. does it just continue without writing to the db, or does it present an error page?
[06:12:17] I know for most caching it just goes on without storing it.
[06:12:24] But there may be other write operations that don't degrade well
[06:12:56] E.g. stuff that's using the database within a single request to transport data between two code paths. That sounds terrible, but are we doing that?
[06:13:14] I suspect we might be in some cases.
[06:15:21] it will present an error page
[06:16:15] we do have Database::ignoreErrors(), but that's basically a PHP 4 legacy function
[06:17:12] your log shows it throwing a DBQueryError
[06:17:36] The reason I suspect we ignore them for bagostuff is because even the passing job has some lock errors in it.
[06:18:57] 3MediaWiki-Core-Team, Services: Replace Tidy in MW parser with HTML 5 parse/reserialize - https://phabricator.wikimedia.org/T89331#1033553 (10tstarling) 3NEW
[06:22:55] 3MediaWiki-Core-Team, Services: Replace Tidy in MW parser with HTML 5 parse/reserialize - https://phabricator.wikimedia.org/T89331#1033565 (10tstarling)
[06:23:48] 3MediaWiki-Core-Team, Services: Replace Tidy in MW parser with HTML 5 parse/reserialize - https://phabricator.wikimedia.org/T89331#1033553 (10tstarling) is html5 doing the equivalent of what tidy does in MW? yes, it's pretty close modulo a couple of bugs in tidy and 'heuristics' <...
[06:24:07] 3MediaWiki-Core-Team, Services: Replace Tidy in MW parser with HTML 5 parse/reserialize - https://phabricator.wikimedia.org/T89331#1033568 (10tstarling)
[07:03:03] 3MediaWiki-Core-Team, MobileFrontend, Mobile-Web: "Last modified..." text differs between desktop and mobile - https://phabricator.wikimedia.org/T89219#1033590 (10Jaredzimmerman-WMF) I'd propose that we merge on a consistent "Last edited 1 month ago by Hafspajen" but have a tooltip on the simplified time, with t...
[13:03:19] 3wikidata-query-service, MediaWiki-Core-Team: Investigate BigData for WDQ - https://phabricator.wikimedia.org/T88717#1034028 (10Beebs.systap) Here is the wikidata RDF demo that Peter Haase shared. here is again the link to the showcase based on the workbench and bigdata: http://grapphs.com:8087/ login: guest/gu...
[13:04:34] 3wikidata-query-service, MediaWiki-Core-Team: Investigate BigData for WDQ - https://phabricator.wikimedia.org/T88717#1034029 (10Beebs.systap) Per Nik's email. Here is the information on the scale out architecture. http://wiki.bigdata.com/wiki/index.php/ClusterGuide See "Scale-out Cluster" on http://wiki.bigda...
[14:19:58] <_joe_> https://phabricator.wikimedia.org/T89345 is pretty important.
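To make the busy-handler discussion above concrete, these are the stock PHP APIs involved (the database path is an example, not a real config value):

    <?php
    // PDO_SQLite: without ATTR_TIMEOUT there is no busy handler, so a
    // locked database fails immediately with SQLITE_BUSY.
    $pdo = new PDO( 'sqlite:/srv/wiki/wikidb.sqlite', null, null,
        array( PDO::ATTR_TIMEOUT => 5 ) ); // seconds
    // or, equivalently, after construction:
    $pdo->setAttribute( PDO::ATTR_TIMEOUT, 5 );

    // The ext/sqlite3 equivalent:
    $db = new SQLite3( '/srv/wiki/wikidb.sqlite' );
    $db->busyTimeout( 5000 ); // milliseconds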
I think merging https://gerrit.wikimedia.org/r/#/c/187730/ should fix it
[14:20:09] <_joe_> i /need/ memcached logs
[14:41:33] _joe_: If nothing else, put it on the schedule for SWAT this morning? https://wikitech.wikimedia.org/wiki/Deployments#deploycal-item-20150212T1600
[14:42:01] <_joe_> anomie: ok thanks!
[14:44:56] <_joe_> done
[15:12:59] hi _joe_, is there any ticket or other thing where I could track the hhvm issue?
[15:45:25] <_joe_> Nikerabbit: not atm, open one :)
[16:00:08] _joe_: somebody could swat that config change. I'd lost track of it not being applied yet
[16:00:32] <_joe_> bd808: I've put it in the list for swat
[16:00:37] <_joe_> it's going live right now
[16:00:41] perfect
[16:01:46] If I'd kept reading backscroll I would have seen that answer >_<
[16:02:39] <_joe_> Brad routed me :)
[16:09:29] Unable to jump to row 0 on MySQL result index 12 in /srv/mediawiki/php-1.25wmf16/includes/db/DatabaseMysqli.php on line 264 <- Never seen that one before
[16:11:04] _joe_: well there is one in phab... but I am not sure what details are expected if filed against hhvm
[16:24:11] bd808: I don't think Elasticsearch likes it when you search for "\t+", lol
[16:25:02] JimConsultant: Hmmm. In kibana/logstash or in general?
[16:25:12] General, seeing it in Cirrus errors.
[16:25:38] Something something, parse error due to
[16:25:45] heh
[16:26:40] I'd have to read some source to guess what craziness it is doing. I can think of a lot of ways that could go wonky depending on what query parser the string is sent through
[16:27:54] What I see in Cirrus is this:
[16:27:55] Feb 12 07:05:50 mw1025: #012Warning: Search backend error during degraded_full_text search for '  ' after 27. Parse error on '  ': Encountered "" at line 1, column 2. [Called from CirrusSearch\ElasticsearchIntermediary::failure in /srv/mediawiki/php-1.25wmf16/extensions/CirrusSearch/includes/ElasticsearchIntermediary.php at line 97] in /srv/mediawiki/php-1.25wmf16/includes/debug/MWDebug.php on line 300
[16:36:50] <_joe_> bd808: FYI, I had to remove the "sqlbagofstuff" logger
[16:37:03] broken?
[16:37:05] <_joe_> it was logging every friggin db connection attempt to fluorine
[16:37:09] doh
[16:37:12] <_joe_> 2015-02-12 16:26:27 mw1122 enwiki: SqlBagOStuff: connecting to 10.64.16.158
[16:37:15] <_joe_> 2015-02-12 16:26:27 mw1122 enwiki: SqlBagOStuff: connecting to 10.64.16.157
[16:37:18] <_joe_> and so on ad libitum :)
[16:37:28] <_joe_> so you may want to tune that a bit
[16:37:35] I think the legacy logging system doesn't respect the limit lines
[16:38:06] When we shut off logstash+redis on Friday we reverted to the legacy logging implementation
[16:38:44] That config that was merged was tuned for the monolog logging stack
[16:54:45] What should our policy be on random PHP libraries associating themselves with mediawiki/wikimedia on packagist? https://packagist.org/packages/mediawiki/slack
[16:55:20] Using the namespace seems fine, but adding the mediawiki and wikimedia users as maintainers seems to imply affiliation
[17:08:19] * JimConsultant just wants to stop getting packagist e-mails
[17:10:01] I'm going to ask him nicely to remove us
[17:10:34] I don't see how to remove yourself. I can remove mediawiki from the wikimedia account but not the wikimedia account itself
[17:11:22] I want gerritadmin@ removed from the mediawiki account.
[17:11:27] It's a bad e-mail to have as the contact
[17:11:55] do we have another one? What did we set the wikimedia to?
/me looks
[17:12:13] packagist-admin@wikimedia.org
[17:12:22] I can change mediawiki to that too I think
[17:12:51] JimConsultant: In your role as a consultant, can you file a phab task to get that done?
[17:14:10] 3MediaWiki-Core-Team, Librarization: Remove gerritadmin@ from packagist contact - https://phabricator.wikimedia.org/T89360#1034622 (10Chad) 3NEW
[17:14:23] thx
[17:15:14] np
[17:16:38] JimConsultant: https://github.com/grundleborg/mediawiki-slack/issues/5
[17:16:46] I tired to be nice and positive
[17:16:51] *tried
[17:17:14] :)
[17:18:09] bd808: isn't the current restriction that it just has to be in gerrit to have mediawiki/wikimedia as maintainers?
[17:18:26] 3MediaWiki-extensions-CentralAuth, MediaWiki-Core-Team: Undefined index warnings in ApiQueryGlobalUserInfo.php - https://phabricator.wikimedia.org/T89295#1034639 (10greg) An inaugural member of the #wikimedia-log-errors project :)
[17:18:39] well I think it's a bit unspecified but that would be a minimum bar for me
[17:18:50] so let's ask them to move it to gerrit? :)
[17:18:52] I'd need to be able to change the repo if it was bad/broken
[17:19:17] feel free to suggest that on the issue I opened :)
[17:20:54] blerg! can't use the same email for 2 packagist accounts
[17:21:05] "The email is already used"
[17:21:43] ori: https://phabricator.wikimedia.org/tag/wikimedia-log-errors/
[17:21:50] Hey SMalyshev, you're the stas that closed https://bugs.php.net/bug.php?id=64938 right?
[17:21:59] 3MediaWiki-Core-Team, Librarization: Remove gerritadmin@ from packagist contact - https://phabricator.wikimedia.org/T89360#1034648 (10bd808) I tried to do this and got the message "The email is already used" from Packagist. Apparently we need to have distinct email addresses for the accounts.
[17:22:03] bd808: Could we cheat and do +something?
[17:22:10] Like packagist-admin+mediawiki?
[17:22:12] I was just wondering that myself
[17:22:39] heh. worked
[17:22:54] shitty email parser 0, JimConsultant 1
[17:23:03] \o/
[17:23:17] man, we're really getting our money's worth from this Jim guy.
[17:23:29] He's a real go-getter
[17:23:40] not afraid to ask the hard questions
[17:23:47] Too bad we only have a 3 week contract with his firm.
[17:24:43] maybe we can offer him a permanent position when the contract ends. Seems like a great cultural fit
[17:25:57] I hear he jumps ship after 10 years or so on the same team. Where's the loyalty?!? Consultants.
[17:26:06] I wonder if anyone besides me gets packagist-admin@ emails?
[17:26:43] 10y is better than 18m :)
[17:27:22] 3MediaWiki-Core-Team, Librarization: Remove gerritadmin@ from packagist contact - https://phabricator.wikimedia.org/T89360#1034651 (10bd808) 5Open>3Resolved a:3bd808 I was able to set the email address to "packagist-admin+mediawiki@wikimedia.org" and defeat their weak unique account algorithm.
[17:28:44] csteipp: That was my plan all along. Infiltrate the team for 10 years and learn the secrets, then take my secrets with me!
[17:31:12] good thing we have that strong non-compete clause
[17:31:42] You can't work for a Top 10 wiki for 12 months!
[17:32:33] That just means I can't go work on Google Knol, right?
[17:34:00] actually I think our lawyers would allow that
[17:34:26] is that project even still alive?
[17:34:42] "Knol has been discontinued as of May 1, 2012 "
[17:35:13] Apparently "sold" to http://annotum.org/
[17:45:02] heh, so I can't even do that :p
[18:03:46] csteipp: hhvm supports using APC as an object cache, and it's extremely fast
[18:04:33] Hmmm.
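A minimal sketch of the APC idea raised just above (and picked up below), using MediaWiki 1.25-era configuration names; wiring l10n into it, as discussed next, would be a separate step.

    <?php
    // LocalSettings.php: use the opcode cache's shared memory as the
    // main object cache (HHVM serves this via its APC emulation).
    $wgMainCacheType = CACHE_ACCEL; // picks APCBagOStuff under HHVM
    // or register it explicitly:
    $wgObjectCaches['apc'] = array( 'class' => 'APCBagOStuff' );
    $wgMainCacheType = 'apc';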
[18:04:38] That's....interesting
[18:05:17] There is even a fancy way to load the APC cache from a .so file that you point hhvm to via config
[18:05:28] That's how facebook loads l10n messages
[18:05:36] That was my exact idea
[18:05:51] "I wonder if we could use it for l10n instead of cdb"
[18:06:00] * anomie thought too much about AuthStuff, and needs lunch now.
[18:07:19] JimConsultant: https://github.com/emiraga/hhvm-deploy-ideas/blob/master/1compile/1compile.sh#L56-L102
[18:07:34] legoktm: Ah, cool
[18:08:19] JimConsultant: https://github.com/emiraga/hhvm-deploy-ideas/blob/master/0source/scripts/generate_cache_priming_so.php
[18:08:48] Hmmmmm
[18:10:26] It means you'd need to bounce hhvm on a scap, right?
[18:10:33] yeah
[18:10:49] which is part of their normal deploy process
[18:11:02] they actually ship the hhvm binary as part of the deploy
[18:11:42] Emir came down to the office last spring and talked us through the highlights of their deploy process
[18:11:50] Yeah, makes sense.
[18:11:56] I'm just thinking how it'd work in our case.
[18:12:41] We will need a de-pool and restart step in deploy if we ever want to do repo authoritative
[18:20:07] csteipp: yes
[18:29:01] csteipp: what about that bug?
[18:29:45] SMalyshev: I was trying to figure out how to see if the patch got merged, and what version of php it would be fixed in
[18:30:01] (I'm a noob in the php codebase)
[18:30:25] That fixes a bug some other people are trying to work around in core :)
[18:31:55] it's in 5.5.22 and 5.6.6
[18:32:28] I haven't pulled it into 5.4, but seeing as it is a security setting I probably should
[18:32:48] That would be nice :)
[18:33:06] which also puts it into 5.4.38
[18:33:30] csteipp: are we running 5.4? we should upgrade to 5.6 :)
[18:34:11] Cool. Well we run hhvm for most things, but otherwise it's 5.3. I run 5.4 though, so it makes me happy :)
[18:35:00] You could make the case that this fixes a security bug, and the vendors might backport it
[18:35:43] csteipp: though looking at it, I see it *enables* the entity loader... so that's probably why I didn't put it into 5.4
[18:35:57] because it's actually a bit less secure if you forget to disable it
[18:36:11] Did 5.4 disable the entity loader by default?
[18:36:29] csteipp: nobody should be running 5.3
[18:36:38] OpenSuse has it enabled in their version of 5.4, but I'm not sure how close to stock that is.
[18:36:46] I totally agree on that
[18:38:23] for 5.4, you still hvave about 4 months to upgrade :) http://php.net/supported-versions.php
[18:38:29] *have
[18:39:41] csteipp: 5.4 didn't disable the loader by default; it's the same bug, in that the disable leaked through requests
[18:40:29] but fixing it may make some servers a little bit more vulnerable... so I'm undecided on this
[18:41:34] e.g. if they have 100 scripts which properly disable the loader and one which does not, with the bug unfixed, the one script will not be vulnerable since with very high probability one of the 100 disabled the loader and it stuck
[18:41:47] but with the bug fixed, that one script is vulnerable
[18:43:37] SMalyshev: But if I understand the fix right, it isolates the setting too, right? The issue we're having right now is some scripts disable entity loading, but others enable it (so XMLReader->open() works...). And since it's shared, we have a race condition where it may be disabled while we're actually processing.
[18:44:13] csteipp: no, if you always explicitly enable & disable, you never have a race condition
[18:44:45] csteipp: what happens is that if some scripts disable it, and others rely on it being enabled by default, then you have a problem
[18:45:01] since the bug does not restore the setting to its default state after being changed
[18:46:04] Right... so is the setting only shared when it's not explicitly set?
[18:46:19] (as a side note, the whole entity reader thing is a fiasco IMHO, libxml should have made an API that allows opening files but doesn't load random remote crap)
[18:46:40] It's a serious pain
[18:47:18] csteipp: if you disable the loader, the setting leaks through and the loader stays disabled for the next request. If you enable it back, not relying on the default, then it's back to enabled
[18:48:07] Oh, so it's not shared between threads, just that the next request handled by that process inherits the last setting? I think I missed that point.
[18:49:13] yes
[18:50:38] I really appreciate the help. Have you been working on php's libxml handling for a while? I really need to fully understand what it's doing... we keep hitting security bugs related to it.
[18:51:43] I may bribe you to get a high-level introduction to it at some point..
[18:53:08] csteipp: I didn't work with xml too closely, but I'd be glad to share whatever I know
[18:53:13] 3CirrusSearch, MediaWiki-Core-Team: inefficient work of CirrusSearch in Russian Wikipedia - https://phabricator.wikimedia.org/T88724#1034793 (10Rubin16) >>! In T88724#1019103, @Manybubbles wrote: > Scratch that intitle: theory - it was wrong. Are there any ideas? Can we get an old search engine for a period til...
[19:14:35] SMalyshev: So I'm trying to track down what XMLReader::open() does exactly, to make sure that's safe to call with entity loading enabled. It does,
[19:14:37] reader = xmlReaderForFile(valid_file, encoding, options);
[19:15:06] But I can't find the definition for xmlReaderForFile. Is there a trick to figuring that out?
[19:15:34] csteipp: I think it's in libxml2
[19:15:39] http://xmlsoft.org/html/libxml-xmlreader.html#xmlReaderForFile
[19:15:52] It's listed in ext/libxml/php_libxml2.def, but where do I find the actual source?
[19:16:24] Oh, that's a native call.. gotcha.
[19:17:23] this one probably: https://github.com/GNOME/libxml2/blob/master/xmlreader.c#L5383
[19:28:10] so essentially what happens is that we're trying there to work around a messed-up API of libxml2
[19:29:15] JimConsultant: Old search is shut down, right?
[19:31:56] SMalyshev: Thanks, that really helped... and yeah, libxml2 really needs... something.
[19:34:13] bd808|LUNCH: Yep.
[19:34:18] Removed from puppet, dns, etc.
[19:34:31] Currently at the "wipe and reclaim" stage
[19:36:20] 3CirrusSearch, MediaWiki-Core-Team: inefficient work of CirrusSearch in Russian Wikipedia - https://phabricator.wikimedia.org/T88724#1034915 (10bd808) >>! In T88724#1034793, @Rubin16 wrote: >>>! In T88724#1019103, @Manybubbles wrote: >> Scratch that intitle: theory - it was wrong. > > Are there any ideas? Can w...
[19:36:40] JimConsultant: *nod* see ^ for why I asked
[19:38:09] ah gotcha
[19:39:11] And considering nobody really knew how it ran anyway, I doubt we could set it back up again
[19:39:15] Well, not easily
[19:39:40] You know I /never/ managed to get it running locally? Anything I ever patched on it was a guess at best.
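The defensive pattern implied by the entity-loader exchange above, given that the flag leaked across requests before the fix landed in 5.4.38/5.5.22/5.6.6: always set it explicitly and restore the previous value instead of trusting the default. A sketch, assuming PHP 5.5+/HHVM for try/finally.

    <?php
    $old = libxml_disable_entity_loader( true );
    try {
        $reader = new XMLReader();
        $reader->XML( $xml ); // parse from a string; no external loads
        // ... process the document ...
        $reader->close();
    } finally {
        // Restore whatever the previous request-level state was.
        libxml_disable_entity_loader( $old );
    }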
[19:42:42] (wow)
[19:47:21] 3SUL-Finalization, Wikimedia-Site-requests, MediaWiki-Core-Team: Run sendConfirmAndMigrateEmail.php for all unconfirmed emails on all wikis - https://phabricator.wikimedia.org/T73241#1034946 (10Legoktm) 5stalled>3Open Emails are currently being sent out.
[19:59:01] 3wikidata-query-service, MediaWiki-Core-Team: Investigate BigData for WDQ - https://phabricator.wikimedia.org/T88717#1034987 (10Smalyshev) @Beebs.systap this looks pretty good. How it is done - i.e. what is used to create the triples, how they are imported, etc. - is this code available? Also, I assume we'd wa...
[20:05:03] https://meta.wikimedia.org/wiki/Special:CentralAuth/72.57.8.2I5 for a second I thought we were emailing an IP address >.>
[20:06:35] Hehe, more of this :)
[20:06:36] https://gerrit.wikimedia.org/r/#/q/owner:Chad+status:open+-puppet+-Cirrus,n,z
[20:07:07] The parser functions one is interesting. I'm curious how much those could potentially shave off in an edge-case template calling those in a tight loop.
[20:07:19] Micro-optimizing, but heh
[20:08:22] ugh Oversight >.>
[20:20:02] anomie: Any idea what "Security token mismatch, cannot log in." from Special:SecurePoll/login means I have misconfigured?
[20:20:44] I used labs-vagrant to set up a wiki farm with securepoll enabled. I can create a poll but when I try to vote I'm getting that error
[20:22:34] bd808: Not offhand... Let me investigate a little
[20:25:17] bd808: Point me at the instance?
[20:25:27] I got another weird error creating the poll that may have caused it -- http://pastebin.com/i7wCZrgj
[20:25:39] I haven't poked at that yet either
[20:26:12] anomie: https://vote-securepoll.wmflabs.org/ running on securepoll-farm.eqiad.wmflabs
[20:26:25] Admin's password is "securepoll"
[20:39:47] bd808: Huh... eval.php isn't producing output. Any clue what the deal might be with that?
[20:40:46] hmmm... nope. That's new to me too
[20:41:50] php5 -r 'print "foo\n";' works
[20:42:03] but the equivalent via eval.php is silent
[20:43:37] bd808: does an extra newline do anything?
[20:44:29] 2 more did
[20:45:23] anomie, AaronS: https://phabricator.wikimedia.org/P290
[20:45:44] so finally made it print
[20:46:45] I don't care about the print newlines, but the ones after each eval command
[20:46:58] right. that's the second test
[20:47:15] lines 10-13
[20:47:20] seems like the newline handling hack doesn't work for you
[20:47:49] this is PHP 5.5.9-1ubuntu4.5
[20:47:56] on a labs trusty instance
[20:49:07] AaronS, bd808: Your new multiline-handling code pipes through "php -l", which is HHVM (Zend is "php5"), which tends to b0rk because it can't access .hhvm.hhbc in the original user's homedir after sudoing to www-data.
[20:49:38] poop
[20:52:34] 3MediaWiki-API, MediaWiki-Core-Team, Librarization: Allow API to retrieve information about libraries as shown on "Special:Version" - https://phabricator.wikimedia.org/T89385#1035128 (10Legoktm) a:3Legoktm
[20:54:51] That might be a permissions problem with the shared hhbc file for the cli mode of hhvm
[20:55:07] it would explain the jobrunner being stopped too
[20:56:22] yup. /var/run/hhvm/hhvm.hhbc is 0644 root:root
[20:57:26] bd808: The problem you're having with SecurePoll auth is that whatever vagrant is doing for deciding wfWikiId() isn't working right for http://centralauthtest-securepoll.wmflabs.org/w/extensions/SecurePoll/auth-api.php, it's picking "wiki" instead of "centralauthtestwiki".
[20:58:26] anomie: ok.
that sounds fixable
[21:00:33] If I try to vote from securepoll.wmflabs.org things work
[21:00:47] that's the wiki that owns the "wiki" db name
[21:01:09] well it almost works
[21:02:07] anomie: Now I get this -- https://phabricator.wikimedia.org/P291 -- but don't knock yourself out figuring out why until I get the wikidb thing fixed
[21:02:58] I have to task switch to something else for a while
[21:05:00] 3MediaWiki-extensions-SecurePoll, MediaWiki-Core-Team: Set up mini wikifarm in Labs which has SecurePoll on it - https://phabricator.wikimedia.org/T88725#1035154 (10bd808) Setup this wiki farm using labs-vagrant: * securepoll.wmflabs.org * centralauthtest-securepoll.wmflabs.org * commons-securepoll.wmflabs.org *...
[21:06:53] bd808: I015635a9bf080ef6d98b2cff49b949c4378a859f did it, suddenly Status breaks horribly if a subclass doesn't call parent::__construct()
[21:08:00] anomie: nice find. can you patch for it?
[21:08:58] bd808: Probably stick a call to parent::__construct() in SecurePoll_BallotStatus's constructor.
[21:11:41] AaronS, legoktm: ^ You should check anything that subclasses Status (core and extensions) for that situation, since you broke it.
[21:12:18] phpstorm says that only SecurePoll subclasses it
[21:12:39] AaronS, legoktm: ^ You also broke anything that accesses $status->ok directly, looks like
[21:12:51] * legoktm checks for that
[21:13:35] AaronS, legoktm: ^ And $status->errors
[21:13:38] hmm, that's handled via a magic method
[21:14:02] Oh... __get and __set. I thought we didn't like those.
[21:14:05] I've got patches up to remove the last of the ProfileSection callers
[21:14:28] 3MediaWiki-API, MediaWiki-Core-Team, Librarization: Allow API to retrieve information about libraries as shown on "Special:Version" - https://phabricator.wikimedia.org/T89385#1035172 (10Kghbln) WOW :)
[21:15:38] bd808: in java you have to call super...
[21:15:49] JimConsultant: https://www.mediawiki.org/wiki/Manual:Profiling needs updating to mention that xhprof is the only thing that works now...
[21:15:52] AaronS: yeah. php is nots that it's not required
[21:15:56] *nuts
[21:16:07] I blame php4 back compat
[21:18:18] legoktm: https://www.mediawiki.org/w/index.php?title=Manual%3AProfiling&diff=1408080&oldid=1325182 done
[21:19:35] 3MediaWiki-API, MediaWiki-Core-Team, Librarization: Allow API to retrieve information about libraries as shown on "Special:Version" - https://phabricator.wikimedia.org/T89385#1035183 (10Kghbln) p:5Triage>3Normal
[21:19:58] * anomie notes that this new Status passes through every single method it could just inherit from StatusValue by subclassing it, instead of playing games with references in __construct...
[21:20:48] I didn't look closely but the *Value object pattern isn't my favorite
[21:21:06] 3MediaWiki-extensions-SpamBlacklist, Continuous-Integration, MediaWiki-Core-Team: Figure out a system to override default settings when in test context - https://phabricator.wikimedia.org/T89096#1035188 (10Krinkle) Sorry for being pessimistic, but I'm challenging this to draw out more use cases. As it stands, I...
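The fix anomie describes above, sketched out: after change I015635a9, Status's constructor sets up the underlying StatusValue, so a subclass that skips parent::__construct() leaves $status->ok and $status->errors broken. This is simplified, not the actual SecurePoll patch.

    <?php
    class SecurePoll_BallotStatus extends Status {
        public function __construct( /* subclass args ... */ ) {
            // Required since I015635a9: Status::__construct() creates
            // the StatusValue that __get()/__set() proxy ->ok and
            // ->errors to; skip it and those accesses break horribly.
            parent::__construct();
            // ... subclass-specific setup ...
        }
    }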
[21:21:45] 3MediaWiki-Core-Team: Try to avoid doCascadeProtectionUpdates() call on page view - https://phabricator.wikimedia.org/T89389#1035191 (10aaron) 3NEW a:3aaron
[21:23:33] JimConsultant: https://gerrit.wikimedia.org/r/#/c/190351/
[21:29:59] JimConsultant: I added https://www.mediawiki.org/wiki/MediaWiki_1.25#Less_invasive_profiling
[21:30:24] * AaronS hands legoktm https://gerrit.wikimedia.org/r/#/c/185342/
[21:30:29] legoktm: lgtm
[21:31:06] We should probably still have docs on how one does a manual profiling section of a non-functional unit
[21:31:21] Which we don't :)
[21:35:59] ProfileSection is now unused in all prod extensions, except one that has a review up on github :)
[21:48:20] AaronS: Hm.. so you are Aaron Schulz? Or not? He always used AaronSchulz as a nick and authed with nickserv. Honestly not sure whether you're the same person or not :)
[21:49:01] NickServ says aarons is a different user, though you're not authed as that one either.
[21:49:03] I am both
[21:49:16] Voice of all. Classic :)
[21:49:30] We are all Kosh!
[21:49:43] we are all charlie
[21:50:11] They're all clones, he's his own brother, everyone's a ghost.
[21:51:26] I think everybody is their own sibling, by definition
[22:00:31] The exchange above made me think of Arby's -- https://twitter.com/nihilist_arbys
[22:02:37] TimStarling: I name-dropped you on https://phabricator.wikimedia.org/T88813 (last visited cookie task)
[22:04:52] I wonder who would speak in favour of it
[22:05:56] That would be a good thing to find out I suppose
[22:08:19] What is "[34 pts] {bear}"?
[22:17:06] https://wiki.php.net/rfc/scalar_type_hints
[22:17:48] 61 in favour to 30 against, 2/3 majority required to pass
[22:18:05] so far, it ends on the 19th
[22:19:52] 3MediaWiki-extensions-CentralAuth, MediaWiki-Core-Team: Undefined index warnings in ApiQueryGlobalUserInfo.php - https://phabricator.wikimedia.org/T89295#1035435 (10Legoktm) As far as I can tell these notices indicate that something in the CA database is out of sync, so I changed it to throw an exception instead...
[22:19:55] return type hints are coming in 7 already
[22:19:58] would be nice to see both
[22:21:19] stas is opposed: "It leads to absurd limitations like 1 not accepted where a float value is required, or where a boolean value is required. That's the essential problem with this proposal - it sacrifices the frequent use cases that make sense to nearly everybody for abstract bondage-and-discipline notion under misguided assumption that it would be a service to the users."
[22:21:34] I mean SMalyshev
[22:23:54] AHA, ZEND BORROWS FROM HACK NOW!!1 :D
[22:24:44] yeah, which is how it should be
[22:25:02] neutralise a fork by stealing its features
[22:25:11] good for everyone, right?
[22:26:17] we still have silly comments like "HHVM doesn't have scalar type hints for PHP. Hack does, but Hack isn't PHP."
[22:26:20] they should replace declare(strict_types=1) with
anomie: "34 pts" is their relative difficulty ranking. "story points". "bear" is the project code name for that epic. Kevin and I talked about their board a few days ago and he clued me into the things he was doing there to make it easier for him to see things.
[22:28:38] $foo = sin(1);
[22:28:38] // Catchable fatal error: sin() expects parameter 1 to be float, integer given
[22:28:42] WTF
[22:29:04] even C++ is not that fucked up
[22:29:33] borland pascal strikes back :|
[22:30:16] Rasmus Lerdorf, Andi Gutmans and Zeev Suraski are all opposed
[22:30:56] is it not too late for a sock attack?
:P
[22:31:47] they have manual approval of wiki user accounts via a mailing list
[22:32:29] pfft. have to hack the wiki then. shouldn't be too hard, it's in PHP after all
[22:32:49] * bd808 can voite
[22:33:01] *vote
[22:33:15] you just need to have a contributor account of any type I think
[22:35:09] yeah I can log in due to my maintainership of a little PEAR module
[22:35:23] gogogog
[22:36:21] not sure I can vote the same way as tony2001 on general principle
[22:36:39] hehe
[22:37:18] * bd808 voted
[22:37:32] That's the first rfc I've voted on in several years
[22:41:16] in weak mode it is not so terrible, sin(1) etc. works as expected
[22:42:09] it's just declare(strict_types=1) that is ridiculous
[22:43:57] I really don't understand turning php into a strongly typed language
[22:44:05] I get why it helps with JIT
[22:44:07] 3SUL-Finalization, Wikimedia-Site-requests, MediaWiki-Core-Team: Run sendConfirmAndMigrateEmail.php for all unconfirmed emails on all wikis - https://phabricator.wikimedia.org/T73241#1035492 (10Legoktm) I split up the emails by wiki type, today emails for testwiki, wikibooks, wikinews, wikiquote, and wikisource...
[22:44:20] Keegan: ^
[22:44:23] but I don't get why a typical programmer would think it's a good idea
[22:44:34] I blame Java
[22:44:43] I lived in the java world for 7+ years and was glad to come back to php
[22:44:44] * Keegan claps
[22:45:09] well I take that back. I tried to write java in php for a year and then I remembered why duck typing was nice
[22:48:33] is there a discussion about the RfC somewhere? or is it just pure voting?
[22:49:53] * legoktm found some php-dev mailing list posts
[22:53:07] legoktm: see #wikimedia . The email worked for at least one person :D
[22:53:28] legoktm: there's about 500 posts about it on php-dev, just this year
[22:53:34] https://meta.wikimedia.org/wiki/Special:CentralAuth/Meatmanek
[22:53:48] I've only read a few
[22:54:46] Keegan: woot :D
[22:56:36] I will pass on reading all of those :P
[23:03:09] AaronS: Could you find a few minutes to review https://gerrit.wikimedia.org/r/#/c/188742 for tgr?
[23:04:03] * AaronS is wresting with cascading protection
[23:04:08] *wrestling
[23:04:35] I think strict typing in a PHP context is largely a cargo cult
[23:05:32] 3CirrusSearch, MediaWiki-Core-Team: inefficient work of CirrusSearch in Russian Wikipedia - https://phabricator.wikimedia.org/T88724#1035545 (10Manybubbles) >>! In T88724#1034915, @bd808 wrote: >>>! In T88724#1034793, @Rubin16 wrote: >> Are there any ideas? Can we get an old search engine for a period till Cirru...
[23:08:12] 3MediaWiki-General-or-Unknown, MediaWiki-API, MediaWiki-Core-Team: Add vendor to api for siteinfo - https://phabricator.wikimedia.org/T89409#1035553 (10Paladox) 3NEW
[23:08:18] 3MediaWiki-General-or-Unknown, MediaWiki-API, MediaWiki-Core-Team: Add vendor to api for siteinfo - https://phabricator.wikimedia.org/T89409#1035560 (10Paladox)
[23:09:06] 3CirrusSearch, MediaWiki-Core-Team: inefficient work of CirrusSearch in Russian Wikipedia - https://phabricator.wikimedia.org/T88724#1035564 (10bd808) @manybubbles Getting more folks comfortable with the search stack is a great idea @jdouglas Are you interested? @chad how about you if James...
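What the declare(strict_types=1) complaints above amount to, under the RFC as written at the time (the scalar-type-hints version that eventually shipped in PHP 7 relaxed this by allowing int-to-float widening even in strict mode):

    <?php
    declare(strict_types=1); // the RFC's per-file opt-in

    var_dump( sin( 1.0 ) ); // float(0.8414709848078965)
    var_dump( sin( 1 ) );   // under this RFC's strict mode:
    // Catchable fatal error: sin() expects parameter 1 to be float,
    // integer given

    // Without the declare() ("weak" mode), sin(1) coerces the int to
    // float and works as it always has.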
[23:09:10] 3MediaWiki-General-or-Unknown, MediaWiki-API, MediaWiki-Core-Team: Add vendor to api for siteinfo - https://phabricator.wikimedia.org/T89409#1035566 (10Legoktm)
[23:09:11] 3MediaWiki-API, MediaWiki-Core-Team, Librarization: Allow API to retrieve information about libraries as shown on "Special:Version" - https://phabricator.wikimedia.org/T89385#1035567 (10Legoktm)
[23:10:42] AaronS: Do you think you can get to it "soon"? It's blocking a fix that mobile apps wants in image metadata caching. They are getting stale image descriptions right now -- https://phabricator.wikimedia.org/T86955
[23:11:02] it can be the next thing, sure
[23:13:22] cool
[23:13:24] thanks
[23:14:40] 3CirrusSearch, MediaWiki-Core-Team: inefficient work of CirrusSearch in Russian Wikipedia - https://phabricator.wikimedia.org/T88724#1035582 (10Manybubbles) @bd808 just having another person who could do second line support in case something goes sideways would be sweet. I _think_ a couple of these issues are q...
[23:22:16] 3MediaWiki-API, MediaWiki-Core-Team, Librarization: Allow API to retrieve information about libraries as shown on "Special:Version" - https://phabricator.wikimedia.org/T89385#1035615 (10Paladox) This works so +1, please merge it.
[23:27:52] TimStarling: do you have opinions on https://phabricator.wikimedia.org/T88665 ? I'm thinking that we should have extensions' settings go to a $wgDefaultExtensionUserOptions (or something) and then have User::getDefaultOptions() merge it with $wgDefaultUserOptions
[23:28:31] <^demon|away> There we go.
[23:28:33] * ^demon|away stabs his bouncer
[23:31:10] legoktm: sounds good to me
[23:32:41] I'm not sure if there is a good reason for calling array_merge_recursive() there, rather than doing our own thing, but if we can defer that question for a couple of years by introducing $wgDefaultExtensionUserOptions that is fine by me
[23:33:33] 3MediaWiki-API, MediaWiki-Core-Team, Librarization: Allow API to retrieve information about libraries as shown on "Special:Version" - https://phabricator.wikimedia.org/T89385#1035635 (10Paladox) 5Open>3Resolved
[23:33:48] <^d> TimStarling: I don't like declare() at all
[23:34:02] <^d> It's a weird anti-pattern to pretty much anything else
[23:34:12] <^d> (anything else in PHP, that is)
[23:35:47] #pragma __declspec(warning disable:1234)
[23:38:03] TimStarling: I think I used array_merge_recursive because of $wgHooks, but I can't think of any other settings that need it... $wgGroupPermissions will break in the same way that user options did.
[23:39:45] so an alternative would be to special-case $wgHooks?
[23:41:28] SMalyshev: (...and while we're at it: so is visibility)
[23:42:31] yeah. Ideally $wgHooks would be = array() so we can just replace it, otherwise array_merge_recursive() it.
[23:52:18] Keegan: today we've had 639 account attachments; most days have around 23
[23:54:31] Very nice!
[23:55:11] and that's just a couple hours after emailing
[23:55:34] yup yup
[23:55:37] * Keegan wags
[23:56:29] \o/
[23:56:48] Keegan: Can you get these things on a nice dashboard to show off to the execs?
[23:56:55] Nemo_bis: ^
[23:57:12] Deskana: probably? I don't know the process
[23:57:38] Keegan: Or, failing that, you should throw the number of merges per day into a table on mediawiki.org every day.
[23:58:52] Yeah, we should already be tracking the number of rename requests etc. daily. I could use assistance setting that up, I'm computer dumb.
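A sketch of the $wgDefaultExtensionUserOptions idea legoktm floats above. This is entirely hypothetical, since T88665 was still being designed at this point: extension defaults would live in their own global, and User::getDefaultOptions() would combine them with site config via a plain array union instead of array_merge_recursive(), which mangles scalar values.

    <?php
    $wgDefaultExtensionUserOptions = array(); // hypothetical new global

    // Conceptually, inside User::getDefaultOptions():
    function getDefaultOptions() {
        global $wgDefaultUserOptions, $wgDefaultExtensionUserOptions;
        // Array union: keys already present in $wgDefaultUserOptions
        // (site config) win over extension-supplied defaults, and
        // values are never merged recursively, so scalars stay scalars.
        return $wgDefaultUserOptions + $wgDefaultExtensionUserOptions;
    }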