[00:12:32] ori: https://gerrit.wikimedia.org/r/#/c/196126/
[00:32:36] TimStarling: so...all of the MW cache config files (except for 2 wikis, one of them enwiki luckily) are not readable
[00:33:11] hmm
[00:33:16] they are owned by l10nupdate and not writable by any others, so www-data can't write back the updated cache (the cache file mtimes are <= IS.php mtime)
[00:33:25] not sure why this is broken now rather than any other time
[00:33:45] because of https://gerrit.wikimedia.org/r/#/c/195196/
[00:34:47] I guess we can change the file permissions manually for now, since l10nupdate only runs daily
[00:35:03] then have a look at what it is doing and why
[00:35:13] this is on all servers?
[00:35:42] at least tin (quite possibly only tin)
[00:36:47] a random apache looks fine
[00:36:55] same here
[00:37:05] so probably just tin, possibly other maintenance hosts
[00:37:11] I guess that explains why the site did slow down
[00:42:56] mwscript only does a sudo if the group is sudo or wikidev or root
[00:43:54] also l10nupdate is not allowed to run scripts as www-data
[00:46:23] and it's an escalation, allowing l10nupdate to run as itself
[00:46:43] it has mwdeploy access, so it can change script files
[00:47:19] so you could use l10nupdate to escalate from nobody to mwdeploy access
[00:48:55] * autismcat just realized he meant to type "did not", heh
[00:49:57] did/did not, basically the same, I understood what you meant ;)
[00:51:20] ok, so /var/lib/l10nupdate will be owned by www-data
[00:51:48] so that when the script runs as www-data, it can write its output there
[00:52:26] sound good?
[00:53:31] afaik
[00:54:43] grr, why does everything have to be a template?
[00:56:17] just reflex I guess
[00:59:26] because <%= reason %>
[01:00:21] <%= @reason %>
[01:00:40] :)
[01:00:41] * bd808 slaps himself with a trout
[01:31:20] bd808: could I get a +2 on https://gerrit.wikimedia.org/r/#/c/195027/ ?
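The fix agreed on above (hand /var/lib/l10nupdate to www-data so the script can rewrite stale cache files) is a one-line chown on the real hosts. A minimal sketch of the idea; the real command needs root and the real path, so this stand-in uses a scratch directory and group-write to stay runnable unprivileged:

```shell
# Hypothetical sketch of the permission fix discussed above.
# On tin this would be `chown -R www-data /var/lib/l10nupdate`;
# here a scratch directory stands in so the commands run unprivileged.
CACHE_DIR="$(mktemp -d)/l10nupdate/caches"
mkdir -p "$CACHE_DIR"

# Before the fix: owner-only write, so the web-server user cannot
# refresh cache files whose mtime is <= the config file's mtime.
chmod 0755 "$CACHE_DIR"

# Stand-in for the fix: open up group write so another user can
# write the regenerated cache back.
chmod 0775 "$CACHE_DIR"

stat -c '%a' "$CACHE_DIR"   # prints the new mode, 775
```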
[01:32:47] thanks
[01:35:00] yw
[01:47:16] bd808: wanna do https://gerrit.wikimedia.org/r/#/c/195028/ as well? :)
[01:48:06] what keeps toggling that "prefer-lowest" in and out?
[01:48:14] just composer version differences?
[01:48:41] oh maybe, let me selfupdate
[01:49:56] bd808: re-added using latest composer
[01:50:36] do those .inc files not need to be in the classmap?
[01:51:10] they don't have classes, they get lazy loaded
[01:51:30] https://github.com/wikimedia/utfnormal/blob/master/src/Validator.php#L167
[01:51:59] and then the .inc sets UtfNormal\Validator::$utfCombiningClass
[01:52:03] https://github.com/wikimedia/utfnormal/blob/master/src/Validator.php#L473
[01:52:21] yeah, same thing there
[01:52:21] needs a __DIR__
[01:52:30] it won't use the same path?
[01:52:45] it will use the php search path
[01:52:53] which you are not in control of
[01:53:07] ah
[01:53:18] line 169 does it right
[01:53:45] * legoktm fixes
[01:55:29] bd808: https://gerrit.wikimedia.org/r/196139
[01:58:36] TimStarling: short of someone fixing output buffering in HHVM, is there anything we can do for the PDF OOM problem? Should we disable PDF support until there is a backend fix?
[01:59:03] Raising the memory limit seems like a whack-a-mole solution
[02:00:04] we could have a limit on PDF size, so that we can present a proper error message
[02:01:59] Seems reasonable. So we'd need an idea of how big we can get without breaking, and then a safe stop instead of a 503 after the buffer fills when the generated PDF exceeds the cap.
[02:05:24] depool a couple of servers, designate them as PDF generators
[02:05:26] bd808: https://gerrit.wikimedia.org/r/#/c/195028/ updated
[02:05:39] have the normal app servers request the PDF from one of the PDF generators
[02:05:48] if you get a 503, give a pretty error
[02:06:27] Are the PDFs generated by OCG now? Or is Book a different thing?
[02:06:30] that's not very pretty, but if tim is considering fixing output buffering in HHVM, it's an OK bandaid
[02:06:40] i have no idea
[02:06:49] yeah, me neither
[02:07:19] Ok. I'll note in the bug and see if I can round up someone to take a look at either solution
[02:08:11] you could even spawn another hhvm process in a shell via wfShellExec()
[02:10:34] 6MediaWiki-Core-Team, 10OCG-General-or-Unknown, 7HHVM, 7Wikimedia-log-errors: OOM reported at SpecialPage.php:534 due to large output from Special:Book - https://phabricator.wikimedia.org/T89918#1111881 (10bd808) A possible stop gap solution until there is a fix for T91468 that @tstarling suggested would b...
[02:10:51] bd808: SOA! :)
[02:10:53] I'm going to guess that we have other ways to generate more content than HHVM can return
[02:11:18] Like big tiffs from commons or something
[02:11:31] RFCs on content licensing on metawiki
[02:11:49] heh
[02:11:58] some wikidata blob?
[02:12:18] Germany!
[02:12:47] * bd808 needs to move to another computer (work time is long over)
[02:25:58] 6MediaWiki-Core-Team, 10OCG-General-or-Unknown, 7HHVM, 7Wikimedia-log-errors: OOM reported at SpecialPage.php:534 due to large output from Special:Book - https://phabricator.wikimedia.org/T89918#1111896 (10Eloquence) Can we measure how many successes/failure we're currently getting? If the failure rate is...
[02:56:48] 6MediaWiki-Core-Team, 10OCG-General-or-Unknown, 7HHVM, 7Wikimedia-log-errors: OOM reported at SpecialPage.php:534 due to large output from Special:Book - https://phabricator.wikimedia.org/T89918#1111945 (10bd808) A logstash query for `type:hhvm AND message:"request has exceeded memory limit"` shows 7292 oc...
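Tim's depool-and-proxy suggestion above (dedicated PDF generators, with app servers turning a 503 into a friendly message) could look roughly like this from the app-server side. The backend hostname, endpoint, and function names are made up for illustration:

```shell
# Hypothetical sketch of the "dedicated PDF generator" idea: an app
# server asks a designated render host for the PDF and maps a 503
# (size/buffer cap hit) to a readable error instead of an OOM.
PDF_BACKEND="http://pdf-render.example.internal"   # made-up hostname

fetch_pdf() {
  # $1 = book/collection id; prints the HTTP status curl observed
  curl -s -o /tmp/book.pdf -w '%{http_code}' "$PDF_BACKEND/render/$1"
}

explain_status() {
  case "$1" in
    200) echo "PDF ready" ;;
    503) echo "This book is too large to render as a PDF; try splitting it." ;;
    *)   echo "PDF backend error (HTTP $1)" ;;
  esac
}

# e.g. explain_status "$(fetch_pdf 12345)" on a real backend
explain_status 503
```

The point of splitting `fetch_pdf` from `explain_status` is that the error mapping is testable without a live render host.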
[03:41:24] 6MediaWiki-Core-Team, 10OCG-General-or-Unknown, 7HHVM, 7Wikimedia-log-errors: OOM reported at SpecialPage.php:534 due to large output from Special:Book - https://phabricator.wikimedia.org/T89918#1112048 (10ori) Why don't we start by raising the memory limit, as suggested above, and see if we can make the i...
[03:51:00] 6MediaWiki-Core-Team, 10OCG-General-or-Unknown, 7HHVM, 7Wikimedia-log-errors: OOM reported at SpecialPage.php:534 due to large output from Special:Book - https://phabricator.wikimedia.org/T89918#1112049 (10ori) As of [[ https://reviews.facebook.net/D32721 | D32721 ]], HHVM's default is 16 GB, which seems a...
[13:14:19] * anomie reviews notes from the RFC meeting yesterday
[13:15:14] I suppose I shouldn't be surprised that Daniel K put a name ("Interactor pattern") on what I've been saying for a while now.
[13:20:19] <^d> anomie: Hm?
[13:20:27] ^d: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-03-11-21.00.log.html
[13:20:36] Timezones meant I couldn't attend
[13:21:17] Lots of complaints about api.php only being able to do one thing at a time, but isn't that even more of a thing with REST?
[13:24:04] * ^d shrugs
[13:25:02] T88301 seems vague on what it means by "API".
[13:25:37] <^d> I think it means internal APIs, not api.php API
[13:25:47] <^d> (protip: most people aren't referring to the latter these days :()
[13:26:50] T88301#1111350 seems to differ, although it may be hinting at some "replace MediaWiki with random services" master plan.
[13:27:50] Speaking of a pile of services, has Hurd gone anywhere yet?
[13:28:17] <^d> lol, not a clue
[13:28:34] I think Hurd is in the iOS team.
[13:28:52] <^d> zing
[13:34:32] ^d: Oh, now that I keep reading I see they had that "define API" discussion at 21:27:52. Which further supports my suspicion that T88301 has a muddied definition.
[13:35:58] Oh, good. "swagged out" at 21:28:36 was a typo, not referring to swagger.
[13:38:05] <^d> We need more swagger in our step.
[13:58:57] Towards 21:49, I see people are starting to think about re-inventing RPC...
[15:10:35] anomie: there was good discussion I think but the ideas were all over the place
[15:11:06] I'm not sure why I have such a negative gut reaction to people quoting pattern names but I certainly do
[15:12:18] I think I can trace it to a couple of fresh hires I made ~8 years ago who learned a lot about how to identify and name patterns in their Masters programs and not much about how to decide which was best suited to the task at hand
[15:13:48] unrelated: the ipv6 allocations at my VPS provider got my bouncer caught in a kline dragnet yesterday :(
[15:17:00] Strangely they hand out ipv6 addresses from a /56 instead of giving a /64 to each node
[15:20:30] MatmaRex: Is there a sane way to checkout oojs-ui and be able to use its demos, or does it require installing crap with npm to do even that? At the moment I'm stuck on it trying to refer to oojs/core's "dist" directory, which is empty in the checkout with no instructions on how to fill it.
[15:21:53] anomie: `npm install` to get dependencies, `grunt build` to build the distribution files. and it is necessary, alas. we have a demo of OOUI up on labs though.
[15:22:00] https://tools.wmflabs.org/oojs-ui/oojs-ui/demos/
[15:22:26] MatmaRex: I can't edit that labs demo to test a patch for T92449.
[15:22:57] MatmaRex: `npm install` tries to install some huge phantomjs, ugh. `grunt build` says "-bash: grunt: command not found".
[15:24:13] anomie: eh, yes, that annoys me too. it's not necessary for the basic build, so you might manage to do without it
[15:24:27] MatmaRex: So how do I try to manage without it?
[15:25:15] anomie: install the other dependencies (see the "devDependencies" section in package.json) by doing `npm install`, then run `grunt build` again
[15:25:23] or possibly `grunt build --force` to make it not stop on failures
[15:25:31] and you need to install grunt itself, globally: `npm install -g grunt`
[15:26:04] Ugh. I'd much rather apt-get install it, but it doesn't seem to be packaged.
[15:26:29] it's probably too hipster for debian/ubuntu repos ;)
[15:26:36] * anomie grumbles about languages implementing their own package managers all over the place...
[15:27:00] meh. it's not a problem if the package manager is any good.
[15:27:05] npm isn't, though.
[15:27:12] so that's somewhat sad.
[15:27:18] It's another package manager to have to update things with, even if it is good.
[15:27:40] MatmaRex: `npm install -g grunt` => "Please try running this command again as root/Administrator"
[15:27:45] I'd rather not.
[15:28:37] i'm sure you can install it as a user somehow, but i never tried myself. (also, i develop on windows ;) )
[15:29:22] http://stackoverflow.com/a/26825428 perhaps?
[15:29:44] anomie: anyway, testing code is overrated. if you submit the patch, someone will surely test it.
[15:30:51] MatmaRex: `npm install grunt-cli --force` then `node_modules/grunt-cli/bin/grunt build` might have worked.
[15:31:09] neat.
[15:32:43] MatmaRex: Well, it built oojs. Now to figure out how to get it to build oojs-ui...
[15:34:40] Oh, ok. I just have to do the same dance in the oojs-ui directory, maybe.
[15:44:57] MatmaRex: Success! I had to skip "grunt-svg2png" which was what wanted huge phantomjs, and I ignored the "karma" bits and "qunitjs".
[15:46:17] yay
[15:51:11] 6MediaWiki-Core-Team, 10Datasets-General-or-Unknown, 6Services, 10Wikimedia-Hackathon-2015, 7Epic: Improve Wikimedia dumping infrastructure - https://phabricator.wikimedia.org/T88728#1113285 (10aude) some thoughts.... currently there is a throttle on download speed for dumps.
This makes it a bit slow (...
[15:57:11] <^d> Is it just me or is Special:Contribs slower?
[15:57:15] <^d> *than usual?
[15:57:17] <^d> *getting slower?
[16:31:04] bd808: I got klined yesterday too :(
[16:31:31] Somebody put in a big ipv6 ban apparently
[16:31:44] I emailed kline@ and got some info
[16:32:06] they figured out it was overly broad pretty quick
[17:26:14] 7Blocked-on-MediaWiki-Core, 10Continuous-Integration, 6Scrum-of-Scrums: MediaWiki installs in Jenkins frequently fail to access their sqlite database due to locks - https://phabricator.wikimedia.org/T89180#1113698 (10Krinkle) 5Resolved>3Open https://integration.wikimedia.org/ci/job/mwext-VisualEditor-qun...
[17:27:29] 7Blocked-on-MediaWiki-Core, 10Continuous-Integration, 6Scrum-of-Scrums: MediaWiki installs in Jenkins frequently fail to access their sqlite database due to locks - https://phabricator.wikimedia.org/T89180#1113705 (10Krinkle) a:5Krinkle>3None
[17:46:01] 6MediaWiki-Core-Team, 10Parsoid, 6Services: Replace Tidy in MW parser with HTML 5 parse/reserialize - https://phabricator.wikimedia.org/T89331#1113780 (10Arlolra) @tstarling https://github.com/google/gumbo-parser
[17:52:18] 6MediaWiki-Core-Team, 10Datasets-General-or-Unknown, 6Services, 10Wikimedia-Hackathon-2015, 7Epic: Improve Wikimedia dumping infrastructure - https://phabricator.wikimedia.org/T88728#1113812 (10aude) apparently it took a month for the full revisions dump of wikidata to finish: http://dumps.wikimedia.org...
[18:05:51] anomie: I'm good (+2) with the ApiResult patch except for the bc/2015 part
[18:06:39] legoktm: \o/ except for the bc/2015 part
[18:08:12] we should probably get more people's opinions on that part
[18:08:43] FWIW, I'm not really tied to bc/2015, I just don't like 1/2.
[18:12:27] hmm, what other alternative versioning schemes are there?
:P
[18:25:12] legoktm: We could go with adjective animals ;)
[18:25:49] 6MediaWiki-Core-Team, 10Continuous-Integration, 6operations: add a check for whitespace before leading <?php - ...
6MediaWiki-Core-Team, 10Continuous-Integration, 6operations: add a check for whitespace before leading <?php - ...
6MediaWiki-Core-Team, 10Continuous-Integration, 6operations: add a check for whitespace before leading <?php - ... >! In T92531#1113969, @Krinkle wrote: > See {T46875}. > > This and various other errors are already caught by phpcs. Please ensure proj...
[18:32:11] bd808, csteipp: Meeting?
[18:32:25] anomie: trying to get out of one so I can join
[18:32:49] 6MediaWiki-Core-Team, 10Continuous-Integration, 6operations: add a check for whitespace before leading <?php - ... >! In T92531#1113969, @Krinkle wrote: > See {T46875}. > > This and various other errors are already caught by phpcs. Please ensure project(...
[18:32:53] one sec...
[18:33:38] 6MediaWiki-Core-Team, 10Continuous-Integration, 10Incident-20150312-whitespace, 6operations: add a check for whitespace before leading <?php - ...
6MediaWiki-Core-Team, 10Continuous-Integration, 10Incident-20150312-whitespace, 6operations: add a check for whitespace before leading <?php - ... 3High
[18:37:38] 6MediaWiki-Core-Team, 10Continuous-Integration, 10Incident-20150312-whitespace, 6operations: add a check for whitespace before leading <?php - ... >! In T92531#1114010, @ori wrote: >>>! In T92531#1113969, @Krinkle wrote: >> See {T46875}. >> >> This...
[18:53:27] Keegan|Away: script is running again, at the B's
[19:12:31] * robla starts poking through the full list of mwcore epics now: https://phabricator.wikimedia.org/project/sprint/board/37/query/9RvnwYbkd9jD/
[19:13:19] aspiecat: you've got https://phabricator.wikimedia.org/T91820 tagged as both "epic" and "patch for review", but no pointer to a patch
[19:15:38] yes, they inherit from the epic and have to be untagged every time (on creation or afterwards)
[19:15:53] ah, ok.
that's an annoying artifact
[19:15:55] 6MediaWiki-Core-Team: Create HTTP verb and sticky cookie DC routing in VCL - https://phabricator.wikimedia.org/T91820#1114245 (10aaron)
[19:16:07] inheriting the core bit is useful, just not the others
[19:16:27] sounds like a feature request :)
[19:18:05] 6MediaWiki-Core-Team: Consider moving various DB writes on page views to using local jobs - https://phabricator.wikimedia.org/T91837#1114276 (10RobLa-WMF)
[19:23:49] 6MediaWiki-Core-Team, 5Patch-For-Review: Fix various DB master warnings from dbperformance.log - https://phabricator.wikimedia.org/T92357#1114298 (10RobLa-WMF)
[19:23:49] 6MediaWiki-Core-Team, 5Patch-For-Review: Add code to enable setting sticky DC cookies for POST requests - https://phabricator.wikimedia.org/T91816#1114300 (10RobLa-WMF)
[19:23:50] 6MediaWiki-Core-Team, 10MediaWiki-JobQueue, 5Patch-For-Review: Use local jobqueue class for jobs enqueued on pages views - https://phabricator.wikimedia.org/T91819#1114299 (10RobLa-WMF)
[19:29:52] 6MediaWiki-Core-Team, 10MediaWiki-General-or-Unknown: Clean up 'infinity' handling - https://phabricator.wikimedia.org/T92550#1114329 (10Anomie) 3NEW
[19:30:35] 6MediaWiki-Core-Team, 10MediaWiki-General-or-Unknown, 5Patch-For-Review: Clean up 'infinity' handling - https://phabricator.wikimedia.org/T92550#1114340 (10Anomie) a:3Anomie
[19:38:20] legoktm: Yay
[19:39:14] https://en.wikipedia.org/w/index.php?title=Special:UsersWhoWillBeRenamed&dir=prev&limit=25 is the best way to check progress
[19:41:55] Yeah that's what I was doing last time. Checking every hour or so
[19:42:11] and checking random wikis as well
[19:49:58] 6MediaWiki-Core-Team, 10Continuous-Integration, 10Incident-20150312-whitespace, 6operations: add a check for whitespace before leading <?php - ... https://www.mediawiki.org/wiki/Team_Practices_Group/Health_check_survey
[20:47:45] I just noticed that over half of the things in the "Needs Review" column on our workboard are from me...
[20:49:06] I'm not sure if that means no one reviews my stuff, or just the rest of you suck at putting your stuff in the column...
[20:49:41] you also tend to have larger patches
[20:50:10] a generous way to say that last part would be that some others (like lego or bryan) tend to have a few patches for review in non-MWCore projects (like releng/ci, for instance)
[21:45:03] 6MediaWiki-Core-Team, 10CirrusSearch: inefficient work of CirrusSearch in Russian Wikipedia - https://phabricator.wikimedia.org/T88724#1114967 (10Jdouglas) Some updates (and links) for the above searches: 1) searching for the following article - https://ru.wikipedia.org/wiki/Гагарин,_Юрий_Алексеевич: - [[ ht...
[21:49:07] legoktm: https://gerrit.wikimedia.org/r/#/c/195837/
[21:54:02] +2'd
[21:55:33] legoktm: https://gerrit.wikimedia.org/r/#/c/196130/ related
[22:20:00] legoktm: the script is moving quicker than last time, no?
[22:20:05] I like it, either way
[22:20:47] It might be, I have no idea
[23:48:41] legoktm: https://gerrit.wikimedia.org/r/#/c/196486/