[00:00:12] Wikimedia-General-or-Unknown, Core-Features, MediaWiki-Core-Team, Scrum-of-Scrums: Add ContentHandler columns to Wikimedia wikis, and set $wgContentHandlerUseDB = true - https://phabricator.wikimedia.org/T51193#799915 (Spage)
[00:03:20] Wikimedia-Logstash, MediaWiki-Core-Team: Get HHVM logs into logstash - https://phabricator.wikimedia.org/T76119#799936 (bd808) >>! In T76119#799627, @hashar wrote: > I have made two tasks as children of this: > > {T75262} > {T71976} > > I guess both would be resolved once logs are sent to logstash instead of b...
[00:31:01] Krinkle: does the updated change look alright? I was hoping to use it in another patch
[00:32:02] ori: Checking in a minute.
[00:32:55] ori: Meanwhile, I'm fiddling with ContentHandler and looking at how EventLogging ensures valid schemas before saving. It seems the infrastructure in core is broken, hence EL uses a hook even though Content has isValid() and Status Content::prepareSave().
[00:33:16] (Making isValid return false without that hook of yours in EL causes an "Internal error")
[00:33:34] Krinkle: sounds vaguely familiar. It has been a while...
[00:36:01] ori: https://gerrit.wikimedia.org/r/176845
[00:36:39] TimStarling: is tests/phpunit/includes/parser/TidyTest.php expected to segfault with HHVM + tidy extension + your patch for MWTidy?
[00:36:58] Krinkle: will look in a moment
[00:37:29] no
[00:37:56] TimStarling: https://dpaste.de/R3wP/raw
[00:38:44] that's not a segfault, that is an assertion failure due to the lack of https://github.com/facebook/hhvm/pull/4330
[00:39:16] crash, sorry. and oh, I see
[00:39:57] so we need to repackage HHVM before we can load the extension
[00:40:23] because I'm running the latest package, so that changeset must be missing from it
[00:47:12] ^demon|punkinpie: appending the first result is a core feature
[00:55:57] <^demon|punkinpie> manybubbles: yeah. it feels wrong when offsetting though.
[00:56:15] ^demon|punkinpie: yeah - you'd have to turn it off I guess
[01:07:55] Krinkle: re: -- did you see hoo's comment re: checking for an erroneous implementation, and if so, do you plan to amend the patch? If not, I think it's fine, so I can +2 it.
[01:09:28] ori: I don't plan to amend it. I thought about adding it in the end, but that seems superfluous. Either Content calls isValid, or WikiPage calls it. Right now the contract is that Content must call it.
[01:09:38] * ori agrees.
[01:10:04] If we're paranoid, we can invert it, but that would make prepareSave an empty method, as that's all it's doing right now.
[01:10:05] Cool :)
[01:55:30] MediaWiki-Core-Team: Isolate throughput issue on HHVM API action=parse - https://phabricator.wikimedia.org/T758#800201 (ori)
[01:56:57] MediaWiki-Core-Team: Isolate throughput issue on HHVM API action=parse - https://phabricator.wikimedia.org/T758#12610 (ori) >>! In T758#795816, @tstarling wrote: > tidy.so still needs to be added to /etc/hhvm/fcgi.ini . Please deploy it with care -- test one server first, and watch the logs. @Joe: HHVM needs...
[02:02:49] TimStarling: once HHVM is repackaged to include that backported EZC patch, would the hhvm-tidy package need to be rebuilt? The version of HHVM it was built for is identical except for that one patch.
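
(For reference: a minimal, self-contained sketch of the isValid()/prepareSave() contract discussed above at 00:32-01:10, where Content, not WikiPage, performs the validity check and callers get a Status back rather than an exception. The names mirror MediaWiki's Content interface, but the classes here are simplified stand-ins, not core code.)

    <?php
    // Stand-in for MediaWiki's Status value object.
    class Status {
        public $ok;
        public $message;
        private function __construct( $ok, $message = null ) {
            $this->ok = $ok;
            $this->message = $message;
        }
        public static function newGood() { return new self( true ); }
        public static function newFatal( $message ) { return new self( false, $message ); }
    }

    abstract class Content {
        // Subclasses decide what "valid" means for their data model.
        abstract public function isValid();

        // The current contract: Content performs the check itself. Inverting
        // it (WikiPage calling isValid() directly) would leave this method
        // empty, which is the point made at 01:10.
        public function prepareSave() {
            return $this->isValid()
                ? Status::newGood()
                : Status::newFatal( 'invalid-content-data' );
        }
    }

    // Example content model: text that must parse as JSON.
    class DemoJsonContent extends Content {
        private $text;
        public function __construct( $text ) { $this->text = $text; }
        public function isValid() {
            json_decode( $this->text );
            return json_last_error() === JSON_ERROR_NONE;
        }
    }

    $status = ( new DemoJsonContent( '{bad json' ) )->prepareSave();
    var_dump( $status->ok, $status->message ); // bool(false), "invalid-content-data"
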
[02:42:49] ori: it shouldn't be necessary
[04:30:41] Release-Engineering, MediaWiki-Core-Team: Make ::mediawiki::syslog and ::mediawiki::php logging destinations configurable via hiera - https://phabricator.wikimedia.org/T1295#800356 (bd808) p: Triage>High
[05:31:04] MediaWiki-Core-Team: Isolate throughput issue on HHVM API action=parse - https://phabricator.wikimedia.org/T758#800391 (ori) Order of operations for deployment: 1. Rebuild HHVM package with [[ https://gerrit.wikimedia.org/r/#/c/176864/ | EZC fix ]]. 2. Deploy HHVM package. 3. Merge and deploy change [[ https...
[05:45:31] Tim: dunno if you saw hashar's e-mail to the ops list re: HHVM on jenkins. I can take that one.
[06:01:02] ok
[06:52:34] <_joe_> Tim-away, ori I'm going to reimage mw1114 and mw1189 soon, so if you have files you need there, back them up
[08:56:54] Phabricator, Research-and-Data, MediaWiki-Core-Team, Engineering-Community, WMF-Design, Wikidata, Mobile-Apps, Multimedia, Services, Zero, Language-Engineering, Release-Engineering, Editing, Core-Features, Parsoid-Team, Mobile-Web, Scrum-of-Scrums: Create team projects for all teams participating in scrum of scru...
[13:45:02] MediaWiki-Core-Team: Investigate memcached-serious error spam that mostly affects some servers - https://phabricator.wikimedia.org/T75949#801334 (Aklapper) @aaron: Setting the priority on tasks is welcome (as you can probably judge best yourself how worrisome this is)
[14:29:06] bd808|BUFFER: When did we change the channel for labs !log from -labs to -qa?
[15:05:46] CirrusSearch, MediaWiki-Core-Team: Prefix search containing only a namespace finds weird results - https://phabricator.wikimedia.org/T76350#798425 (Manybubbles) https://gerrit.wikimedia.org/r/#/c/176928/
[15:50:12] anomie: There is a special SAL for beta that you can write to from -qa. It was set up quite a while ago but possibly not advertised well.
[15:50:42] The !log from -qa goes to https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[15:52:49] <^d> Can also `!log deployment-prep foo` from #-labs, right?
[15:53:03] hmm, two different logs. Fun.
[15:53:44] Yes, but it goes to a different log :( I wanted Jeremy to set up the log from -qa to go to the same place, but apparently I lost that battle.
[15:55:13] <^d> anomie: I was looking at your comment on https://gerrit.wikimedia.org/r/#/c/176016/6/includes/PrefixSearch.php again this morning. Passing that offset around to special pages is awful and creates the problem you mention.
[15:55:32] <^d> Plus, considering most special pages have a bounded number of subpages, offsets seem less useful there.
[15:55:45] The information campaign for this sort of died out, but the original idea was to get people to use -qa for beta like we use -ops for prod. A single place to report and check for status updates on problems.
[15:56:02] <^d> I have no clue what any channel is for anymore.
[15:56:34] All channels are for phab spam and random kibitzing
[15:56:49] ^d: OTOH, passing $limit = 9990 to only use the last 10 isn't so great for something like "Special:WhatLinksHere/" where it could theoretically autocomplete any page on the wiki.
[15:57:10] <^d> Gah, yes.
[16:01:58] Scrum-of-Scrums, MediaWiki-Core-Team: API help i18n for WMF-deployed extensions - https://phabricator.wikimedia.org/T956#801560 (Anomie) Open>Resolved
[16:02:18] bd808: need to replace SAL with something nicer. nobody looks at SAL. need an 'event stream' thing
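
(A hypothetical sketch of the API question debated above and just below: make $limit and $offset required parameters of prefixSearchSubpages() so no implementation can ignore them, and apply the offset inside the implementation instead of the "$limit = 9990 to only use the last 10" workaround criticized above. DemoSpecialPage is a stand-in, not MediaWiki's actual SpecialPage.)

    <?php
    class DemoSpecialPage {
        private $subpages = [ 'alpha', 'beta', 'gamma', 'delta' ];

        // $limit and $offset are required, so callers page natively rather
        // than fetching $offset + $limit rows and keeping only the tail.
        public function prefixSearchSubpages( $search, $limit, $offset ) {
            $matches = array_values( array_filter(
                $this->subpages,
                function ( $name ) use ( $search ) {
                    return strncmp( $name, $search, strlen( $search ) ) === 0;
                }
            ) );
            return array_slice( $matches, $offset, $limit );
        }
    }

    $page = new DemoSpecialPage();
    print_r( $page->prefixSearchSubpages( '', 2, 2 ) ); // [ 'gamma', 'delta' ]
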
[16:02:32] well, not nobody :)
[16:02:52] YuviPanda: *nod* I think there are bugs for that actually. At least getting the wiki out of it.
[16:03:00] yeah...
[16:03:04] wiki is terrible for this, I think
[16:03:52] It's been discussed numerous times :P
[16:04:55] I think I would have been able to find the bug in bz... I can't find anything in phab yet. It's like my first month all over again.
[16:05:04] <^d> anomie: No extensions implement prefixSearchSubpages() yet. We could change the method signature to *require* $limit and $offset since we'd only need to change core.
[16:05:14] <^d> Then we wouldn't have to worry about something ignoring its existence.
[16:06:00] bd808: https://old-bugzilla.wikimedia.org/
[16:06:01] xD
[16:06:30] bd808: Phab doesn't seem to have a convenient link to https://phabricator.wikimedia.org/maniphest/query/advanced/ on most pages. The search at the top is something different.
[16:06:52] yeah. the top search is more of a project browser
[16:07:04] ^d: WFM
[16:07:57] There are several UI decisions in phab that I'm sure Evan is passionate about but seem less than optimal to me. Lots of things that are 3 clicks into the UI instead of 0-1.
[16:08:25] Like the list of unread notifications. That one bugs the crap out of me.
[16:08:49] * bd808 needs to add things to his phab greasemonkey script
[16:26:53] MediaWiki-Core-Team: [CirrusSearch] Split prefix searches from full text searches in pool counter - https://phabricator.wikimedia.org/T76490#801739 (Manybubbles)
[16:27:26] MediaWiki-Core-Team: [CirrusSearch] Split prefix searches from full text searches in pool counter - https://phabricator.wikimedia.org/T76490#801739 (Manybubbles) https://gerrit.wikimedia.org/r/#/c/176941/ https://gerrit.wikimedia.org/r/#/c/176932/ https://gerrit.wikimedia.org/r/#/c/176931/
[16:55:39] bd808: does mw-vagrant just clone prod's apt? I can't install poolcounter in vagrant but it's available in prod.
[16:58:49] ah, maybe that is because it's not built for trusty
[16:59:15] it'll probably force install
[16:59:23] manybubbles: prod apt yes, but like you noticed it needs to have trusty packages
[16:59:25] manybubbles: I guess it'd be a good idea to get ops to rebuild/repackage for trusty though
[16:59:51] yeah. if we have anything that is not built for trusty at this point, that's a bug
[17:00:14] shouldn't take someone long to get it done
[17:00:14] Trusty is the default for reimaging servers now
[17:00:15] * Reedy files an RT ticket
[17:00:24] I'm not sure if it's not built for trusty yet. I'm not sure how to check that other than apt-cache search on the vagrant machine
[17:00:30] and it isn't there
[17:01:00] manybubbles: I think that's your answer :)
[17:01:25] k
[17:02:14] https://rt.wikimedia.org/Ticket/Display.html?id=8953
[17:05:34] thanks
[17:06:38] like I say, you can probably force install the old package
[17:06:43] force/manually
[17:10:08] CirrusSearch, MediaWiki-Core-Team: Add per user concurrent search request limiting - https://phabricator.wikimedia.org/T76497#801864 (Manybubbles)
[17:17:53] bd808: https://gerrit.wikimedia.org/r/176953
[17:21:18] legoktm: thanks. You're faster than I am this morning. :)
[17:21:36] legoktm: Do you think we should add those same flags to the mw-core composer.json too? It seems reasonable to me.
[17:22:05] We should probably start a page on wiki with a prototype composer.json to follow too.
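
(For reference, the flag those patches add is a documented Composer option set in composer.json; the snippet below is a minimal illustration, not the actual mediawiki/vendor or core file.)

    {
        "config": {
            "optimize-autoloader": true
        }
    }

(With the flag on, `composer dump-autoload` builds a classmap up front, so autoloading skips repeated filesystem lookups at runtime.)
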
[17:22:19] * bd808 will make a task for that
[17:22:38] MediaWiki-Core-Team: Isolate throughput issue on HHVM API action=parse - https://phabricator.wikimedia.org/T758#801940 (Joe)
[17:24:03] bd808: I was already working on that :P https://gerrit.wikimedia.org/r/176956
[17:24:22] perfect
[17:26:13] MediaWiki-Vendor, MediaWiki-Core-Team: Add optimize-autoloader:true to composer.json - https://phabricator.wikimedia.org/T76495#801846 (bd808)
[17:44:12] Reedy: Do you have time to deploy and test that scap fix today if I +2 it?
[17:47:20] err, probably
[17:47:20] * Reedy looks at the deployment calendar
[17:48:40] bd808: Looks like I should be good to do it now-ish
[17:48:59] Reedy: okey doke. +2 incoming
[17:49:53] Reedy: merged
[17:50:06] * Reedy looks for trebuchet instructions
[17:50:43] https://wikitech.wikimedia.org/wiki/Trebuchet#Deploying
[17:50:48] ja
[17:51:07] /srv/deployment/scap ?
[17:51:11] git deploy start; git fetch; git rebase origin/master; git deploy sync; profit!
[17:51:13] yeah
[17:51:23] scap/scap
[17:51:25] or is it in scap/scap?
[17:51:26] heh
[17:51:43] there is sooo much scap in that path
[17:52:08] It's a shame there's not a /srv/deployment/scap/scap/scap/scap.pyc?
[17:52:40] there was at one point. :/
[17:52:56] that became main.py
[17:58:50] bd808: nooope
[17:58:54] lots of major fail
[17:59:05] boo
[17:59:15] logs?
[17:59:18] 17:58:47 sync-common failed: [Errno 2] No such file or directory
[17:59:26] 17:58:47 ['/srv/deployment/scap/scap/bin/sync-common', '--no-update-l10n', '--include', 'wmf-config', '--include', 'wmf-config/***', 'mw1010.eqiad.wmnet', 'mw1070.eqiad.wmnet', 'mw1161.eqiad.wmnet', 'mw1201.eqiad.wmnet'] on mw1122 returned [70]: 17:58:47 Unhandled error:
[17:59:26] Traceback (most recent call last):
[17:59:26]   File "/srv/deployment/scap/scap/scap/cli.py", line 283, in run
[17:59:26]     app._setup_environ()
[17:59:26]   File "/srv/deployment/scap/scap/scap/cli.py", line 178, in _setup_environ
[17:59:28]     sock.connect(auth_sock)
[17:59:30]   File "/usr/lib/python2.7/socket.py", line 224, in meth
[17:59:33]     return getattr(self._sock,name)(*args)
[17:59:35] error: [Errno 2] No such file or directory
[18:00:16] hmmm... I'll revert and look after my 1on1 with rob
[18:02:25] Reedy: ready for revert
[18:03:18] yeah
[18:06:40] bd808: deployed and tested
[18:06:51] Just @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ errors from the ongoing reinstalls
[18:11:56] <_joe_> bd808: we are reimaging servers, so some of them are probably down ATM
[18:22:15] ori, bd808: is hhvmsh a special mw-v thing? I can't find any mention of it outside of mw-v docs
[18:22:37] yeah it's just a small shell script
[18:22:59] it wraps the hhvm debugger
[18:23:27] ah, found it. thanks!
[18:32:00] Reedy: I would sort of expect that kind of ssh host key spam when boxes are being reimaged. The keys are collected with some dark puppet magic and put on tin. I think the copies on tin are only updated once or twice a day.
[18:37:35] <_joe_> ori: ping?
[18:39:47] legoktm, AaronSchulz, ^d: I'd like to have a "what are we working on" meeting for librarization tomorrow or Thursday. Any day/time better or worse for you folks?
[18:40:22] <^d> tomorrow is better than thursday
[18:40:59] ^d: I'm having a hard time finding a room :(
[18:41:05] <_joe_> well, nevermind, I'm going off now
[18:41:45] bd808: either works for me
[18:43:49] <^d> bd808: i can do thursday too.
[18:44:10] I found a time that takes half of my lunch and half of yours and has a room!
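
(Spelled out, the Trebuchet flow quoted above at 17:51, using the /srv/deployment/scap/scap path from the same exchange; the comments are explanatory glosses, not part of the documented procedure.)

    cd /srv/deployment/scap/scap
    git deploy start            # open a deployment
    git fetch                   # pull down the new commits
    git rebase origin/master    # move the checkout to the new code
    git deploy sync             # push the update out to the targets
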
[18:44:26] invite should be in the emails
[18:44:34] <^d> mmm, food.
[18:53:27] _joe_: hey, sorry
[18:53:34] late start today. what's up?
[18:55:57] MediaWiki-Core-Team: Isolate throughput issue on HHVM API action=parse - https://phabricator.wikimedia.org/T758#802142 (ori)
[18:59:21] bd808: Did you see https://phabricator.wikimedia.org/T76061#799528 ? That seems to suggest to me that scap and l10nupdate do fight...
[18:59:49] "LocalisationUpdate is overriding newer, manually deployed extension messages" ?
[18:59:51] except l10nupdate isn't actually syncing anything
[19:00:16] that doesn't fit with the sync failure bug, does it?
[19:00:33] l10n is confusing in so many ways
[19:00:35] No
[19:01:05] And what are "manually deployed extension messages"?
[19:01:22] Not sure. Just asked for clarification
[19:01:28] I wonder if they knew they had to run scap...
[19:01:34] I'd presume it means they've updated the branches and synced
[19:01:40] Yeah, asked if they actually scapped
[19:02:23] <^d> Aww, https://www.mediawiki.org/wiki/Topic:S760wac72eru946i
[19:18:55] <_joe_> ori: I wanted to ask you to test your puppet changes in beta, where hhvm is already at the newest version
[19:19:14] <_joe_> also, me and Yuvi think we can be done by tomorrow with the appserver pool
[19:19:43] <_joe_> or well, on thursday
[19:20:40] slackers :)
[19:23:15] <_joe_> ahah
[19:31:23] Librarization, MediaWiki-Core-Team: Figure out how to use cdb library with multiversion in operations/mediawiki-config repo - https://phabricator.wikimedia.org/T1338#23529 (bd808)
[19:48:00] _joe_: wow, holy crap! that's cool
[19:48:18] _joe_: yeah, i'll amend the patches to target beta
[19:48:22] <_joe_> yeah Yuvi's been amazing.
[19:48:41] <_joe_> do they need amending?
[19:49:33] i guess not, i can just cherry-pick them on beta
[19:49:36] wfm
[19:49:41] i'll do that in an hour or so
[19:50:45] MediaWiki-Vendor, MediaWiki-Core-Team: Add optimize-autoloader:true to composer.json - https://phabricator.wikimedia.org/T76495#802233 (Umherirrender) Open>Resolved
[19:51:08] <_joe_> great!
[19:51:50] anomie: they should be fine, yeah.
[20:01:37] AaronSchulz: https://gerrit.wikimedia.org/r/177004 small, need it for a change i'm making to your patch
[20:02:59] so at some point I guess MW didn't do email address validation?
[20:03:10] there are 2 users on aawiki with the email "querty"
[20:03:17] qwerty*
[20:03:53] heh
[20:06:53] * legoktm filed a bug for it
[20:08:11] heh. redis buffering for log events works too well. :) I broke logstash in beta last night and now it is trying to catch up on 94k log events in the redis queue.
[20:24:03] AaronSchulz: amended that one to fix the jenkins failure
[20:59:29] anomie: Do you have an example script to test https://gerrit.wikimedia.org/r/#/c/176667 already?
[21:00:21] csteipp: I tested that it outputs the header correctly, but I haven't tested with an actual script. I'll do that.
[21:01:34] share when you write it :) I think that change is right, but I wanted to test it.
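
(On the invalid e-mail addresses discussed above: a self-contained illustration of the kind of check that rejects a value like "qwerty". MediaWiki's real validation lives in Sanitizer::validateEmail(); the regex here is a deliberately loose stand-in, not the core implementation.)

    <?php
    // Loose shape check: local part, "@", domain with a dot. Real-world
    // validators (including MediaWiki's) differ in the details.
    function looksLikeEmail( $addr ) {
        return (bool)preg_match( '/^\S+@\S+\.\S+$/', $addr );
    }

    foreach ( [ 'qwerty', 'user@example.org' ] as $addr ) {
        printf( "%-16s => %s\n", $addr, looksLikeEmail( $addr ) ? 'ok' : 'invalid' );
    }
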
[21:14:43] csteipp: https://phabricator.wikimedia.org/P119
[21:29:56] MediaWiki-Vagrant, MediaWiki-Core-Team: Upload presentation slides to Commons and link on wiki - https://phabricator.wikimedia.org/T76493#802458 (bd808)
[21:30:35] MediaWiki-Vagrant, MediaWiki-Core-Team: Upload presentation slides to Commons and link on wiki - https://phabricator.wikimedia.org/T76493#802461 (bd808)
[21:34:03] <^d> boo wikitext :(
[21:34:27] <^d> '''/([mailto:chad@wikimedia.org chad][mailto:chadh@wikimedia.org h?]{{!}}[mailto:chorohoe@wikimedia.org chorohoe])@wikimedia.org/'''
[21:34:32] <^d> MW didn't like that ^
[21:51:44] ^d, my eyes don't like it either :P
[21:52:14] <^d> Luckily we don't parse wikitext with your eyes :)
[21:53:56] ^d: how did it not like it? wfm
[21:54:37] <^d> It was in a table.
[21:54:40] <^d> That could've been part of it.
[21:57:54] <^d> MatmaRex:
[21:57:57] <^d> |-
[21:57:57] <^d> | [[User:^demon|Chad]] || xxxxx || xxxxx || xxxxx || || - || ^d || xxxxx || '''/([mailto:chad@wikimedia.org chad][mailto:chadh@wikimedia.org h?]{{!}}[mailto:chorohoe@wikimedia.org chorohoe])@wikimedia.org/'''
[21:58:47] huh
[21:58:50] yeeeeah
[21:59:11] ^d: use | instead of {{!}}
[21:59:20] :D
[22:00:41] <^d> A-ha!
[22:00:43] <^d> That worked.
[22:09:50] it's always nice to have a language that even its creators can't really comprehend
[22:33:00] https://github.com/composer/composer/commit/ac676f47f7bbc619678a29deae097b6b0710b799
[22:33:52] <^d> "it's explained in the bug" is the worst excuse ever.
[22:34:13] lol.
[22:35:01] <^d> also, my browser wants to spin a long long time loading those 200+ comments.
[22:35:22] yeah
[22:35:27] I think it's well deserved, given the commit
[22:35:36] made it to r/lolphp as well: https://www.reddit.com/r/lolphp/comments/2o1w4a/php_garbage_collector_at_its_finest/ :P
[22:36:47] <^d> We have a quip from Rasmus
[22:37:10] to be fair, I don't believe there is a garbage collector which can track an unlimited number of objects. You have to have limits
[22:37:29] <^d> "If you have more than 1000 objects loaded on a single request, you are doing something wrong as far as I am concerned."
[22:37:30] <^d> :)
[22:37:46] How many does MW load?
[22:38:17] heh, from Hacker News: "Looks like someone disabled garbage collection on that comment thread as well :)"
[22:38:27] <^d> Reedy: probably over 9000.
[22:54:07] Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Remove invalid emails from the database - https://phabricator.wikimedia.org/T76512#802688 (Legoktm)
[22:54:19] Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Remove invalid emails from the database - https://phabricator.wikimedia.org/T76512#802252 (Legoktm)
[22:57:20] presumably that's why gc_disable() exists; it doesn't work for every use case
[23:07:05] <^d> manybubbles: Are you wanting https://gerrit.wikimedia.org/r/#/c/176931/ to go out in today's swat?
[23:07:14] TimStarling: exactly, when you have a ton of objects which don't need to be GCed, disabling it is the right thing
[23:07:51] ^d: we can push it I guess - it should go before we sync the cirrus change that splits them. I imagined I'd do it all together Wed or Thurs
[23:08:02] they should just rewrite Composer to be a wrapper around npm :P
[23:08:29] <^d> ugh i just threw up a little when you said that.
[23:09:05] AaronSchulz: ApiStashEdit updated. No longer depends on the mw.Api patch (which Nemo_bis reverted, in case you haven't noticed). Please +1 to indicate acceptance of my changes.
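
(A small runnable sketch of the gc_disable() point made above: when a request builds a large object graph that lives until shutdown anyway, turning the cycle collector off avoids pointless collection runs, and one explicit pass can be made at a chosen point. All functions used here are standard PHP.)

    <?php
    gc_disable(); // stop the cycle collector from running automatically

    $graph = [];
    for ( $i = 0; $i < 50000; $i++ ) {
        $node = new stdClass();
        $node->payload = str_repeat( 'x', 32 );
        $node->self = $node; // a reference cycle the collector would trace
        $graph[] = $node;
    }

    // ... do the real work with $graph ...

    gc_enable();
    gc_collect_cycles(); // one explicit pass, at a point we choose
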
[23:09:19] AaronSchulz: (I only changed the JS file)
[23:09:50] <^d> manybubbles: when do we get a new branch? tomorrow?
[23:10:20] ^d: oh, I guess so. Ok then.
[23:10:54] ^d: can you babysit it for the evening SWAT? it's about time for me to stop
[23:11:11] <^d> yeah i was gonna.
[23:14:07] bd808: Sorry if I've asked you this before, but do you have any idea why manual Puppet runs on labs hosts fail about 20-30% of the time because a class is inexplicably missing?
[23:14:48] I think the puppet server gets swamped somehow
[23:15:09] odd
[23:15:20] They almost always work the second time. I haven't really ever followed the problem down the rabbit hole
[23:18:24] yeah. there are more important things to worry about. i just wanted to make sure i wasn't insane.
[23:19:09] You may or may not be, but others have seen this behavior in the outside world too. :)
[23:25:08] heh
[23:41:12] MediaWiki-Core-Team: Central Code Repository - https://phabricator.wikimedia.org/T1238#802754 (Legoktm)
[23:42:55] ori: every time I clean a core out of deployment-mediawiki01 to give it free disk space in /var, hhvm dumps another one. :(
[23:44:04] bd808: hm, that's no good. i'll take a look.
[23:44:19] I'm opening one in gdb now...
[23:44:29] segfault in tidy
[23:45:28] ori: https://phabricator.wikimedia.org/P120
[23:45:57] charming
[23:46:15] i'll revert
[23:46:30] no symbols in tidy.so?
[23:47:23] i used faidon's debianization of the other HHVM extensions as a template. it strips symbols, yeah.
[23:47:52] i can rebuild it with debug symbols and generate another crash
[23:47:58] It's happening a lot on that beta host
[23:48:33] it'll stop as soon as beta pulls the latest master of mediawiki-config
[23:51:22] Jenkins has turned Spanish or Portuguese again
[23:51:40] that is the weirdest bug
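
(A typical session for opening one of the HHVM core files discussed above; the core filename is illustrative, while bt and info sharedlibrary are standard gdb commands. Without debug symbols in tidy.so, frames inside the extension show up as "??", hence the suggestion to rebuild it with symbols.)

    gdb /usr/bin/hhvm /var/tmp/core.hhvm.12345
    (gdb) bt                        # backtrace of the crashing thread
    (gdb) info sharedlibrary tidy   # was tidy.so loaded, and with symbols?
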