[00:46:03] bd808: yes, it should throw out -toal
[00:46:08] bd808: want to patch that?
[00:46:15] yeah I can make a patch
[01:25:36] bd808: btw I wrote https://www.mediawiki.org/wiki/Manual:Composer.json_best_practices
[01:29:05] legoktm: Awesome
[01:32:29] I wonder if we should add some stuff to https://www.mediawiki.org/wiki/Composer too
[01:33:08] About core using Composer now and how I broke all of Jeroen's hard work :(
[01:33:09] I think we should move most of that stuff to /For extensions and keep the main landing page general and a pseudo-disambig page
[01:34:04] *nod*
[01:34:16] Also, I need to write up my extension management proposal...
[02:41:18] <^d> Go Cirrus: https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical)#Search_function_much_faster :)
[04:27:01] ^d: nice! :)
[07:57:35] <_joe_> TimStarling: around?
[08:31:30] 3MediaWiki-Core-Team, SUL-Finalization, MediaWiki-extensions-CentralAuth: Re-enable $wgCentralAuthAutoMigrate = true - https://phabricator.wikimedia.org/T78727#851988 (10Nemo_bis) Do we know how many users were merged thanks to $wgCentralAuthAutoMigrate?
[09:13:25] 3MediaWiki-Core-Team, MediaWiki-Search: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#931954 (10Liuxinyu970226) @Nikerabbit: Keeps on translatewiki, u know.
[09:19:39] 3MediaWiki-Core-Team, MediaWiki-extensions-TemplateSandbox: Special:TemplateSandbox from Extension:TemplateSandbox needs edit token when raw HTML is allowed - https://phabricator.wikimedia.org/T76195#931984 (10Mglaser) Had trouble with committing to the right branch and decided that time from disclosure to publ...
[14:19:02] 3MediaWiki-Core-Team, MediaWiki-extensions-TemplateSandbox: Special:TemplateSandbox from Extension:TemplateSandbox needs edit token when raw HTML is allowed - https://phabricator.wikimedia.org/T76195#932481 (10MarkAHershberger) Kunal, Thanks to your patch, I was able to get REL1_22 patched: https://gerrit.wikim...
[14:31:59] <_joe_> anyone here?
[14:32:07] uga
[14:32:10] <_joe_> do we use xhprof extesively in production?
[14:32:34] <_joe_> because I kinda need to disable the hotprofiler right now
[14:33:19] _joe_: potential users are most probably still sleeping / barely awake
[14:34:03] i haven't followed the xhprof effort , but it might well be used to generate our gdash profiling graphs
[14:34:07] <_joe_> well, I need prod to be stabel, so whatever
[14:34:31] _joe_: yeah I guess we can afford loosing a few hours of profiling if it is to make prod stabler
[14:34:59] maybe some recent configuration changed ended up using xhprof more and causing whatever instability you may face
[14:43:09] apparently we xhprof_enable on every single requests (from wmf-config/StartProfiler.php ) discussion in the private channel
[14:59:07] <_joe_> sooo more people
[14:59:14] <_joe_> I'd love a +1 on https://gerrit.wikimedia.org/r/180791
[14:59:39] <_joe_> in short, we have issues that are proved to be caused by profiling
[14:59:56] <_joe_> so I need to temporarily disable it
[15:04:23] _joe_: +1 ed
[15:04:30] _joe_: can even +2 if you want
[15:04:41] I have added ori for info
[15:04:56] <_joe_> hashar: it seems rebooting "fixes" it
[15:05:01] <^d> That's just the new flame graphs we're disabling?
[15:05:06] <_joe_> if that is the case, I'll hold
[15:05:09] * ^d yawns, tries to finish grokking what's going on
[15:05:10] <_joe_> ^d: I guess so
[15:05:39] <_joe_> yeah it's just the flame graphs
[15:05:54] _joe_: and restarting hhvm does not fix anything?
[15:06:04] <^d> merged.
[15:06:07] <^d> sync'ing
[15:06:14] <_joe_> hashar: nope
[15:06:28] <_joe_> ^d: what did you merge?
[15:06:43] <^d> Disabling the stuff in StartProfiler
[15:06:45] <_joe_> oh thanks
[15:07:24] <^d> {{done}}
[15:08:16] {{donuts|user=^d}}
[15:08:19] good morning
[15:08:44] <^d> morning!
[15:31:17] any clue where anomie is today ?:]
[15:31:27] got a failing tests under hhvm which we want to skip https://gerrit.wikimedia.org/r/#/c/180208/
[15:32:40] <^d> I thought it was missing entirely, not just buggy.
[15:32:51] * ^d could be wrong
[15:34:00] hhvm does support it apparently
[15:34:18] Tim Landscheidt filled a bug upstream and they fixed it https://github.com/facebook/hhvm/issues/4283
[15:34:24] pending release to our package:]
[15:34:50] <^d> man, I can't believe we spend time on wddx.
[15:35:05] well I don't feel like removing wddx:]
[15:36:24] <^d> commented.
[15:37:33] ahghggg
[15:41:47] ^d: that is a good idea. Replied https://gerrit.wikimedia.org/r/#/c/180208/2/tests/phpunit/includes/api/format/ApiFormatWddxTest.php,unified
[15:42:47] <_joe_> ori: ping
[15:43:38] <^d> hashar: Could we wrap it in wfSuppressWarnings()/wfRestoreWarnings()?
[15:43:41] <^d> Or is it a fatal?
[15:44:50] <^d> _joe_: He's usually not up this early.
[15:45:03] I'll probably submit a patch today or tomorrow to replace ApiFormatWddxTest.php with something that actually checks stuff. And similar tests for all the other format modules too.
[15:45:04] ^d: no clue :(
[15:45:17] <_joe_> I've seen he sent an email to ops@ :)
[15:45:18] <^d> Try and find out :)
[15:45:42] anomie: well with the RFC to dish out wddx ( https://www.mediawiki.org/wiki/Requests_for_comment/Ditch_crappy_API_formats ) it is probably not a good use of time
[15:45:42] <^d> _joe_: Oh maybe then, nvm :)
[15:45:48] anomie: good morning :]
[15:46:17] hashar: Meh, I did it mainly to make sure I'm not breaking stuff when I'm overhauling ApiResult.
[15:46:17] I am just lamely attempting to get tests to pass under hhvm and make the job voting ( https://integration.wikimedia.org/ci/job/mediawiki-phpunit-hhvm/200/console )
[15:47:04] anomie: make sense. But maybe we want to drop the old format first, and refactor ApiResult after ?
[15:47:53] hashar: That would require me to wait on a few different things that I've planned for the API work.
[15:48:28] anomie: but at save you a bunch of time since you will have to deal with less formats?
[15:48:51] hashar: Except I already did most of the necessary work for it.
[15:49:52] fair point :]
[15:50:46] can we just skip the test so or should we attempt to detect whether hhvm behave properly as ^d proposed?
[15:52:05] Probably better to detect than to just blindly skip.
[15:55:55] Are all phabricator search links permanent, or do you have to save them to keep them from somehow expiring?
[15:56:20] <^d> Yes, but you can.
[15:56:47] <^d> But yeah, those hashes or keys or whatever are supposedly stable :)
[15:58:05] Someone noticed that the "Bugs & requests" link in the api.php auto-generated docs still points to BZ, and I wanted to make sure we wouldn't be replacing it with something that would stop working in a week or whatever.
[15:58:55] <^d> Ah gotcha.
[15:59:03] <^d> Well the docs on mw.org say they're stable
[15:59:25] at worth you could use a rewrite rule
[16:06:27] ^d: using wfRestoreWarnings() in a test seems to make other tests to fail. Magic!
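The "detect rather than blindly skip" approach settled on above can be sketched in testing-framework terms. This is a hedged illustration in Python/unittest, not the actual ApiFormatWddxTest code: the `wddx_supported` probe and its JSON round-trip stand-in are hypothetical stand-ins for the PHP-side check of whether the runtime's serializer behaves correctly.

```python
import unittest


def wddx_supported():
    """Probe whether the serializer actually works, instead of
    assuming.  Hypothetical stand-in: a real check would round-trip
    a value through the wddx extension and compare the result."""
    try:
        import json  # stand-in for the serializer under test
        return json.loads(json.dumps({"a": 1})) == {"a": 1}
    except Exception:
        return False


class FormatRoundTripTest(unittest.TestCase):
    # Skip only when the probe says the runtime is actually broken,
    # so the test starts passing again once the fix ships.
    @unittest.skipUnless(wddx_supported(), "serializer broken or missing")
    def test_round_trip(self):
        self.assertTrue(wddx_supported())
```

The payoff of probing is that the skip self-heals: when the fixed HHVM package lands, the test silently rejoins the suite with no follow-up patch.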
[16:06:35] https://integration.wikimedia.org/ci/job/mediawiki-phpunit-hhvm/201/testReport/
[16:06:44] my super patch https://gerrit.wikimedia.org/r/#/c/180208/3/tests/phpunit/includes/api/format/ApiFormatWddxTest.php
[16:08:00] <^d> Boooooooo
[16:09:18] <^d> That's probably a sign of bad tests anyway ;-)
[16:12:39] <_joe_> whenever someone that knows how to disable calls to xhprof in mediawiki is around, please ping me
[16:12:50] <_joe_> we kinda need that
[16:16:43] 3MediaWiki-Core-Team, CirrusSearch: Expose Cirrus' category intersections more prominently - https://phabricator.wikimedia.org/T84887#932824 (10Chad) 3NEW
[16:20:14] I can't even get a patch right :(
[16:21:22] <^d> _joe_: We need to disable all xhprof?
[16:21:22] <^d> I thought it was just the flame stuff.
[16:21:40] <_joe_> ^d: I hoped so, but apparently we need that
[16:21:59] <^d> sec.
[16:22:02] <^d> patch incoming
[16:22:18] <_joe_> when I disabled the hotprofiler I started to see [Thu Dec 18 15:37:16 2014] [hphp] [15775:7f718f3ff700:355:000173] [] \nWarning: Division by zero in /srv/mediawiki/php-1.25wmf12/includes/profiler/ProfilerXhprof.php on line 143
[16:27:46] <^d> That just sounds like a flat-out bug.
[16:27:50] <^d> AaronS, bd808?
[16:28:10] bah
[16:28:35] I saw that in scrollback. It should be an easy fix
[16:29:03] I don't know why disabling the hotprofiler would cause it
[16:29:08] but the fix is easy
[16:33:43] <_joe_> bd808: no disabling the hotprofiler in hhvm itself
[16:33:52] <_joe_> hhvm.stats.enable_hot_profiler = false
[16:34:35] <_joe_> what is xhprof used for btw?
[16:35:01] <^d> profiling mediawiki :)
[16:35:03] <_joe_> I hope not for collecting edit times or page save times or such
[16:35:21] <_joe_> ^d: well "profiling" can have like 10 different context
[16:35:40] <_joe_> so I was asking some context
[16:35:49] _joe_: It's used for function level profiling on a sampled basis
[16:35:54] <^d> That ^
[16:35:55] <_joe_> ok
[16:36:15] <_joe_> cool, we'd be without it while we find out what the heck was going on here
[16:36:21] <_joe_> I strongly suspect a kernel bug
[16:36:22] <^d> Yeah, we can live.
[16:36:25] <_joe_> mark too
[16:36:52] it uses a high precision kernel timer as I recall
[16:36:55] <_joe_> from what I get from its docs, getrusage shouldn't even use spilocks
[16:37:37] <_joe_> anyway, disabling the hot profiler works well on mw1191 (which was the ever-exploding-server)
[16:37:50] <_joe_> so I'm gonna disable it elsewhere as well
[16:38:18] Should we turn it off in StartProfiler too?
[16:38:27] <^d> We did already
[16:38:32] <_joe_> bd808: we did that
[16:38:33] ah good
[16:38:51] <_joe_> (context: my ugly hotfix is https://gerrit.wikimedia.org/r/#/c/180790)
[16:38:59] <^d> _joe_: The major change here is in using xhprof. Previously all this profiling stuff was done with ProfilerStandard in userland.
[16:39:18] <^d> But doing profiling in userland is error-prone and slow, so we're trying to improve :)
[16:39:24] <_joe_> ^d: it looks like we might want to do that for some time again :)
[16:39:44] <_joe_> ^d: turns out, profiling in kernel space is error-prone and slow :P
[16:39:52] <_joe_> and it also crashes machines
[16:39:58] <^d> tehehe
[16:40:02] Our php code uses getrusage
[16:40:21] which may actually be the problem
[16:40:45] it's still used in the sectionprofiler (which is what was tossing that div 0 error)
[16:40:49] <_joe_> bd808: it usage getrusage(RUSAGE_THREAD)
[16:41:02] <_joe_> *uses
[16:41:12] anomie: I think I finished the patch to skip the wddx test. Tim commented on it https://gerrit.wikimedia.org/r/#/c/180208/5/tests/phpunit/includes/api/format/ApiFormatWddxTest.php :)
[16:41:14] _joe_: yes
[16:41:31] <_joe_> while the part of the hotprofiler that gives the problem uses getrusage(RUSAGE_SELF)
[16:41:39] <_joe_> which may be the source of the problem
[16:41:52] https://github.com/wikimedia/mediawiki/blob/master/includes/profiler/ProfilerFunctions.php#L32-L40
[16:42:17] <_joe_> bd808: if you see an unusual number of errors in the next hour, ping me
[16:42:35] <_joe_> now I need a break, it's the second consecutive day of hhvm firefighting
[16:42:35] * bd808 opens up logstash
[16:42:41] <^d> home -> office
[16:42:51] <_joe_> and btw, logstash is being super-useful to me
[16:42:56] awesome
[16:43:16] <_joe_> https://logstash.wikimedia.org/#/dashboard/elasticsearch/hhvm_jobrunner
[16:43:29] <_joe_> comparison between the hhvm jobrunner and the standard one
[16:43:38] nice
[16:44:22] <_joe_> which reminds me, no one is looking at this: http://ganglia.wikimedia.org/latest/graph.php?r=day&z=xlarge&c=Jobrunners+eqiad&m=cpu_report&s=by+name&mc=2&g=mem_report
[16:44:29] <_joe_> I'll do later
[16:44:40] yikes
[16:44:48] Is the drop a restart?
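The "Division by zero ... ProfilerXhprof.php on line 143" warning quoted earlier is the kind of bug bd808 calls an easy fix: once the hot profiler is disabled, a total elapsed time can legitimately be zero, and any percentage computed against it must guard for that. A minimal sketch of the guard (the function name is hypothetical; the real fix lives in ProfilerXhprof.php):

```python
def percent_of_total(elapsed, total):
    """Share of total time spent in one function, as a percentage.

    Guard the degenerate case behind the logged warnings: when the
    profiler is disabled, the measured total can be exactly 0, and
    dividing by it raises the warning seen on the jobrunners.
    """
    if total == 0:
        return 0.0
    return 100.0 * elapsed / total
```

With the guard, a disabled profiler reports 0% everywhere instead of spamming the logs.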
[16:45:16] <_joe_> it's the memory of the /cluster/
[16:45:21] <_joe_> so that's a crash, yes
[16:45:25] <_joe_> something is being leaky
[16:45:37] <_joe_> but that's php, so it's gonna be easy to find out what
[16:45:44] <_joe_> actually, lemme take a look now :)
[16:45:55] _joe_: you were going to take a break :)
[16:46:11] <_joe_> bd808: whatever, now I'm curious
[16:46:23] Ok, but things will still be broken later too
[16:47:19] This is new -- Notice: Array to string conversion in /srv/mediawiki/php-1.25wmf12/includes/utils/IP.php on line 491
[16:47:37] Warning: preg_match() expects parameter 2 to be string, array given in /srv/mediawiki/php-1.25wmf12/includes/utils/IP.php on line 91
[16:47:51] <_joe_> oook
[16:48:08] 5k per hour -- https://logstash.wikimedia.org/#/dashboard/elasticsearch/fatalmonitor
[16:48:13] <_joe_> I this is weird. Don't we use the tidy extension on jobrunners?
[16:48:25] I think we shell out
[16:49:04] I *think* the tidy extension is not awesome in zend but ori would know more
[16:49:12] <_joe_> ok
[16:49:23] <_joe_> the php process shells out to tidy
[16:49:27] <_joe_> and is blocked on
[16:49:34] <_joe_> write(32, "\"600\" data-file-height=\"600\" />&"..., 8192
[16:49:56] argh. ori and brandon were looking at that recently
[16:50:01] <_joe_> where fd 32 is a pipe to the tidy process
[16:50:08] 3MediaWiki-Parser, MediaWiki-Core-Team: Make Parser::parse handle recursion better (throw exception or tolerate it) - https://phabricator.wikimedia.org/T76651#932997 (10Welterkj) I noticed that Parser::lock() now throws an exception in the case of Parser::parse recursion. So now my only interest is eliminating...
[16:50:14] <_joe_> oh, ok
[16:50:17] ori made a change to the way the stdout/stderr were drained
[16:50:23] and it looked to be blocking
[16:50:31] I thought he patched it
[16:51:04] 3MediaWiki-Parser, MediaWiki-Core-Team: avoid Parser::parse recursion - https://phabricator.wikimedia.org/T76651#933000 (10Welterkj)
[16:51:29] _joe_: https://gerrit.wikimedia.org/r/#/c/180384/
[16:52:03] cherry-picked to wmf12 but we are running wmf13 on group0 now
[16:52:13] <_joe_> oh that should explain it
[16:52:49] <_joe_> so, I would need the author of that patch
[16:53:26] we can pick it as a hotfix for wmf13
[16:53:53] Looks like t.im would like to be fixed up some more, but if it helps it helps
[16:53:59] <_joe_> well I'd wait for or.i
[16:54:06] <_joe_> not sure it helps
[16:54:10] <_joe_> tbh
[16:54:54] We could revert this one too if necessary -- https://gerrit.wikimedia.org/r/#/c/176862/
[16:55:15] That was the refactor that changed the behavior internally
[16:55:35] <_joe_> yeah I'd discuss that with the author :)
[16:56:07] * bd808 goes to look for changes in IP.php
[16:56:58] ^d: wanna +2 the wddx test skip? Tim confirmed the test pass on travis hhvm, and our hhvm skip it just fine https://gerrit.wikimedia.org/r/#/c/180208/5
[16:58:05] hashar: {{done}}
[16:58:19] will make the hhvm job voting right away
[16:58:26] (yay)
[16:58:52] which will make core merges take twice as long :/
[16:59:16] greg-g: where are we on getting better hardware for ci?
[16:59:31] Just waiting on the new labs hosts?
[16:59:36] bd808: haven't heard anything, I was goign to file a ticket for that to track it....
[16:59:56] At least it won't be in the RT backhole
[17:00:07] indeed!
[17:00:13] ok, 1:1 time, last one of the year!
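The stuck `write(32, ...)` on the jobrunners is the classic pipe deadlock: the parent keeps writing to the child's stdin while the child's stdout pipe buffer fills up, so both processes block forever. A sketch of the safe draining pattern, using Python's subprocess module and `cat` as a stand-in for the tidy binary (an assumption: this mirrors the general fix being discussed, not the actual MediaWiki patch):

```python
import subprocess


def run_filter(cmd, text):
    """Feed text through an external filter (the way MediaWiki shells
    out to tidy) without deadlocking.

    Writing all of stdin before reading any stdout can block once the
    child's output fills the pipe buffer (typically 64 KiB on Linux).
    communicate() drains stdin and stdout concurrently, avoiding the
    stuck write() seen in the strace output.
    """
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate(text.encode())
    return out.decode()


# A payload well past the pipe buffer size exercises the deadlock case.
big = "x" * 1_000_000
print(run_filter(["cat"], big) == big)  # → True
```

A naive `proc.stdin.write(big)` followed by `proc.stdout.read()` would hang on exactly this input, which matches the blocked `write()` observed.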
[17:00:20] I was just cross linking several RT tickets I made to projects they effect
[17:00:31] I saw :)
[17:00:35] (IEG)
[17:01:46] bd808: na zend and hhvm jobs run in parallel
[17:01:57] hashar: excellent
[17:01:57] 3MediaWiki-Parser, MediaWiki-Core-Team: avoid Parser::parse recursion - https://phabricator.wikimedia.org/T76651#933037 (10Welterkj) @tstarling, could you review this bug when you have a moment and share any more thoughts about refactoring the Parser to avoid the recursion seen in T75073#763545?
[17:02:28] it takes roughly 3:30m for the hhvm job to complete https://integration.wikimedia.org/ci/job/mediawiki-phpunit-hhvm/buildTimeTrend
[17:02:43] zend is a bit less than 6 minutes :( https://integration.wikimedia.org/ci/job/mediawiki-phpunit-zend/buildTimeTrend
[17:03:03] The parser tests are half of that as I recall
[17:03:16] I wonder if they do db stuff uncessarily?
[17:03:35] unnecessarily
[17:05:04] they do
[17:05:18] there is a lot of optimization that is left to be done
[17:05:30] I did so ages ago and managed to cut the run time in half
[17:05:33] there is more to do though :(
[17:05:47] iirc we setup/teardown interwikis for each parser tests, and that is quite slow
[17:06:12] we probably setup/teardown the whole db anyway
[17:06:34] bd808: any reason you rebased the patch before +2 ing it ?
[17:06:49] I hate merge commits
[17:07:27] So I have the habit of rebase then +2
[17:07:42] that is what I thought :]
[17:07:45] I tend to do the same
[17:07:54] for testing purpose, Zuul already test the patch as it will be merged in
[17:10:30] We still have the xhprof profiler enabled somewhere...
[17:12:10] test2
[17:12:28] <^d> hashar: We do setup the DB on each test. The problem with not doing that was that tests would bleed from one to the next. They're supposed to just truncate between runs but even that gets slow.
[17:12:47] ^d: we will need a better test runner :]
[17:13:06] <^d> Well we need to disentangle the database from more things then. Or get better at mocking it.
[17:13:16] we can probably save a bunch of time by not having to rely on mediawiki being installed
[17:13:32] some of our tests are pure unit tests and could directly extends PHPUnit class instead of MediaWikiTestCase
[17:13:37] ^d: ProfilerXhprof is still enabled via StartProfiler for test2. Do we care (other than the log spam)?
[17:13:47] <^d> hashar: Well for things that are standalone and don't depend on MW I'm totally +1 there :)
[17:14:06] ^d: a quick POC I made following a discussion with bd808 is https://gerrit.wikimedia.org/r/#/c/178696/
[17:14:08] <^d> bd808: I figured we wanted it on somewhere still For Testing but if it's causing problems you can shut it off.
[17:14:14] ^d: let one run phpunit without needing to install mw
[17:14:47] ^d: I think it's ok, but it is noisy in logstash. JUst wanted to be sure I wasn't the only one who knows about it. :)
[17:16:43] Something changed that is causing arrays of ips to be fed into IP::isIPv4/6. The errors are all from wmf12 so possibly something that is different on the 'pedias?
[17:20:28] off, see you folks!
[17:20:37] o/ hashar
[17:32:14] 3MediaWiki-Core-Team: Improve logstash - https://phabricator.wikimedia.org/T84895#933231 (10Chad) 3NEW
[17:32:40] _joe|off: https://logstash.wikimedia.org/#/dashboard/elasticsearch/hhvm_jobrunner is nice
[17:32:54] <_joe|off> AaronS: :))
[17:33:06] any errors?
[17:33:20] <_joe|off> AaronS: not particularly horrible AFAICS
[17:34:06] logstash epic! so epic!
[17:34:09] <_joe|off> AaronS: mostly slow queries
[17:34:48] <_joe|off> and some memory limit reached
[17:38:22] <^d> manybubbles: I spent some time digging the logs extensively for lsearchd. I'm inclined to cut it off for everyone other than nlwiki, enwiki, ruwiki (those are the only 3 with /any/ traffic whatsoever)
[17:39:19] <^d> At least via the API :)
[17:39:28] 3MediaWiki-Core-Team: Improve logstash - https://phabricator.wikimedia.org/T84895#933271 (10bd808) @gage has been thinking about some of these issues. I think he started talking to @mark about getting some nicer hardware for the logstash servers. The work I've been doing towards {T76759} will help some with the...
[17:39:31] ^d: hmmm - makes sense. I think maybe the right thing to do is to try and reach out to those wikis somehow too....
[17:39:38] <^d> Yeahhhh
[17:40:01] <^d> I think I'm jfdi for the others though.
[17:43:35] +1
[17:53:13] <_joe|off> manybubbles: I'll be a few minutes late to the meeeting, sorry, do not wait for me :)
[17:53:24] _joe|off: aren't you off?
[17:54:01] <_joe|off> yeah I'm from the phone right now, I'm late. I didn't account for rome traffic around Christmas
[18:01:18] <^d> manybubbles: lsearchd off everywhere but ru, en, nl.
[18:01:45] <^d> that means we're only using pool 1 & 3....2,4,5 can be decommed.
[18:06:04] <^d> Nemo_bis: Hehe Nemo_bis: Hehe :) https://wikitech.wikimedia.org/w/index.php?title=Search&diff=138667&oldid=136911
[18:06:10] <^d> Gah, sorry for double-ping.
[18:06:15] <^d> copy+paste fail
[18:07:42] ^d: Is this a reasonable thing to do to track down a crazy new bug? -- https://gerrit.wikimedia.org/r/#/c/180841/1
[18:08:29] <^d> I think so yeah
[18:10:55] The bug started at 2014-12-18T09:00:14.000Z so I'm pretty sure its cluster config related somehow
[18:11:38] but there are several ways into that method and the hhvm notice is less than helpful in telling where the problem came from
[18:20:14] ^d: if you've got a minute -- https://gerrit.wikimedia.org/r/#/c/180843/ and https://gerrit.wikimedia.org/r/#/c/180845/ -- either +1 and I'll deploy or +2 and do the deed yourself :)
[18:20:31] oh we has meeting still?
[18:20:47] emailz says no
[18:24:05] <^d> bd808: +2'd on both, sync away after jenkins does its thing.
[18:24:35] <^d> you fail tests sir!
[18:24:50] bah. looking
[18:25:23] wddx test that we marks for skipping this morning
[18:25:26] <^d> config bit done.
[18:25:35] <^d> and sync'd for you
[18:25:42] I wonder if that means all branch tests are borked without the backport :(
[18:25:57] I bet it does
[18:26:57] so... backport the test fix or force merge?
[18:29:26] ^d: Its failing because this patch is missing from the branch -- https://gerrit.wikimedia.org/r/#/c/180208/
[18:29:32] <^d> Yeah
[18:29:38] <^d> I know :)
[18:29:55] the meeting coming up right now is being rescheduled
[18:30:01] Which also sadly won't cherry-pick cleanly
[18:35:25] * bd808 twiddles thumbs while jenkins does stuff
[18:40:59] <_joe|off> bd808: we should sync up tomorrow about the packagist passwords
[18:42:45] _joe|off: sounds good. We can do a hangout or something
[18:43:21] <_joe|off> yes, I'd say now but I'm pretty burnt out
[18:43:43] _joe|off: no worries.
[18:45:31] <^d> bd808: phd status is silly.
[18:45:40] <^d> What do you think it returns when no daemons are running at all?
[18:45:44] phd's are silly too
[18:45:47] <^d> Maybe a list of all the daemons?
[18:45:49] <^d> And DEAD?
[18:45:54] <^d> You'd be wrong, good sir.
[18:46:13] oh? I thought that was it's MO yesterday
[18:46:15] <^d> vagrant@mediawiki-vagrant:/vagrant/phab/phabricator$ ./bin/phd status
[18:46:15] <^d> There are no running Phabricator daemons.
[18:46:33] <^d> That's only if it's /partially/ running that you get that table thingie.
[18:46:41] face + palm + head + desk
[18:46:43] <^d> If it's not running /at all/ you just get a lovely error message.
[18:47:34] 6 minutes to merge a backport seems ... excessive
[18:47:57] * bd808 adds "make tests faster" to xmas wish list
[18:48:38] I do like that the hhvm test suite runs faster than the php5 one though
[18:54:51] <^d> I'm splitting "setting up phd" into its own patch.
[18:55:00] <^d> I don't want it to hold up the rest of the role while I fight it in vain.
[18:55:23] ummm.... that error magically stopped in logstash at 2014-12-18T17:37:01.000Z
[18:55:31] <^d> yay magic!
[18:55:41] boo too?
[18:55:56] So magic start and stop
[18:56:08] * bd808 eyes bblack suspiciously
[19:10:18] 3MediaWiki-extensions-TemplateSandbox, MediaWiki-Core-Team: Special:TemplateSandbox from Extension:TemplateSandbox needs edit token when raw HTML is allowed - https://phabricator.wikimedia.org/T76195#933594 (10Legoktm) >>! In T76195#931984, @Mglaser wrote: > Had trouble with committing to the right branch and de...
[19:59:22] 3MediaWiki-Core-Team, Wikimedia-Logstash: Improve logstash - https://phabricator.wikimedia.org/T84895#933704 (10Gage)
[20:01:53] <^d> Of course Phab uses the query DSL so it's hard to mix match & match_phrase behaviors :\
[20:05:51] manybubbles: https://github.com/orientechnologies/orientdb/issues/3191 \o/
[20:07:39] sweet!
[20:10:56] <^d> {
[20:10:56] <^d> "query": {
[20:10:56] <^d> "bool": {
[20:10:58] <^d> "must": [{
[20:11:00] <^d> "match": {
[20:11:02] <^d> "field.corpus": {
[20:11:04] <^d> "operator": "and",
[20:11:06] <^d> "query": "\"Parent project\""
[20:11:08] <^d> }
[20:11:10] <^d> }
[20:11:12] <^d> }]
[20:11:14] <^d> }
[20:11:16] <^d> },
[20:11:18] <^d> "from": 0,
[20:11:20] <^d> "size": 101
[20:11:22] <^d> }
[20:11:24] <^d> Of course \"Parent project\" won't work on a match.
[20:11:28] <^d> So try a match_phrase, which works until you don't want exact matches :)
[20:17:31] 3MediaWiki-Core-Team, Wikimedia-Logstash: Improve logstash - https://phabricator.wikimedia.org/T84895#933759 (10Chad)
[20:18:50] ^d: whatareyoutryingtodo?
[20:19:08] <^d> Phab search bug.
trying to make "s signal "I want exact matches"?
[20:19:45] <^d> Yeah but they don't use query_string.
[20:20:14] <^d> So I'd have to pull out any quoted bits and throw those into match_phrase calls instead of the match.
[20:20:36] look at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html
[20:20:46] might just do it for you
[20:20:57] its like query string only it degrades better
[20:21:06] but doesn't offert the crazy bs that cirrus needs
[20:21:11] I don't remember what
[20:21:13] <^d> Now /that's/ the tool I want.
[20:21:29] but it has a flags parameter that lets you turn on and off features.
[20:23:56] <^d> That's EXACTLY what I wanted.
[20:24:00] <^d> What would I do without you manybubbles?
[20:24:01] <^d> :)
[20:24:15] lots more work than you have to, I'd imagine:)
[21:03:09] TIL about https://phabricator.wikimedia.org/flag/ and the ability to make private notes on tasks and see them in one place later
[21:52:07] 3MediaWiki-Core-Team: Get all WDQ query types working quickly in Orient - https://phabricator.wikimedia.org/T84930#934059 (10aaron) 3NEW
[21:52:25] 3MediaWiki-Core-Team: Get all WDQ query types working quickly in Orient - https://phabricator.wikimedia.org/T84930#934059 (10aaron) p:5Triage>3Normal a:3aaron
[22:00:43] * bd808 runs `brew upgrade git` for https://github.com/blog/1938-git-client-vulnerability-announced
[22:01:40] 3Librarization, MediaWiki-Documentation, MediaWiki-Core-Team: Document new library requirements for logging interface, cdb, xhprof etc. - https://phabricator.wikimedia.org/T74163#934104 (10Legoktm) * https://www.mediawiki.org/w/index.php?title=Download_from_Git&oldid=1256311#Prerequisites ** Added section https:...
[23:07:44] wtf. Why is the xhprof profiler still trying to run?
[23:14:48] <^d> bd808: I've taken to listening to anytime I'm doing cluster work.
[23:14:52] <^d> It seems somehow appropriate.
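The `simple_query_string` query manybubbles points at would express the quoted-phrase search without hand-splitting quotes into separate `match_phrase` clauses. A sketch of what the request body might look like, built as a Python dict (the field name, operator, and paging are copied from the pasted query; the exact flags string is an assumption about which syntax features Phab would want enabled):

```python
import json

# simple_query_string understands "quoted phrases" natively and
# degrades gracefully on malformed input, unlike query_string.
query = {
    "query": {
        "simple_query_string": {
            "query": '"Parent project"',
            "fields": ["field.corpus"],
            "default_operator": "and",
            # The flags parameter turns individual syntax features
            # on/off; restricting to phrases plus AND/OR is one option.
            "flags": "PHRASE|AND|OR",
        }
    },
    "from": 0,
    "size": 101,
}

# The body serializes to the JSON an Elasticsearch client would send.
body = json.dumps(query)
```

Unquoted terms still behave like a plain match with this body, so the one query shape covers both the exact-match and loose-match cases ^d was juggling.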
[23:15:52] freaking hamster dance
[23:16:19] <^d> And now it will be in your head the rest of the afternoon. You're welcome :)
[23:17:11] ori: https://gerrit.wikimedia.org/r/#/c/180988/
[23:31:38] ori: j.oe said earlier that they were seeing a problem on the hhvm servers where "the part of the hotprofiler that gives the problem uses getrusage(RUSAGE_SELF)"
[23:32:22] we call getrusage(RUSAGE_THREAD) in our explicit calls
[23:33:01] There was also some mention of a possible kernel bug
[23:33:31] it was early for me and late for him so I didn't probe for more info
[23:34:12] * bd808 sees j.oe gave more info in -operations
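The RUSAGE_SELF vs RUSAGE_THREAD distinction bd808 relays here is visible from any language with getrusage bindings; a quick Python illustration for context (RUSAGE_THREAD is Linux-only, matching the HHVM servers in question):

```python
import resource

# RUSAGE_SELF aggregates resource usage across all threads of the
# process; RUSAGE_THREAD (Linux-specific) covers only the calling
# thread. The chat's point: MediaWiki's explicit profiling calls use
# the per-thread scope, while the suspect HHVM hotprofiler path used
# the whole-process scope.
self_usage = resource.getrusage(resource.RUSAGE_SELF)
thread_usage = resource.getrusage(resource.RUSAGE_THREAD)

# Per-thread user time can never exceed whole-process user time.
print(self_usage.ru_utime >= thread_usage.ru_utime)  # → True
```

This only shows what the two scopes measure; whether the kernel-side cost of one call versus the other was the real culprit is left open in the log.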