[00:00:34] legoktm: noted, passed on, thanks [00:00:54] manybubbles: https://github.com/orientechnologies/orientdb/commit/9dcf98dc155d9c97f5d760fcd02446fbe49c4f7e [00:01:04] they are on quite a fixing spree :) [00:03:51] 3Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Certain users are unable to log into their account (HTTP 503 upon login attempt) - https://phabricator.wikimedia.org/T75462#852055 (10TTO) 5Open>3Resolved a:3TTO From enwiki VPT: > OK, I was just able to log on. If someone implemented one of the older b... [00:04:08] 3Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Certain users are unable to log into their account (HTTP 503 upon login attempt) - https://phabricator.wikimedia.org/T75462#852059 (10TTO) a:5TTO>3None [00:15:07] 3Librarization, MediaWiki-Core-Team: Update Profiling RFC with a plan for removing wfProfile*() calls - https://phabricator.wikimedia.org/T1135#852098 (10Chad) 5Open>3Resolved [00:17:27] <^d> manybubbles: I'm currently linting and pushing the amended patch for phab upstream. It takes into account all the feedback I think [00:17:39] <^d> I dropped english possessive and english stopwords [00:17:47] <^d> And swapped english stemmer for kstem [00:21:41] 3Librarization, MediaWiki-Core-Team: Document use of Monolog with MediaWiki - https://phabricator.wikimedia.org/T78596#852126 (10bd808) @legoktm and @chad, could the two of you take a look at https://www.mediawiki.org/wiki/Manual:$wgMWLoggerDefaultSpi and https://www.mediawiki.org/wiki/Manual:MWLoggerMonologSpi... [00:22:33] <^d> bd808: I wouldn't worry. [00:22:36] <^d> Some docs > no docs [00:22:43] <^d> And wikis can always be edited later. [00:22:44] heh [00:23:08] True. Hopefully if somebody needs more they will ask [00:23:29] Oh. I should add how to install monolog I suppose [00:37:52] 3Services, MediaWiki-Core-Team: Expose page properties as an API - https://phabricator.wikimedia.org/T78737#852164 (10GWicke) 3NEW [00:38:30] 3Services, Mobile-Web, MediaWiki-Core-Team: Expose page properties as an API - https://phabricator.wikimedia.org/T78737#852164 (10GWicke) [01:08:49] 3Librarization, MediaWiki-Core-Team: Deploy Monolog logging configuration for WMF production - https://phabricator.wikimedia.org/T76759#852234 (10bd808) [01:10:57] 3Wikimedia-IRC, Librarization, WMF-Server-Backports, MediaWiki-Recent-changes, MediaWiki-Core-Team: IRC feed broken on group0 wikis - https://phabricator.wikimedia.org/T78599#849104 (10bd808) [01:26:56] 3MediaWiki-Core-Team: Check all and none of advanced search can't work. - https://phabricator.wikimedia.org/T78742#852289 (10Liuxinyu970226) 3NEW [01:28:28] 3MediaWiki-Core-Team: Check all and none of advanced search can't work. - https://phabricator.wikimedia.org/T78742#852289 (10Liuxinyu970226) [01:41:09] ^d: cool! [01:42:16] AaronS: cool! It was kind of hard to read with all the whitespace changes they made.... [01:44:58] 3MediaWiki-Search, MediaWiki-Core-Team: Search auto complete does not list results despite it should - https://phabricator.wikimedia.org/T78721#852334 (10Manybubbles) a:3Manybubbles [01:45:57] 3MediaWiki-Search, MediaWiki-Core-Team: Search auto complete does not list results despite it should - https://phabricator.wikimedia.org/T78721#851880 (10Manybubbles) Looks to have something to do with finding css or js pages. I'll see if I can track it down in the morning. 
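A sketch of the analysis-chain change ^d describes above (dropping the english possessive and stopword filters and swapping the english stemmer for kstem). The filter names are standard Elasticsearch ones, but the surrounding analyzer definition is illustrative only, not the actual patch being pushed upstream:

    "analysis": {
        "analyzer": {
            "text": {
                "type": "custom",
                "tokenizer": "standard",
                "filter": [ "standard", "lowercase", "kstem" ]
            }
        }
    }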
[01:48:29] 3MediaWiki-Core-Team: Fix HHVM PCRE cache - https://phabricator.wikimedia.org/T757#852341 (10tstarling) https://reviews.facebook.net/D29091 [01:52:07] 3MediaWiki-Search, MediaWiki-Core-Team: Search auto complete does not list results despite it should - https://phabricator.wikimedia.org/T78721#852342 (10Manybubbles) Happens on enwiki too: https://en.wikipedia.org/w/api.php?action=opensearch&format=json&search=mediawiki%3Ag&namespace=0&suggest= I imagine its... [01:58:45] 3MediaWiki-Search, MediaWiki-Core-Team: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#852344 (10Krinkle) [02:00:27] 3MediaWiki-Search, MediaWiki-Core-Team: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#848137 (10Krinkle) This isn't just causing a user script exception due to elements not existing. The core feature is also plain broken. 1. https://met... [02:08:46] 3MediaWiki-Search, MediaWiki-Core-Team: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#852359 (10Manybubbles) >>! In T78553#852344, @Krinkle wrote: > This isn't just causing a user scripts or extensions to throw an exception due to elem... [02:21:22] 3MediaWiki-Search, MediaWiki-Core-Team: Search auto complete does not list results despite it should - https://phabricator.wikimedia.org/T78721#852362 (10Manybubbles) Can reproduce in mediawiki vagrant by enabling the textextracts extension, setting $wgExtractsExtendOpenSearchXml = true; and finding a css or js... [03:25:45] 3MediaWiki-Search, MediaWiki-Core-Team: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#852414 (10Aklapper) [04:03:38] 3MediaWiki-Search, MediaWiki-Core-Team: Search auto complete does not list results despite it should - https://phabricator.wikimedia.org/T78721#852444 (10Manybubbles) Looks like there was some mishap on SWAT this evening and this didn't go out. It looks like we accidentally scheduled it for the SWAT for yester... [10:36:50] <_joe_> swtaarrs: around? [10:36:55] yep [10:37:12] <_joe_> great to have you in this TZ :) [10:37:28] <_joe_> so, looking at applying https://github.com/facebook/hhvm/commit/12f44e0498189ab75dbdbdc64b76fbe4183af168.patch to 3.3.1 [10:37:31] <_joe_> it's not easy [10:37:46] <_joe_> or better, it is easy, but it raised some questions for me [10:38:36] :) [10:38:38] hmm [10:38:44] what questions? [10:40:10] <_joe_> in https://github.com/facebook/hhvm/blob/HHVM-3.3/hphp/runtime/vm/jit/containers.h there is the template for boost::container::flat_set, but #include <boost/container/flat_set.hpp> is not included [10:40:54] _joe_: it's included by runtime/base/smart-containers.h [10:41:01] or it should be, anyway [10:41:03] <_joe_> also, the same template is in runtime/base/smart-containers.h which is included by it [10:41:08] <_joe_> swtaarrs: it isn't [10:41:12] huh [10:41:21] are you having build problems? [10:42:16] <_joe_> no, I was just porting the patch [10:42:39] <_joe_> so well, I'll assume it's already included :P [10:42:50] heh, ok [10:42:56] <_joe_> next question would be - should I correct the template in runtime/base/smart-containers.h [10:43:01] <_joe_> as well?
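An aside on the autocomplete ticket above: Manybubbles' vagrant reproduction from T78721, written out as a LocalSettings.php sketch. The extension path, the pre-extension-registration require_once loading style, and the example page name are assumptions:

    // Reproduce T78721: opensearch suggestions break on css/js pages
    // when TextExtracts extends the opensearch output.
    require_once "$IP/extensions/TextExtracts/TextExtracts.php";
    $wgExtractsExtendOpenSearchXml = true;
    // Then hit the API with a prefix matching a .css or .js page, e.g.:
    // /w/api.php?action=opensearch&format=json&search=MediaWiki:Common.cs&suggest=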
[10:43:29] <_joe_> (if I'm abusing your time, please tell me so) [10:44:04] hmm yeah I just realized we also have smart::flat_set [10:44:07] I don't think anyone uses it [10:44:11] but fixing it can't hurt [10:44:56] <_joe_> ok thanks :) [11:01:41] bah scribunto tests fail totally under hhvm :-( https://integration.wikimedia.org/ci/job/mwext-Scribunto-testextension-hhvm/2/console :-( [11:02:06] will be for this afternoon *wave* [11:08:47] 3MediaWiki-Core-Team: HHVM is segfaulting 1/hour in production - https://phabricator.wikimedia.org/T78771#852950 (10Joe) 3NEW [11:11:10] <_joe_> whoever is around, ^^ this is _really_ serious [11:11:22] <_joe_> it can wait until ori and tim wake up though [11:27:26] <_joe_> swtaarrs: can I show you a stack trace? [11:27:35] <_joe_> well, a small excerpt out of it [11:27:39] yeah [11:27:59] booo crashes [11:28:31] 3Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Certain users are unable to log into their account (HTTP 503 upon login attempt) - https://phabricator.wikimedia.org/T75462#852983 (10Paju) Also I confirm that for first time since 4th Dec. 2014, I have been able to login and relogin to fi.wikipedia with existi... [11:30:09] <_joe_> swtaarrs: https://dpaste.de/RokR/raw [11:30:40] <_joe_> this is an excerpt, the trace for that thread goes back to more than 250k calls, it's clearly a loop of calls [11:31:02] hmm [11:31:13] do you have any very deeply nested objects that you serialize? [11:31:21] <_joe_> yes [11:31:26] <_joe_> I guess so [11:31:37] <_joe_> but some dev can explain that better [11:31:50] ok [11:32:00] _joe_: what does "ulimit -c" say on one of the servers? [11:32:02] er [11:32:04] ulimit -s [11:32:14] you might just need to give it more c++ stack [11:32:21] Ohai [11:32:35] * swtaarrs waves [11:32:36] <_joe_> I think we already do that, lemme check [11:33:10] <_joe_> # Increase the maximum size of the stack to 64MiB. See bug 71486. [11:33:25] If it's serialisation it's possibly more likely wikidata [11:33:38] <_joe_> Reedy: it looks like it [11:33:48] Don't suppose we have a PHP-land stack trace? [11:33:49] <_joe_> swtaarrs: do you agree with me on that? [11:34:03] <_joe_> Reedy: the situation changed dramatically on Monday btw [11:34:20] _joe_: oh you've already set it to 64MB? [11:34:26] <_joe_> so for now I'll raise the stack size on one server to be twice as big [11:34:30] <_joe_> swtaarrs: yeah :/ [11:34:33] oof [11:35:03] if that gets rid of the crashes maybe we can try rewriting serialize to not be recursive [11:35:16] but that could be super complicated [11:35:22] <_joe_> yeah I guess so [11:35:32] it's bouncing between a bunch of unrelated classes [11:35:34] <_joe_> or we (wikimedia) set a serialization limit [11:35:45] yeah [11:35:48] I wonder what PHP5 does [11:35:52] <_joe_> there is one ini setting in hhvm if I remember correctly [11:36:20] <_joe_> swtaarrs: hhvm.resource_limit.serialization_size_limit ?
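For reference, the two knobs under discussion here, as sketched config excerpts. The values are the ones mentioned in the conversation (a 64 MiB stack per the puppet comment, and the 128 MB serialization cap that gets applied a few lines down); treat the exact spellings as unverified assumptions:

    # init script / shell sketch: 64 MiB C++ stack for the worker threads
    # (ulimit -s takes KiB)
    ulimit -s 65536

    ; hhvm ini sketch: cap serialize() output size at 128 MB
    hhvm.resource_limit.serialization_size_limit = 134217728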
[11:36:47] yeah looks like it [11:37:05] that's a heavier hammer than you need, but it should help [11:37:12] Wikidata deployed on Monday to wmf11 it seems [11:37:13] it looks like it's a limit on the total size of the output string [11:37:21] not the depth of the object or anything [11:37:36] Update Wikidata, site link ui and improve page view performance [11:37:47] https://github.com/wikimedia/mediawiki/commit/283f40ce1a6131a98429e379947ebe8340a1ef68 [11:38:46] <_joe_> ok so, I'll raise that limit to 128 MB on one server, see if this prevents crashes every hour, spread this as a stopgap [11:40:26] <_joe_> Reedy: can you poke wikidata folks? [11:41:05] aude: about? [11:42:05] ? [11:42:32] Seems since your Monday deploy it's causing hhvm segfaults [11:42:52] that was only to test.wikidata [11:43:03] afaik [11:43:20] yeah [11:43:21] They're not frequent [11:43:37] <_joe_> it's not test wikidata Reedy [11:43:43] <_joe_> that is served by one host only [11:43:48] I didn't say it was :) [11:43:50] or test2.wikipedia [11:43:53] ? [11:44:25] <_joe_> but well, do you think anyone besides wikidata can have serialized objects with more than 250K levels of structure? [11:44:27] regarding "serialization limit" [11:44:34] wouldn't surprise me though [11:44:34] 3MediaWiki-Core-Team: HHVM is segfaulting 1/hour in production - https://phabricator.wikimedia.org/T78771#853010 (10Joe) Looking at the stack trace, it seems this happens because HHVM is trying to unserialize a ridiculously nested object (the stack trace from the core dump shows recursion up to more than 250 K t... [11:45:02] but we didn't deploy anything to wikidata / wikipedia on monday (just the test sites) [11:45:09] so when did this start? [11:45:23] <_joe_> aude: on dec 15th I'd say [11:45:38] so monday... [11:45:59] <_joe_> well, it /began/ on dec 23, but it became a serious number of cases on the 15th [11:46:06] ok [11:46:11] <_joe_> now it's 150 segfaults/hour [11:46:18] There was a wmf11 deploy on monday [11:47:10] <_joe_> aude: go here https://logstash.wikimedia.org/#/dashboard/elasticsearch/hhvm [11:47:17] <_joe_> and search for "SEGV" [11:47:28] k [11:48:31] aude: Are we the cause of these? [11:48:35] hoo: don't know [11:48:52] our stuff being a ridiculously nested serialized thing sounds possible [11:49:05] <_joe_> hoo: well, the relevant backtrace from the core file is https://dpaste.de/RokR/raw [11:49:18] but being a new issue, it doesn't seem to match with our deployments [11:49:22] <_joe_> which I truncated, it went on until #250000 and beyond [11:49:35] but maybe someone edited Q183 or something [11:50:40] #45 0x0000000000955b31 in HPHP::ObjectData::serializeImpl (this=this@entry=0x7fb4e94caf90, serializer=serializer@entry=0x7fb528bfa8b0) at /usr/src/hhvm/hphp/runtime/base/object-data.cpp:867 [11:50:41] className = {m_px = 0x7fb571438400} [11:51:11] Any chance to resolve that? [11:51:52] <_joe_> hoo: I don't think so [11:52:06] :/ [11:52:14] Not happening when attached with gdb, I guess [11:52:35] <_joe_> no I am using a core file [11:52:45] <_joe_> maybe I did something wrong? [11:53:44] oh, Q183 is being edited :P [11:54:04] Lolol [11:54:25] * _joe_ headdesks [11:54:45] * _joe_ sets serialization limit to 100k [11:54:49] i have no clue really [11:54:59] but wouldn't be surprised if the size of it is an issue [11:55:04] wild guess [11:55:21] aude: But then we would see problems with editing again...
or watchlists [11:55:26] or all the stuff we had last time [11:55:28] true [11:55:34] <_joe_> yes [11:56:09] <_joe_> take into account that this can happen because varnish is retrying the request on a second backend [11:56:21] <_joe_> so it may even be a very rarely-edited page [11:57:04] <_joe_> or not on wikidata at all [11:57:55] aude: Thinking about it: Even the biggest possible entity can't be nested that deep [11:58:10] yeah [11:58:15] Unless every value class adds several tens of thousands of layers [11:58:29] but we would have noticed that :D [11:58:44] Might be that something is pumping something into memcached that really shouldn't... [11:59:00] * aude back in ~hour [12:00:56] _joe_: Any chance we can get it to meaningfully log stuff when nesting gets insane? [12:01:06] Like throw a PHP warning or something like that [12:01:28] <_joe_> hoo: "it" meaning HHVM? [12:01:35] yeah [12:01:40] <_joe_> we should write a patch [12:01:46] <_joe_> I don't think it has a warning [12:02:02] <_joe_> it just stops unserializing if you set a size limit there [12:02:07] If it already has detection for nesting depth it should be rather trivial to make it add warnings [12:02:24] raising warnings is very easy in hhvm [12:04:12] <_joe_> not depth, size [12:04:13] <_joe_> :/ [12:04:33] Oh, I feared that [12:04:36] <_joe_> which I guess (but I have to check the source, as there are no docs) is for the total size [12:05:02] <_joe_> btw, I am getting all the backtrace and we're already at #278568 in the call stack [12:05:17] <_joe_> err, #304188 [12:05:25] My backtrace highscore is at 800k+ [12:05:33] <_joe_> let's see [12:05:38] <_joe_> I can beat you! [12:05:49] I did that on Zend :D [12:06:17] <_joe_> swtaarrs: can't this be a bug of the unserializer? [12:06:23] (Well, I personally didn't... but I debugged it) [12:06:27] <_joe_> so that it gets into a deadlock? [12:06:46] _joe_: In the past we had that with circular reference structures [12:06:52] where stuff went into infinite loops [12:07:03] can happen if memory gets corrupted [12:07:08] <_joe_> hoo: I strongly suspect this is what's happening here [12:07:15] PHP 5.3 has a bad track record in that regard [12:08:15] Didn't we have something like that in the early phase of the migration? ori hacked around it AFAIR [12:09:04] <_joe_> hoo: by raising the stack size [12:09:19] <_joe_> to a value well above anything we ever thought we could encounter [12:09:34] <_joe_> (480K and counting) [12:09:38] ... and that proved to be a flawed assumption [12:09:51] I see :S [12:11:37] <_joe_> unsurprisingly enough, all the segfaults seem to originate from the same issue [12:12:10] How can you tell? [12:12:23] <_joe_> I have sampled 10 across the cluster [12:12:27] Could be just one corrupted serialized object in mc [12:12:30] <_joe_> they all look the same [12:12:39] <_joe_> hoo: it seems not to be happening on API [12:12:54] That's weird [12:13:00] _joe_: yeah it could be [12:13:03] very unlikely to be Wikidata, then [12:13:19] <_joe_> hoo: ok [12:13:22] _joe_: if a bigger stack doesn't help then it's probably infinite recursion [12:13:23] <_joe_> "good" [12:13:37] <_joe_> swtaarrs: I guess it is tbh [12:13:47] <_joe_> we're at #691656 and counting [12:13:53] <_joe_> hoo: I'll beat you! [12:14:25] Then you can claim that HHVM is better at memory corruption than Zend ;) [12:16:22] <_joe_> ok, definitely _not_ happening on api [12:16:56] what's at 691656?
[12:17:18] <_joe_> swtaarrs: the call stack for that crash [12:17:27] <_joe_> I'm generating the whole backtrace now [12:18:10] <_joe_> so yes it looks like infinite recursion [12:22:21] hmm [12:23:21] <_joe_> swtaarrs: if you want to take a look, it's on mw1249, in /tmp/bt_segfault_full [12:23:29] thanks [12:24:29] <_joe_> HPHP::HttpRequestHandler::handleRequestImpl has a reqURI parameter, but I can't extract its value [12:24:38] I thought serialize had builtin checks against infinite recursion but there could be bugs [12:24:49] hmm gdb says ""/tmp/bt_segfault_full" is not a core dump: File format not recognized" [12:25:12] <_joe_> swtaarrs: that is the output of bt full [12:25:18] oh haha [12:25:24] <_joe_> the core dumps are in /var/tmp/core [12:25:26] <_joe_> :) [12:25:46] cool [12:25:58] <_joe_> hoo: for the record, I have a nice highscore of #967556 now [12:26:09] <_joe_> swtaarrs: I'm removing all the cores older than one day I guess [12:26:13] ok [12:26:16] <_joe_> I need to clean up some room [12:26:18] I don't expect the core will be useful anyway [12:26:25] I'll try to write something to look for cycles in this [12:27:10] what does eqiad mean? [12:27:37] <_joe_> eqiad is the name of the DC [12:28:07] <_joe_> it's built with the initials of the colo provider and the nearest airport [12:28:19] <_joe_> so United Layer in SF becomes ULSFO [12:29:10] ooh cool [12:29:20] so it's EQ IAD [12:30:30] _joe_: I can't find any cycles in this output :( [12:30:37] although this is in a lot of frames [12:30:45] <_joe_> Cyrus One in Dallas is CODFW [12:30:45] <_joe_> and so on [12:31:18] oh hold on I can use a different frame [12:35:36] <_joe_> btw, heisenbug! it suddenly stopped now, wtf? [12:35:38] <_joe_> LOL [12:35:43] <_joe_> the last segfault happened at 12:00 UTC [12:35:47] heh [12:35:55] well if it really is just a big object it could depend on user input [12:36:10] _joe_: yeah I can't find any evidence that there's a cycle in here [12:37:03] 3MediaWiki-Core-Team: HHVM is segfaulting 1/hour in production - https://phabricator.wikimedia.org/T78771#853085 (10Joe) For the record: the segfaults mysteriously stopped around 12:00 UTC and I haven't seen one in the last 40 minutes (while we had ~200/hour before). this is still really worrisome. [12:37:03] oh wait, the thing I did isn't 100% accurate still [12:37:04] hmm [12:40:28] <_joe_> mmmh I have one hypothesis. the crashes stopped around the time I re-started one server with a larger stack. So maybe it was able to manage the bloated serialized structure and "fix" it? [12:41:47] 3MediaWiki-Core-Team: HHVM is segfaulting 1/hour in production - https://phabricator.wikimedia.org/T78771#853099 (10Joe) Again for the record: crashes stopped 10 minutes after I restarted one server with a larger stack. So maybe we had some ill-fated structure waiting to be processed, and once that was done (by... [12:42:23] _joe_: oh, so maybe someone or something kept trying to export the huge object, and now that one server was able to, it stopped?
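swtaarrs' "no cycles" finding fits how PHP serialization works: the serializer breaks reference cycles with back-references, so plain nesting depth, one native stack frame per level, is the real hazard. A minimal sketch (illustrative only, not the production code; the 250k figure comes from the backtrace above):

    <?php
    // Cycles do not recurse forever: serialize() emits an "r:" back-reference.
    $o = new stdClass();
    $o->self = $o;
    echo serialize( $o ), "\n"; // O:8:"stdClass":1:{s:4:"self";r:1;}

    // Depth is the hazard: each level is another recursive call inside
    // the serializer, so ~250k levels can exhaust even a large C++ stack.
    $v = 'leaf';
    for ( $i = 0; $i < 250000; $i++ ) {
        $v = array( $v );
    }
    serialize( $v ); // segfault territory on a default-sized stack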
[12:42:49] oh yeah just saw your task comment [12:43:53] <_joe_> swtaarrs: it's just a hypothesis [12:43:54] <_joe_> but well, it would explain why crashes stopped abruptly [12:43:57] yeah [12:45:19] <_joe_> swtaarrs: it's a shot in the dark, tbh [12:46:11] yeah but it seems reasonable [12:48:22] <_joe_> we just had one segfault now, but it's not unreasonable to have one segfault/hour, would mean the mean time to segfaults is ~ 200 hours, which is bearable [12:48:58] <_joe_> still in serializer though, so the same segfault, MEH [12:51:51] _joe_: was it on the server with the big stack? [12:54:17] <_joe_> swtaarrs: nope, but we've got 180 of them in that pool [12:54:34] <_joe_> swtaarrs: it sucks because I can't tell if that solved anything [12:55:15] <_joe_> swtaarrs: btw, if you're able to extract the RequestURI from that core dump, that would help a lot, I kinda got lost trying to follow that [12:55:33] I've never really been able to do that :( [12:55:36] I've tried a few times [12:58:11] <_joe_> oh ok, I was already despising my gdb skills (which are mostly non-existent), but that's truly hard then :) [13:03:07] any luck yet? [13:05:00] <_joe_> aude: no, but the crashes suddenly stopped after I raised the stack size on one server [13:05:18] <_joe_> and by "stopped" I mean we've had 2 in the last hour [13:05:20] hm [13:06:24] <_joe_> aude: https://phabricator.wikimedia.org/T78771 [13:06:40] 3MediaWiki-Core-Team: HHVM is segfaulting 1/hour/server in production - https://phabricator.wikimedia.org/T78771#853136 (10Joe) [13:07:38] RequestURI would be nice [13:14:57] <_joe_> I'll be back later, I'm out for lunch now [13:15:16] <_joe_> and, the fact we don't have crashes atm makes me a little less worried [13:19:59] i just don't know where / how to start investigating [13:20:11] without a little more hint where / what to look [13:32:30] <_joe_> aude: well for now I guess we don't have any useful info for you [13:44:29] _joe_: ok [14:31:43] <_joe_> swtaarrs: btw thanks for taking a look, we had 0 segfaults since 12:30 UTC [14:32:25] np [14:32:57] I think I'm at UTC+1 so that's... 2 hours? [14:37:05] <_joe_> yes [14:37:14] <_joe_> you are at UTC+1, yes [14:37:20] <_joe_> and it's 2 hours [14:48:17] anomie: good morning! Got some Scribunto test errors under HHVM I would like to point you to :] [14:48:33] LuaSandbox::enableProfiler(): Unable to allocate timer: Illegal seek ! [14:49:53] hashar: Probably would be fixed by https://gerrit.wikimedia.org/r/#/c/159822/, if I ever figure out a decent way to actually do that. An interim fix would probably be to just increase LUASANDBOX_MAX_TIMERS. [14:50:30] anomie: what puzzles me is that the tests pass under Zend [14:51:02] the end of the hhvm based job: https://integration.wikimedia.org/ci/job/mwext-Scribunto-testextension-hhvm/2/console (full log is 7.6 MB) [14:51:33] the Zend one is apparently just happy about it https://integration.wikimedia.org/ci/job/mwext-Scribunto-testextension-zend/9/console [14:51:41] hashar: And I have no idea why. But if you can reproduce the failures locally, the first thing to try would be to increase LUASANDBOX_MAX_TIMERS. [14:52:04] will give it a try [14:53:56] TimStarling: are you already using your pcre cache in your hhvm build?
[14:54:41] I had to do some nontrivial merging on current master so the version that makes it to github won't apply cleanly to 3.3 [15:00:39] anomie: ah got some file not found issue: Warning: LuaSandboxFunction::call(): Unable to allocate timer: No such file or directory in /mnt/jenkins-workspace/workspace/mwext-Scribunto-testextension-hhvm/src/extensions/Scribunto/engines/LuaSandbox/Engine.php on line 279 [15:01:01] anomie: running the suite with LUASANDBOX_MAX_TIMERS=100000 does not help either [15:01:29] hashar: Now that I think of it, Zend may be passing because it's probably still on luasandbox 1.9. [15:01:44] ahh [15:02:04] hashar: Sanity check: you adjusted the C code for luasandbox, recompiled, installed the new version, and ran against that? [15:02:57] anomie: oh not at all :D [15:03:38] anomie: I get our production hhvm (3.3.0+dfsg1-1+wm5) which has hhvm.dynamic_extensions[luasandbox.so] = luasandbox.so [15:04:17] I am not even sure which .so it ends up loading [15:04:51] strace shows it opens /usr/lib/x86_64-linux-gnu/hhvm/extensions/current/luasandbox.so [15:04:59] from hhvm-luasandbox package [15:06:03] 3MediaWiki-Search, MediaWiki-Core-Team: Search auto complete does not list results despite it should - https://phabricator.wikimedia.org/T78721#853373 (10Manybubbles) [15:06:32] 3MediaWiki-Search, MediaWiki-Core-Team: Finding js and css pages in autocomplete causes it to fail. - https://phabricator.wikimedia.org/T78721#853375 (10Manybubbles) [15:07:27] 3MediaWiki-Search, MediaWiki-Core-Team: Finding js and css pages in autocomplete causes it to fail. - https://phabricator.wikimedia.org/T78721#851880 (10Manybubbles) >>! In T78721#852444, @Manybubbles wrote: > Looks like there was some misshap on SWAT this evening and this didn't go out. It looks like we accide... [15:13:22] 3MediaWiki-Search, MediaWiki-Core-Team: Finding js and css pages in autocomplete causes it to fail. - https://phabricator.wikimedia.org/T78721#853379 (10Manybubbles) [15:13:27] 3MediaWiki-Search, MediaWiki-Core-Team: Finding js and css pages in autocomplete causes it to fail. - https://phabricator.wikimedia.org/T78721#851880 (10Manybubbles) https://gerrit.wikimedia.org/r/#/c/180485/ [15:14:54] hashar: Easiest way to test that it's hitting the right version would be to update the luasandbox version (see gerrit:159680 for an example on doing that) and then call LuaSandbox::getVersionInfo() in your test environment to check it. [15:19:29] anomie: yields LuaSandbox: 2.0-7 and Lua: Lua 5.1.5 [15:20:09] seems to be the one provided by hhvm-luasandbox 2.0-7+wmf2.1 package [15:20:49] hashar: Yeah, that's the version deployed on the cluster. If you change the version in your locally-recompiled version, then you can easily see if you're actually using it. [15:21:44] well Jenkins is not recompiling anything [16:15:43] 3MediaWiki-Search, MediaWiki-Core-Team: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#853480 (10Manybubbles) 5Open>3Resolved [16:15:54] 3MediaWiki-Search, MediaWiki-Core-Team: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#848137 (10Manybubbles) Verified on mw.org [16:23:37] <^d> bd808: Now /that/ is a lego car. https://www.facebook.com/video.php?v=10153600705945494 [16:24:35] I've seen that one before. It is pretty amazing [16:24:47] and it does make my supercar look sad [16:25:35] <^d> Hehe [16:27:52] ^d: I don't see an extra '*' in that comment change. 
Are my eyes getting old and weak? [16:28:59] <^d> * $wgMWLoggerDefaultSpi is expected to be an array usable by [16:29:04] <^d> * ObjectFactory::getObjectFromSpec() to create a class. [16:29:28] <^d> Baahhhhhh [16:29:32] <^d> I thought I was reading the release notes. [16:29:39] Ah. [16:29:57] "I see wikitext!" [16:30:11] <^d> I see a list with * [16:30:27] <^d> +2 [16:30:41] It's like seeing dead people only less socially acceptable [16:46:57] 3MediaWiki-Core-Team: Refactor Title to make permission checking its own class - https://phabricator.wikimedia.org/T75958#853525 (10aaron) I assume this relates to https://gerrit.wikimedia.org/r/#/c/166357/ ? [16:48:53] ^d, legoktm: Better traces of $title showing in beta now -- https://logstash-beta.wmflabs.org/#/dashboard/elasticsearch/GlobalTitleFail [16:49:15] Top culprit is wikidata [16:50:17] <^d> EditPage. [16:50:42] anomie: did https://phabricator.wikimedia.org/T76195 ever get deployed? [16:50:53] <^d> DerivativeContext::msg needs another layer of checking, it's hiding something. [16:51:31] AaronS: Yes, it's deployed on both wmf11 and wmf12 [16:58:31] from -tech: [16:58:32] 11:56 < andre__af> If Special:Recentchanges is 0 bytes but works when passing ?debug=true, what could be the reason? HHVM? (T78776) [16:59:16] debug=true would bypass varnish [16:59:47] X-Cache cp1066 hit (2), cp4008 hit (2), cp4016 frontend miss (0) [16:59:51] for the empty page [17:01:07] * greg-g nods [17:01:26] * greg-g forum shops [17:01:28] just needs an edit to purge? [17:02:01] * greg-g shrugs, I'm not really here, 1:1 starting negative 1 minute ago [17:10:37] 3MediaWiki-Core-Team: Localization Cache Redo - https://phabricator.wikimedia.org/T78802#853563 (10Chad) 3NEW [17:10:50] <^d> Yay, epic tag. [17:28:27] so epic [17:31:49] like whoa [17:32:04] greg-g: Are there any hoops to jump through before deploying ApiFeatureUsage to Beta Labs? [17:32:27] anomie: other than writing and merging a -labs specific config change, nope! [17:32:39] * anomie goes to do that [17:32:51] anomie: in more seriousness: as long as you plan on deploying to production "soon", you're fine [17:33:27] where "soon" == "reasonably soon", where "reasonably" == "as long as you and I are on a similar page", where "similar" == "..... [17:34:28] greg-g: Does "As soon as all the hoops are jumped through" qualify as "soon"? Hoops including security review (T78808) and getting gerrit:173336 deployed. [17:34:39] anomie: Don't let me forget that your extension depends on logstash stuff. The rules to make the special events to send to the prod elastic cluster will need to change a bit when I drop udp2log input of mediawiki events. [17:35:28] anomie: ah, it needs a security review still? that one is annoying. I don't like just any old thing going on Beta Cluster, but chris won't be back until next year.... [17:37:08] greg-g: Oh? https://phabricator.wikimedia.org/T74465#752224 made it sound like not needed for Beta [17:38:54] anomie: ish [17:39:06] 3MediaWiki-Core-Team, LabsDB-Auditor: Manually verify whitelisted.yaml / graylisted.yaml to ensure completeness - https://phabricator.wikimedia.org/T78730#853685 (10mark) [17:40:04] anomie: you're Chris's delegate, so \o/ [17:40:10] robla: touché! [17:40:19] anomie: please review anomie's patches [17:40:21] Oh, I'm an actual delegate? Cool! [17:40:28] :) [17:40:33] 3MediaWiki-Core-Team: Refactor Title to make permission checking its own class - https://phabricator.wikimedia.org/T75958#853694 (10csteipp) I hadn't seen that patchset.
It looks like Adam had a similar idea. @awight, we should collaborate in Jan on this. [18:06:19] greg-g: I looked through my code and didn't see any security issues. [18:10:14] greg-g: are we deploying wmf13 today to test.* [18:10:15] ? [18:10:49] schedule says yes [18:17:07] aude: yes [18:17:47] * greg-g doesn't wave to notcsteipp [18:18:12] sshhhhh! It's a sekret I'm here :) [18:18:21] don't tell the wife! [18:23:30] greg-g: ok [18:36:34] <_joe|off> manybubbles, are you happy about wikidata importing freebase? we'll have a flock of users that are used to a google-invented api coming in hungry for a good knowledge graph search :P [18:37:04] <_joe|off> (and AaronSchulz and SMalyshev if you're around as well) [18:37:47] 3MediaWiki-API, MediaWiki-Core-Team: list=logevents in API shows type/action of suppressed and revdeleted log entries - https://phabricator.wikimedia.org/T74222#853789 (10Anomie) 5Open>3Resolved [18:40:30] I read that blog post and was excited and scared at the same time [18:41:17] Some stats comparisons on hacker news -- https://news.ycombinator.com/item?id=8761098 [18:41:47] 2,696,141,481 claims on 46,476,86 topics [18:46:23] <_joe|off> ori: https://logstash.wikimedia.org/#/dashboard/elasticsearch/hhvm_jobrunner the hhvm jobrunner is consistently outpacing the zend ones by 3x, https://logstash.wikimedia.org/#/dashboard/elasticsearch/hhvm_jobrunner [18:46:33] I think the licensing is going to be interesting [18:46:46] freebase is CC-BY, wikidata is CC-0. [18:46:52] <_joe|off> oh, double link, meh [18:47:19] <_joe|off> legoktm: probably google already has all rights to that material, so they can relicense it at will [18:47:24] I still don't get why people keep TitleCasing "WikiData". [18:47:33] mhm [18:47:53] <_joe|off> I guess they already thought about that [18:48:11] I think people (google!) actually using Wikidata and exposing it to people is going to be a huge shift for the project. [18:48:26] <_joe|off> I hope so [18:48:44] <_joe|off> wikidata is the only hope to have a free knowledge graph available to the public [18:48:52] In a few years I fully expect that people will use Wikidata more than they use Wikipedia :) [18:49:36] <_joe|off> well, maybe not /knowing/ they're using it [18:49:46] <_joe|off> and well, wikipedia will always be useful [18:49:57] <_joe|off> as long as there's homework, we have a job :P [18:50:11] haha [18:50:40] I didn't read the google post as they were going to use wikidata; I read it as they were killing the project and being nice enough to try and find a home for the data. "we'll launch a new API for entity search powered by Google's Knowledge Graph." [18:50:41] <_joe|off> it's funny to see how much 11 yr olds count on wikipedia [18:51:06] <_joe|off> bd808: which is "the real deal" [18:51:08] <_joe|off> btw [18:51:15] bd808: aren't they dependent upon freebase? so they'd need a replacement for it... [18:51:26] <_joe|off> legoktm: they're not [18:51:38] o.O [18:57:03] <_joe|off> legoktm: the google knowledge graph is largely based on their ability to infer relationships via machine learning [19:02:12] AaronS: https://www.mediawiki.org/wiki/Scrum_of_scrums/2014-12-17 in case you wanted to update the page with details on what wikidata could help with [19:02:18] i'm going to send the link out now [20:00:41] bd808: is https://www.mediawiki.org/w/index.php?title=Manual%3A%24wgMWLoggerDefaultSpi&diff=1320484&oldid=1317065 ok with you?
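For context on the manual page being reviewed: $wgMWLoggerDefaultSpi is a spec array consumed by ObjectFactory::getObjectFromSpec(), per the doc comment quoted earlier. A hypothetical LocalSettings.php sketch (the nested configuration array is elided; the real keys live on the Manual:MWLoggerMonologSpi page, not here):

    // Route MediaWiki logging through Monolog via the Spi hook point.
    $wgMWLoggerDefaultSpi = array(
        'class' => 'MWLoggerMonologSpi',
        'args' => array( array(
            // handlers / formatters / loggers configuration goes here
        ) ),
    );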
[20:01:34] also, https://www.mediawiki.org/wiki/Manual:MWLoggerMonologSpi#Requirements will modify core's composer.json right? I don't know if we want people to necessarily do that (conflicts with git and stuff)... [20:01:59] Well, that's how you install composer packages [20:03:39] The wiki edits look great. thanks [20:04:07] "/ [20:04:09] :/* [20:04:26] The issue of the core composer.json is annoying but we are too far into it to turn back now [20:04:48] I don't want to create a stub extension for every library [20:05:12] I don't think we should [20:06:46] bd808: also I don't like that the user has to specify a version number, they should just be able to use the one in our composer.json [20:07:05] We don't have a version number in our composer.json [20:07:15] we have a "suggests" [20:07:24] which doesn't allow a version number [20:07:32] oh [20:07:34] ugh [20:07:35] only a reason for the suggestion [20:08:13] The user doesn't have to give a version number. I think it will default to latest stable if you don't [20:08:25] 3MediaWiki-Core-Team, Librarization: Document use of Monolog with MediaWiki - https://phabricator.wikimedia.org/T78596#853985 (10Legoktm) Looks good to me! [20:08:29] but this is a minor part of the annoyance [20:09:11] the root problem here remains that we are using composer differently than upstream expects [20:09:44] the idea of a shared composer file (part upstream and part local) doesn't fit well with their current implementation [20:10:20] The "right" way would be to treat MW as a library in a local project [20:11:01] Then MW the library could specify whatever it depends on and the local app could specify its dependencies and they would be merged by composer [20:11:51] Or composer could be taught to do something different [20:16:04] https://github.com/wikimedia/mediawiki-extensions-WikiGrok/branches [20:16:11] Wonder why the 1.25wmf13 branch hasn't replicated there [20:27:49] 3MediaWiki-Core-Team, Librarization: Document use of Monolog with MediaWiki - https://phabricator.wikimedia.org/T78596#854011 (10bd808) 5Open>3Resolved [20:40:41] Whyyyy is everything on a go slow today [21:15:54] Did we ever file an upstream bug for the composer autoloader being slow? [21:20:03] I ... don't know. But I doubt it [21:20:54] with 400 open issues and the github search system ... [21:21:36] Related -- https://github.com/composer/composer/issues/2939 [21:21:53] https://github.com/composer/composer/issues/2938 [21:22:14] bd808: That's a new one.. rsync timeout on one server [21:22:22] gah [21:22:46] new scap issues every damn deploy [21:22:49] * Reedy waits to see what happens [21:23:10] we need more slaves I think as a start [21:23:34] fu---- [21:23:41] then my connection died (to irc and ssh) [21:23:59] screen? [21:24:29] Doesn't help when you need forwarded agents [21:25:58] at least noop scap is quick [21:26:03] so it should catch up again [21:26:09] quickly, that is [21:26:38] we need to find a solution for pushing to gerrit so we can get rid of agent forwarding [21:26:55] s/we/somebody/ [21:27:22] s/somebody/Reedy and Mukunda/ [21:29:42] hmm [21:29:51] I think my session might've been hung for a while [21:30:00] 21:29:51 1 apaches had sync errors [21:30:00] 21:29:51 Finished sync-apaches (duration: 01m 44s) [21:31:46] <^d> You can push over https. [21:31:55] <^d> It's always been possible where you don't have your ssh key.
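The "suggests" mechanism bd808 describes above, sketched out. In composer.json a suggest entry maps a package to a human-readable reason, with no version constraint, which is why users end up supplying (or omitting) a version themselves; the description string here is made up:

    {
        "suggest": {
            "monolog/monolog": "Flexible logging used by MWLoggerMonologSpi"
        }
    }

What a user would then run from the MediaWiki root (with no constraint, composer resolves the latest stable release on its own):

    composer require monolog/monolog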
[21:32:11] true [21:32:29] doesn't help for the actual scap part [21:34:26] [20:30:11] (PS1) Reedy: wikipedias to 1.25wmf12 [mediawiki-config] - https://gerrit.wikimedia.org/r/180584 [21:34:27] [20:30:13] (PS1) Reedy: group0 to 1.25wmf13 [mediawiki-config] - https://gerrit.wikimedia.org/r/180585 [21:34:33] Can someone merge those 2 for me in a few minutes? [21:34:38] gerrit web interface won't load [21:34:53] scap-rebuild-cdbs is nearly done [21:38:06] ping me then [21:38:31] hoo: anytime now is good please [21:38:37] both? [21:39:06] https://test.wikipedia.org/wiki/Special:Version [21:39:12] why is teahouse in german? [21:39:35] both merged [21:39:35] "Stelle deine Frage" [21:39:38] thanks [21:40:16] Reedy: The thing is a *slight* mess [21:40:23] lol [21:40:29] and not using MW message for localization AFAIR [21:40:43] because it's designed to run as a gadget [21:51:27] 3MediaWiki-Core-Team, Librarization: Create a wiki page documenting a good composer.json file - https://phabricator.wikimedia.org/T76502#854227 (10Legoktm) [21:53:52] 3MediaWiki-Core-Team, MediaWiki-Documentation, Librarization: Document how to install xhprof - https://phabricator.wikimedia.org/T1354#854235 (10Legoktm) https://www.mediawiki.org/w/index.php?title=Manual%3AProfiling%2FXhprof&diff=1320832&oldid=1304372 I'm not sure about shared webserver users, how do they norm... [22:14:02] 3MediaWiki-Core-Team, MediaWiki-extensions-TemplateSandbox: Special:TemplateSandbox from Extension:TemplateSandbox needs edit token when raw HTML is allowed - https://phabricator.wikimedia.org/T76195#792803 (10Anomie) [22:17:38] 3MediaWiki-API, MediaWiki-Core-Team: Malicious site can bypass CORS restrictions in $wgCrossSiteAJAXdomains - https://phabricator.wikimedia.org/T77028#854294 (10Anomie) [22:19:49] 3MediaWiki-Core-Team: XSS in Extension:Listing 'name' and 'url' parameters - https://phabricator.wikimedia.org/T77624#854301 (10Anomie) 5Open>3Resolved [22:30:21] * ^d stabs puppet a few dozen times [22:31:01] <^d> Didn't fix the problem, but damn if I don't feel better about it :p [22:31:19] * bd808 defends puppet's right to be obtuse and non-intuitive [22:32:02] It was written by college kids in ruby. It's going to be a hot mess [22:32:10] <^d> I may not respect your obtuseness, but I will defend your right to be obtuse! [22:32:26] 'zactly [22:33:09] <^d> Puppet's not creating my service for the phab daemons :\ [22:33:21] <^d> Not erroring, just not creating it. As if my service{} definition wasn't even there. [22:33:44] "creating" you mean starting it? [22:34:15] <^d> That, I suppose. [22:34:43] <^d> https://gerrit.wikimedia.org/r/#/c/178643/7/puppet/modules/phabricator/manifests/init.pp - very last bit [22:35:05] ew. provider => base [22:35:20] no upstart/init.d scripts for it? [22:36:15] What's the exit code of `${phd} status` in your vm? [22:36:36] <^d> I copy+pasted from ops puppet. [22:36:41] <^d> Probably bad. 
[22:36:43] I think that has to return <>0 to trigger the start [22:37:34] <^d> vagrant@mediawiki-vagrant:/etc/init.d$ sudo /vagrant/phab/phabricator/bin/phd status [22:37:34] <^d> ID Host PID Started Daemon Arguments [22:37:34] <^d> localhost 2447 Dec 10 2014, 9:03:07 PM PhabricatorRepositoryPullLocalDaemon [22:37:36] <^d> localhost 2452 Dec 10 2014, 9:03:07 PM PhabricatorGarbageCollectorDaemon [22:37:38] <^d> localhost 2468 Dec 10 2014, 9:03:08 PM PhabricatorTaskmasterDaemon [22:37:40] <^d> localhost 2483 Dec 10 2014, 9:03:08 PM PhabricatorTaskmasterDaemon [22:37:42] <^d> localhost 2501 Dec 10 2014, 9:03:09 PM PhabricatorTaskmasterDaemon [22:37:44] <^d> localhost 2506 Dec 10 2014, 9:03:09 PM PhabricatorTaskmasterDaemon [22:37:46] <^d> vagrant@mediawiki-vagrant:/etc/init.d$ echo $? [22:37:48] <^d> 0 [22:37:50] <^d> Stupid phab. [22:38:26] heh. I wonder if it works in prod puppet either if it's copy-pasta [22:41:25] The phab docs are not very helpful -- https://secure.phabricator.com/book/phabricator/article/managing_daemons/ [22:41:44] sounds like Evan designed the service to require humans to think about it [22:43:45] <^d> Mmm, humans. [22:44:29] Puny Humans! Obey The Fist! [22:49:52] guess what I found today, the mediawiki/core structure testsuite (which checks autoloader / resource loader etc) is not run on mediawiki extensions :D [22:50:05] so we have a bunch of extensions not having the proper RL / autoload :( [22:50:43] I am going to enable them tomorrow morning and break all failing extensions [22:52:26] ^d, legoktm: I added some things to https://www.mediawiki.org/wiki/Manual:Profiling/Xhprof . I think we can remove the TODOs (nothing to do really) and the draft banner. Do you concur? [22:53:15] <^d> lgtm. [22:55:25] bd808: lgtm as well [22:56:35] {{done}} [22:56:50] did we have a phab task for that somewhere... [22:58:32] 3Librarization, MediaWiki-Core-Team, MediaWiki-Documentation: Document Profiling changes on wiki - https://phabricator.wikimedia.org/T76877#821730 (10bd808) [22:59:21] 3Librarization, MediaWiki-Core-Team, MediaWiki-Documentation: Document new library requirements for logging interface, cdb, xhprof etc. - https://phabricator.wikimedia.org/T74163#854437 (10bd808) [22:59:22] 3Librarization, MediaWiki-Core-Team, MediaWiki-Documentation: Document Profiling changes on wiki - https://phabricator.wikimedia.org/T76877#854434 (10bd808) 5Open>3Resolved a:3bd808 Done in collaboration by @chad, @legoktm and @bd808 [23:02:13] 3Librarization, MediaWiki-Core-Team: xhprof for MW - https://phabricator.wikimedia.org/T759#854446 (10bd808) [23:02:14] 3Librarization, MediaWiki-Core-Team, MediaWiki-Documentation: Document Profiling changes on wiki - https://phabricator.wikimedia.org/T76877#854444 (10bd808) [23:02:15] 3Librarization, MediaWiki-Core-Team, MediaWiki-Documentation: Document new library requirements for logging interface, cdb, xhprof etc. - https://phabricator.wikimedia.org/T74163#854445 (10bd808) [23:02:16] 3Librarization, MediaWiki-Documentation, MediaWiki-Core-Team: Document how to install xhprof - https://phabricator.wikimedia.org/T1354#854441 (10bd808) 5Open>3Resolved a:3bd808 Shared hosting users will have to lobby their host to install the PECL extension (not much we can do about that). Installation not... [23:06:00] <^d> bd808: If I remove the `status` from it it works. [23:06:11] <^d> I guess then it just relies on start/stop and doesn't try to check the status. [23:06:24] hmmm... ok [23:06:33] <^d> Blah, until restart. 
[23:06:33] <^d> nvm [23:06:35] <^d> Stupid me [23:13:37] 3Librarization, MediaWiki-Core-Team: xhprof for MW - https://phabricator.wikimedia.org/T759#854484 (10bd808) @tstarling Is there anything else you think we need to do before closing this task? We have the hhvm based profiler working in production and it still supports sub-function profiles. [23:14:40] 3MediaWiki-Core-Team, MediaWiki-extensions-TemplateSandbox: Special:TemplateSandbox from Extension:TemplateSandbox needs edit token when raw HTML is allowed - https://phabricator.wikimedia.org/T76195#854485 (10Legoktm) Why was this not backported to REL1_23 or REL1_22? [23:15:04] wow. [[Barack Obama]] triggers 2,317 calls to wfRunHooks [23:15:42] <^d> [[Special:BlankPage]] used to have ~300 calls to wfProfileIn/Out. [23:15:49] <^d> Now it has like 90-something. [23:17:13] 5.37% 28.391 10610 - wfGetRusage [23:18:03] that's just in sub-section calls? [23:18:52] ^d: https://gerrit.wikimedia.org/r/#/c/180637/ :) [23:18:58] * bd808 wishes that section profiling wasn't broken in hhvm :/ [23:19:24] <^d> AaronSchulz: For now or for swat? [23:19:26] bd808: it's not too bad with the new class afaik [23:19:34] ^d: whenever you want [23:19:49] there was a lot of time in __destruct() with the closure [23:20:24] yeah I saw that. Good fix [23:22:12] <^d> AaronSchulz: {{done}} [23:26:51] AaronSchulz: Should ProfilerXhprof::getFunctionStats throw out the '_total' entry from the scoped profiler? [23:26:55] -total [23:28:00] <^d> bd808: << "in most cases, we use nonzero exit codes to indicate unexpected exceptions, not normal execution with a negative/empty result. So nonzero exit means "the command did not work", and zero exit means "the command worked, parse the (hopefully machine-readable) output to figure out what the (possibly empty/negative) result is". [23:28:05] <^d> EOM; [23:28:28] like I said ... [23:37:56] <^d> start => "${phd} restart", [23:37:56] <^d> stop => "${phd} stop", [23:38:12] <^d> Works around it well enough, but you end up needlessly restarting like we do on the job queue service on every provision. [23:41:47] What does phd status say instead of DEAD when things are running? [23:42:35] `phd status | grep -v RUNNING` would have $? = 1 for example if nothing matches [23:43:02] <^d> Ahh, that could work [23:43:29] <^d> root@mediawiki-vagrant:/etc/init.d# /vagrant/phab/phabricator/bin/phd status [23:43:29] <^d> ID Host PID Started Daemon Arguments [23:43:29] <^d> 7 localhost 13943 Dec 17 2014, 11:36:54 PM PhabricatorRepositoryPullLocalDaemon [23:43:31] <^d> 8 localhost 13948 Dec 17 2014, 11:36:55 PM PhabricatorGarbageCollectorDaemon [23:43:33] <^d> 9 localhost 13953 Dec 17 2014, 11:36:55 PM PhabricatorTaskmasterDaemon [23:43:35] <^d> 10 localhost 13967 Dec 17 2014, 11:36:56 PM PhabricatorTaskmasterDaemon [23:43:37] <^d> 11 localhost 13972 Dec 17 2014, 11:36:56 PM PhabricatorTaskmasterDaemon [23:43:39] <^d> localhost 13977 Dec 17 2014, 11:36:56 PM PhabricatorTaskmasterDaemon [23:43:41] <^d> 12 mediawiki-vagrant.dev 13977 Dec 17 2014, 11:36:57 PM PhabricatorTaskmasterDaemon [23:43:43] <^d> That's when running [23:44:17] ugh the date stamp [23:44:25] so yeah... [23:44:35] not so easy to parse I think [23:45:58] <^d> We could grep for "" [23:46:06] <^d> We don't want that to appear.
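The grep-based status being batted around, assembled into a sketch. This is untested; the ${phd} variable and the grep pattern are assumptions (phd's real output states are exactly what's in question in the surrounding discussion):

    # Sketch of a phd service with an explicit status check instead of
    # relying on provider => base's default process lookup.
    service { 'phd-daemons':
        ensure   => running,
        provider => base,
        start    => "${phd} start",
        stop     => "${phd} stop",
        status   => "${phd} status | grep -q PhabricatorTaskmasterDaemon",
    }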
[23:46:22] <^d> Although I wonder if there are any other bogus states [23:46:34] * ^d cries a little [23:46:53] if you drop the header and then grep -v for DEAD it should return 1 when everything is dead [23:47:04] but if one thing is not it won't [23:47:26] I think you have to live with the crap result or patch phd [23:47:51] <^d> Based on upstream's response, I'm not too confident about a patch here. [23:48:55] maybe he'd allow an lsb-status action :) [23:49:16] but yeah I wouldn't bother [23:49:27] chase should be fighting that battle [23:50:19] does hhvm not have the native cdb functions? [23:50:40] <^d> No, it doesn't have any of the dba_* functions [23:50:43] <^d> cdb or otherwise. [23:51:53] hence why people need our library :) [23:52:19] 5.38% 28.464 527 - Cdb\Reader\PHP::find [23:52:53] not sure how that compares with the dba_* version [23:58:03] <^d> We should write a benchmark test.
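A starting point for that benchmark: a micro-benchmark sketch against the wikimedia/cdb library's reader. The file path, key count, and iteration count are arbitrary, and comparing against dba_* would need Zend with the dba extension, since HHVM lacks it per the above:

    <?php
    require __DIR__ . '/vendor/autoload.php';

    use Cdb\Reader;
    use Cdb\Writer;

    // Build a throwaway cdb file with 10k entries.
    $file = '/tmp/cdb-bench.cdb';
    $writer = Writer::open( $file );
    for ( $i = 0; $i < 10000; $i++ ) {
        $writer->set( "key-$i", str_repeat( 'v', 64 ) );
    }
    $writer->close();

    // Time repeated lookups through the pure-PHP reader.
    $reader = Reader::open( $file );
    $t = microtime( true );
    for ( $i = 0; $i < 100000; $i++ ) {
        $reader->get( 'key-' . ( $i % 10000 ) );
    }
    printf( "%.2f us per find\n", ( microtime( true ) - $t ) * 1e6 / 100000 );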