[00:03:41] TimStarling: follow your bliss :)
[00:51:33] 3MediaWiki-Core-Team: error: request has exceeded memory limit in /srv/mediawiki/php-1.25wmf17/includes/specialpage/SpecialPage.php on line 534 - https://phabricator.wikimedia.org/T89918#1048696 (10mmodell)
[00:52:45] 3MediaWiki-Core-Team: error: request has exceeded memory limit in /srv/mediawiki/php-1.25wmf17/includes/specialpage/SpecialPage.php on line 534 - https://phabricator.wikimedia.org/T89918#1048688 (10mmodell) @chad was it high on the list before? I didn't notice it and now it's suddenly #1 with 250+ instances of t...
[00:54:12] 3MediaWiki-Core-Team: error: request has exceeded memory limit in /srv/mediawiki/php-1.25wmf17/includes/specialpage/SpecialPage.php on line 534 - https://phabricator.wikimedia.org/T89918#1048701 (10Chad) Maybe not quite as high? Things were really drowned out by T89505 and T89258 though in the last week.
[00:54:17] 3MediaWiki-Core-Team: error: request has exceeded memory limit in /srv/mediawiki/php-1.25wmf17/includes/specialpage/SpecialPage.php on line 534 - https://phabricator.wikimedia.org/T89918#1048706 (10bd808) Sadly without a stack trace and request URL this one is about impossible to track down and squash.
[00:55:07] <^d> bd808, twentyafterfour: Having better OOM logging would be nice
[00:55:20] <^d> Right now we get a line # and in cases like this, you're just guessing
[00:55:23] ^d: yes, I have a bug for that
[00:56:59] ^d: https://phabricator.wikimedia.org/T89169
[00:57:04] 3MediaWiki-API, MediaWiki-Core-Team: API blocks query module causes PHP undefined property notice if bkprop parameter does not include 'timestamp' - https://phabricator.wikimedia.org/T89893#1048711 (10Anomie) 5Open>3Resolved
[00:57:18] <^d> bd808: Do shutdown functions get called when you OOM?
[00:57:21] <^d> I doubt it...
[00:57:35] needs testing but I think they may in hhvm
[00:57:56] <^d> Hmm, that could work then.
[00:58:31] if I just had some time to code ...
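The shutdown-function question above can be tested directly. A minimal standalone sketch (not MediaWiki code): on Zend PHP, functions registered with register_shutdown_function() do run after a fatal out-of-memory error, and error_get_last() reports it; whether HHVM behaves the same way is exactly what needed testing.

```php
<?php
// Sketch: detect an OOM fatal from a shutdown function.
// Run standalone; the low memory_limit forces the allocation to fail.
ini_set( 'memory_limit', '16M' );

register_shutdown_function( function () {
	$err = error_get_last();
	if ( $err && $err['type'] === E_ERROR
		&& strpos( $err['message'], 'memory' ) !== false
	) {
		// Real logging would also record the request URL and a backtrace
		// captured earlier, since the reported fatal line is misleading.
		fwrite( STDERR, "OOM at {$err['file']}:{$err['line']}\n" );
	}
} );

$s = str_repeat( 'x', 1 << 30 ); // ~1 GiB: exceeds the 16M limit
```

On Zend PHP the shutdown function fires and writes the OOM marker even though the script died fatally; the open question in the channel was whether HHVM does the same.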
weekends and evenings are too short
[00:58:55] * bd808 owes niharika ~2 hours of CR tonight
[00:59:07] ^d: to track down CA OOMs we used the debug logging in CentralAuth + servername + timestamp to correspond with hhvm.log
[01:03:47] AaronS: Are you comfy with the inner workings of SQLite? Re: https://phabricator.wikimedia.org/T89180#1046190 - I've tried locally to remove the doCommit(), doRollback() overrides and simplify doBegin() to just the conditional and calling parent::doBegin(). It seems to work, but I'm not sure if there are cases I'm missing.
[01:10:52] that seems reasonable
[01:17:45] AaronS: Cool. I just pushed a patch. Fleshing out the commit message now.
[05:38:30] bd808|BUFFER: could you review https://github.com/wikimedia/composer-merge-plugin/pull/16 whenever you have some time?
[05:53:17] 3MediaWiki-Core-Team: Multi-DC documentation RFC - https://phabricator.wikimedia.org/T88666#1049024 (10bd808)
[05:53:18] 3MediaWiki-Core-Team: Devise stashing strategy for multi-DC mediawiki - https://phabricator.wikimedia.org/T88493#1049023 (10bd808)
[06:30:38] 3MediaWiki-Core-Team: error: request has exceeded memory limit in /srv/mediawiki/php-1.25wmf17/includes/specialpage/SpecialPage.php on line 534 - https://phabricator.wikimedia.org/T89918#1049056 (10tstarling) I don't think it's actually running out of memory at that line, it probably ran out some time before it...
[08:22:38] ori: here now, if you're still awake
[08:22:46] (I'm in France for the next 2.5 weeks)
[10:30:44] 3Datasets-General-or-Unknown, Services, MediaWiki-Core-Team: Improve Wikimedia dumping infrastructure - https://phabricator.wikimedia.org/T88728#1049359 (10ArielGlenn) They've been invited. If we don't have some feedback in a few days I'll float it past wikitech next.
[10:31:09] 3CirrusSearch, MediaWiki-Core-Team: CirrusSearch: Allow *ORs* of incategory to be sent via a post or get parameter - https://phabricator.wikimedia.org/T89823#1049360 (10Manybubbles) Addendum: "ids" here means the pageid of the category, not the category id itself. This is to keep it compatible with catgraph.
[10:45:54] 3CirrusSearch, MediaWiki-Core-Team: Add per user concurrent search request limiting - https://phabricator.wikimedia.org/T76497#1049372 (10aaron) What is this blocked on?
[10:45:58] 3CirrusSearch, MediaWiki-Core-Team: CirrusSearch: Allow *ORs* of incategory to be sent via a post or get parameter - https://phabricator.wikimedia.org/T89823#1049374 (10daniel) For the record, here is the rationale for implementing this feature: - people (Wikipedia editors) have been asking for //deep categor...
[10:46:52] 3Wikimedia-Blog, MediaWiki-Core-Team: Finish up blog post for Cirrus - https://phabricator.wikimedia.org/T85176#1049376 (10aaron) Is this done yet?
[11:17:48] 3MediaWiki-Core-Team: error: request has exceeded memory limit in /srv/mediawiki/php-1.25wmf17/includes/specialpage/SpecialPage.php on line 534 - https://phabricator.wikimedia.org/T89918#1049487 (10Aklapper) p:5Triage>3High
[11:42:06] 3Wikimedia-Blog, CirrusSearch, MediaWiki-Core-Team: Finish up blog post for Cirrus - https://phabricator.wikimedia.org/T85176#1049535 (10Manybubbles) 5Open>3declined
[11:42:19] 3Wikimedia-Blog, CirrusSearch, MediaWiki-Core-Team: Finish up blog post for Cirrus - https://phabricator.wikimedia.org/T85176#940725 (10Manybubbles) Just not ever going to do it.
[11:43:11] 3CirrusSearch, MediaWiki-Core-Team: Add per user concurrent search request limiting - https://phabricator.wikimedia.org/T76497#1049538 (10Manybubbles) p:5Triage>3Normal
[11:44:43] 3CirrusSearch, MediaWiki-Core-Team: Add per user concurrent search request limiting - https://phabricator.wikimedia.org/T76497#801864 (10Manybubbles) No longer blocked. Can be worked on if we have time.
All we have to do is set the config in Cirrus to do it. I suggest doing this only on group0 for a few...
[11:52:54] 3MediaWiki-Core-Team: error: request has exceeded memory limit in /srv/mediawiki/php-1.25wmf17/includes/specialpage/SpecialPage.php on line 534 - https://phabricator.wikimedia.org/T89918#1049553 (10Manybubbles) It seems unlikely this line is the cause: ``` public function getOutput() { return $this->getContex...
[12:29:04] 3MediaWiki-Core-Team, wikidata-query-service: Investigate BigData for WDQ - https://phabricator.wikimedia.org/T88717#1049596 (10Haasepeter) Regarding the RDR/Wikidata Toolkit discussion: Markus is not available this week, he will propose a time for a call next week.
[13:12:22] 3MediaWiki-API, MediaWiki-Core-Team: API: Add jsconfigvars to action=parse - https://phabricator.wikimedia.org/T67015#1049647 (10TheDJ) I'm wondering if adding an extra option specifically for this is desirable. Perhaps we should just group it under modules? Does the one option really make sense without the ot...
[13:15:50] 3MediaWiki-API, MediaWiki-Core-Team: API: Add jsconfigvars to action=parse - https://phabricator.wikimedia.org/T67015#1049654 (10TheDJ)
[14:08:38] 3Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Existing pages without ability to reach and obviously wrong namespace - https://phabricator.wikimedia.org/T87645#1049696 (10Aklapper) Could somebody please answer Rillke's questions above so we can close this ticket as resolved?
[14:36:29] 3MediaWiki-API, MediaWiki-Core-Team: API: Add jsconfigvars to action=parse - https://phabricator.wikimedia.org/T67015#1049735 (10Anomie) Considering we're offering two different formats for the data (inline and JSON-encoded blob), new props seem more straightforward than an extra format-selector parameter.
[15:28:33] 3MediaWiki-Core-Team: Unexpected N4HPHP13DataBlockFullE - https://phabricator.wikimedia.org/T89958#1049812 (10Chad) 3NEW
[15:28:48] <^d> bd808|BUFFER: That's a thing ^
[15:31:51] 3MediaWiki-Core-Team, GlobalUserPage: Wikilinks from global user pages should point to the central wiki - https://phabricator.wikimedia.org/T89916#1049836 (10Legoktm)
[15:34:36] 3MediaWiki-Core-Team, operations: Unexpected N4HPHP13DataBlockFullE - https://phabricator.wikimedia.org/T89958#1049847 (10Chad) p:5Triage>3Normal
[15:34:54] 3MediaWiki-Core-Team, operations: Unexpected N4HPHP13DataBlockFullE - https://phabricator.wikimedia.org/T89958#1049853 (10Joe) From my analysis, we have exhausted our cold cache on a few appservers. I have proposed https://gerrit.wikimedia.org/r/191620 as a solution.
[15:42:18] ^d: sneaky N4HPHP13DataBlockFullE ninjas!
[15:42:52] <^d> /nick N4HPHP13DataBlockFullE
[15:51:39] <_joe_> how much of what we throw into memcached is in any way useful?
[15:52:01] <_joe_> because AFAICT losing ~ 20% of memcached has NO effect whatsoever
[15:52:16] <_joe_> I'm not even sure if losing all of it would
[15:52:40] _joe_: CentralAuth login tokens :P
[15:52:56] <_joe_> legoktm: ok, apart from that :P
[15:54:33] probably page text that would otherwise have to be fetched from external store?
[16:27:25] anomie: there are enough subclasses of Article in extensions that I don't think it's realistic to change the parameters... do you have any ideas on other ways to pass that option from render() to view()?
[16:27:39] legoktm:
[16:27:42] oops
[16:28:00] legoktm: The problem is the warning that caused the Zend unit tests to fail
[16:29:10] I can fix ImagePage, but then every extension that overrides Article::view() will start complaining as well
[16:30:18] legoktm: I suppose you could use the same trick that ApiBase::getAllowedParams() uses for its $flags parameter.
[16:30:53] Well, what it would do if it needed $flags, using func_get_arg() to get it.
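The func_get_arg() trick anomie suggests can be sketched like this. A standalone illustration with made-up class names (not the actual Article or ApiBase code): a subclassable method keeps its declared signature, but callers can still pass an extra optional argument that the base class reads without breaking overrides that don't declare it.

```php
<?php
// Hypothetical base class: view() is widely overridden in extensions,
// so its declared signature cannot grow a new parameter.
class BasePage {
	public function view() {
		// Read an undeclared optional argument, if the caller passed one.
		$options = func_num_args() ? func_get_arg( 0 ) : array();
		return isset( $options['useParserCache'] )
			? 'view with cache=' . var_export( $options['useParserCache'], true )
			: 'plain view';
	}
}

// An extension subclass that knows nothing about the new argument keeps
// working, and raises no signature-mismatch warning under strict checks.
class LegacyPage extends BasePage {
	public function view() {
		return 'legacy ' . parent::view();
	}
}

$p = new BasePage();
echo $p->view( array( 'useParserCache' => false ) ), "\n"; // view with cache=false
echo ( new LegacyPage() )->view(), "\n";                   // legacy plain view
```

The cost is that the extra argument is invisible in the method signature, which is why it stays a workaround rather than an API.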
[16:31:41] :/
[16:32:11] I guess that works...
[17:39:42] 3operations, MediaWiki-Core-Team: Unexpected N4HPHP13DataBlockFullE - https://phabricator.wikimedia.org/T89958#1050178 (10Chad) Errors seem to have dissipated from hhvm.log. Only temporarily, because this can still happen again. The Gerrit patch seems like a good idea to me.
[17:49:04] 3MediaWiki-extensions-ApiSandbox, MediaWiki-API, MediaWiki-Core-Team: Merge ApiSandbox extension into core - https://phabricator.wikimedia.org/T89386#1050210 (10Anomie) a:3Anomie
[17:49:54] \o/
[17:51:48] bd808: Hey, is there any reason I shouldn't go on https://phabricator.wikimedia.org/tag/mediawiki-api/, create columns like "Needs Code", "Non-Code", "Extension stuff", "Unclear", "In Dev", and "Needs Review", and just start sorting stuff?
[17:52:10] anomie: do it!
[18:07:48] anomie: Should we remove MediaWiki-API from this? https://phabricator.wikimedia.org/T88083
[18:07:55] anomie: I'm not sure this is an API thing as much as it's a backend thing.
[18:09:12] Deskana: Yeah, it doesn't have much to do with the API IMO, besides that it would probably eventually need changes to the API in some manner if it were to go ahead.
[18:10:02] True. We can leave it then, I guess.
[18:10:16] Whatever you prefer.
[18:25:35] 3Services, Parsoid, MediaWiki-Core-Team: Replace Tidy in MW parser with HTML 5 parse/reserialize - https://phabricator.wikimedia.org/T89331#1050376 (10cscott) I think a pure PHP implementation is best -- there are a number of standards-compliant HTML5 parsers for PHP now. But we could also write a "tidy-ng" a...
[18:29:01] 3Services, Parsoid, MediaWiki-Core-Team: Replace Tidy in MW parser with HTML 5 parse/reserialize - https://phabricator.wikimedia.org/T89331#1050384 (10matmarex)
[18:29:48] 3Services, Parsoid, MediaWiki-Core-Team: Replace Tidy in MW parser with HTML 5 parse/reserialize - https://phabricator.wikimedia.org/T89331#1033553 (10matmarex) (This is a duplicate of T56617, really.
I love that it is being worked on; I wish I had heard about it sooner on any of the dozens of Tidy bugs I'm watching.)
[18:54:09] robla: I have a pretty bad cold so I'm WFH after all
[18:54:41] ah shoot... well, get well soon!
[18:54:54] thanks
[19:04:04] csteipp: Hey! I'm in the hangout now.
[19:04:28] csteipp: Although my microphone doesn't appear to be working.
[19:04:30] csteipp: Okay, fixed it!
[19:09:54] 3MediaWiki-Page-protection, MediaWiki-Core-Team: Admin functions for (autoconfirmed) users - https://phabricator.wikimedia.org/T16325#184423 (10Paladox)
[19:10:28] 3MediaWiki-Page-protection, MediaWiki-Core-Team: Admin functions for (autoconfirmed) users - https://phabricator.wikimedia.org/T16325#184423 (10Paladox) I have added the project MediaWiki-Core-Team because this task also has to do with the core of MediaWiki.
[19:10:51] <^d> AaronS: Trivial
[19:38:49] 3MediaWiki-Page-protection, MediaWiki-Core-Team: Admin functions for (autoconfirmed) users - https://phabricator.wikimedia.org/T16325#1050819 (10Aklapper) @Paladox: That does not make sense, as this task is closed for good as declined. Please do not add random projects to tasks (you have been told so before).
[19:47:23] 3MediaWiki-Core-Team, GlobalUserPage: Wikilinks from global user pages should point to the central wiki - https://phabricator.wikimedia.org/T89916#1050874 (10Anomie) Duplicate of or blocked by T42128?
[20:07:52] AaronS: Regarding SqlBagOStuff and SQLite, does that apply to MySQL as well to some degree? E.g. having one database as both main and default objectcache? I assume not, since that's our popular default.
[20:09:58] AaronS: This is blocking a high-priority task due to constant failures in Jenkins. The majority of jobs currently have no concurrency within a build because the unit tests run one by one from the command line. But for working client-side, one has to be able to make at least one normal MediaWiki page view from a browser. Which, as basic as it is, fails when using SQLite right now.
[20:10:24] I could focus effort on using MySQL in Jenkins instead. But that would push this whole thing back by another week.
[20:10:38] <_joe_> ori: ping on the appservers, they're exhausting the cold TC cache
[20:10:47] <_joe_> I have a patch for review that fixes that
[20:10:49] Krinkle: how hard would the 2nd DB trick be?
[20:11:16] MW config can do that already, you just have to make the DB with the table
[20:11:34] basically our SQL parser cache config already makes use of this
[20:12:19] AaronS_: You tell me :) How would I create that? I'd prefer to avoid coding much for it inside Jenkins, since that's all extra maintenance overhead we can't afford, and bound to get out of sync with MediaWiki or have to be fragmented by version at some point.
[20:12:23] <_joe_> ori: https://gerrit.wikimedia.org/r/#/c/191620/
[20:12:45] Krinkle: did you try just disabling the objectcache?
[20:12:47] AaronS: How would it get created? Is there a way to do install.php + a LocalSettings config change and then have update.php create that 2nd db?
[20:13:44] ideally the installer would handle this, which would be better for third parties and would shift the debt off of CI
[20:13:48] $wgObjectCaches[CACHE_ANYTHING] = array( 'class' => 'EmptyBagOStuff' ); ;)
[20:13:50] AaronS: I haven't, and would prefer not to. While short-lived, we do re-use a lot within those few requests. It would slow things down a fair bit I think. And it ought to work.
[20:14:11] sure
[20:14:26] It's an integration test of sorts. Though very implicit :P
[20:15:37] OK. So I'm thinking the path ahead is to disable the object cache for the moment, and then switch to MySQL soon after and re-enable it there.
[20:16:23] AaronS: The SQLite issue we're facing, do you reckon it's limited to BagOStuff in practice? Or is there still a genuine bug for small-scale MW usage?
[20:17:28] I've seen various errors about other tables as well. At least resourceloader table queries were also failing as frequently as l10ncache tables.
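The two configuration options being weighed above might look roughly like this in LocalSettings.php. The EmptyBagOStuff line is quoted from the channel; the second-DB wiring is a sketch based on the "SQL parser cache" precedent AaronS mentions, and the 'mwcache' database name, directory, and exact SqlBagOStuff parameter names are assumptions, not verified against that MediaWiki version.

```php
// Option 1 (quoted above): disable the object cache entirely for CI.
$wgObjectCaches[CACHE_ANYTHING] = array( 'class' => 'EmptyBagOStuff' );

// Option 2 (sketch, hypothetical names): point SqlBagOStuff at a
// separate database so cache writes run in their own connection and
// don't contend with the main DB's transactions. The 'mwcache' DB and
// its objectcache table would have to be created out of band.
$wgObjectCaches['db-cache'] = array(
	'class'  => 'SqlBagOStuff',
	'server' => array(
		'type'        => 'sqlite',
		'dbname'      => 'mwcache',
		'dbDirectory' => "$IP/../data", // assumed SQLite data dir
	),
);
$wgMainCacheType = 'db-cache';
```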
[20:17:33] Neither of which is objectcache, I guess.
[20:17:35] the phab task only mentions objectcache in the errors... and it's an obvious point of high contention
[20:17:55] l10ncache would make sense too
[20:18:42] all cache tables should be in another DB (maybe all the same second DB) and worked with in auto-commit mode
[20:19:17] AaronS: Something that could become part of SqlBagOStuff?
[20:20:47] I wonder about msg_resource too
[20:23:04] looks like they could be key/blobs in objectcache... the only reason they are tables is to have easy hierarchy (lang=>key, resource=>lang)
[20:24:21] of course prefix keys or invalidation keys could be used instead; maybe lookup key/blobs could be kept for eager deletes to work around the lack of eviction
[20:25:26] AaronS: Data loss aside, another reason they're in their own table is (I think, this design predates me) because they're supposed to be atomic as one version. E.g. not remove only some of the data.
[20:25:36] So that it's always refreshed at once
[20:25:44] it's like a big JSON blob in one object cache row
[20:25:54] but built out to provide selective querying
[20:26:21] whereas the contract with objectcache is that any key may be purged at any time without compromising the integrity
[20:26:45] msg_resource?
[20:26:55] Yeah
[20:27:06] for each resource, the only hierarchy is a list of languages
[20:27:24] a lot could fit in a blob there
[20:27:40] are you saying it has to be atomic across modules?
[20:27:41] msg_resource / msg_resource_links both do not roll over individual rows. It's a database table.
[20:28:24] messages relate to each other.
[20:28:32] in any case, a prefix key can handle whatever atomicity is needed
[20:28:55] AaronS: You mean wfMemcKey?
[20:29:15] so when you update, you store the new blobs with a random prefix, then (when done) change the prefix key to refer to items with that same random prefix
[20:29:22] sure, it would just be another key
[20:30:16] AaronS: Hm.. I'm not convinced that ensures integrity.
Our blobs roll over individually based on whatever, right? LRU, MRU, whatever it might be.
[20:30:35] AaronS: And while it's one large data set, we don't replace all rows every time. Only selective updates, but atomic selective updates.
[20:30:39] SqlBagOStuff only purges based on TTL
[20:30:43] it doesn't really have LRU
[20:31:25] or any eviction
[20:31:47] AaronS: OK. But is that a contract with object cache? What about other backends that can end up in there?
[20:32:03] Do we use SqlBagOStuff in prod?
[20:32:25] for parser cache (behind memcached) and for a few tiny bits
[20:32:30] Though I guess we could segment it inside MediaWiki to require being an instance of SqlBagOStuff.
[20:33:00] as opposed to whatever main-cache is, because that seems fragile I think.
[20:33:14] If it ends up backed solely by redis or memcached, we'd have a problem, right?
[20:33:29] yeah, objectcache does not require LRU/eviction by contract, it's left as "depends on the cache", but if we require the local SQL one (which is always there), it works
[20:33:55] Yeah, we'd need this contract to not only not require LRU, but explicitly require the opposite.
[20:34:15] redis could be just fine if it's allkeys-volatile
[20:34:45] (assuming we don't have a ttl), but allkeys-* would have trouble
[20:35:53] noeviction would also be fine for redis
[20:36:56] Krinkle: so the options would be to require SqlBagOStuff, or we could just say "don't do anything with eviction"... and allow others (e.g. redis, which can work fine)
[20:38:40] <_joe_> ori: I noticed we never have any code in tc.hot
[20:38:46] <_joe_> do you know why?
[20:38:55] _joe_: tc.hot is only used in repoauth mode
[20:38:59] AaronS: OK. Thanks, I'll file a task.
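AaronS's prefix-key scheme can be sketched against a plain key-value interface. A standalone, array-backed stand-in for a BagOStuff backend (class and key names are hypothetical): writers store a whole new generation of blobs under a random prefix, and only then flip a single pointer key, so readers always see one complete version.

```php
<?php
// Sketch of atomic multi-key updates via a generation-prefix pointer key.
class PrefixedBlobStore {
	private $kv = array(); // stand-in for a BagOStuff backend

	public function putVersion( $name, array $blobs ) {
		// Write the new generation under a random prefix first...
		$gen = bin2hex( random_bytes( 4 ) );
		foreach ( $blobs as $key => $blob ) {
			$this->kv["$name:$gen:$key"] = $blob;
		}
		// ...then a single write atomically switches readers over.
		$this->kv["$name:current"] = $gen;
	}

	public function get( $name, $key ) {
		if ( !isset( $this->kv["$name:current"] ) ) {
			return false;
		}
		$gen = $this->kv["$name:current"];
		return isset( $this->kv["$name:$gen:$key"] )
			? $this->kv["$name:$gen:$key"]
			: false;
	}
}
```

As the rest of the discussion notes, this only stays safe on a backend without independent eviction (SqlBagOStuff, or redis with noeviction): on an LRU cache, one blob of the current generation could vanish while the pointer key survives, which is exactly Krinkle's integrity objection.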
[20:39:05] <_joe_> ok, makes sense
[20:39:18] it would be nice to consolidate these different caching mechanisms a bit
[20:39:22] <_joe_> that would be a hackathon project
[20:39:55] <_joe_> try out RepoAuth mode and think of a deploy mechanism
[20:43:34] _joe_: That would be fun to work on
[20:44:51] AaronS: https://phabricator.wikimedia.org/T90001
[20:44:56] Krinkle: I made another suggestion on https://gerrit.wikimedia.org/r/#/c/191517/
[20:49:54] <_joe_> bd808: :)
[20:49:58] <_joe_> let's do that
[20:50:18] Phile a phab task!
[20:50:21] <_joe_> but wait, we need buddies!
[20:50:31] We can get Reedy for our non-wmf buddy ;)
[20:50:43] <_joe_> ahahah that's cheating
[20:51:00] It's called gaming the system, thank you very much
[20:51:10] too late, I already put him down as my buddy!
[20:51:15] <_joe_> lol
[20:51:29] was there anything that said a buddy could not be shared?
[20:51:36] <_joe_> can my imaginary friend count?
[20:51:44] <_joe_> bd808: right!
[20:52:07] we could bring swtaarrs with us. Who doesn't want to spend a weekend in France with a bunch of geeks in a hotel?
[20:53:10] oh shit, are applications for going to the hackathon already due?
[20:54:17] <_joe_> bd808: great idea actually, if he wants
[20:54:41] ori: just opened yesterday. not due for a while
[20:55:37] <_joe_> ori: anarchy in the WMF!
[20:55:48] ori: Due March 4th apparently
[20:57:56] AaronS: I noticed these messages frequent the logs: LoadBalancer::getConnection: using server for group ''
[20:58:10] AaronS: Are they useful in more realistic scenarios, or could they be removed perhaps?
[20:58:14] e.g. https://integration.wikimedia.org/ci/job/mediawiki-core-qunit-karma/611/artifact/log/mw-debug-www.log/*view*/
[20:58:24] It's like 60% of most logs.
[20:59:41] 3MediaWiki-Core-Team: Triage Mediawiki-API tasks - https://phabricator.wikimedia.org/T90003#1051169 (10Anomie) 3NEW a:3Anomie
[21:12:55] Hm.. do we need MM_WELL_KNOWN_MIME_TYPES? It seems redundant with the mime.types file.
Which is part of MediaWiki. Why would it need a fallback?
[21:14:29] Krinkle: is it still 60%
[21:14:57] oh, nevermind
[21:15:35] I'd suggest moving that to getReaderIndex()
[21:15:44] 496 out of 720
[21:15:46] lines
[21:16:09] most of the time mReadIndex should be hit
[21:16:15] the other times, it can log it
[21:19:42] AaronS: Whereabouts in getReaderIndex()? Around '# No server found yet'? https://github.com/wikimedia/mediawiki/blob/master/includes/db/LoadBalancer.php#L255
[21:20:38] Or at the end?
[21:20:42] if ( $i !== false ) {
[21:36:31] 3VisualEditor, VisualEditor-MediaWiki, MediaWiki-Configuration, MediaWiki-Core-Team: convertExtensionToRegistration.php does not set defaults for globals, so $wgResourceModules += array(...) and similar cause fatals - https://phabricator.wikimedia.org/T86311#1051321 (10Jdforrester-WMF)
[21:48:55] bd808: Say what now? :P
[21:50:17] for the Lyon hackathon we are supposed to find a community buddy to work on a project with. Apparently several of us independently decided that we should buddy with you :)
[21:51:18] surely there are plenty of other former contractors to choose from? :D
[21:51:55] I dunno if I'll be able to be in Lyon
[21:52:00] :/
[21:52:20] can't you just fly?
[21:52:37] just close your eyes and think happy thoughts!
[21:53:00] I guess I'd need to stop every 400-450 miles or so
[21:53:21] which adds landing practice, very important
[21:53:55] it's slightly scary that these little planes have been flown back and forth across the Atlantic numerous times
[21:54:31] In other news, it's cold in Florida
[21:55:39] My granddad used to fly his little 5-seat Cessna all over, but never trans-oceanic as far as I know
[21:56:26] nice
[21:57:06] When it was brand new, my little brother threw up all over the back seat.
It was gross
[21:57:14] and Bob was not pleased
[21:59:28] I think it was one of these -- https://en.wikipedia.org/wiki/Cessna_180
[22:09:09] robla: arthur's calendar looks like a brick wall for the rest of the day
[22:59:37] 3MediaWiki-Core-Team, wikidata-query-service: Investigate BigData for WDQ - https://phabricator.wikimedia.org/T88717#1051672 (10Jdouglas) //Out-of-band note:// I've updated the docs in the [[ https://github.com/earldouglas/blazegraph-scratchpad/tree/master/inference-rules | inference-rules ]] example to cover my...
[23:15:16] 3MediaWiki-Core-Team, wikidata-query-service: Investigate BigData for WDQ - https://phabricator.wikimedia.org/T88717#1051717 (10Jdouglas)
[23:30:18] 3MediaWiki-Core-Team, GlobalUserPage: Wikilinks from global user pages should point to the central wiki - https://phabricator.wikimedia.org/T89916#1051756 (10Legoktm)
[23:36:34] manybubbles: did you read my comment on https://phabricator.wikimedia.org/T89918 ?
[23:36:58] TimStarling: good morning. No, I hadn't yet
[23:37:08] I made it about 5 hours before your comment ;)
[23:37:09] ah, yeah, that one
[23:37:10] yeah
[23:37:27] I did. I was more commenting that that line would have been pretty uninteresting
[23:38:00] thing is, it's not even reporting that it's going OOM in those functions
[23:38:11] it's reporting that it went OOM on that actual line, which basically does nothing at all
[23:38:31] all it does on that line is call two functions with no parameters; there is no copying
[23:38:51] yeah
[23:40:04] I think MaxSem is right -- the INI setting should indeed be 'hhvm.error_handling.log_native_stack_on_oom'
[23:40:23] 3MediaWiki-Core-Team, GlobalUserPage: Wikilinks from global user pages should point to the central wiki - https://phabricator.wikimedia.org/T89916#1051787 (10Legoktm) >>! In T89916#1049237, @TTO wrote: >>>! In T89916#1049144, @Nemo_bis wrote: >> Let's just remove "m" from $wgLocalInterwikis. We've done without f...
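The LoadBalancer logging change discussed earlier (moving the "using server for group" debug line out of getConnection() and into getReaderIndex(), so the cached-index hot path stays silent) can be illustrated with a stripped-down stand-in. This is a hypothetical sketch, not the real LoadBalancer class:

```php
<?php
// Stand-in for LoadBalancer: log only when a reader index is actually
// chosen, not on every getConnection() call.
class ReaderPicker {
	private $readIndex = -1; // cf. LoadBalancer::$mReadIndex
	public $logLines = array();

	public function getReaderIndex( $group = false ) {
		if ( $group === false && $this->readIndex >= 0 ) {
			return $this->readIndex; // hot path: cached index, no log spam
		}
		$i = 0; // pretend load-based server selection happened here
		$this->logLines[] = "getReaderIndex: using server $i for group '$group'";
		if ( $group === false ) {
			$this->readIndex = $i;
		}
		return $i;
	}
}
```

With this shape, a request making hundreds of getConnection() calls produces one debug line instead of hundreds, addressing Krinkle's "496 out of 720 lines" observation.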
[23:43:36] it looks like HHVM does not check the OOM flag on return
[23:43:45] it does check it on call
[23:43:48] so if you have
[23:43:55] function a() {b();}
[23:44:00] function b() {c();}
[23:44:18] function c() {str_repeat('x',1<<31);}
[23:44:24] function d() {}
[23:44:29] a();
[23:44:31] d();
[23:44:38] it will give an OOM on the last of those lines
[23:44:42] the call to d()
[23:44:59] at least it gives you an idea of where it is
[23:46:51] are HHVM segfaults logged to any low-traffic log?
[23:47:01] or do I have to search the syslogs for them?
[23:48:13] they should be going to hhvm-fatal.log
[23:48:24] but I don't see that on fluorine, and while I'd love to believe that it's because there haven't been any fatals..
[23:49:27] the last one in archive/ is hhvm-fatal.log-20141210.gz, which is when https://gerrit.wikimedia.org/r/#/c/177432/ was deployed
[23:49:29] bd808: ^
[23:50:05] I was wondering if there was any way to find out the URL for https://phabricator.wikimedia.org/T89918
[23:50:06] yuck. revert it?
[23:50:12] so I checked the 5xx logs on oxygen
[23:50:25] there are interesting things in there, but possibly segfaults rather than OOMs
[23:50:44] the Special:Book errors are easy to reproduce
[23:50:59] I was just trying to make Faidon happy with that patch. He didn't like having some RainerScript and some legacy syntax in the file, for whatever reason
[23:51:40] ah, it's missing a :
[23:52:11] I'd prefer to revert, for the reason I mentioned in the comment
[23:52:11] $syslogtag == "hhvm-fatal" vs :syslogtag, isequal, "hhvm-fatal:"
[23:52:37] well, screw it, just add the colon
[23:52:40] could you submit a patch?
[23:56:27] TimStarling: I think we might get a stack trace if we fixed https://phabricator.wikimedia.org/T89169
[23:57:08] doesn't the patch you just submitted fix that?
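The missing-colon bug above comes down to the two rsyslog filter styles not matching the same string. A sketch for comparison (the file path and surrounding layout are assumptions; only the two filter expressions are from the channel). The syslog tag field includes its trailing colon, so the RainerScript comparison against "hhvm-fatal" without one never matched anything:

```
# Legacy property-based filter (the tag as matched includes the colon):
:syslogtag, isequal, "hhvm-fatal:"    /a/mw-log/hhvm-fatal.log

# RainerScript equivalent -- the broken version compared against
# "hhvm-fatal" (no colon) and so routed nothing to the file:
if $syslogtag == "hhvm-fatal:" then {
    action(type="omfile" file="/a/mw-log/hhvm-fatal.log")
}
```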
[23:57:32] actually maybe the Special:Book errors are the OOMs in T89918
[23:58:23] I added a fatal error handling block, but the messages from it are mixed with normal warnings, and they got shut off when Sam and Timo added stack traces to all of them and it melted fluorine with log volume
[23:58:29] these multilingual errors are deliberately designed to annoy me, aren't they?
[23:58:39] we need to split fatals from errors in the log stream
[23:58:55] TimStarling: yes
[23:59:38] we were going to encode the whole thing as JSON, with superfluous escaping
[23:59:56] <^d> TimStarling: The textvccheck ones?
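The "encode the whole thing as JSON" idea can be sketched as follows. The field names are made up, and "superfluous escaping" is read here as json_encode()'s default behavior of escaping slashes and non-ASCII characters, which keeps each record safely on one line of a line-oriented log stream:

```php
<?php
// Sketch: one fatal-error record per line, JSON-encoded with PHP's
// default (aggressive) escaping so embedded slashes and non-ASCII text
// can't break or confuse downstream line-based log processing.
function formatFatalRecord( array $info ) {
	// Default flags: "/" becomes "\/", non-ASCII becomes \uXXXX escapes.
	return json_encode( $info ) . "\n";
}

$line = formatFatalRecord( array(
	'channel' => 'hhvm-fatal',
	'message' => 'request has exceeded memory limit',
	'url'     => '/wiki/Special:Book',
) );
echo $line;
```

Splitting fatals from ordinary errors would then be a matter of routing on the channel field rather than pattern-matching free-form text.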