[00:49:29] Krinkle, ya, that makes sense -- we definitely do want trains to be blocked if parsoid breaks .. so we should do whatever required to integrate into that deployment process.
[00:50:36] subbu: cool, should I wait per T255627, or is there not anything automated that needs updating as far as you know?
[00:50:37] T255627: Use type:mediawiki for mediawiki logs from wpt* servers - https://phabricator.wikimedia.org/T255627
[00:51:36] I did find these two
[00:51:37] https://codesearch.wmflabs.org/operations/?q=type%5Cb.%2B%5Cbparsoid&i=nope&files=&repos=
[00:51:49] which should be fine to update later, which I'll do tomorrow as well.
[00:56:52] nothing automated, no ... but we will need to update all the logstash dashboards on our end.
[00:57:41] but you will also see a lot of noisiness in parsoid logs, and you will need to add appropriate filters to exclude those.
[00:59:30] let us actually think this through .. i'll leave a comment on the patch.
[01:04:16] have to step away again for dinner. later.
[01:52:42] subbu: i've updated the parsoid-php logstash dash so that type:parsoid-php is set at the top level (rather than in each panel internally), and turned it into a DSL/JSON filter that checks both 'type' and 'servergroup' for future-proofing
[01:53:11] I also fixed the broken fatal/timeout panels while at it. channel:fatal no longer exists; it was renamed to 'exception' a while back.
[01:53:52] in case you thought there were no longer any timeouts :P
[02:08:20] let's continue on task, as you pointed out, easier :)
[02:08:22] good night
[12:00:36] Krinkle: Can you remind me whether it's reasonable to split the cache for a ResourceLoader module, to get maybe ~2-10 different configurations that are loaded and locally cached depending on attributes of each page? For context, I'm considering doing some prefiltering on QuickSurveys so that we only load configuration, messages, etc. if the survey might be displayed.
[12:01:01] My plan is to split the cache on the list of survey names which are going to be delivered as payload.
[12:01:40] One awkward detail is that individual clients would often be loading multiple variations of the module, depending on what page is viewed. I'm not sure if this is allowed.
[12:26:45] awight: quantify the audience size impacted and the payload size as it would be currently. Might be fine?
[15:22:56] Nikerabbit: is it intended that ULS creates an empty Languages section e.g. on non-article-related pages like this one? https://en.wikipedia.org/wiki/Special:Blankpage
[15:23:07] I don't recall seeing it before, but maybe it's normal
[15:24:54] Krinkle: yes, I believe it has been so for a long time
[15:28:04] OK
[16:12:44] Hi and thanks for the awesome software and free support. I'm trying to import a 23MB article history, but the browser times out. What should I do?
[16:13:08] Do it on the command line?
[16:13:13] Increase the server timeout?
[16:16:31] I ran it again, but now I got only 161 revisions imported instead of the 500+ that exist in reality
[16:16:53] I'll revert the database and then try either method
[16:18:07] jukebohi: you can import it from the command line
[16:18:17] Maybe the other ones were already imported during the timeout
[16:18:18] into a special location?
[16:18:22] that should be more stable than importing from the browser
[16:19:04] ok. instructions for the shell method of importing an XML dump plox? I want the article to be imported as a subpage please
[16:19:14] I can search now
[16:20:14] https://www.mediawiki.org/wiki/Manual:Importing_XML_dumps
[16:20:59] more like this one: https://www.mediawiki.org/wiki/Manual:ImportDump.php
[16:21:36] it is a wiki family of 2 wikis.
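A command-line run of the script linked above might look roughly like this; the dump filename and the wiki ID are made up for illustration, and for a wiki family the `--wiki` flag selects which wiki the maintenance script operates on:

```shell
# Run from the MediaWiki installation directory. 'examplewiki' and the
# dump filename are placeholders for this particular setup:
php maintenance/importDump.php --wiki=examplewiki path/to/history-dump.xml

# importDump.php recommends rebuilding recent changes afterwards:
php maintenance/rebuildrecentchanges.php --wiki=examplewiki
```

The script can also read the dump from stdin (`php maintenance/importDump.php < dump.xml`), as shown on the manual page.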
this may complicate the issue slightly
[16:21:49] I'll just pass it the server name as a flag
[17:19:40] there is no load on the server, and importDump has been running for 45+ minutes for a dump of only 23MB
[17:20:04] the import and mysqld both use around 10% of CPU max
[17:20:08] weird, huh?
[17:26:15] there is no lack of CPU time and next to no writes in 'sudo iotop'
[17:26:26] I mean, this is supposed to take a couple of minutes, not an hour
[17:28:01] I don't get why this is running so daaaaamn slow... It's not lack of CPU and it's not lack of IOPS
[17:36:07] Hey all, following up on a question I asked the other day about PHP-FPM performance, I'll try to concisely summarize the context and my question.
[17:37:55] I changed PHP-FPM from dynamic to static with max_children=30 and max_requests=10000. Since the requests are really well balanced across all children, they all complete their 10k requests within a couple of minutes of each other, on average every 4 hours, so they all restart right around the same time. Is this a potential concern and, if so, can it be mitigated?
[17:38:39] (max_children=32 not 30, and php memory_limit=256MB fwiw, so about 8-9 GB total for PHP, which is totally fine on these servers)
[17:43:52] fwiw here's a graph of the last 10 days of php-fpm total memory usage on all of the web servers; you can see where the change was made and the resulting usage pattern https://imgur.com/juE8ZnA
[17:45:26] and here's the last 24 hours for more detail https://imgur.com/BOkvCLD
[17:52:52] I'll just leave the computer open and hope it is done by tomorrow
[17:53:33] But with no IO to speak of going on and no CPU utilization to speak of, I do not know why importing a 23MB dump needs to take ages
[18:13:40] hiya Krinkle yt?
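The PHP-FPM settings described a few messages up correspond to a pool configuration along these lines (the file path and pool name vary by distribution; this is a sketch, not the poster's actual config):

```ini
; e.g. /etc/php/7.x/fpm/pool.d/www.conf -- values from the discussion above
pm = static
pm.max_children = 32
; each worker is recycled after serving this many requests; with evenly
; balanced traffic, all 32 workers end up restarting within the same
; few-minute window roughly every 4 hours:
pm.max_requests = 10000
```

The master forks a replacement as each child exits, so requests keep being served during the recycle. As far as I know php-fpm has no built-in jitter for `pm.max_requests`; one hedged way to de-synchronise the restarts would be to configure slightly different values per server (say, 9000-11000) so the cycles drift apart over time.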
[18:13:58] in https://gerrit.wikimedia.org/r/c/mediawiki/extensions/EventBus/+/594505 you and petr suggested I remove the hard dependency on EventStreamConfig
[18:14:18] to do that, I have to mock EventStreamConfig\StreamConfigs in the EventBusFactoryTest somehow
[18:14:32] i'm reading about PHPUnit mocks and stubs, but all the examples I find have to refer to the class to be mocked
[18:14:34] e.g.
[18:14:39] StreamConfigs::class
[18:15:09] but if I remove the dependency on EventStreamConfig in extension.json, how can I use that symbol?
[18:15:25] it'll be an undefined class reference if EventStreamConfig is not installed, right?
[18:16:43] ottomata: skip the test if the class doesn't exist; still keep the config ext as a dependency in the CI config, but not a hard requirement for the extension itself.
[18:17:32] hm, but i want to test the code both IF EventStreamConfig is configured and if not.
[18:17:43] i could do this if I could duck type it, but we have arg typing now in PHP
[18:19:22] hmm
[18:19:40] class_exists, hmm
[18:24:26] Hello there, I installed an extension recently but the displayed text isn't parsed. It appears with the curly brackets ⧼...⧽. Any idea what's missing?
[18:24:52] localisation cache is probably out of date?
[18:25:33] I see, thank you!
[18:28:20] ottomata: note that 'use' statements don't use autoloading and don't require anything to exist.
[18:29:04] https://3v4l.org/0VQMK
[18:29:48] hm
[18:30:00] so I can do
[18:30:01] use Mediawiki\Extension\EventStreamConfig\StreamConfigs;
[18:30:03] and
[18:30:14] if (!class_exists('StreamConfigs'))
[18:30:14] ?
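A minimal standalone sketch of the point being made here: a `use` statement is only a compile-time alias and never triggers autoloading, and `::class` on the alias gives the fully-qualified name that `class_exists()` needs (the bare string `'StreamConfigs'` would not match). The namespace is the one from the discussion, written with the correct "MediaWiki" casing:

```php
<?php
// 'use' never triggers autoloading, so this is safe even when the
// extension that defines the class is not installed:
use MediaWiki\Extension\EventStreamConfig\StreamConfigs;

// StreamConfigs::class expands to the fully-qualified name at compile
// time, again without autoloading; class_exists() then checks whether
// the class can actually be loaded:
if ( class_exists( StreamConfigs::class ) ) {
    echo "StreamConfigs is available\n";
    // ...construct the StreamConfigs-backed code path here...
} else {
    echo "StreamConfigs is not available\n";
    // ...fall back, or markTestSkipped() inside a PHPUnit test...
}
```

Run outside MediaWiki (no autoloader registered for that namespace), this prints "StreamConfigs is not available".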
[18:30:57] generally, we tend to use ExtensionRegistry and call isLoaded( 'ExtensionName' )
[18:31:08] rather than relying on specific class names (which may change, or be namespaced, etc.)
[18:31:20] ah ok
[18:31:28] that sounds good too
[18:32:13] ExtensionRegistry::getInstance()->isLoaded( 'ExtensionName )
[18:32:19] with an extra '
[18:32:33] where ExtensionName is the same (case and spacing) as in the extension's extension.json
[18:34:21] ok cool
[18:34:24] trying
[18:34:24] thank you
[18:35:50] ottomata: PHPUnit markTestSkipped() would be used if you go that route
[18:36:01] (for the case you test with)
[18:41:08] hm Reedy, still not sure how to make php happy; even if i guard the class references with an isLoaded or class_exists conditional, the interpreter bails when encountering the undefined class
[18:41:14] Error: Class 'Mediawiki\Extension\EventStreamConfig\StreamConfigs' not found
[18:41:28] hm, or
[18:41:28] hm
[18:41:40] i guess it is executing, not interpreting, if i'm getting that... lemme see
[18:42:17] ok right, sorry, this is something else then; the extension is loaded, but class not found, not sure why...
[18:42:25] Namespacing correct?
[18:42:37] i have
[18:42:38] use Mediawiki\Extension\EventStreamConfig\StreamConfigs;
[18:43:57] Casing?
[18:44:00] Should it be MediaWiki?
[18:44:07] I can't remember if that matters for use statements
[18:44:10] !@
[18:44:10] @ is the "error suppression operator" and should never be used in code, ever.
This is an example of a lazy coder: $from = @$options['from'];
[18:44:40] Reedy: you are a brilliant human
[18:44:44] other problem now, but not that one :p
[18:44:49] heh
[18:44:58] A second pair of eyes is often very helpful in these
[20:24:27] weeeee netsplits
[21:08:34] The https://www.mediawiki.org/wiki/Manual:ImportDump.php finished after a few zero beers in the pleasant summer night, but it is not showing up in the import log at all
[22:03:26] I truly am excited to use this for my research
[22:12:29] Hi
[22:13:01] I was able to successfully set up my wiki yesterday, which I am using on my localhost server, and successfully set up Apache to work with it
[22:13:16] But after editing a page, I noticed that localhost would no longer load anything at all, all of a sudden
[22:13:25] and I'm almost certain it is linked to this that I found in the error log:
[22:13:37] [Wed Jun 17 17:34:10.826392 2020] [core:notice] [pid 36159] AH00052: child pid 36184 exit signal Segmentation fault (11)
[22:14:27] it broke
[22:15:45] is there any way to determine what broke it?
[22:16:06] There were also several instances of this in the same error log, right after that error occurred.
[22:16:09] [Wed Jun 17 17:41:08.484839 2020] [core:warn] [pid 36159] AH00045: child process 36160 still did not exit, sending a SIGTERM
[22:16:57] after about 13 of those, there are no more errors, which I guess means that's when it stopped loading anything
[22:17:13] What OS?
[22:17:14] to see if this was only MediaWiki doing this, I tested it on an HTML dummy page, and it didn't load that either
[22:17:17] Mac OS X
[22:17:30] how're you with gdb and such? :P
[22:17:57] Do you get anything else error-esque in the logs?
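The gdb route suggested above could look something like the following session sketch; the httpd path is the stock macOS one and is an assumption (and on recent macOS, lldb is the more usual debugger). Apache's `-X` flag runs a single worker in the foreground, so the crashing child is the process the debugger is attached to:

```shell
# Stop the normal multi-process Apache first:
sudo apachectl stop

# Run one worker under the debugger (path assumed for stock macOS Apache):
sudo gdb --args /usr/sbin/httpd -X
# (gdb) run
# ...in a browser, request the page that segfaults...
# (gdb) backtrace      # shows the C-level stack at the crash point
```

The backtrace usually points at the crashing module (PHP, an Apache module, or a system library), which narrows down whether MediaWiki is merely triggering the fault or something lower down is broken.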
[22:22:35] nothing else that I can see
[22:22:35] a few more things
[22:22:49] I tried "ping localhost" in my terminal and got results
[22:22:59] PING localhost (127.0.0.1): 56 data bytes
           64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.061 ms
           64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms
           ^C
           --- localhost ping statistics ---
           2 packets transmitted, 2 packets received, 0.0% packet loss
           round-trip min/avg/max/stddev = 0.061/0.070/0.079/0.009 ms
[22:23:07] ping localhost won't use apache etc. though :)
[22:23:27] apachectl configtest just says Syntax OK
[22:23:37] that just validates your apache config
[22:23:42] In the first instance, making sure things are up to date: OS, apache, PHP etc.
[22:24:00] So what else should I be looking for?
[22:24:04] segfaults can be hardware issues, os/kernel issues, software issues (both the software you're running and the software you're running it with)
[22:24:47] to be more specific, it is OS X El Capitan 10.11.6
[22:26:05] That's pretty old..
[22:26:09] I read something about it possibly being a memory issue
[22:26:12] Are you on hardware you can't upgrade past 10.11?
[22:26:21] memory issue is definitely possible
[22:26:33] the Pro is from 2009
[22:27:09] Hmm. 10.11.6 isn't as old as I thought. Just under 2 years ago
[22:28:47] I reset the memory limit to 2048M, I wonder if I should've done this from the start
[22:29:15] I'm not sure that'll necessarily help
[22:30:05] so what are the steps that you'd take in this situation?
[22:30:44] [23:23:42] In the first instance, making sure things are up to date, OS, apache, PHP etc
[22:31:28] If that doesn't help, you're down to using something like gdb to work out what's going on
[22:31:39] As it's unclear at this point whether it's actually an MW issue
[22:31:57] sure, MW is triggering it, but that doesn't mean MW is causing it
[22:32:31] I've never used that particular tool, but I can try
[22:33:24] what versions of MW, PHP and Apache ooi?
[22:37:46] Apache's version seems to be 2.2, MW is 1.34.1, and PHP 7.2.21
[22:40:00] how did you install php and apache?
[22:40:45] or are they the stock macos-shipped ones?
[22:40:57] PHP was updated with the root command: curl -s http://php-osx.liip.ch/install.sh | bash -s 7.2
[22:41:09] that sounds scary
[22:41:55] and Apache was preinstalled on the system
[22:46:04] should I go about reinstalling PHP from another location?
[22:47:12] I guess it depends if you trust that source
[22:47:29] I've personally never heard of it
[22:47:49] But it's installing an out-of-date version; 7.2.31 is out upstream
[22:49:10] personally, if the stock-provided PHP wasn't new enough, I'd be using Homebrew to install it on macos
[22:59:27] Funny thing is, though, when I say php -v it says this:
[22:59:29] PHP 5.5.38 (cli) (built: Oct 29 2017 20:49:07)
           Copyright (c) 1997-2015 The PHP Group
           Zend Engine v2.5.0, Copyright (c) 1998-2015 Zend Technologies
[22:59:46] Even after I tried to install v7.4.5 from the same source
[23:11:10] That's probably due to path issues
[23:11:46] like, wherever macos installs php probably has higher precedence than the one you installed
[23:11:52] if the one you installed is even in $PATH
[23:12:17] I'm presuming apache is/was using 7.2, as otherwise an MW this new wouldn't run
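The $PATH-precedence explanation above can be demonstrated with throwaway stubs; every path and version string here is made up for the demo. Whichever directory appears first in $PATH wins, so a stock `php` can shadow a newer install:

```shell
# Two stub 'php' binaries standing in for the stock and the newer install:
mkdir -p /tmp/stock/bin /tmp/newphp/bin
printf '#!/bin/sh\necho "PHP 5.5.38 (stub)"\n' > /tmp/stock/bin/php
printf '#!/bin/sh\necho "PHP 7.4.5 (stub)"\n'  > /tmp/newphp/bin/php
chmod +x /tmp/stock/bin/php /tmp/newphp/bin/php

# The first matching directory in $PATH decides which one runs:
env PATH=/tmp/stock/bin:/tmp/newphp/bin php -v   # prints the 5.5.38 stub
env PATH=/tmp/newphp/bin:/tmp/stock/bin php -v   # prints the 7.4.5 stub
```

On a real system, `which -a php` lists every `php` found on $PATH in precedence order, which makes this kind of shadowing easy to spot.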