[04:57:22] !fast
[08:04:09] Hi - at our company we are trying to figure out how the compatibility matrix on this page works: https://www.mediawiki.org/wiki/Compatibility
[08:04:49] Where a + is written, as in MySQL 5.0.3+, for example, does that mean the compatibility runs from that version up to the most recent version?
[08:05:19] In other words, is it possible to use the latest MySQL/MariaDB with MediaWiki 1.18?
[08:06:38] MediaWiki 1.18? Omg that's really old
[08:07:25] Yeah, we just use the wiki for some little internal linking to other places, nothing much going on and no one has really been giving it much attention
[08:09:11] Nobody has tested a MediaWiki version released in 2011 against a database engine from 2020. You may be the first one
[08:09:22] :D
[08:10:15] Hm.. I guess you're right - thanks anyways :)
[08:10:30] It might work, but if you have any issues it's probably just easier to update to a newer MediaWiki version
[08:12:52] Yeah, I understand this is hardly tested, just wanted to ask
[09:52:36] "First one" is unlikely. WikiApiary can be used to find out the weirdest combinations
[09:52:49] But yes, usually upgrading MediaWiki is easier.
[10:54:28] Hi folks, I need to upgrade a couple of MW instances (1.25 and 1.27) to something modern. Can I generally just jump straight from those to the latest, run the migrations, and profit? Or will I need to jump through lots of hoops? (Ignoring the fun I'll have with PHP versions, of course)
[10:56:41] These are simple wikis so I expect I can just suck it and see, but was hoping someone might have some similar experience in case there are major gotchas
[10:57:56] bootc: Those should be new enough to mostly just work
[10:58:05] Excellent, thanks
[10:58:11] Usual backup etc before you do it
[10:58:35] Of course, they are in fact being moved to a totally new system so we're working from backups in the first place
[10:58:47] You could just upgrade to 1.31 (which is what Debian is still packaging), which is still supported by us for another year
[10:58:52] as an LTS
[10:59:08] 1.35, the next LTS, is due out at the end of this month
[10:59:25] I suspect I'll do that for these anyway as $customer doesn't care about what they run as long as they run; _we_ care they are supported
[10:59:43] by the time these are upgraded it'll probably be 1.35 anyway
[10:59:48] heh
[11:00:18] might stuff them into Docker containers anyway to keep it consistent with the other stuff we're doing (we run ours that way in K8s, works great)
[11:00:55] thanks for the tips as usual Reedy :-)
[11:01:07] np!
[11:01:20] What backups are you working from? just database? or files on disk too?
[11:01:26] yep both
[11:01:37] fairly easy then
[11:01:44] fingers crossed!
[11:01:51] I obviously don't need to explain to you how to change db auth creds/grants :P
[11:01:56] but :-)
[11:02:05] -but
[11:02:17] but you can basically unzip the new tarballs, copy in LocalSettings.php to the folder, copy across images/uploads, then run update.php
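A minimal sketch of the LocalSettings.php side of the upgrade steps just described, assuming a standard tarball install; the setting names are MediaWiki's usual database config variables, and all values here are placeholders rather than anything from this conversation:

    <?php
    // In LocalSettings.php on the new host, point the wiki at a
    // dedicated database user rather than root (placeholder values):
    $wgDBserver   = 'localhost';
    $wgDBname     = 'wikidb';
    $wgDBuser     = 'wikiuser';
    $wgDBpassword = 'use-a-strong-password-here';

    // Then, per the steps above: unpack the new tarball, copy in
    // LocalSettings.php and the images/ directory, and run the schema
    // migrations from the web root:
    //   php maintenance/update.php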
[11:04:09] * bootc feels dirty
[11:04:20] these wikis connect to MySQL as root
[11:04:28] and the password is about as bad as you might expect
[11:04:28] "but it works!"
[11:04:30] quite
[11:04:54] as long as mysql isn't exposed to the world...
[11:05:00] thankfully not
[11:05:03] nor these wikis
[11:05:26] I was going to say MW has a good history when it comes to SQLi... but depending on random extensions they might be running... while MW might be OK, they might not be
[11:05:51] these wikis have exactly 1 and 3 extensions loaded, respectively
[11:06:17] That's pretty lean
[11:06:28] so I assume the stock extensions, or even less
[11:06:43] even less tbh
[11:06:56] WikiEditor, PDF Handler and SyntaxHighlight
[11:07:06] 1.25 didn't bundle any extensions..
[11:07:13] so yes this shouldn't be hard, I hope, as long as they haven't hacked around in the sauce
[11:07:16] 1.27 had 17
[11:07:23] Haha. Yeah... *that* is a different issue
[12:26:38] Hi :) Yesterday I asked about how to update those pages. It seems that my MW is not updating the list of pages unless I re-save the page. Is there a way to do it through cron, or should I be looking at something else?
[12:27:26] !jobqueue
[12:27:26] The Job Queue is a way for MediaWiki to run large update jobs in the background. See http://www.mediawiki.org/wiki/Manual:Job_queue
[12:27:33] ^ Forza
[12:27:58] Platonides: I do run the job queue every 5 min in a cron
[12:28:41] hmm, it should be handling that
[12:28:53] "re-save the page" is suggesting a null edit is fixing it
[12:29:47] What is a null edit?
[12:29:55] I heard that before but didn't understand it.
[12:30:09] it's exactly that
[12:30:11] literally clicking save, but not changing anything
[12:30:12] according to my cron logs, "Job queue is empty.", so that doesn't catch it.
[12:30:19] pressing save on the page with no change
[12:30:25] Ah ok
[12:30:31] But is there no better way?
[12:32:08] This is the one I am using: https://www.mediawiki.org/wiki/Extension:DynamicPageList_(Wikimedia) but I see there are two others
[12:32:49] Reedy: do you mean that a null edit is by design for DPL?
[12:33:07] No
[12:33:15] it should never be needed
[12:33:31] The question here is what the null edit is fixing
[12:33:31] I'm presuming it's some cache
[12:33:59] Could be. I do use opcache with php-fpm as well as redis
[12:35:35] I suppose I can try one of the other DPL extensions instead. Just seems a bad hack when I don't know what's wrong :/
[12:36:29] You could just try some patience too ;)
[12:36:46] I waited since yesterday :D
[12:37:04] And the job queue is empty
[12:38:14] So the obvious two are any HTTP cache you have in front
[12:38:25] Or your parser cache
[12:38:47] I can disable redis and try
[12:39:29] opcache shouldn't really be an issue, as that is code cache rather than object cache
[12:39:54] Indeed, but I didn't blame the opcache ;)
[12:40:02] I don't know
[12:40:10] what is an "HTTP cache"?
[12:40:21] squid/nginx/other
[12:40:27] I don't run that
[12:40:31] something in front of your webserver
[12:40:36] Right, but how would I know that?
[12:40:45] You wouldn't :)
[12:40:48] Sorry
[12:41:10] * The expiry time for the parser cache, in seconds.
[12:41:10] * The default is 86400 (one day).
[12:41:11] */
[12:41:11] $wgParserCacheExpireTime = 86400;
[12:42:24] That could be something. Is there a way to purge the parser cache?
[12:42:41] https://www.mediawiki.org/wiki/Manual:Purge saw this too, but it suggests null edits
[12:44:47] there's a purgeParserCache maintenance script
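If the stale page lists really are coming from the parser cache, a rough sketch of the two levers just mentioned; the one-hour value is an arbitrary example, not a recommendation:

    <?php
    // In LocalSettings.php: parser cache lifetime in seconds. The
    // default quoted above is 86400 (one day); an hour makes stale
    // output expire much sooner, at the cost of more re-parsing:
    $wgParserCacheExpireTime = 3600;

    // For a one-off cleanup, the purgeParserCache.php maintenance
    // script mentioned above can drop entries older than a given age,
    // e.g. everything older than an hour:
    //   php maintenance/purgeParserCache.php --age 3600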
[12:45:18] https://en.wikinews.org/wiki/Main_Page has a "refresh page" button for reloading the latest news on the right side. It points to https://en.wikinews.org/w/index.php?title=Main_Page&action=purge
[12:45:29] So I guess this is needed
[12:46:06] It depends on impatience
[12:46:17] I think that's for things like daily updates
[12:46:25] Most wikimedians will do stuff like purge and null edit and then just get on with their day
[12:47:11] I see
[12:47:38] I just wanted blog posts to show up :) but yeah, I guess I can do null edits too. Just wanted to make it easier.
[12:48:04] IIRC, wikipedia and such have bots basically just doing null edits...
[12:48:14] My plan/goal is to make a form for making new blog posts from the phone, so that they show up right away.
[13:06:44] perhaps this default is what I need to change =) $wgDLPMaxCacheTime = 60*60*24;
[13:07:16] That looks suspicious
[13:09:50] How do you mean?
[13:10:10] https://www.mediawiki.org/wiki/Extension:DynamicPageList_(Wikimedia)#Configuration
[13:10:40] As in, it might help
[13:10:44] /might be responsible
[13:11:50] I'll try it. Thanks.
[13:12:37] I only have one page with DPL, so the site isn't slow if it loses its cache anyway
[14:57:42] Some time ago the French Wiktionnaire started to render differently for me, and now the French Wikipedia randomly switches between this and the normal layout. https://lutim.net/8ADsEwKP.png
[14:58:07] (in several browsers)
[14:58:35] pages that rendered one way stay that way after a force refresh
[14:58:39] That's the "new" version of Vector
[14:59:14] Why does it switch randomly between the two? And can I disable it?
[15:00:39] You can definitely disable it if you have an account
[15:13:34] Bad A/B testing? :P
[17:00:56] hi, I'm running this code in a subclass of ContentHandler: https://pastebin.com/tRAPKedL
[17:01:33] I see it's being called 4 times at various points from the skin. is there a good way to cache the value once and retrieve it, rather than looking it up 4 times?
[17:07:53] or should I just not worry about it?
[17:08:01] maybe some of the stuff it calls is already caching
[20:08:47] alas, poor yurik
[20:09:03] he served me well...
[20:09:20] what did i miss?
[20:34:29] just submitted my first patchset to mediawiki core :P
[21:16:25] Is there anyone around who can help me with a parser issue? I am calling Parser::replaceVariables() but sometimes I get an MWException. Not always (though not sure if that is just caching), and it didn't seem to happen on MW 1.29, only since upgrading to 1.34.
[21:16:34] (so far as I know)
[21:16:45] Stack trace here: https://dpaste.org/oEiP
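On the earlier ContentHandler question (the value being looked up 4 times per page view from the skin): a minimal sketch of per-request memoization, assuming the expensive lookup can be keyed by a string. The class and method names here are illustrative stand-ins, not the code from the pastebin:

    <?php
    class ExampleContentHandler extends TextContentHandler {
        /** @var string[] per-request memoization of an expensive lookup */
        private $memo = [];

        public function getExpensiveValue( string $key ): string {
            if ( !array_key_exists( $key, $this->memo ) ) {
                // doExpensiveLookup() is a hypothetical stand-in for
                // whatever the real code computes four times.
                $this->memo[$key] = $this->doExpensiveLookup( $key );
            }
            return $this->memo[$key];
        }

        private function doExpensiveLookup( string $key ): string {
            // ... the costly work goes here ...
            return $key;
        }
    }

Since MediaWiki tends to reuse the ContentHandler instance for a given content model within a request, the later calls from the skin should hit the memoized value after the first; if the result needed to survive across requests, something like MediaWiki's WANObjectCache would be the heavier option.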