[04:46:52] hello, can I speak Spanish?
[04:47:50] hi, I need someone to make changes to the page about the economy of Cuba
[07:04:15] Hey, quick question - any PHP function available in MW to add a template to an article?
[07:07:37] Does anyone know the answer?
[07:08:12] #mediawiki
[07:14:04] throwaway67943: do you mean via a skin or something? Where do you want to add the template? Would a parser function work?
[07:15:07] I have a Title object... just want to add a deletion template to the article via PHP
[07:15:20] given I already have the WikiPage/Article/Title instance for the article
[07:15:45] I can also just do this:
[07:16:11] $wikitext = ContentHandler::getContentText( $rev->getContent() );
           $wikitext = "$newTemplates\n$wikitext";
           $updateResult = $article->updateArticle( $wikitext, 'Engineering test', false, $watch );
[07:17:45] that sounds like it'd make sense
[15:25:09] Hello - I'm trying to run reassignEdits.php to change edits from an IP address to a user, but it's not picking up any of the edits. Any advice on what I should do? Thank you.
[15:27:34] When upgrading, migrateActors seems to have made several pages inaccessible because the script didn't create the actual actor records for some user names
[15:28:14] I can run cleanupUsersWithNoId.php, but that doesn't repair the pages that now have an actor_id of 0
[15:36:05] Sazpaimon: You'll need to repair those tables manually, by locating the correct actor ID and assigning it to those rows. Or create a fake actor and update all rows with actor_id = 0 to point to it
[15:38:15] looks like I can do that in one query by joining the image table to page, and getting the actor name from there
[15:39:05] then setting the img_actor field that way
[16:05:30] okay, so I guess I can't use the names generated by cleanupUsersWithNoId?
[16:05:44] I get "Bad value for parameter $title: invalid name" when trying to view a file uploaded by one of those users
[16:14:27] ah okay, there we go
[16:14:33] just couldn't make the user prefix a _
[16:17:20] probably because "_" becomes " "
[16:21:05] so basically: ran cleanupUsersWithNoId, migrateActors, then ran this SQL: https://pastebin.com/5R4srcgB
[16:21:10] that repaired all the images
[17:31:46] hi, I have successfully installed the original MediaWiki from Docker, but as soon as I restart the container (LocalSettings.php is correctly mounted into the container) the site stays empty. I can't find anything in /var/logs.
[17:49:49] Hello
[17:50:14] How do I regenerate pages after I have updated a template?
[17:50:40] purge them, usually
[17:50:44] or run your job queue
[17:50:51] Purge?
[17:53:16] Cool, the job queue worked
[17:53:20] Thanks!
[18:19:22] I have another question. I am using output compression with Apache. I check for gzip/deflate/brotli and enable whatever the browser supports.
[18:20:02] So I think it makes sense to set "$wgDisableOutputCompression = true;"
[18:20:15] Am I thinking right here?
[18:25:55] Forza: yes
[18:26:14] Great. Thanks
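For the template-prepend question at [07:15:07]: Article::updateArticle() is long deprecated, so here is a minimal sketch of the same edit using the PageUpdater API (MW 1.32+). It assumes $title, $user, and $newTemplates already exist as in the snippet above; treat it as an illustration, not the one true way.

    use MediaWiki\Revision\SlotRecord;

    // Sketch: prepend $newTemplates to an article's wikitext and save it.
    // Assumes $title (Title), $user (User) and $newTemplates (string) exist.
    $wikiPage = WikiPage::factory( $title );
    $wikitext = ContentHandler::getContentText( $wikiPage->getContent() );

    $updater = $wikiPage->newPageUpdater( $user );
    $updater->setContent(
        SlotRecord::MAIN,
        ContentHandler::makeContent( "$newTemplates\n$wikitext", $title )
    );
    $updater->saveRevision( CommentStoreComment::newUnsavedComment( 'Engineering test' ) );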
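And for the compression exchange just above, the LocalSettings.php side of it is a one-liner; the assumption is that mod_deflate/mod_brotli in Apache is already negotiating with the browser:

    // Let Apache handle gzip/deflate/brotli; stop MediaWiki from
    // compressing its own output a second time.
    $wgDisableOutputCompression = true;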
[18:30:14] I'm not sure it would actively do any harm; I'd expect Apache to see the content is already gzipped and not compress it again, but I suppose you're more likely to get better perf from the Apache setup since brotli etc. are options too
[18:30:36] I'm not 100% sure what Wikimedia actually does, but we still have it set to false
[18:40:34] I have set the brotli compression level to 1 as it's not a super fast CPU
[18:41:50] I was thinking maybe some dynamic content would break or something
[18:57:34] Z:\sandbox\web\mediawiki\maintenance>php install.php
           Fatal error: Uncaught Error: Interface 'Liuggio\StatsdClient\Factory\StatsdDataFactoryInterface' not found in Z:\sandbox\web\mediawiki\includes\libs\stats\IBufferingStatsdDataFactory.php:11
[18:58:48] did you use 7zip?
[18:59:01] yes. This was an upgrade, and I get the same error in rdbms about ILoadBalancer.php not being found. It looks like the autoloader is not working.
[18:59:12] No, it's not that
[18:59:35] https://phabricator.wikimedia.org/T257102
[19:03:41] Hi peeps, another day, another challenge :) I'm processing a lot of articles, requesting a fresh parser service for each of them to make sure the output doesn't get screwed up.
[19:04:13] all well for small wikis, but on a big one the process just dies
[19:04:25] any tips on how I could solve this?
[19:05:03] what process?
[19:05:54] the process is a custom Job that processes the articles to export them to a CSV file
[19:06:39] I'm presuming it's either some process timeout or a memory issue
[19:07:47] Win 10 has a tar program, will that work?
[19:07:59] should be
[19:08:06] we've only seen issues with 7z
[19:08:21] Thanks, I'm trying it now.
[19:08:53] makes sense, Reedy. Thanks
[19:09:13] Though, it could be hitting some sort of fatal
[19:10:54] Depending on the actual issue... maybe just splitting things into smaller batches might help
[19:10:59] do N articles at a time
[19:15:32] Yes, that would probably work. It fails only after processing about 4000 articles.
[19:16:12] PHP memory limit?
[19:18:01] Certainly possible if the script (or MW potentially; it's not really designed for long-running parser tasks like that) has a memory leak
[19:20:14] There are usually different php.ini settings for CLI and web/CGI/FPM, so check that
[19:21:41] Found it. It's another extension we have installed that sets a limit on recursion. That limit was being reached by these articles. At least it's not something fatal or memory-related
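As a footnote to the CSV-export discussion: even though the culprit turned out to be a recursion limit, the "do N articles at a time" suggestion is worth sketching. This is a hypothetical loop, not the actual Job; $titles and exportRow() are placeholder names:

    use MediaWiki\MediaWikiServices;

    // Sketch: process articles in fixed-size batches to bound memory use.
    // $titles (Title[]) and exportRow() are hypothetical placeholders.
    $batchSize = 500;
    foreach ( array_chunk( $titles, $batchSize ) as $batch ) {
        foreach ( $batch as $title ) {
            // A fresh Parser per article, as described above, so no state
            // leaks from one parse into the next.
            $parser = MediaWikiServices::getInstance()->getParserFactory()->create();
            $page = WikiPage::factory( $title );
            $output = $parser->parse(
                ContentHandler::getContentText( $page->getContent() ),
                $title,
                ParserOptions::newFromAnon()
            );
            exportRow( $title, $output ); // hypothetical CSV writer
        }
        // Release per-batch caches between batches on long runs.
        MediaWikiServices::getInstance()->getLinkCache()->clear();
    }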
[22:28:38] hi. could someone make an educated guess as to when 1.35 *might* release, based on prior experience?
[22:30:59] I'm neither trying to push for it, nor assuming that any response to this question would be reliable. The reason I'm asking is hCaptcha support, which I'm afraid our wiki may need to raise the bar to (our current QuestyCaptcha is being repeatedly broken by human labor)
[22:33:43] It's supposed to be August
[22:33:58] What version of MW are you running now?
[22:38:07] If you download the latest copy of REL1_34 from MediaWiki.org, it now has hCaptcha :)
[22:38:47] Obviously if you're running an older version (or the LTS), it probably won't work.
[22:41:47] right, we're on the LTS so far
[22:42:29] but it'd make it worth going with the latest release branch for a while, I guess
[22:42:30] I honestly couldn't say if the code would technically work, or if minor changes are needed
[22:44:45] you don't know which code exactly would work on what exactly?
[22:55:19] On that old of a version of MW
[22:55:44] hCaptcha was developed on master; whether it works on 1.31 is another matter
[22:55:59] It's not impossible it might, or it might only need minor changes
[23:00:20] thanks for clarifying, Reedy, and thanks for the REL1_34 hint, Lcawte!
[23:26:01] Don't users hate captchas?
[23:31:49] I'm sure most hate spam more
[23:38:20] Yea.
[23:38:43] Just read through https://m.mediawiki.org/wiki/Manual:Combating_spam
[23:39:05] Surprised to see no fail2ban or similar
[23:41:56] Or even moderation. I guess for high-volume sites moderation is time-consuming
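Since Manual:Combating_spam came up: two of the measures it covers translate to a few lines of LocalSettings.php. A hedged sketch; the regex and the question/answer pair are placeholders, and the ConfirmEdit/QuestyCaptcha module names should be checked against the installed version:

    // Core: reject any edit whose text matches this regex (placeholder pattern).
    $wgSpamRegex = '/\b(viagra|casino)\b/i';

    // ConfirmEdit with the QuestyCaptcha module, as discussed above.
    wfLoadExtensions( [ 'ConfirmEdit', 'ConfirmEdit/QuestyCaptcha' ] );
    $wgCaptchaClass = 'QuestyCaptcha';
    $wgCaptchaQuestions[] = [
        'question' => 'What is the name of this wiki?', // placeholder
        'answer'   => 'Examplewiki',                    // placeholder
    ];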