[04:14:13] Hi, I am trying to find an on-wiki page which lists all the extensions installed on that wiki. Can you please remind me what it is called?
[04:15:42] My bad -- I was looking at /wiki/Version.php, while it was Special:Version.
[07:53:17] Hi, good morning.
[07:53:45] I am trying to increase the priority of categories in my sitemap by following https://www.mediawiki.org/wiki/Manual:$wgSitemapNamespacesPriorities
[07:54:04] How do I figure out which NS_xxx I should use?
[07:55:27] or is it simply NS_CATEGORY ?
[07:58:01] yup, it was :D Thanks =)
[07:58:09] Forza: see https://www.mediawiki.org/wiki/Extension_default_namespaces for the default namespace constants
[07:58:51] Majavah: Oh, that's good. Thanks :) https://www.mediawiki.org/wiki/Extension_default_namespaces#MediaWiki_Core
[08:15:28] Majavah: https://wiki.tnonline.net/w/Blog/Taco_Bowl :D
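For reference, here is what the setting discussed above can look like in LocalSettings.php. This is a minimal sketch, assuming core's NS_CATEGORY constant; the NS_MAIN entry and the priority values are illustrative assumptions, not values from the conversation.

```php
<?php
// LocalSettings.php (sketch). $wgSitemapNamespacesPriorities maps namespace
// constants to priority strings between "0.0" and "1.0", which
// maintenance/generateSitemap.php writes into the generated sitemap files.
$wgSitemapNamespacesPriorities = [
    NS_CATEGORY => '0.8',  // boost Category: pages (example value)
    NS_MAIN     => '0.9',  // example value for content pages
];
```

The change only shows up after the sitemap is regenerated with maintenance/generateSitemap.php.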
[10:08:23] I hope someone's awake, I'm sort of in a pickle here. I can't update my wiki from 1.31.8 to 1.34.2 because I'm getting a message about duplicate actors? Error: 1062 Duplicate entry 'some user' for key 'actor.actor_name'
[10:41:40] Oh good lord. The encoding got broken. rev_user_text went from utf-8 to latin1 or something. Grr.
[10:57:11] Hi. I have a page https://wiki.tnonline.net/w/Blog but I noticed some crawlers appending a / to the end, like https://wiki.tnonline.net/w/Blog/ , and that link does not work. How can I fix that? Should I make a redirect?
[10:59:31] I would like to avoid the redirect if possible
[11:01:13] FWIW, it doesn't work on Wikipedia etc. either: https://en.wikipedia.org/wiki/Paris/
[11:04:36] You might be able to fiddle something with a regex, but I'm too brain-burnt to noodle on that further. Anyone have advice on how I can get this actor refresh script to stop being a pain in the arse about latin1/utf-8 chars?
[11:07:43] These are all ancient usernames from a very, very old Wikipedia export and it is driving me squirrely
[11:14:33] Reedy: a shame. Kind of wished there was a setting for handling trailing slashes
[11:14:49] Reedy: I set up a redirect for now. Thanks
[11:15:20] Reedy: would it be OK to add a feature request on Phab?
[11:15:46] You can. Doesn't mean it will get implemented though :)
[11:16:44] Ulfr: sorry, I don't know how to deal with the UTF-8 issue. Maybe some external tool can convert the data? Is it in SQL? Export to .sql, convert the encoding / fix the names, re-import
[11:17:40] Reedy: :D of course. Do you know if the current way was a conscious choice or just an oversight?
[11:17:59] I wound up purging ~500k revisions from 2009 and before; only had to manually query for like 50 usernames after that. Wouldn't be nearly so annoying except the actors update script is transcoding them correctly -- it's upset because the key already exists :|
[11:19:28] and I could probably work some magic with a regex and the sql file, but the production site's been down for 9 hours already while this update.php refuses to comply :(
[11:27:10] :(
[11:27:22] Use a staging site next time ;)
[11:27:43] I so very wish I could man, you have no idea.
[11:27:53] Next version is around the corner too
[11:27:59] :|
[11:28:01] Just load it in a VM
[11:28:15] End of August I think the release is
[11:28:31] 1.35
[11:29:08] Unfortunately this project's gotten too clunky to carry in my pocket. And now that I'm not stuck on an end-of-life operating system I'll be able to update without wondering if my current setup can even run it
[11:29:49] the sql file I used to swap over was chump change compared to Wikipedia, but 10 gigs of SQL is still a bit much to run in a VM
[11:30:34] Ah, that's big
[11:31:07] well, after my slash and burn it's probably a fair bit smaller
[11:31:24] but yeah, it's a problem lol
[11:33:48] err, a whole swathe of missing revision table rows for pages that won't be viewed won't cause me trouble down the road, will it?
[11:34:47] No idea :(
[11:35:10] Isn't there a maintenance script for deleting revisions properly
[11:35:29] Reedy, any thoughts, if I might impose? And yes, there sure is! It also gets very upset at me about improperly encoded characters
[11:38:16] About what specifically?
[11:38:23] err, a whole swathe of missing revision table rows for pages that won't be viewed won't cause me trouble down the road, will it?
[11:38:27] Sorry
[11:38:42] Nah, as long as you don't try and view those revisions, or diff to them, it shouldn't be an issue
[11:39:07] Cool. I'll suffer the wrath of some Europeans then. Thank you!
[11:39:15] You could potentially repair the rev_parent_id if you wanted
[11:39:19] But probably effort for little gain
[11:39:41] yer. goal 1 is to get update.php to finish
[14:54:00] Ulfr: how did it go
[14:54:57] Forza: el_id 2636000 - 2636200 of 4649889
[14:55:05] Happy Sunday! I need a nap.
[14:55:20] :D or c(_) coffee
[18:46:46] Y'all must be messing with me now. As soon as I turn session logging on, my session hijacking bug goes away.
[18:49:14] hello, I am trying to use the PHP grabbers (specifically, grabFiles.php), but whenever it downloads the 500th file, it stops and I have to restart it with --from to continue where it stopped
[18:49:31] PHP Notice: Undefined index: gaifrom in [redacted]/w/grabbers/grabFiles.php on line 79
[18:51:10] I am using PHP 7.4.9
[18:51:12] Notices aren't usually going to break a script; have you checked for any arguments that will grab more than 500?
[18:51:29] whenever that notice appears, the script stops
[18:52:00] Huh, weird.
[18:52:04] sometimes it hits 502 files
[18:53:18] there aren't any args that have anything to do with grabbing more than 500
[18:53:21] got a link to the source of grabFiles.php?
[18:53:35] 1 sec
[18:53:44] https://phabricator.wikimedia.org/diffusion/MGRA/browse/master/grabFiles.php
[18:58:25] so I guess whatever happens has an effect on this: https://phabricator.wikimedia.org/diffusion/MGRA/browse/master/grabFiles.php$78
[18:58:57] so might need to look into the query()
[18:59:57] that will probably be in FileGrabber?
[18:59:59] not sure
[19:00:04] Yes
[19:00:14] https://phabricator.wikimedia.org/diffusion/MGRA/browse/master/includes/FileGrabber.php
[19:00:25] Or seems so to me, I'm not exactly a PHP programmer :)
[19:00:32] ok, let's look at that now
[19:00:39] it's a new MediaWikiBot
[19:00:42] "new MediaWikiBot"
[19:00:43] *
[19:00:44] so
[19:00:55] I see
[19:01:40] How long do these operations take, the 500 you are doing?
[19:01:42] that seems to be in "mediawikibot.class.php"
[19:01:54] can you link that too?
[19:02:35] https://phabricator.wikimedia.org/diffusion/MGRA/browse/master/includes/mediawikibot.class.php
[19:02:45] query seems to be a method for the MediaWiki API
[19:03:17] It is a bit ugly to see commented-out code in master
[19:05:37] daniel11420: do you have any idea how long these requests might take? Just thinking, for example, if the login times out or something else time-based
[19:05:47] not sure
[19:05:58] I don't think that would be the case, since it breaks at almost exactly 500 files each time
[19:06:07] wait, maybe it's being rate-limited
[19:06:18] that's another option
[19:08:03] I think I figured it out
[19:08:10] it's doing "query" with allimages
[19:08:17] and allimages has a parameter "ailimit"
[19:08:20] which has to be from 1 to 500
[19:08:25] so it's limited to 500 images/files at a time?
[19:08:32] sounds plausible
[19:09:16] so maybe grabFiles is trying to get the -next- 500 files
[19:09:18] but it can't do that
[19:09:38] sounds like a possible bug?
[19:09:56] it seems to be doing something with "query-continue" at line 79, which is where it fails
[19:10:04] so maybe it's trying to
[19:10:09] https://www.mediawiki.org/w/api.php?action=help&modules=query%2Ballimages
[19:10:12] use the parameter aicontinue?
[19:10:19] but it's failing?
[19:11:11] already closed the code :/
[19:12:00] man, it sure would've been great if whoever made the grabber script had left some comments explaining what the frick "$gaifrom = $result['query-continue']['allimages']['gaifrom'];" does
[19:12:26] ohhhhhhhh wait, I think I understand
[19:13:20] the result has a query-continue thing that says which file comes after the ones it listed, so that you can continue
[19:13:34] and gaifrom is --from, and it's changing gaifrom to the file at which it should continue
[19:14:38] makes sense
[19:15:00] and it isn't continuing 'cause it fricks up there. so even though it's just a notice, the bug that made the notice appear makes the loop stop going
[19:16:24] so it says "PHP Notice: Undefined index: gaifrom"
[19:16:27] great job, maybe you should report that :)
[19:16:39] that would mean... gaifrom isn't defined????? idk, I'm confused
[19:16:45] yes
[19:16:54] and there is basically no error handling for the case it isn't there
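A side note on why a mere notice can stop the grab here: in PHP, reading an array key that does not exist only raises a notice, but the expression still evaluates to null, so any loop that keeps going based on that value simply ends. A tiny standalone illustration follows -- this is not the grabbers code, and the response shape is invented purely to reproduce the same notice:

```php
<?php
// Fake API result: a continuation block exists, but not under the key the
// script expects (made-up shape, only to trigger the same notice).
$result = [
    'query'          => [ 'allimages' => [ /* ...one batch of files... */ ] ],
    'query-continue' => [ 'allimages' => [ 'aicontinue' => 'Some_file.png' ] ],
];

// On PHP 7.4 this raises "PHP Notice: Undefined index: gaifrom"
// (a warning on PHP 8), and $gaifrom ends up as null.
$gaifrom = $result['query-continue']['allimages']['gaifrom'];

var_dump( $gaifrom ); // NULL -- so a loop that only continues while $gaifrom is set stops here.
```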
[19:17:11] harmaahylje, I'm trying to fix it myself right now, because the wiki that I'm trying to copy is only up like once every 2 months
[19:17:14] and it's up right now
[19:17:18] so I'm trying to copy it fast
[19:17:24] and make a mirror of it that doesn't go offline for 2 months at a time
[19:17:29] lol
[19:17:41] why is it up only every 2 months?
[19:18:06] even better if you can fix it, then you can report the bug and suggest the fix immediately
[19:18:28] it's basically abandoned by whoever owns it, and their database sucks and goes down
[19:19:26] hmmmmmmmmm but why would gaifrom not be defined there if it's defined outside of that if statement
[19:22:35] ok, I think I fixed it, time to check
[19:24:01] I did not fix it. great
[19:25:33] what do you mean?
[19:27:16] my brain hurts
[19:29:21] :(
[19:29:53] wait
[19:29:56] if the line is $gaifrom = $result['query-continue']['allimages']['gaifrom'];
[19:30:04] then why would $gaifrom need to be a thing for that line to work?
[19:30:14] $name = [...] is how you define variables in PHP
[19:33:43] OHHHHH
[19:33:49] it is not $gaifrom that is undefined
[19:33:56] it is ['gaifrom']
[19:34:27] probably leads to $gaifrom being undefined though?
[19:35:36] I see, and it tries to continue the loop because gaifrom is not null
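For what it's worth, here is a defensive sketch of the continuation step just diagnosed above. It is not the actual grabFiles.php code, nor necessarily the fix that was eventually filed; apiQuery() is a hypothetical stand-in for whatever sends the action=query request, and the exact continuation key names depend on the MediaWiki version and on whether allimages is used as a list or as a generator, so treat them as examples to check against what the target wiki actually returns.

```php
<?php
// Sketch only. Pull the continuation value out of an allimages result,
// checking that the keys exist instead of assuming them, and accept both the
// newer 'continue' format and the legacy 'query-continue' format.
function nextImagesContinue( array $result ): ?string {
	if ( isset( $result['continue']['aicontinue'] ) ) {
		return $result['continue']['aicontinue'];                    // modern format
	}
	if ( isset( $result['query-continue']['allimages']['gaifrom'] ) ) {
		return $result['query-continue']['allimages']['gaifrom'];    // legacy, generator-style key
	}
	if ( isset( $result['query-continue']['allimages']['aicontinue'] ) ) {
		return $result['query-continue']['allimages']['aicontinue']; // legacy, list-style key
	}
	return null; // no continuation block => last batch reached
}

// Loop skeleton: fetch up to 500 files per request until there is nothing to continue from.
$from = null;
do {
	$params = [ 'action' => 'query', 'list' => 'allimages', 'ailimit' => 500 ];
	if ( $from !== null ) {
		$params['aifrom'] = $from; // or pass back the whole 'continue' block on newer APIs
	}
	$result = apiQuery( $params ); // hypothetical request helper
	// ... download/process $result['query']['allimages'] here ...
	$from = nextImagesContinue( $result );
} while ( $from !== null );
```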
[19:51:50] Hi guys, I want to use StructuredDiscussions on my wiki. So I downloaded and tried it on my local wiki. It works, but now I want to see if I can disable it, and tried "Extension:StructuredDiscussions/Turning off all StructuredDiscussions"
[19:52:09] But it's not working, how do I disable this extension?
[19:52:25] I am on MediaWiki 1.34.2
[19:53:07] I see that you guys also use this extension
[19:53:46] What do you mean it's not working?
[19:53:53] And you disable it by reversing what you did to install it?
[19:54:26] Well, I enabled the extension on the namespaces Talk (1) and User_Talk (3)
[19:54:49] so StructuredDiscussions is showing up in these namespaces
[19:54:53] and you can comment
[19:55:23] but now I would like to disable this extension and go back to the wikitext comment board
[19:55:32] but I can't disable this extension
[19:56:17] I don't understand the instructions here https://www.mediawiki.org/wiki/Extension:StructuredDiscussions/Turning_off_all_StructuredDiscussions
[19:56:31] convertToText.php is doing nothing
[19:57:18] On the help page it is written: "Enabling or disabling StructuredDiscussions: To enable or disable StructuredDiscussions for a namespace in MediaWiki before 1.35, first run populateContentModel.php on the affected namespaces (or you can do it on all)"
[19:57:24] I did that
[19:57:35] but the extension is still active
[19:58:10] and if I do '//wfLoadExtension( 'Flow' );'
[19:58:23] then I get [X0LKQCMW7WyiuOk6tN3iPwAAAAI] 2020-08-23 19:57:53: Fatal exception of type MWUnknownContentModelException
[19:58:42] >It is not recommended to uninstall StructuredDiscussions when there is existing content.
[19:59:01] no, I didn't delete the folders
[19:59:43] but how do you guys disable this extension, Reedy?
[20:00:00] You commented out wfLoadExtension( 'Flow' );
[20:00:06] That's uninstalling it
[20:00:13] ooh ok
[20:00:28] so how do I then disable it?
[20:00:38] the instructions seem to be about stopping people from creating more pages, not about removing it completely
[20:02:31] Reedy: so you're saying that if a page is converted to StructuredDiscussions you have to keep it, and you can only "disable" it for newly created pages?
[20:02:42] I'm not sure
[20:03:20] puuh, isn't that pretty bad. Then you are fucked when they stop supporting this extension
[20:03:37] it sounds like convertToText.php should convert it back to text
[20:04:01] yeah, but it doesn't, or at least I am not using it right
[20:04:21] We're still supporting "liquidthreads"
[20:04:37] The script seems to need running manually for each page
[20:05:28] D: so if you have 1000 pages you have to run it 1000 times
[20:05:32] holy moses
[20:05:47] pretty much
[20:05:49] Or script it
[20:06:15] let me see if I can convert a page back
[20:08:50] OK, looks like it works
[20:09:32] you have to run it for every page =$ but ok, at least it's somehow possible
[20:10:04] Make a list of the page names... save it in a file
[20:10:13] You probably then only need 3 or 4 lines of bash to do that
[20:10:34] yeah, I only have to find out how to get that list :D
[20:11:11] How's your SQL?
[20:12:14] very rusty :D
[20:13:21] select page_name from page where page_namespace = 2600;
[20:13:57] You'll just need to prepend 'Topic:' to that I guess
[20:14:34] page_title even
[20:14:46] ok, thank you very much Reedy
[20:14:54] great help (y)
[20:15:05] select concat( 'Topic:', page_title) from page where page_namespace = 2600;
[20:15:33] ok, will try that
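Here is a rough sketch of the "script it" idea, in PHP rather than the few lines of bash suggested above, to keep all the examples in one language. The flow-pages.txt file name, the extensions/Flow/maintenance/convertToText.php path and, in particular, the way the page title is handed to the script are assumptions -- check the script's --help output before running anything like this.

```php
<?php
// Sketch only: run the Flow conversion script once per page title.
// flow-pages.txt = one title per line, e.g. the output of the SQL query above.
$titles = file( 'flow-pages.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES );

foreach ( $titles as $title ) {
	// Assumed invocation -- verify the real argument/option with --help first.
	$cmd = 'php extensions/Flow/maintenance/convertToText.php ' . escapeshellarg( $title );
	echo "Converting $title\n";
	passthru( $cmd, $exitCode );
	if ( $exitCode !== 0 ) {
		echo "  -> exited with code $exitCode, continuing with the next page\n";
	}
}
```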
[21:07:03] I found a bug in the grabbers ( https://phabricator.wikimedia.org/diffusion/MGRA/repository/master/ ) and have found a fix; where can I report this?
[21:12:49] daniel11420: Phabricator (https://phabricator.wikimedia.org/). Remember to tag it with the #Utilities-grabbers tag; it'll then show up on the dashboard for that project (https://phabricator.wikimedia.org/project/view/753/). If you can submit a patch to Gerrit, even better! I'd be happy to try to take a look at it or find you some people who could do that (disclaimer: I've contributed a fair amount of code to grabbers, but that doesn't mean I've touched 'em recently, so others will probably know more about 'em in their current state than I do, heh)
[21:20:11] done =p https://phabricator.wikimedia.org/T261083
[21:21:23] awesome, thank you!