[04:19:48] Hi all - don't know where to ask this question, could somebody help? I'm learning about private wiki setups, but on an instance with disabled read access I can't get a login token to use the API (MW 1.34.2). How can I permit users to get login tokens for a private wiki?
[04:21:16] *disabled ANON read access, that is!
[04:23:03] Ah... asking the question helped - I had |siteinfo appended to the token request, triggering the restriction. Sorry for the noise, solved.
[04:23:24] You were all great rubber ducks :)
[06:21:40] Is it possible to have my local wiki use external auth servers like mediawiki.org or OpenID, etc.?
[06:23:37] Would this be what I need? https://m.mediawiki.org/wiki/Extension:WSOAuth
[06:23:37] Forza: yes, you can use an extension for that
[06:24:37] A little worried about GDPR though. What kind of information would mediawiki.org get from my site?
[06:25:42] For mediawiki.org, you can use WSOAuth. For OpenID there is another extension; not sure how well it's maintained.
[06:26:14] the OpenID extension works just fine for me for my own private wiki :P
[06:27:35] That's great :)
[06:27:53] The identity provider gets whatever information it gets by interacting with the user during login (IP, user agent, the user's identity on their site). You get the user's email address from the identity provider (although you can probably configure that not to happen) and the username.
[06:29:18] And I suppose the only new data the OAuth server gets about the user is that they are generating a token for my site
[06:31:02] I notice that MW sets cookies even before logging in. Can those be avoided? If I block cookies in the browser, the page still seems fine
[06:34:53] no
[06:35:05] they are part of the login process
[06:35:20] the ones that are set on the login page, anyway
[06:36:13] the only other time session cookies are set is when you save an edit anonymously; blocking those is largely inconsequential (you might see old versions of pages)
[06:36:55] for non-login-related cookies, it depends. Usually they are related to analytics or user preferences and not really important.
[06:39:31] For login etc. I understand, but a first-time visitor gets a couple of MW cookies set too
[06:39:58] I don't use any analytics etc.
[06:42:47] Hmm, I need to check. Maybe I'm wrong
[06:43:25] Can you check if you get any cookies? https://wiki.tnonline.net/w/Category:Btrfs
[06:43:39] Should be one from Cloudflare
[06:44:33] I only have __cfduid
[06:45:46] Oh good
[06:47:01] Though it means the IP is logged by Cloudflare, which could be a problem from a GDPR perspective
[12:20:45] hi
[12:21:02] what's the MW package manager called again? cargo or something?
[12:21:09] construcor
[12:23:18] I'm talking about the tool used to install and update extensions
[12:35:50] composer!
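Composer handles PHP library dependencies, and only the extensions that publish a Composer package can be installed this way; most extensions are instead unpacked into extensions/ and enabled with wfLoadExtension() in LocalSettings.php. For one that does ship a package, a minimal sketch from the wiki's root directory (the package name is a real example, not something this wiki necessarily needs):

    composer require mediawiki/semantic-media-wiki

MediaWiki's documentation prefers declaring such packages in composer.local.json (core ships a composer.local.json-sample) and then running composer update --no-dev, so that core's own composer.json stays untouched.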
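Looping back to the login-token question at the top of the log: on a wiki with anonymous reads disabled, a bare meta=tokens request is still allowed before login, but bundling another meta module such as siteinfo into it trips the read restriction. A sketch, with example.org standing in for the wiki's domain:

    # allowed pre-login, even on a private wiki
    https://example.org/w/api.php?action=query&meta=tokens&type=login&format=json

    # rejected on a private wiki: siteinfo needs read access
    https://example.org/w/api.php?action=query&meta=tokens|siteinfo&type=login&format=json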
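On the WSOAuth route: the wiring is a LocalSettings.php change. A minimal sketch, assuming the setting names documented on the Extension:WSOAuth page (verify them against the docs for your version; the endpoint URL and credentials below are placeholders):

    wfLoadExtension( 'WSOAuth' );
    $wgOAuthAuthProvider = 'mediawiki';  // delegate login to a MediaWiki OAuth provider
    $wgOAuthUri = 'https://www.mediawiki.org/w/index.php?title=Special:OAuth'; // provider endpoint (placeholder)
    $wgOAuthClientId = 'your-consumer-key';        // from registering an OAuth consumer with the provider
    $wgOAuthClientSecret = 'your-consumer-secret'; // keep this out of version control

Registering the consumer is also the step where the provider learns your site exists, which is relevant to the GDPR concern above.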
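The cookie check in that exchange can also be done from a shell instead of a browser; a quick sketch with curl (a plain GET whose response headers are dumped and filtered):

    curl -s -o /dev/null -D - https://wiki.tnonline.net/w/Category:Btrfs | grep -i '^set-cookie'

Every Set-Cookie header sent to a first-time visitor shows up in the output; in the exchange above that was only Cloudflare's __cfduid.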
[14:24:12] Hi - I have inherited a crashed MediaWiki 1.19 - I can export the searchindex table - is there any way to resurrect this into a working form? I know it's not ideal, but it's what I have to work with; the information is valuable but not usable in this state - thanks in advance
[14:26:44] can I do a fresh 1.19 install and then just import the data from the searchindex table via MySQL?
[14:29:34] The data in the searchindex table alone isn't going to get you very far
[14:30:04] It's also easily enough generated...
[14:39:13] you'd need the page, revision and text tables at a minimum to get any useful information out of the database
[14:47:24] I work on a public wiki for my organization, and I am working with the PdfBook extension to export a book that volunteers have translated over the last 4 years. I've successfully used the extension for single pages, but when I export from categories I get blank pages. The PDF downloads successfully and has the right TOC, but the pages are blank. We are using MediaWiki 1.31.8. It would mean so much to our volunteers and community to be able to export their work for others. If you have any guidance, it would be greatly appreciated. The discussion page for the extension did not have any answers addressing the issue.
[14:49:02] It's not an extension we host, and I suspect many people won't have much interaction with it generally, unfortunately
[14:49:14] I'm guessing https://gitlab.com/organicdesign/PdfBook/-/issues/6 was your report?
[14:49:41] It wasn't mine, but it's the exact same issue.
[14:49:59] Reedy - thanks for the response.
[14:50:23] It looks like there's a linked/proposed patch
[14:50:38] https://gitlab.com/organicdesign/PdfBook/-/merge_requests/2/diffs
[14:51:08] Don't know if it'll actually fix the issue, but it definitely looks like a bug
[14:52:09] I would suggest commenting on the issue to say it affects you too, as it'll need the developers/maintainers to actually fix it in their code repo
[14:52:15] But as it's a simple change, you might want to try it locally too
[14:55:42] excellent - thank you for digging that up
[15:01:12] That worked, Reedy. Most excellent. Thanks again!
[15:01:38] spelled-geary: Make sure you comment to say it's fixed. Might help the developers bother to merge it
[15:01:55] Ok.
[16:14:48] Nikerabbit: Is there a task for the problem where translated templates need to be edited and re-marked for translation twice (always one version behind or something)? I've not succeeded in finding it on Phabricator.
[16:18:01] Does runAllJobs have an admin interface?
[16:18:40] admin interface?
[16:19:15] do you mean runJobs.php? – https://www.mediawiki.org/wiki/Manual:RunJobs.php
[16:27:20] Reedy: thank you
[16:27:31] tgr: thanks
[16:38:41] Krinkle: yeah
[16:39:33] Krinkle: https://phabricator.wikimedia.org/T255334 also feedback on https://gerrit.wikimedia.org/r/c/mediawiki/extensions/Translate/+/608620 would be welcome
[16:41:05] Krinkle: yeah
[16:41:26] Reedy: sorry, I meant a web GUI
[16:44:02] Nikerabbit: ok, will leave some CR.
[16:45:33] nope
[16:50:51] Krinkle: thanks, I will try the simpler approach you suggested
[18:22:46] How do I remove an article from the sitemap?
[18:22:55] (is there any function that already does it?)
[18:48:26] sitemap_mediawik: unless you do it manually, there's no way to do that. __NOINDEX__ doesn't remove it AFAIK (and that's a bug already reported IIRC)
[18:48:47] T229754
[18:48:49] T229754: generateSitemaps.php includes pages marked with __NOINDEX__ - https://phabricator.wikimedia.org/T229754
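Since nothing removes a single page from the sitemap, "manually" above means regenerating and then hand-editing: run the generator and delete the unwanted <url> entries from the output files (typically gzipped XML; the next regeneration will of course add them back). The script and flag are real; the path is an example:

    php maintenance/generateSitemap.php --fspath sitemap/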
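Back on the crashed-1.19 recovery: if the old database is still readable, the dump wants at least the three tables named at [14:39:13]. A sketch, assuming no $wgDBprefix, a database called wikidb, and root MySQL credentials (all assumptions; match them to the old installation):

    mysqldump -u root -p wikidb page revision text > wiki_core.sql
    mysql -u root -p fresh_wikidb < wiki_core.sql   # load into the fresh 1.19 install's database
    php maintenance/rebuildall.php                  # regenerates the search index and link tables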
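For trying the PdfBook change locally: GitLab serves any merge request as a raw diff when .diff is appended to its URL. Assuming the extension lives at extensions/PdfBook as a git checkout (if not, patch -p1 works on the same diff), applying it looks like:

    cd extensions/PdfBook
    curl -L https://gitlab.com/organicdesign/PdfBook/-/merge_requests/2.diff | git apply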
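And on the runJobs question: there is no web GUI, as answered above. Jobs either run incrementally during page requests, throttled by $wgJobRunRate, or from the shell; a typical manual invocation:

    php maintenance/runJobs.php --maxjobs 1000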