[00:52:13] does mediawiki provide an api to get a section number by its id? i'd like to blank a section from javascript, and all i have is the anchor name for its respective h2 tag
[01:20:32] If they did, it would be in apiParse
[01:39:31] thank you, bawolff :3
[03:29:03] Hello
[03:29:19] Is there a way to edit an entire page (as opposed to a section) via the API?
[03:30:59] yeah, that's the normal way it works
[03:31:28] txtsd: just don't indicate a section number, and it'll edit the entire page
[03:32:21] Oh, I tried it in a sandbox and it only did the top section when I POSTed
[03:32:23] I'll try again
[03:33:27] Actually, now it's saying it /needs/ a section number
[03:33:38] "info": "The \"section\" parameter must be a valid section ID or \"new\".",
[03:34:37] Nevermind, it was still checked. Silly javascript.
[03:34:47] Thanks for helping me rubberduck.
[03:38:37] that's ok
[03:38:55] can you think of a way to make this mistake easier to identify for other people in the future? if so, it can be changed
[04:48:23] when i have a javascript page, i'd like a wiki-markup comment to show at the top, where i could put some uml diagrams and notes in markup as opposed to a plain-text comment
[04:48:26] is this possible?
[04:49:45] think of it like a wiki markup page for https://www.mediawiki.org/wiki/MediaWiki:Gadget-site.js .. templates are well set up for this, templates have doc at the bottom for example
[05:02:15] Sveta: no, that isn't (currently) possible
[05:02:59] there's preliminary support in core for that sort of thing but no UI built up for it yet as far as I know (MCR, aka Multi-Content Revisions)
[05:24:14] There's stuff on Commons for structured descriptions
[06:17:41] Is there OAuth for logging in? Or do I have to use bot login and bot password every time?
[07:23:25] ok so when I compare a page with the text that I'm going to update it with, it shows a diff for a newline. But when I edit the page, it says {'result': 'Success', 'nochange': True}
[07:51:50] You can make a consumer-only oauth app
[07:53:04] newlines at the end are trimmed
[08:08:06] Is there a quick way to re-send activation emails for users who did not receive one? I think postfix has been misbehaving on a machine hosting everything (MySQL, Apache, Postfix).
[08:16:39] thanks bawolff
[08:16:51] What do you mean by consumer-only?
[08:17:34] If the oauth system is available, I'd like to use it to log in my bot instead of doing double requests to grab the csrf token.
[09:01:28] there's an option in oauth where you can create an oauth thing just for your user
[09:33:18] Hi. I want to bring attention to this post about a possible contributor whose phabricator account is unapproved. He also wants to help with MSSQL bugs
[09:33:22] https://www.mediawiki.org/w/index.php?title=Topic:Uvchvjq4moxr5prh&topic_showPostId=uxkwre6z2nu6dbka#flow-post-uxkwre6z2nu6dbka
[09:33:33] thxbai
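For the two API questions above (mapping a heading anchor to its section number, and section versus whole-page edits), here is a minimal sketch of the relevant calls, assuming an already-authenticated requests.Session and placeholder wiki, page, and anchor names. action=parse with prop=sections reports each section's anchor and index; action=edit then takes that index, or no section parameter at all to replace the whole page:

```python
# Rough sketch (not from the log): look up a section's index from its TOC
# anchor with action=parse, then blank that section with action=edit.
# The wiki URL, page title, and anchor are placeholders, and `session` is
# assumed to be a requests.Session that is already logged in (e.g. via a
# bot password), since action=edit needs an authenticated CSRF token.
import requests

API = "https://www.mediawiki.org/w/api.php"   # placeholder wiki
PAGE = "Project:Sandbox"                      # placeholder page
ANCHOR = "External_links"                     # the h2 anchor you already have

session = requests.Session()                  # assumed to be authenticated

# 1. action=parse&prop=sections returns each section's anchor and index.
sections = session.get(API, params={
    "action": "parse", "page": PAGE, "prop": "sections", "format": "json",
}).json()["parse"]["sections"]
index = next(s["index"] for s in sections if s["anchor"] == ANCHOR)

# 2. Fetch a CSRF token, then edit just that section; leaving "section"
#    out of the edit request entirely would replace the whole page instead.
token = session.get(API, params={
    "action": "query", "meta": "tokens", "format": "json",
}).json()["query"]["tokens"]["csrftoken"]

session.post(API, data={
    "action": "edit", "title": PAGE, "section": index,
    "text": "", "summary": "Blanking section", "token": token, "format": "json",
})
```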
[11:27:14] Reedy: legoktm If you have some time to review https://gerrit.wikimedia.org/r/c/mediawiki/vendor/+/502463 that'd be super
[11:27:21] Not much to review so not a lot of time needed
[11:27:40] If it just sits there we'll end up with some merge conflicts, so quick review will save us time
[12:16:25] Reedy: is that thing good to merge if I do the .gitattributes thing?
[12:16:42] JeroenDeDauw: If you can do a point release, and update your vendor patch to use it, probably
[12:16:59] Not actually checked/verified the code
[12:17:22] Reedy: thanks for the PR!
[12:17:35] np, we're trying to actively put these in upstream to make it cleaner all around
[12:17:37] Noticed just in time before I started doing this myself
[12:18:16] Reedy: composer.json ignored???
[12:18:26] Yup, it's not needed
[12:18:34] hmm, seems elsewhere as well
[12:18:39] sure
[12:18:46] I figured it might contain useful info
[12:19:03] Some people argue strongly that it is needed and stuff
[12:19:15] I'm not gonna fight too hard over it, but the rest are superfluous
[12:19:49] Is already merged
[12:21:28] released
[12:21:35] I will amend the gerrit patch now
[12:21:57] Reedy: this also needs review (and already causes a conflict)
[12:21:59] https://gerrit.wikimedia.org/r/c/mediawiki/vendor/+/502784
[12:22:23] Nothing needs to be ignored there, though; I already submitted a gitattributes PR to the library
[12:22:46] Would be cool to get this merged already though, the test files will be gone next time we update, and they are not causing any harm till then
[12:26:20] Reedy: new version on gerrit: https://gerrit.wikimedia.org/r/c/mediawiki/vendor/+/502463
[12:26:39] Thanks for the review so far. As mentioned before, this being merged quickly will save our team time
[12:35:05] Reedy: actually I am not so sure excluding tests from composer install is a good idea in general. Tests provide a great usage example if they are well written
[12:35:42] They do.. But they serve no purpose in a vendor checkout
[12:35:51] Agree
[12:36:04] Though thinking about it, I think forcing them to be included elsewhere is harmful
[13:15:47] Reedy: what does WFM mean?
[13:16:02] works for me
[14:01:36] Technical Advice IRC meeting starting in 60 minutes in channel #wikimedia-tech, hosts: @amir1 & @subbu - all questions welcome, more infos: https://www.mediawiki.org/wiki/Technical_Advice_IRC_Meeting
[14:51:20] Technical Advice IRC meeting starting in 10 minutes in channel #wikimedia-tech, hosts: @amir1 & @subbu - all questions welcome, more infos: https://www.mediawiki.org/wiki/Technical_Advice_IRC_Meeting
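On the OAuth question earlier in the log ("Is there OAuth for logging in?"), what bawolff describes is an owner-only OAuth consumer: registering one yields four credential strings, and API requests are then signed directly instead of going through action=login or a bot password. A rough sketch of what using one might look like from Python, with all four credentials and the wiki URL as placeholders; note this replaces the login step, not the CSRF token fetch that edits still require:

```python
# Rough sketch of the owner-only OAuth consumer mentioned above. The four
# credentials below are placeholders for the strings shown when registering
# the consumer; requests are signed with OAuth 1.0a on every call.
import requests
from requests_oauthlib import OAuth1

API = "https://meta.wikimedia.org/w/api.php"  # placeholder wiki

auth = OAuth1(
    "consumer_key",        # placeholder consumer token
    "consumer_secret",     # placeholder consumer secret
    "access_token",        # placeholder access token
    "access_secret",       # placeholder access secret
)

# Signed request: the API treats this as the consumer's owner, no login step.
me = requests.get(API, params={
    "action": "query", "meta": "userinfo", "format": "json",
}, auth=auth).json()
print(me["query"]["userinfo"]["name"])

# Edits still need a CSRF token via action=query&meta=tokens; OAuth only
# removes the login round-trip, not the token fetch.
```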
[15:01:48] Hello everyone, I have some questions about the mediawiki parser and cache, can someone help me or redirect me to another channel? :D
[15:11:49] If you care about your sanity, give up now ;P
[15:12:41] bsnow: What's your question? Might be easier to work out if anyone knows
[15:15:18] Long story short, I want to force-load the templates from the database every time I parse a page. Something like automatically purging a page after parsing it. I tried these settings in my local mediawiki, but it's still not working - https://www.mediawiki.org/wiki/Topic:S0cerwuym8eanybq
[15:19:20] can you give more context, what do you mean by "every time you parse a page"
[15:19:34] typically pages are parsed only once (upon save), and that parsed version is shown for each page view
[15:19:59] because parsing is *really slow* and will literally add seconds to page-load times, which drives away visitors and makes for a frustrating experience for power users
[15:20:14] (well, not "will", but "can")
[15:20:56] It is part of a project where I want to parse the whole en wiki history from wikitext to HTML
[15:21:35] I have isolated the parser, and I send wikitext to it to parse the pages; I have also intercepted the calls to the database to feed it the right templates
[15:21:39] Old revisions of wikipedia pages won't necessarily render the same as they did on older versions...
[15:21:59] We are only interested in the HTML
[15:22:39] right, what Reedy is saying is that the parser changes between mediawiki versions, so the HTML generated for an old revision with the current parser may not match the HTML generated for an old revision with a previous software version's parser
[15:23:28] will there be a huge difference? The interest is in doing research with the HTML pages afterwards
[15:23:49] depends on what sort of research you're doing with the HTML
[15:24:32] I think most differences will be pretty minor until you start going *really* far back
[15:24:37] The idea is to do NLP research, not anything technical about how the pages looked
[15:25:01] it seems like HTML might not be the best for that then; have you looked into Parsoid?
[15:25:37] it expands wikitext into an intermediate representation (not HTML, but with other semantics that preserve the original meaning of the wikitext while being more structured)
[15:26:26] aha, thanks, I haven't looked at it
[15:26:46] the idea of the project was to release a whole dump of the en wiki in HTML format
[15:27:00] enwiki already provides dumps ;)
[15:27:10] I inspected some of the older pages, from 2005 let's say, and they looked good
[15:27:31] although HTML dumps haven't been run for a while it seems
[15:27:32] yeah, but the dump is wikitext, there is no HTML dump
[15:29:15] I have everything ready, but when I parse the pages, the parser only fetches a template once and then reuses it for a while, instead of querying the database
[15:29:25] as for your main question, you can look into how FlaggedRevs does what you asked about (ensuring that some revision of a page is "frozen" to the versions of the templates that were live when that page was saved) -- https://www.mediawiki.org/wiki/Extension:FlaggedRevs
[15:30:51] the extension has dozens of other features though, so digging through it to find that specific functionality may not be the easiest thing in the world
[15:31:36] as for Parsoid, see https://www.mediawiki.org/wiki/Parsoid -- it's a node.js service that you can feed wikitext and it translates it to annotated HTML
[15:31:55] aha, thank you, I will check them both
[15:32:25] one more thing, do you maybe know where the parser or mediawiki stores the templates when it retrieves them from the database?
[15:32:47] it'll save the fully-expanded version in the parser cache
[15:33:15] (fully-expanded = all templates replaced with their respective contents)
[15:33:55] shouldn't $wgParserCacheType = CACHE_NONE;
[15:33:55] $wgEnableParserCache = false; do the job in this case?
[15:34:14] and what about the database caching of pages, revisions and text?
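For the Parsoid suggestion above, here is a minimal sketch of sending wikitext to the wikitext-to-HTML transform endpoint that Wikimedia wikis expose via RESTBase. The URL, the sample wikitext, and the body_only flag are assumptions drawn from that public API rather than from the log; a self-hosted Parsoid service offers an equivalent transform endpoint.

```python
# Rough sketch of the Parsoid route: wikitext goes over HTTP to a transform
# endpoint and comes back as annotated HTML, without touching the PHP parser
# or its caches. Endpoint and wikitext below are placeholders.
import requests

ENDPOINT = "https://en.wikipedia.org/api/rest_v1/transform/wikitext/to/html"

wikitext = "'''Hello''' [[world]], it is {{CURRENTYEAR}}."  # placeholder input

resp = requests.post(ENDPOINT, json={
    "wikitext": wikitext,
    "body_only": True,   # return just the page body, not a full HTML document
})
resp.raise_for_status()
print(resp.text)         # annotated HTML with data-mw / RDFa attributes
```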
[16:01:41] (sorry, was busy) bsnow: your original query was unrelated to parser cache. You wanted to fetch old versions of templates corresponding to the version of the page you're parsing
[16:01:57] mediawiki doesn't do that, it'll always fetch the latest version of the template, even when viewing an old rev of the page
[16:02:25] parser cache is used to improve performance when viewing pages, but it doesn't impact how they're parsed
[16:04:17] I linked to an extension which adds in that feature, as well as to a service where you can more easily control what wikitext it's fed compared to directly attempting to use the PHP parser
[16:18:32] Yeah, I implemented something similar to fetch the right template and not the latest version
[16:18:39] I will check the extensions, thanks :D
[17:23:56] Hi
[17:24:33] I'm searching for an ACL extension for MediaWiki 1.32.0
[19:08:19] commonPrint.css is in mediawiki.legacy. Does that mean it should be avoided for new developments? If so, is there a replacement?
[19:18:07] it's not marked as deprecated?
[19:31:18] andre__: No. Not in the CSS file at least.
[19:31:47] Also not in Resources.php
[19:33:42] then I guess it's fine to use. hmm
[19:35:30] andre__: Ok, I'll leave it in then. Thx!
[19:42:53] FoxT: Those styles are "legacy" in the sense that you shouldn't add new features to them, and over time we expect to replace them with a new system for skins, but it's not been developed yet. Maybe in another decade. :-(
[19:43:34] James_F: So it's not too far off. Few months tops. :)
[19:43:47] Ha.
[19:44:12] The 2020s start on 2021-01-01, which is a little bit further away, of course. :-)
[19:44:43] :)
[20:25:31] So is it possible to have some program that reads all the external links present in a wiki and provides an editable all-in-one view of them (i.e. allowing classification of entries)?
[20:27:17] So every time someone adds a new external link it would show up under = Unclassified = in a wiki page? This is non-trivial because when someone changes a link, it would need to change in at least 2 places
[22:00:18] I gotta bounty someone to do https://develop.consumerium.org/wiki/Voting and https://develop.consumerium.org/wiki/Consumerium:Metrics
[22:02:22] They are simple, but the fact is that I can make a lot of spaghetti in the kitchen that is good... at the keyboard, not so..
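For the external-links question above, a rough starting point: the list=exturlusage query module walks every recorded external link use on a wiki, which could feed the proposed "Unclassified" section of a tracking page. The wiki URL is a placeholder, and the harder synchronisation problem noted in the log (updating classifications when a link later changes) is not addressed here.

```python
# Rough sketch: dump every external link use on a wiki via list=exturlusage
# and group them by host, as raw material for a classification page.
from collections import defaultdict
from urllib.parse import urlparse
import requests

API = "https://www.mediawiki.org/w/api.php"  # placeholder wiki

links_by_host = defaultdict(set)
params = {
    "action": "query", "list": "exturlusage", "eulimit": "max",
    "euprop": "title|url", "format": "json",
}
while True:
    data = requests.get(API, params=params).json()
    for hit in data["query"]["exturlusage"]:
        links_by_host[urlparse(hit["url"]).netloc].add((hit["url"], hit["title"]))
    if "continue" not in data:
        break
    params.update(data["continue"])  # follow API continuation

for host, uses in sorted(links_by_host.items()):
    print(f"== {host} ==  ({len(uses)} uses)")
```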