[00:42:09] Anyone who knows a bit more about the following task: https://phabricator.wikimedia.org/T120962 around?
[03:55:42] hello!
[04:04:30] anyone around?
[04:05:14] hi Gene_
[04:05:29] people are probably around
[04:05:32] did you have a question?
[04:05:42] if you ask, people will usually respond :)
[04:05:48] yes, if possible.
[04:07:04] looking to try to add/edit coding within the Citation/CS1 module and it's not going to plan
[04:16:23] what specifically?
[04:22:16] i DM'ed you (i think?)
[09:09:40] https://www.mediawiki.org/api/rest_v1/page/html/Content_translation/cxserver/Setup ??? Restbase failed to process subpage?
[09:20:07] KenOokamiHoro: context please.
[09:20:18] If you have a question please ask it
[09:20:35] might be #wikimedia-tech territory though as it's not about MediaWiki itself
[09:30:42] andre__: got it.
[09:33:54] How to expand templates using pywikibot core code? Objective: given a page name, I need to fetch the 'text' that is displayed on the Wikipedia page programmatically.
[09:41:26] Hi
[09:41:56] Where can I get help with pywikibot code? I need to know how we can extract text from a Wikipedia article page using Python code.
[14:35:50] There's a wiki I'm dealing with where the edit pages ("?action=edit") are themselves getting cached - you need to refresh a few times to see the real wikitext.
[14:36:24] They're using Cloudflare - could that be the cause of that? I don't see anything about it in the Cloudflare page on mediawiki.org.
[18:27:34] hello. I intend to use the page name "Category:Etudes in 4/4 time". It was created without problems. Can the slash cause any problems for web browsers or the MediaWiki database?
[18:28:12] I don't think so
[18:28:16] It will think it's technically a subpage though
[18:28:25] if subpages are enabled in that NS
[18:30:20] I see. I already had to avoid a legitimate slash in a page name because subpages are allowed in the main namespace. If I do not allow subpages in the Category namespace, I am fine then.
[18:31:08] there is no problem with having slashes in mainspace titles
[18:32:00] subpages enabled means "stuff after a slash is a subpage". otherwise, it's just another page with a slash in its title
[18:32:31] e.g. https://en.wikipedia.org/wiki/OS/2
[18:36:42] Thanks. It is clear now.
[20:24:39] Is there any way to find all pages which normalize to a particular string (e.g. removing combining characters/accents, ignoring case, etc.)?
[20:24:45] via the API, that is
[20:27:03] no, because the API/MediaWiki doesn't know that normalization algorithm and doesn't implement it
[20:29:07] bummer, thanks Vulpix
[20:29:33] although MediaWiki does have some normalizing function, since it redirects in certain cases
[20:31:59] yeah, and it applies that normalizing function in the API. But only that normalizing function, not an arbitrary one
[20:33:57] https://www.mediawiki.org/w/api.php?action=query&titles=i__tem|I_tem|i_tem
[20:43:59] I see. Is there any way to get the query to return all pages which would match the requested title if they were normalized?
[20:44:10] e.g. find "grant" and "Grant" with https://en.wiktionary.org/w/api.php?action=query&titles=grant
[20:45:49] wiktionary is a bit different because it doesn't force the first letter to be uppercase
[20:46:21] but that's the only difference in titles, so you can request both uppercase and lowercase: https://en.wiktionary.org/w/api.php?action=query&titles=grant|Grant
[20:46:52] yeah, that is essentially what I am doing now
[20:47:26] the hoped-for result was to get all of the remaining words like graƱt etc.
[20:47:42] but that seems outside of the scope
[20:47:49] Hello - I've built a parser function that accepts some arguments (for example: ). I am noticing that if someone uses a > or < in the argument, the whole thing blows up (for example: ). When I print the arguments provided to the parser function, instead of the array ([first] => "one > two") I get something like ([one] => "one", [two] => "two").
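[Editor's note: the normalization the API applies in the i__tem|I_tem|i_tem example above (underscores treated as spaces, whitespace collapsed, first letter uppercased on most wikis) can be sketched roughly as below. This is an illustrative approximation, not MediaWiki's actual implementation, which also handles namespaces, Unicode normalization, and many edge cases.]

```python
def normalize_title(title: str, capital_links: bool = True) -> str:
    """Rough approximation of MediaWiki's title normalization (sketch only)."""
    # Underscores and spaces are interchangeable in titles.
    title = title.replace("_", " ")
    # Runs of whitespace collapse to a single space; edges are trimmed.
    title = " ".join(title.split())
    # On most wikis the first letter is uppercased ($wgCapitalLinks = true);
    # Wiktionary disables this, which is why "grant" and "Grant" differ there.
    if capital_links and title:
        title = title[0].upper() + title[1:]
    return title

# All three forms from the api.php example normalize to the same title.
print({normalize_title(t) for t in ["i__tem", "I_tem", "i_tem"]})  # {'I tem'}
```

[This only covers the built-in normalization discussed above; accent-stripping or case-folding beyond the first letter is exactly what the API does not do, so "graƱt" would still be a distinct title.]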
[20:48:35] TheDaveRoss: you may try the search, although it may return more results than what you'd expect
[20:49:34] gmv: it should be escaped as &gt;
[20:51:33] Thank you, I tried that and it worked. I guess that is because the wiki gets confused because it thinks the tag is
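[Editor's note: the fix discussed above is to escape angle brackets as HTML entities so the wikitext parser does not mistake them for tag syntax. A minimal sketch of that escaping (the helper name is hypothetical):]

```python
# Angle brackets in parser-function arguments can be mistaken for tag
# syntax by the wikitext parser; escaping them as HTML entities avoids that.
# Ampersands must be escaped first so existing entities aren't double-escaped.
def escape_angle_brackets(arg: str) -> str:
    return arg.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

print(escape_angle_brackets("one > two"))  # one &gt; two
```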