[01:38:16] if I want to request a new oauth consumer for testing -- one that is for use only by my account -- and later want to expand it to everyone once I test it, should I give it a name like "foo-test" and later have it removed? or can I have it as the real name "foo" and later update?
[01:48:22] ningu: you can just request a normal consumer, and tell me not to approve it yet
[01:48:31] then you have 30 days to test it
[01:49:54] ahhh ok
[01:49:54] cool
[01:50:14] it's for a toolforge tool called archiveleaf, so I should call it archiveleaf I guess?
[01:50:27] probably
[01:51:06] can I later change the callback url (and everything else)?
[01:51:14] was going to use localhost for now
[01:52:43] you need to propose a new consumer if you want to change anything other than the IP range or the consumer secret
[01:52:59] our OAuth code kind of sucks
[01:53:01] hrm... ok. then I'll just do a test one
[01:53:08] as long as it can be deleted later
[01:53:27] it will expire eventually if not approved
[01:53:33] ah cool
[02:01:19] tgr: actually wait, does "new consumer" also mean a new version of an existing consumer?
[02:07:39] ningu: yeah, internally the whole versioning thing is kind of fake
[02:07:50] they are all separate consumers, just similarly named
[02:09:45] haha ok
[02:10:01] are oauth scopes important? oauth clients ask you to provide them
[02:10:07] and if so is there a list somewhere?
[02:16:45] I think we use the term "grants"
[02:17:40] https://meta.wikimedia.org/wiki/Special:ListGrants is the list of grants
[02:18:47] thanks
[02:22:05] ok, progress
[02:22:25] my app is now making the authorize request at https://en.wikipedia.org/w/rest.php/oauth2/authorize but getting the response: Application Connection Error
[02:22:29] "The authorization server encountered an unexpected condition that prevented it from fulfilling the request."
[02:22:52] this is an owner-only consumer, not sure if it needs to be approved for me to use it?
[02:24:04] anyway, I dunno how to debug further
[02:24:41] https://meta.wikimedia.org/wiki/Special:OAuthListConsumers/view/c44c83dce6755cad8a53101840cb2c57
[02:25:13] that's a bug
[02:25:24] (or that error message is terribly phrased)
[02:25:49] please file a task and provide the timestamp at which you saw the error
[02:27:22] ok will do... on phabricator right?
[02:27:48] hmm wait. it says no callback uri is allowed, maybe the issue is that I did pass one
[02:28:05] but that was somehow the only way to create an owner-only consumer
[02:28:30] ok, that may well be the issue
[02:29:57] or maybe not. I dunno. haha
[02:30:02] I'll file a task
[02:34:42] FYI https://phabricator.wikimedia.org/T245232
[07:03:28] still having issues with my oauth 2.0 consumer: https://phabricator.wikimedia.org/T245232 wondering if there's something obvious I'm doing wrong
[07:12:51] hmm... I guess this is all very recent stuff. https://phabricator.wikimedia.org/T244187
[19:08:24] any suggestions for who can help get to the bottom of the apparent oauth2 bug that I've found? https://phabricator.wikimedia.org/T245232
[19:08:34] like to look at debug logs server-side or whatever?
[19:08:56] I'll try to check it over the weekend
[19:09:05] thanks
[19:10:03] tgr: it happens with two proposed oauth consumers, archiveleaf-test which is owner-only and archiveleaf which is not restricted like that but is currently unapproved. am I right that both should allow me to authenticate as myself?
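For context on the flow being attempted above, here is a minimal sketch of the OAuth 2.0 authorization-code dance against the MediaWiki OAuth extension's REST endpoints. The authorize URL is the one mentioned in the conversation; the access_token path, client credentials, and callback URL are assumptions and placeholders, not values from this log.

```python
# Minimal sketch of the OAuth 2.0 authorization-code flow discussed above.
# Assumes the MediaWiki OAuth extension's standard REST routes; CLIENT_ID,
# CLIENT_SECRET, and CALLBACK are placeholders for the proposed consumer's values.
import secrets
from urllib.parse import urlencode

import requests

WIKI = "https://en.wikipedia.org"
AUTHORIZE_URL = f"{WIKI}/w/rest.php/oauth2/authorize"      # endpoint from the log
TOKEN_URL = f"{WIKI}/w/rest.php/oauth2/access_token"        # assumed companion endpoint
CLIENT_ID = "..."      # placeholder: consumer key
CLIENT_SECRET = "..."  # placeholder: consumer secret
CALLBACK = "http://localhost:8080/callback"  # placeholder: must match the registered callback

# Step 1: send the user to the authorize endpoint.
state = secrets.token_urlsafe(16)  # anti-CSRF value, echoed back on the callback
params = {
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": CALLBACK,
    "state": state,
}
print("Visit:", f"{AUTHORIZE_URL}?{urlencode(params)}")

# Step 2: after the redirect back to CALLBACK?code=...&state=..., exchange
# the authorization code for an access token.
def exchange_code(code: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": CALLBACK,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()  # should contain access_token (and, for non-owner-only consumers, refresh_token)
```

(As far as I recall, an owner-only OAuth 2.0 consumer is normally handed an access token directly at registration, so the authorize step may not even be needed in that case.)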
[19:10:23] yeah
[19:10:36] I thought maybe the bug was with one vs the other kind of consumer so that's why I made the archiveleaf one
[19:10:45] later I'll make the real archiveleaf one with the correct callback uri
[19:10:49] in any case that's an error message that should only be used for server-side bugs, not client errors
[19:10:54] ok
[19:11:06] got it, yeah, I don't know what other errors might be possible
[19:11:12] so either there's a bug or the messaging is wrong (also a bug)
[19:11:18] ok cool
[19:11:28] I've been hoping the error was on my end but that looks less likely :P
[19:11:37] but I didn't realize how new the oauth2 stuff was
[19:11:48] are you familiar with the other bug that mpaa filed?
[19:11:54] which has been fixed apparently
[19:12:46] https://phabricator.wikimedia.org/T244187
[19:12:56] oh, you commented there :)
[19:13:01] anyway thanks for taking a look later
[20:38:42] bd808: thanks for approving ... still broken on the backend unfortunately. but the test code seems to be ok on my end so far, at least
[21:50:28] generally speaking, do wikimedia sites all serve their main entrypoints as /w/index.php, /w/api.php etc.?
[21:50:38] I dunno why it's taking me so long to get my head around the architecture haha
[21:53:03] that path info should be listed on Special:Version
[21:53:18] ahh thanks
[21:53:27] I keep forgetting about info on special pages
[21:55:37] ningu, yeah all the wikis have those scripts
[21:55:44] non-mediawiki things won't ofc
[21:56:59] sure
[21:57:11] but wikimedia is kinda mediawiki all the way down, it seems :)
[21:57:41] I kinda know my way around mediawiki better than the whole way wikimedia uses it and organizes all the sites
[21:57:59] but it's a big topic so lots of gaps
[21:59:11] well by now you've probably run into all our misc tools and things
[21:59:26] well
[21:59:27] some of them
[21:59:31] yes, I know about bots and tools etc
[21:59:36] haven't had much experience writing them yet
[21:59:46] what are "gadgets"?
[22:00:20] !gadgets https://www.mediawiki.org/wiki/Gadget_kitchen#What_are_user_scripts_and_gadgets?
[22:00:48] ningu: ^ user scripts that have been "promoted"
[22:01:14] hmmm ok
[22:02:16] oh, I didn't realize there was a common.js
[22:02:23] ok I see now
[22:02:30] and I guess it's all hooked in via mw.*
[22:02:45] gadgets are scripts and styles controlled by user preferences, basically
[22:03:24] JavaScript running in the user's browser, hooked in via mw, yes. as opposed to bots and tools that run on totally separate VMs
[22:03:37] I never quite got an answer yesterday on the "right" way to do automatic transliteration from Balinese script to Latin on a wiki page
[22:04:06] the idea is that the transliteration should be visible below the original, to help people who can't read the original. ideally it wouldn't be an alternative but something added on
[22:04:31] on palmleaf.org this is done via some hacky extension stuff that ends up storing the transliteration in the saved wikitext, but hiding it from editors
[22:05:06] I am ok with using LanguageConverter but it's unclear to me if it would really work -- we don't want the whole page switched, or at least, not as the only option
[22:05:51] I made a tool at https://tools.wmflabs.org/icu-transliterate/ that already works as a backend
[22:06:07] so I guess the client could request the transliteration, every single page load ... but ...
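As a rough illustration of what a backend like the icu-transliterate tool above can do (this is not its actual code), here is how rule-based ICU transliteration looks from Python via PyICU. The BALI_RULES string is a one-line placeholder, since the real Balinese-to-Latin rules aren't reproduced in this log, and "Any-Latin" is ICU's generic built-in transform.

```python
# Sketch of rule-based transliteration with ICU, standing in for the
# icu-transliterate backend mentioned above. BALI_RULES is a placeholder --
# the real Balinese->Latin rule set is not shown here.
import icu

# A trivial example rule in ICU transform syntax (placeholder only).
BALI_RULES = """
\u1B05 > a ;   # BALINESE LETTER AKARA -> a
"""

# Custom transliterator compiled from rules...
bali_to_latin = icu.Transliterator.createFromRules(
    "Balinese-Latin", BALI_RULES, icu.UTransDirection.FORWARD
)

# ...or ICU's generic built-in transform as a fallback.
any_to_latin = icu.Transliterator.createInstance(
    "Any-Latin", icu.UTransDirection.FORWARD
)

def transliterate(text: str) -> str:
    """Return the Latin transliteration of a chunk of Balinese-script text."""
    return bali_to_latin.transliterate(text)

if __name__ == "__main__":
    print(transliterate("\u1B05"))  # -> "a" with the placeholder rule
```

A service like this can sit behind a small HTTP endpoint so that either a client-side gadget or a server-side tag can request transliterations per leaf rather than per page.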
[22:09:52] cscott said they'd walk me through the LanguageConverter stuff
[22:14:28] more like "where I see the future of language converter"
[22:14:36] the split-screen text is very interesting
[22:14:57] that's probably also something which would be more widely useful as a languageconverter mode
[22:15:30] cscott: hmm, ok
[22:15:39] agreed about languageconverter wanting to get more data from CLDR/ICU, I should look into the transliteration rules there
[22:15:50] but that's a longer-term project
[22:15:57] yeah, ok
[22:16:12] it's just a little annoying because I already wrote this transliterator in ICU's rule language
[22:16:16] I can try to port it though
[22:16:27] I guess my tl;dr is that if you manage to do this as a LanguageConverter module, then it's "future-proofed" in terms of getting better as I gradually improve language converter
[22:16:35] I see
[22:16:37] you'll be able to edit the page natively in your choice of variants, e.g.
[22:16:43] (that's the next feature I'm personally working on)
[22:16:51] ok, in this case we actually want to forbid that
[22:17:06] because the transcription has to follow the image
[22:17:12] which is in one script and not the other
[22:17:24] well, ok, so not all of my features are useful for your use case ;)
[22:17:35] as long as it isn't required to work that way, that's all :)
[22:17:56] but if we write a thin extension to (say) display both the regular text and the transliterated text, that's something that would probably be more generally useful to other people
[22:18:06] agreed
[22:18:37] cscott: but if you look at, say, this page: https://palmleaf.org/wiki/carcan-kucing
[22:19:41] the text is broken up into leaves (pages) and the transliteration is below each page's transcription
[22:19:41] so, for my use case, we need to be able to say which chunks get transliterated and where the result shows up
[22:20:10] in the wikitext there is a custom tag that I used for that
[22:21:17] cscott: connection is a little flaky at this cafe, not sure if you got that bit about the tag
[22:27:50] I got it, sorry, was off looking at another window
[22:27:58] no worries, just making sure it got through
[22:28:29] your implementation generally seems reasonable
[22:28:53] except I guess the custom tag is doing the transliteration client-side?
[22:29:19] no, it's currently saved in the wikitext so it doesn't have to be regenerated every time, in a following tag
[22:29:22] but that isn't essential
[22:29:44] there's some stuff to hide the tag when you edit the wikitext, then regenerate it on save
[22:30:32] I guess my suggestion was that if the tag did the transliteration server-side using LanguageConverter::translate(...) then it would be easier for me to add that transliteration functionality to other wikis in the language
[22:30:47] ah got it
[22:31:05] yes, that could work
[22:32:34] maybe just {:Leaf1} in the markup, and have the original text transcluded from {:Leaf1}
[22:33:26] hrm... ok, so there would be a separate wiki page for each page of the original manuscript?
[22:34:08] wikisource does that a lot, I dunno. it's not great for editing, but they make it work.
[22:34:19] can you point me to an example?
[22:34:23] it does simplify the question of "which text should I extract from the page to transliterate"
[22:34:33] I definitely want to know about other similar things people have done like this
[22:34:36] in wikisource
[22:34:44] where does proofreadpage store its stuff?
[22:35:58] also, can't pages have pages "under" them? like /wiki/Foo/bar
[22:36:19] yeah, subpages
[22:37:14] so if you look at, say, https://en.wikisource.org/wiki/Page:NIOSH_Manual_of_Analytical_Methods_-_3516.pdf/1
[22:37:26] https://en.wikisource.org/wiki/Index:William_Blake_in_his_relation_to_Dante_Gabriel_Rossetti_(1911).djvu
[22:37:30] ^ sorry, that's the top page
[22:37:57] there's a separate article for each page in the source ("leaf" for you presumably?)
[22:38:00] like
[22:38:03] https://en.wikisource.org/wiki/Page:William_Blake_in_his_relation_to_Dante_Gabriel_Rossetti_(1911).djvu/3
[22:38:20] and then "sections" for reading, like
[22:38:23] https://en.wikisource.org/wiki/William_Blake_in_his_relation_to_Dante_Gabriel_Rossetti/Chapter_2
[22:38:31] ok I see
[22:38:32] which are transclusions of the appropriate pages
[22:39:05] as I understand it, at least. honestly, I'm not a wikisource expert, but that's how I understand they work.
[22:39:29] ok, and there's a tag that makes it easier, I guess
[22:40:02] proofreadpage is doing the multicolumn view in this case
[22:40:18] I couldn't see how it's stored because &action=raw isn't allowed
[22:40:29] it might actually be reasonable to add your transliteration section as an optional feature of proofreadpage
[22:40:33] er, format=raw
[22:40:55] yes, I've considered proofreadpage
[22:41:00] but there are some challenges in adapting it
[22:41:04] it's stored in a JSON blob as I recall, which just consists of wikitext for the transcription in one property and wikitext for the image in the other property
[22:41:06] maybe they can be overcome, I dunno
[22:41:48] it has a pretty active user & developer community I think
[22:41:52] ok
[22:42:01] so you could probably get some help
[22:42:34] tpt is the current active dev, he's a great guy
[22:42:43] https://www.mediawiki.org/wiki/User:Tpt
[22:42:51] thanks
[22:43:34] and like I said, if the transcription ultimately calls LanguageConverter::translate() to do its work, then your code would be broadly useful for (say) transcribing books in cyrillic but proofreading in latin, etc.
[22:44:27] yes, I can see how that would be useful
[22:44:35] I just always like to kill as many birds with a single stone as I possibly can (sorry, tortured proverb)
[22:45:31] so basically what happened with palmleaf.org is, I knew there were some preexisting things but the priority was (1) getting it up and running as fast and cheaply as possible, and (2) being able to totally change and redo the interface rapidly in response to user feedback
[22:45:46] so we made a small react app and did whatever we needed on the backend via an extension to make sure it worked
[22:46:07] I looked at ProofreadPage but it seemed like it would be too much work to adapt it quickly
[22:46:29] now though, making it more robust and widely used is a different matter (assuming we get the grant we're applying for)
[22:47:54] I always like getting something running quickly first. it helps to understand what you're doing before you spend a lot of time trying to do it "right"
[22:48:08] good luck on the grant
[22:49:17] yeah... I mean a super basic issue with proofreadpage is, palm leaves have a different shape so you really want two rows, not two columns :)
[22:49:22] that can probably be fixed though
[22:52:05] :)
[22:52:16] I gotta run pick up my kid from school
[22:52:21] ok, I have to go too
[22:52:23] thanks for the help!
[22:52:39] ping me on irc and I'll try to keep an eye on my backlog if you have more questions
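Tying off the "where does proofreadpage store its stuff" question above: one way to inspect the stored content is to fetch the raw revision through the action API rather than index.php. A sketch, using the Blake Page: example linked in the conversation; the User-Agent string is a placeholder, and the comments describe what I'd expect to see rather than verified output.

```python
# Sketch: fetch the stored content of a Wikisource Page:-namespace page via
# the action API, to see how ProofreadPage serializes it.
import requests

API = "https://en.wikisource.org/w/api.php"
TITLE = "Page:William Blake in his relation to Dante Gabriel Rossetti (1911).djvu/3"

resp = requests.get(
    API,
    params={
        "action": "query",
        "titles": TITLE,
        "prop": "revisions",
        "rvprop": "content|contentmodel",
        "rvslots": "main",
        "format": "json",
        "formatversion": "2",
    },
    headers={"User-Agent": "archiveleaf-experiments/0.1 (placeholder contact)"},
)
resp.raise_for_status()

page = resp.json()["query"]["pages"][0]
slot = page["revisions"][0]["slots"]["main"]
print(slot["contentmodel"])   # ProofreadPage pages use their own content model
print(slot["content"][:500])  # header/body/footer text plus proofreading status
```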