[21:59:33] o/
[21:59:38] #startmeeting RFC meeting
[21:59:38] Meeting started Wed Nov 11 21:59:38 2015 UTC and is due to finish in 60 minutes. The chair is TimStarling. Information about MeetBot at http://wiki.debian.org/MeetBot.
[21:59:38] Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
[21:59:38] The meeting name has been set to 'rfc_meeting'
[22:00:22] #topic Parser::getTargetLanguage | RFC meeting | Wikimedia meetings channel | Please note: Channel is logged and publicly posted (DO NOT REMOVE THIS NOTE) | Logs: http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-office/
[22:01:16] #link https://phabricator.wikimedia.org/E89
[22:02:24] #link https://phabricator.wikimedia.org/T114640
[22:02:47] hello
[22:02:58] So, shall we just start? Anyone here to talk about Parser::getTargetLanguage and friends?
[22:03:00] * aude waves
[22:03:12] "RFC: make Parser::getTargetLanguage aware of multilingual wikis"
[22:03:44] https://phabricator.wikimedia.org/E89 / https://phabricator.wikimedia.org/T114640
[22:03:49] sorry, wrong channel
[22:04:27] The basic idea is: Parser::getTargetLanguage should tell stuff on a page (parser functions, Lua, renderers for other content models than wikitext) what the desired target language is
[22:05:00] the desired target language would depend on the page language (which defaults to the content language), and, optionally, the user language and on-page annotations.
[22:05:27] * aude has two related patches https://gerrit.wikimedia.org/r/#/c/232757/ and more importantly https://gerrit.wikimedia.org/r/#/c/232826/
[22:05:47] but appreciate the rfc discussion and broader feedback on whether this is the right thing to do or not
[22:07:15] what sort of optionally?
[22:07:44] in practical terms, my goal for now is to have core set the target language to the user language in ParserOptions if a page is considered multilingual. this could be configured per namespace and/or content model.
[22:08:13] how would this interact with variant language?
[22:08:26] TimStarling: per default, the target language would be the page language, which usually is the content language.
[22:08:45] In general I know very little about this topic, so I apologize in advance if my comment is ignorant.
[22:08:52] But for some wikis/namespaces/models/pages, mediawiki would know that the target language should be set to the user language
[22:09:02] ok
[22:09:15] The direction ContentHandler seemed to be pushing in was to treat pages as composites of multiple types of content
[22:09:55] so it seems better to think of a page as made up of multiple single-language content objects as opposed to being multilingual
[22:09:58] if that makes any sense
[22:10:01] Does this mean we can remove MediaWiki:Lang / int:Lang and expose a magic word instead?
[22:10:07] (on Commons)
[22:10:07] ori: we have moved away from that idea a bit. I now favor the idea of having multiple content objects per revision. this is much more flexible than multi-part content objects.
[22:10:49] ori: you could have a composite with separate content objects per page. but how would you know which one to show?
[22:10:59] I worry that you are making new systems for wikidata when your needs are almost the same as variants
[22:11:01] i think this information should come from the parser options
[22:11:22] currently variant language comes from global state
[22:11:31] TimStarling: yes, I would want variants to use the same mechanism. and Translate, too.
[22:11:34] which is not how it should be, right?
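
To make the proposal concrete, here is a minimal PHP sketch (an editorial illustration, not actual MediaWiki code) of how core could choose the target language when building ParserOptions. The $multilingualNamespaces config value is invented for the example; ParserOptions::newFromUser(), ParserOptions::setTargetLanguage() and Title::getPageLanguage() are real methods.

    // Sketch only: how core might set the target language in ParserOptions.
    function makeParserOptions( User $user, Title $title, Language $userLang,
        array $multilingualNamespaces // hypothetical per-wiki config
    ) {
        $options = ParserOptions::newFromUser( $user );
        if ( in_array( $title->getNamespace(), $multilingualNamespaces ) ) {
            // namespace is configured as multilingual: render for the reader
            $options->setTargetLanguage( $userLang );
        } else {
            // default: render in the page language (usually the content language)
            $options->setTargetLanguage( $title->getPageLanguage() );
        }
        return $options;
    }
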
[22:11:36] I imagine this would also help streamline what the "canonical" version of a page is for e.g. link tables. Since it can use page language for multi-lingual pages, instead of currently (I think?) where canonical is the wiki's default content language for link tables.
[22:11:42] TimStarling: exactly
[22:11:50] wikidata also uses global state for this. so does Translate
[22:12:09] my proposal is to use the (existing!) target language field in ParserOptions for this
[22:12:12] that's pretty much all
[22:12:47] so that links from "Bonjour" in link tables are for how that page is parsed in 'fr' rather than 'en', if it has fr as its page language.
[22:13:06] so you imagine that $wgContLang and ParserOptions::getUserLangObj() would be pretty much unused
[22:13:08] Krinkle: that would continue to be the case for now
[22:13:17] TimStarling: yes
[22:13:27] #info $wgContLang and ParserOptions::getUserLangObj() would be pretty much unused
[22:13:47] DanielK_WMDE: deprecated then?
[22:14:02] robla: probably. not sure yet
[22:14:03] Especially getUserLangObj seems infectious/dangerous to keep around. Except for post-parse things, like TOC and edit sections.
[22:14:56] DanielK_WMDE: why not?
[22:14:56] (i.e., why not sure)
[22:14:56] #info for variants + Ex:Translate + Ex:Wikibase: instead of getting the desired output language from global state, it should be possible to get it from Parser and/or ParserOptions
[22:15:11] actually $wgContLang is used in some ways that should probably continue
[22:15:14] afaik, there is inconsistency when {{int}} is used in how the cache is split vs. getTargetLanguage
[22:15:17] ori: i didn't look at all the code that uses them, yet
[22:15:44] what daniel proposes would help remove some of the inconsistency
[22:15:47] for example the names of parser functions come from $wgContLang
[22:15:53] #info afaik, there is inconsistency when {{int}} is used in how the cache is split vs. getTargetLanguage
[22:16:29] TimStarling: that should probably use the page language as returned by ContentHandler, not the global
[22:16:43] yeah, fair enough
[22:16:53] TimStarling: that way, you could use french parser functions on pages written in french, and german names on pages written in german, on the same wiki
[22:16:54] What about user chrome that is drawn outside the context of the parse operation?
[22:16:55] it doesn't tell the parser atm does it?
[22:17:12] #info use page language for localized parser function names, etc
[22:17:33] ori: that's already in the user language anyway, and should continue to be so
[22:17:53] TimStarling: the parser can get that from Title.
[22:17:55] stuff like displaytitle (a page prop) should also be made multilingual
[22:18:02] but a separate, follow-up topic
[22:18:22] aude: there was some discussion on the mailing list about that
[22:18:23] link trails and prefixes fall into the same category
[22:18:31] and then could be pulled in for display, since it would then be in parser output
[22:18:37] #idea need to decide whether to deprecate $wgContLang and ParserOptions::getUserLangObj()
[22:18:38] #link https://lists.wikimedia.org/pipermail/wikitech-l/2015-November/083932.html
[22:18:41] DanielK_WMDE: yep
[22:18:57] #info link trails and prefixes fall into the same category
[22:19:00] TimStarling: good point!
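
The migration pattern captured in the #info above, sketched in PHP. Parser::getTargetLanguage() is the real method; myWidgetFormat() and the hook functions are made up for illustration.

    // Before: an extension hook pulls the output language from global state.
    function renderMyWidgetOld( Parser $parser ) {
        global $wgLang;
        return myWidgetFormat( $wgLang->getCode() ); // myWidgetFormat() is hypothetical
    }

    // After: the same hook asks the parser, as the RFC proposes.
    function renderMyWidgetNew( Parser $parser ) {
        $lang = $parser->getTargetLanguage(); // a Language object
        return myWidgetFormat( $lang->getCode() );
    }
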
[22:19:12] I'm not sure we should obviously use page language for parser functions
[22:19:18] TimStarling: localized namespaces?... probably not.
[22:19:35] yeah, namespaces are a property of the wiki
[22:19:47] Krinkle: why?
[22:20:02] at this point they are only in the Language object by accident
[22:20:08] (note that the question of what we should do with the page language is outside the scope of this rfc)
[22:20:18] Being able to parse interface messages in the skin using target language seems like a win though. That would simplify things conceptually.
[22:20:28] TimStarling: like a lot of other things ;)
[22:20:50] that's kind of a problem then, no? namespaces, I mean. It's another thing that has some kind of language context but which exists outside the context of a parse
[22:20:56] in CoreParserFunctions there is also {{lc:}}, number formatting
[22:21:13] difficult to know what the user wants from {{lc:}}
[22:21:20] Krinkle: i think for system messages this already works. the code in Parser::getTargetLanguage has special case code for MediaWiki:Foo/fr
[22:21:53] but switching from $wgContLang to page language would be conservative
[22:22:38] The reason I worry about parser functions following the page lang instead of the content lang is transclusion
[22:22:43] TimStarling: the effective target language (Parser::getTargetLanguage). which, for wikitext, would usually be the page language, even if ParserOptions::getTargetLanguage is something else.
[22:23:00] Currently we only support 2 levels in variation. Localised and canonical
[22:23:10] I'm not quite sure about this mechanism yet, but I think we need to distinguish between the requested and the effective target language
[22:23:20] We'd have to support three layers to do it right
[22:23:41] Krinkle: at least if {{int}} is used, then i think getTargetLanguage should then be user lang
[22:23:45] number formatting, for example {{numberofpages}}
[22:23:55] something less hacky than int would be better
[22:24:06] #info transcluding pages into pages with a different page language could cause confusion wrt parser function names, etc
[22:24:19] well, someone has accidentally migrated some of it but not all of it
[22:24:47] if you do {{NUMBEROFPAGES}} you get the target language
[22:24:51] Krinkle: good point, noted. Luckily, we don't need to solve that issue to agree on setting the target language, since it concerns the handling of the page language.
[22:25:04] if you do {{NUMBEROFPAGES:R}} you get ASCII, this is fine
[22:25:24] DanielK_WMDE: Right. I see.
[22:25:24] if you do {{NUMBEROFPAGES:}} you get $wgContLang, that is apparently a migration accident
[22:25:32] * aude looked at the code again, {{int}} causes split of cache but doesn't set target language
[22:25:56] #idea distinguish between requested and effective target language. use *effective* target language when calling parser functions.
[22:26:02] DanielK_WMDE: The first step here is to consolidate global and parser-bound variables and make them less fragile/implicit. Not changing behaviour
[22:26:20] yes, exactly
[22:26:34] Do you foresee any cases we inevitably have to break or change?
[22:26:42] one goal for this meeting is to avoid accidentally changing the behavior
[22:26:52] in even the first step, or are we only refactoring how the language is chosen, not which one ends up chosen for each purpose.
[22:27:09] Krinkle: i don't know of any, but my guess is that *something* will break ;)
[22:27:18] I was hoping for people here to tell me what it will be...
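
One possible reading of the requested-vs-effective distinction discussed above, as a hedged sketch. The resolution logic and the function name are illustrative only, not what Parser::getTargetLanguage() actually does; Parser::getOptions() and Parser::getTitle() are real methods.

    // Requested: what the caller asked for via ParserOptions (may be null).
    // Effective: what the parser actually renders in.
    function getEffectiveTargetLanguage( Parser $parser ) {
        $requested = $parser->getOptions()->getTargetLanguage();
        if ( $requested !== null ) {
            return $requested; // an explicit request wins
        }
        // Otherwise fall back to the page language; MediaWiki:Foo/fr-style
        // subpages already get special-cased inside Parser::getTargetLanguage().
        return $parser->getTitle()->getPageLanguage();
    }
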
[22:28:02] Krinkle: for now, this is only about what is returned by ParserOptions::getTargetLanguage
[22:28:03] Hm.. so right now we're not setting the page language when the user views a page in a language different than the default page language for that page, correct?
[22:28:07] And it all works because of global state?
[22:28:09] and perhaps about who sets it and when and where
[22:28:21] Krinkle: yes
[22:28:46] or rather, per-user-language display works because of global state
[22:28:48] Yeah, I think it'll be easier to tell you what breaks if we more concretely know how we intend to implement it. That's where the edge cases lie unfortunately.
[22:28:56] "normal" wiki pages just use the page language
[22:29:07] which is what, by default?
[22:29:27] Yeah, does it default to wiki content lang or user preference / uselang query string?
[22:29:37] if we made it, at least to start, a per-wiki setting (e.g. commons is multilingual => target language is user language) that might be more straightforward
[22:30:07] also, how do you determine the appropriate user language for anonymous users?
[22:30:10] Krinkle: my current idea is to always set the target language in the ParserOptions: set it to the page language per default, and to the user language if the content model or namespace is configured for that.
[22:30:24] aude: i'd make it per namespace, even
[22:30:33] ori: uselang? (and maybe setlang - ULS, if we can sort out caching)
[22:30:34] ori: variant selection for anons is done with Accept-Language
[22:30:50] ori: for now, i just use whatever core defines the user language to be.
[22:31:00] the detection mechanism isn't relevant to this rfc
[22:31:18] if we migrate variants to this, all the logic for variant selection needs to stay in, in backwards compatible form
[22:31:19] no, but what to default to when detection fails is!
[22:31:30] ori: then content language
[22:31:33] ori: content language
[22:31:45] that means respecting the variant URL parameter, and Accept-Language
[22:31:47] this just stays as it is
[22:32:00] I thought default was user language? When my user pref lang is 'nl', or uselang=nl, and viewing a French content page on Wikimedia Commons. Is it parsed as fr or nl?
[22:32:18] Krinkle: when logged out, i think that was the question
[22:32:27] user language in turn defaults to content lang
[22:32:32] TimStarling: but variant selection is reflected by $wgLang, right? So what i propose shouldn't change anything about how it works.
[22:32:35] if no preference or uselang query string is passed.
[22:33:14] But they are not consistently implemented right now. When an anonymous user views a French content page, they get the parser output in French I think. But UI interfaces default to EN unless uselang is passed.
[22:33:24] ok, so there is still some global notion of language that is independent of user and page
[22:33:32] Emulating this in a single target language field will be hard.
[22:33:47] Krinkle: if your language is nl, and the page is fr, you see fr per default, and nl if the namespace is marked as "use user language".
[22:34:30] is that how it works now and it will stay or is that what you propose we change it to?
[22:34:32] hm, for Translate, it wouldn't necessarily be per namespace. Foo would be user language, but Foo/de would be page language, which would not be content language here.
[22:34:44] Exactly
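
For the Translate case just described, deriving the page language from a /xx subpage suffix could look roughly like this (an illustration only, not Translate's actual code; Language::isValidCode() and Language::factory() are real methods).

    // "Foo/de" => German page language; plain "Foo" => the given default
    // (under the proposal: the user language).
    function pageLanguageFromSuffix( Title $title, Language $default ) {
        $parts = explode( '/', $title->getText() );
        $code = end( $parts );
        if ( count( $parts ) > 1 && Language::isValidCode( $code ) ) {
            return Language::factory( $code );
        }
        return $default;
    }
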
[22:35:04] no, variant selection is apparently not reflected in $wgLang/RequestContext::getLanguage()
[22:35:14] #info for Ex:Translate, Foo would be user language, but Foo/de would be page language, which would not be content language, but overwritten by the suffix.
[22:35:43] It's not always page language then user language or vice versa. Sometimes it's the other way around. I'm not sure how they override each other in what circumstances.
[22:35:44] Krinkle: the only thing that would change is how Translate would know to show you nl content on Foo. It now uses global state. then it would just ask the parser.
[22:35:54] Right
[22:36:05] So it's not just updating MW core, we'll need to update extensions as well.
[22:36:07] That's doable
[22:36:16] And it'll be more explicit.
[22:36:26] And it's not a breaking change. The old mechanism would still work
[22:36:30] * cscott pops in late and starts reading backlog
[22:37:06] anything else to discuss, then, or should we wrap up and start discussing PageRecord?
[22:37:18] cscott: ah, good to have you here! i was just thinking we could wrap up and talk about the second topic... but I'm curious what you think.
[22:37:33] try e.g. https://zh.wikipedia.org/w/index.php?title=%E7%A7%91%E5%AD%A6&variant=zh-cn&uselang=fr
[22:37:40] versus https://zh.wikipedia.org/w/index.php?title=%E7%A7%91%E5%AD%A6&variant=zh-tw&uselang=fr
[22:38:14] variant= sets the translation target but not the user language
[22:38:45] TimStarling: and i think there's an equivalent where the variant is buried in the URL instead of in the GET parameters, right?
[22:38:48] TimStarling: right... here, the target language wouldn't be set to the user language, but to the requested variant language code.
[22:38:53] good point!
[22:39:23] DanielK_WMDE: i'm mostly concerned that we don't break LanguageConverter
[22:39:48] #info A request like https://zh.wikipedia.org/w/index.php?title=%E7%A7%91%E5%AD%A6&variant=zh-tw&uselang=fr should result in the effective target language being zh-tw (for content language zh). It should not become fr, and not default to zh.
[22:39:53] cscott: me too :)
[22:40:31] i think tim just gave me the crucial pointer for that (the variant= parameter).
[22:40:50] And for some reason the left-hand tabs "Page" and "Discussion" are in English on https://zh.wikipedia.org/w/index.php?title=%E7%A7%91%E5%AD%A6&variant=zh-tw&uselang=fr
[22:40:55] But nevermind that >>
[22:41:29] cscott: you mean https://zh.wikipedia.org/zh-cn/%E7%A7%91%E5%AD%A6 ?
[22:41:57] * ori recommends moving on.
[22:42:06] cscott: do you have any urgent comments, or can we move on to PageRecord?
[22:42:10] yea...
[22:42:23] and it has NOTHING to do with the fact that I do have a strong opinion about the next topic but not on this one :P
[22:42:31] hehe
[22:42:38] TimStarling: move on?
[22:42:52] yes
[22:43:11] do we have to tell meetbot somehow?
[22:43:19] #topic PageRecord | RFC meeting | Wikimedia meetings channel | Please note: Channel is logged and publicly posted (DO NOT REMOVE THIS NOTE) | Logs: http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-office/ (Meeting topic: RFC meeting)
[22:43:27] my final comment re languages is that we should consider the performance impact of fragmenting caches / storage
[22:44:01] gwicke: for the cases i have in mind, we already do fragment the parser cache. the web cache is out of scope of the rfc
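
The fragmentation concern, spelled out: if the target language feeds into the parser cache key, each page gets one cached rendering per language it is requested in. A sketch only — the key format merely resembles MediaWiki's pcache keys, real code derives keys via ParserOptions::optionsHash(), and $parserOptions/$pageId are assumed in scope:

    $lang = $parserOptions->getTargetLanguage();
    $langPart = $lang ? $lang->getCode() : 'canonical';
    // one parser cache entry per page per target language:
    // N languages requested => N cached renderings of the same page
    $key = "pcache:idhash:{$pageId}!targetlang={$langPart}";
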
[22:44:21] yeah, but we are moving away from most of that fragmentation
[22:44:25] link for page record rfc
[22:44:28] * ori wonders what "fragmenting storage" (vs. cache) means
[22:44:28] So, PageRecords: https://phabricator.wikimedia.org/T114394
[22:44:28] ?
[22:44:32] thanks
[22:44:39] ori: one cache entry per target language
[22:44:56] anyway, moving on.
[22:44:56] ori: HTML storage, say
[22:45:10] jzerebecki: yes, that looks right.
[22:45:22] if we can't swap out elements in a targeted way
[22:45:25] PageRecord is intended to represent the information from the page table (maybe plus some extras)
[22:45:27] DanielK_WMDE: i'm still reading backlog, i don't have urgent comments.
[22:45:45] the idea is to factor more code out of Title, and replace more usages of Title with more lightweight "dumb" objects
[22:46:09] PageRecord is intended to bridge the gap between TitleValue (which doesn't even have a page ID) and Title.
[22:46:26] cscott: ok, please comment on the ticket if you have any thoughts
[22:47:00] ori: so... what's your strong opinion on this?
[22:47:25] In my opinion, the single greatest source of confusion in the MediaWiki codebase is the proliferation of objects to represent what is (or ought to be) one thing
[22:47:38] Title, WikiPage, Article, TitleValue, & so on
[22:47:50] Article should die, imho :)
[22:47:58] be replaced with better things
[22:48:02] ori: in my opinion, the greatest drag of working with mediawiki is unrelated functionality being mashed together in jumbo classes :)
[22:48:14] #info https://zh.wikipedia.org/zh-cn/%E7%A7%91%E5%AD%A6 is another way of writing an explicit 'variant=zh-cn' parameter, and should also be supported on zhwiki
[22:48:32] a wiki is a collection of pages, but try and map that notion to anything in the code base and you find that there is no single coherent ontology that is reflected in the class hierarchy
[22:48:48] ori: in my mind, there is something to represent a title (as in: link target), and one thing to represent a page, and one to represent a revision, and one to represent content. Then there are different services that act on these.
[22:48:49] I agree that article vs Wikipage vs Title is super confusing
[22:49:00] sorry, that #info probably got associated with the wrong topic. oh well. :(
[22:49:18] ori: if we want to move away from active records to a DAO model, we have to duplicate some classes in order to migrate
[22:49:18] So it seems Article and Title in their current forms are meant to be deprecated. TitleValue being a PHP value object for namespace+title. WikiPage being for the editable application-level entity to the end user (holding title, pagerecord, content and other things).
[22:49:21] for that reason, I am _extremely_ reluctant to add to that mess, so my default attitude is to be against it
[22:49:37] ori: the alternative would be to break compatibility
[22:49:46] but i think a lot of that is due to lack of clear distinction rather than having many objects
[22:49:46] but i want to comment about the specific merits / demerits of the thing you are proposing to introduce
[22:50:10] ori: you are saying it is fine as it is?
[22:50:13] The two classes shown in the RFC I'm not worried about, but initial patches around it do worry me about gigantic Java troll class explosion. I really want to try and avoid that.
[22:50:15] ori: some of the mess is things like that WikiPage and Article implement the Page interface
[22:50:22] bawolff: i agree. and it's a shame that the old classes take up all the good names :)
[22:50:24] which is empty
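
For reference, one guess at what a PageRecord-style value object could look like: immutable, no database access, just the fields of a page row. The exact shape proposed in T114394 may well differ; only TitleValue is an existing class here.

    class PageRecord {
        private $id;
        private $title; // TitleValue: namespace + dbkey, no page ID
        private $latestRevisionId;
        private $isRedirect;

        public function __construct( $id, TitleValue $title, $latestRevisionId, $isRedirect ) {
            $this->id = $id;
            $this->title = $title;
            $this->latestRevisionId = $latestRevisionId;
            $this->isRedirect = $isRedirect;
        }

        public function getId() { return $this->id; }
        public function getTitle() { return $this->title; }
        public function getLatestRevisionId() { return $this->latestRevisionId; }
        public function isRedirect() { return $this->isRedirect; }
    }
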
[22:50:28] or that we should unsplit all the things that have been split off already?
[22:50:32] aude: +1
[22:50:32] TimStarling: not sure what you mean. Is what fine, the current status quo?
[22:50:44] WikiPage used to be part of Article, would moving it back in make things easier for you?
[22:50:53] there is confusion sometimes regarding this
[22:50:57] yes, some sort of consolidation is urgent, IMO
[22:51:00] * Krinkle thinks WikiPage is a better name for it.
[22:51:24] We can turn Article into a forgotten includes/compat/ class though
[22:51:40] Shouldn't house any methods more than 1 line.
[22:51:50] Article is really a view object for producing output. WikiPage is a storage layer service object. Neither of them represent a "page" as an entity.
[22:52:08] I get that you sometimes want a page with the foreknowledge that you'll need most of the data fields about it that are available, and that sometimes you need a page for a very specific purpose, and that consequently depending on the use you either do or don't want to eagerly retrieve a lot of data
[22:52:16] Krinkle: I agree that WikiPage is a better name, but it's taken :D
[22:52:37] I don't think it makes sense to consolidate everything into >10 kiloline classes
[22:52:49] Article as view object? That's WikiPage and ViewAction. Afaik all uses of Article can be removed.
[22:52:58] 10 kiloline is a logically separate problem imo
[22:53:17] the reason we are splitting things up is because there is complexity
[22:53:17] re: eager / lazy retrieval -- this is an implementation detail. it's one that we have to think about and tackle
[22:53:20] ori: it's not about loading less data. that would be pretty much the same. the crucial thing is to use dumb value objects and services, so we can use DI consistently.
[22:53:22] then with wikipage etc., it's bound to the database layer (and has no real interface)
[22:53:35] the complexity doesn't go away when you mix everything together in a single class
[22:53:39] but we should not pretend this is not simply an implementation challenge but somehow a matter of there being different types of things
[22:53:43] ori: with the old jumbo objects, DI is awkward at best, and often hardly possible
[22:53:45] so it's hard to mock / create abstractions useful for testing etc and other stuff
[22:54:02] there are other solutions to jumbo
[22:54:10] ori: like what?
[22:54:32] deleting all the features
[22:54:40] \o/
[22:54:45] no, deduplicating functionality
[22:54:56] by splitting and re-using?
[22:54:57] the splitting up of objects IMO has only led to there being more code than before
[22:55:08] Lee's Article.php was only 1500 lines
[22:55:08] because you often have the same logic in different classes
[22:55:30] ori: it takes some time unfortunately to deprecate and migrate away from old stuff
[22:55:39] in core, and then be able to remove it :/
[22:55:47] ori: really? then something went wrong. splitting things up usually means more "dead" lines of code, more declarations, more files. but it shouldn't mean more code.
[22:56:01] B/C of course does add complexity
[22:56:19] aude: either we (a) shouldn't be so conservative, or (b) should be equally conservative about introducing new things
[22:56:31] i only see two ways to avoid B/C overhead: a) never change b) break stuff
[22:56:41] neither option entails introducing PageRecord now
[22:57:19] ori: no, moving away from Title does. Which is needed for moving towards DI.
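
And the service/DAO side of that split, building on the hypothetical PageRecord sketched above. PageLookup and its method name are invented for illustration; IDatabase::selectRow() and the TitleValue accessors are real.

    class PageLookup {
        private $db;

        public function __construct( IDatabase $db ) {
            $this->db = $db; // injected, so it is easy to mock in tests
        }

        public function getPageRecord( TitleValue $title ) {
            $row = $this->db->selectRow(
                'page',
                [ 'page_id', 'page_latest', 'page_is_redirect' ],
                [
                    'page_namespace' => $title->getNamespace(),
                    'page_title' => $title->getDBkey(),
                ],
                __METHOD__
            );
            if ( !$row ) {
                return null; // no such page
            }
            return new PageRecord(
                (int)$row->page_id,
                $title,
                (int)$row->page_latest,
                (bool)$row->page_is_redirect
            );
        }
    }
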
[22:57:36] * robla looks at the clock
[22:57:38] and Title was designated as a primary target for code experiments for extracting logic from jumbo objects
[22:57:53] Can we implement PageRecord in a way that moves/re-implements logic out of existing classes, in a way that makes the PageRecord class usable in places that do it manually today? Or would the signature and required parameters be incompatible and require two implementations?
[22:57:55] * aude really wants more separation between database interaction code and value type objects like a page
[22:58:03] robla: right.
[22:58:35] i guess there were no comments on the particulars. and more discussion / convincing is needed as to why we want to do *any* refactoring on the codebase
[22:58:36] https://gerrit.wikimedia.org/r/#/c/244586/ did not break back-compat, did not introduce new classes (it replaced an interface with a class)
[22:58:43] * bawolff finds that both extremes are bad. Super jumbo objects isn't good. But super micro classes are also hard to reason about imo
[22:58:47] that particular change doesn't have much to do with pages / titles / articles
[22:58:54] but there is plenty of work to be done with those objects too imo
[22:59:05] the suggestion that our hands are tied and it's break all the things or add new things is a false dichotomy imo
[22:59:34] plenty of such opportunities for consolidation to keep all of us busy
[22:59:35] I think DanielK_WMDE should write the code so that we can continue this discussion on gerrit
[22:59:40] continue conversation on Phab?
[22:59:45] ori: so you would rather gradually move code out of Title, than remove usages of Title?
[23:00:09] TimStarling: "daniel should write the code" <-- story of my life ;)
[23:00:19] robla: yes
[23:00:30] ori: did you comment on the ticket?
[23:00:44] https://phabricator.wikimedia.org/T114394
[23:00:47] not yet
[23:00:47] we could probably finish up with some cleanup and move all type hints from Page to either WikiPage or Article and then remove Page
[23:00:54] it seems PageRecord duplicates Revision methods and Title methods. I don't think adding an extra class helps.
[23:01:06] it's just cleanup that no one got around to
[23:01:13] Joe Armstrong (Erlang guy), talking about the dangers of OOP, once compared it to "You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."
[23:01:25] I agree that Title is now a jungle with a gorilla holding a banana, rather than just a banana
[23:01:33] but i don't think adding a Banana class is going to fix that
[23:01:42] spagewmf: it helps with no longer using Revision and Title, which are two of the jumbo objects that keep us from a DI (and eventually SOA) based architecture
[23:02:14] also remember Lee's Article.php was only 1500 lines
[23:02:17] everything starts out simple
[23:02:18] are there any action items for the notes before I end the meeting?
[23:02:25] ori: that's actually a pretty good picture. my response is that we really want a banana class anyway, because half of the time we use Title, all we need is a banana.
[23:02:27] I don't think so
[23:02:29] ori: So if the implementation is tied to a restriction like: It centralises/deduplicates existing functionality in a way that must be re-usable by those older variants.
[23:02:30] The worst thing would be instead of 3 classes that represent a page which no one understands the difference between (Article, Title, WikiPage) we instead have 4
[23:02:39] Would that be good enough?
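
Why the split helps with the mocking problem raised earlier: a unit test can fabricate a page without any database or global state (again using the hypothetical PageRecord sketched above, inside a PHPUnit test case).

    // No database, no global state: just construct the value object.
    $page = new PageRecord( 7, new TitleValue( NS_MAIN, 'Bonjour' ), 1234, false );
    $this->assertSame( 1234, $page->getLatestRevisionId() );
    $this->assertFalse( $page->isRedirect() );
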
[23:02:59] * robla clears throat
[23:03:04] Krinkle: I think that's pretty good. Complexity should be measured in terms of the number of entities the developer has to grok, and not just LOC, but yes, it should reduce complexity.
[23:03:05] and we deprecate those older methods and remove in 1.28
[23:03:06] DanielK_WMDE: OK, but PageRecord doesn't eliminate Title and Revision. Something newer and smaller in front of them doesn't simplify the jungle
[23:03:35] it'll just turn into another jungle
[23:03:43] spagewmf: but less code relying on the "jungle" classes reduces overall code complexity
[23:03:45] anyway, time is up
[23:03:51] #endmeeting
[23:03:52] Meeting ended Wed Nov 11 23:03:52 2015 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
[23:03:52] Minutes: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-11-11-21.59.html
[23:03:52] Minutes (text): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-11-11-21.59.txt
[23:03:52] Minutes (wiki): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-11-11-21.59.wiki
[23:03:53] Log: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-11-11-21.59.log.html
[23:03:54] we'll talk about this again, i guess
[23:04:00] * AaronSchulz strongly suggests the final Page cleanup is done before any lookup stuff
[23:04:07] +1
[23:04:18] yeah we need a full hour for this, but more broadly than the current proposal
[23:04:20] it's mostly just menial work
[23:04:26] yep
[23:04:30] not conceptually sexy, but badly needed
[23:04:34] anyway, thanks for all the input folks!
[23:04:39] thanks DanielK_WMDE!
[23:04:45] but combining article back into wikipage would be horrible
[23:04:46] just a general meeting about "what to do about Article/Title and their bastard children"
[23:04:53] heh
[23:04:54] thanks DanielK_WMDE
[23:04:56] yeah, that would be good
[23:05:55] and thanks TimStarling / robla for charing / organizing etc.
[23:06:01] chairing
[23:06:04] chairing is caring
[23:06:09] thanks everyone :)
[23:06:25] TimStarling: my idea was to quietly start using an alternative somewhere, but I guess posting this as an rfc kind of blew the "quietly" aspect. now we'll have to have the actual Big Discussion ;)
[23:07:07] #mediawiki-core for followup discussion?
[23:07:15] aude: let's talk about your language related patches next week. It's not quite what I had in mind, but food for thought and discussion
[23:07:24] thanks TimStarling for chairing!
[23:07:36] no followup for me today, sorry.
[23:07:43] AaronSchulz: I'd say a fair bit of Article does currently belong to WikiPage (as a first step) and also ViewAction should be more fleshed out.
[23:07:44] time to catch some sleep
[23:07:55] It's not going to be fixed in one migration.
[23:08:16] gn8 DanielK_WMDE
[23:08:21] > #mediawiki-core
[23:08:50] which is short for good neight in case anyone is wondering