[19:31:31] WMF Product Q&A pt. II now on Commons: https://commons.wikimedia.org/wiki/File:Wikimedia_Foundation_-_Technology_and_Product_Q%26A_-2.webm
[19:31:58] thanks brendan_campbell
[20:51:01] Hello
[20:52:56] howdy
[20:56:56] hello
[20:57:39] hello
[20:58:44] * brion goes to refill coffee quickly :D
[21:00:37] * addshore goes to brush teeth quickly :O
[21:01:28] o/
[21:02:13] ok meeting will start shortly :D
[21:02:13] #startmeeting ArchCom RFC Meeting
[21:02:14] Meeting started Wed May 10 21:02:13 2017 UTC and is due to finish in 60 minutes. The chair is DanielK_WMDE_. Information about MeetBot at http://wiki.debian.org/MeetBot.
[21:02:14] Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
[21:02:14] The meeting name has been set to 'archcom_rfc_meeting'
[21:02:42] #topic Revision table refactor, T161671
[21:02:42] T161671: Compacting the revision table - https://phabricator.wikimedia.org/T161671
[21:02:55] #topic https://gerrit.wikimedia.org/r/#/c/350097/
[21:03:01] #info the revision refactoring provisional work plan is at https://www.mediawiki.org/wiki/Revision_refactor#Work_plan -- this will be updated with comments and new ideas after :)
[21:03:12] Lots of little bits ;)
[21:03:14] whoops, wrong command
[21:03:16] #topic Revision table refactor, T161671
[21:03:21] #link https://gerrit.wikimedia.org/r/#/c/350097/
[21:03:33] #link https://phabricator.wikimedia.org/T161671
[21:03:48] #link https://www.mediawiki.org/wiki/Revision_refactor
[21:04:13] brion: ooooh, i haven't seen the fancy work plan yet, let me have a look!
[21:05:00] it's a little rough but i've spelled out more of the bits that need fixing
[21:05:20] oh! and i forgot a high-level bit, which i'll just lay out here for now:
[21:05:35] the rough work-and-transition plan is to run in a couple of steps.
[21:05:57] first, we'll land provisional but not yet actively used updaters
[21:06:14] along with some basic support in Revision for the new fields
[21:06:28] second, we'll start landing support for use of new fields/tables
[21:06:47] which'll have to either be compatible with both old and new schemas, or switch based on a transition mode
[21:07:02] that means we can start landing usage of new tables without breaking existing code, and it may take some time to do it :)
[21:07:27] while we can have testing instances that switch to the new schema and run batch tests etc
[21:07:55] third, once all those are landed, we'll be in a position where production instances can start migrating
[21:08:13] the transition will cover three states: old-schema, transitional, and new-schema
[21:08:20] Hm... I'd really like to put code for writing to the new content and slot tables into a separate DAO-style service from the start. Same for the comments table. Revision would use them internally.
[21:08:36] DanielK_WMDE_: agreed!
[21:08:38] That should also make it easier to keep the code sane in the presence of DB schema feature switches
[21:09:00] transitional schema will be the 'fun' mode where we've added the new tables and fields, but haven't migrated all rows
[21:09:13] A difficulty with DAO stuff is bulk querying.
[21:09:21] in transitional mode, code will keep querying the old columns
[21:09:33] that's "write both, read old" or "write both, read old if new is missing (and maybe migrate on the fly)"
[21:09:37] :)
[21:09:42] Fetching 5000 revisions => 5000 queries for one comment row each => probably bad
[21:10:00] lazy loading 5000 comments would suck yes :D
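A minimal sketch, in MediaWiki-style PHP, of the kind of batched comment prefetch the "5000 revisions => 5000 queries" concern above points at. The CommentBatchLookup name, the comment table, and its columns are illustrative assumptions, not the agreed schema; only the IDatabase::select() wrapper is existing API.

    use Wikimedia\Rdbms\IDatabase;

    class CommentBatchLookup {
        /** @var IDatabase */
        private $dbr;

        public function __construct( IDatabase $dbr ) {
            $this->dbr = $dbr;
        }

        /**
         * Fetch comment text for a batch of comment IDs in a single query,
         * instead of lazy-loading one row per revision.
         *
         * @param int[] $commentIds e.g. the rev_comment_id values of many revision rows
         * @return string[] map of comment ID => comment text
         */
        public function getCommentsById( array $commentIds ) {
            if ( !$commentIds ) {
                return [];
            }
            $res = $this->dbr->select(
                'comment',                        // hypothetical new table
                [ 'comment_id', 'comment_text' ], // hypothetical columns
                [ 'comment_id' => array_unique( $commentIds ) ],
                __METHOD__
            );
            $comments = [];
            foreach ( $res as $row ) {
                $comments[(int)$row->comment_id] = $row->comment_text;
            }
            return $comments;
        }
    }

A caller building a list view would collect the comment IDs from its revision rows first, make one call to getCommentsById(), and then attach the results, which is the prefetch half of the lazy-load/prefetch pair mentioned below.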
[21:10:33] anomie: DAO is not ORM. It's not bound to individual rows. You can wrap any query you want in a DAO. All it says is "wrap queries in stateless objects"
[21:10:47] during production transition, background updater can run through the whole DB, migrating comments and user/ip actor info and content rows, until everything's done
[21:11:08] Lazy loading and pre-fetching are complementary strategies. We'll probably have need for both
[21:11:09] once a whole db's been done, then we can switch to 'new schema' mode, and can drop the old columns during maintenance
[21:12:05] brion: we can keep writing both for a trial period, so we can still back out if need be
[21:12:08] I feel like one of the common problems when writing a DAO is that you don't know in advance what the query patterns will be, so you support things you don't need and don't support things you do need. But here we already know what the query patterns are
[21:12:12] DanielK_WMDE_: Presumably you can't wrap *any* query you want. Just the queries the DAO thing allows for.
[21:12:36] you just write a DAO thing for the query you want.
[21:12:43] the current Revision class has some (hacky ugly) interfaces for semi-arbitrary queries, but is awkward to use
[21:13:00] Would making that nicer be in scope?
[21:13:02] i'd love to replace that with a proper fetcher/lookup/query interface based on our last decade of experience :)
[21:13:08] James_F: hells yes
[21:13:13] Or is that (yet another) follow-up? Cool.
[21:13:15] RoanKattouw: a common trap is to write "the" DAO class for Revisions. It should be one DAO class per use case. But let's not get into that too much today, it's distracting us from the conversation about the schema change
[21:13:52] Right -- anyway, it seems we agree that as part of writing the old/transition/new switching code, we'll make the interface nicer permanently
[21:14:02] For some definition of "nicer" to be detailed in the future
[21:14:02] At what point then are you replacing every $db->select() with an object that wraps that one $db->select()?
[21:14:08] oh, btw: meetbot commands like #info or #link or #idea can be used by anyone
[21:14:10] * brion idly wishes for a magical graph database with infinite space and no latency, which would simplify many things
[21:14:16] TIL about #idea
[21:14:20] so if you want to add anything to the minutes, please go ahead!
[21:14:48] fyi: https://wiki.debian.org/MeetBot
[21:14:56] #info the current Revision class has some (hacky ugly) interfaces for semi-arbitrary queries, but is awkward to use; i'd love to replace that with a proper fetcher/lookup/query interface based on our last decade of experience
[21:15:03] anomie: i think it's most important to wrap the kinds of ways we can query -- which mostly depends on what is sane to query based on indexes
[21:15:12] brion: does it have a mind reading interface, so i don't have to type out queries?
[21:15:17] and then provide a simple way to extend that with where clauses
[21:15:21] DanielK_WMDE_: i wish ;)
[21:15:31] it's always easier on star trek
[21:15:38] "computer: collate all revisions by user id!"
[21:15:57] I'm afraid I can't do that, Brion...
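A minimal sketch of the "one small DAO per use case" idea discussed above: a narrow, stateless query wrapper whose shape follows an existing index rather than exposing arbitrary conditions. All names here are hypothetical, and the column list assumes the current (pre-refactor) revision schema.

    use Wikimedia\Rdbms\IDatabase;

    class RevisionsByUserLookup {
        /** @var IDatabase */
        private $dbr;

        public function __construct( IDatabase $dbr ) {
            $this->dbr = $dbr;
        }

        /**
         * Revisions by one user, newest first, paged by timestamp.
         * Shaped to match a (rev_user, rev_timestamp) index, so there is
         * no room for accidental table scans.
         *
         * @param int $userId
         * @param string|null $beforeTimestamp exclusive upper bound, for paging
         * @param int $limit
         * @return stdClass[] raw revision rows
         */
        public function getByUser( $userId, $beforeTimestamp = null, $limit = 50 ) {
            $conds = [ 'rev_user' => (int)$userId ];
            if ( $beforeTimestamp !== null ) {
                $conds[] = 'rev_timestamp < ' . $this->dbr->addQuotes( $beforeTimestamp );
            }
            $res = $this->dbr->select(
                'revision',
                [ 'rev_id', 'rev_page', 'rev_timestamp', 'rev_comment' ],
                $conds,
                __METHOD__,
                [ 'ORDER BY' => 'rev_timestamp DESC', 'LIMIT' => $limit ]
            );
            $rows = [];
            foreach ( $res as $row ) {
                $rows[] = $row;
            }
            return $rows;
        }
    }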
[21:16:00] hehe
[21:16:01] so:
[21:16:04] have you seen ApiQueryRevisions
[21:16:10] TimStarling: yes it's frightening :D
[21:16:13] brion: Adding additional filters to WHERE clauses is one of the sources of bad queries, when it has to run through 10,000,000 rows to find the 3 that pass the filter :/
[21:16:17] Yeah that's a "fun" one
[21:16:23] It has like three different modes, too
[21:16:50] * RoanKattouw tries to dodge blame by pointing out it was already like that when he got there
[21:16:55] anomie: yep, maybe important though to make it easy to distinguish 'hard to query' vs 'hard to get at in the api' ;)
[21:17:52] half of that class is a giant run method :D it probably has about a billion possible code paths...
[21:17:56] performance targets are not really the job of the DAO, right?
[21:17:57] * anomie is too lazy to look up the task number for the bug where "only include pages with language links" that fails horribly on Wikidata, because none of the pages have language links
[21:18:36] I mean, if you want to do something expensive for a good reason, should you have to write your own SQL?
[21:18:54] 'make the easy things easy and the hard things possible'
[21:19:16] TimStarling: IMHO, yes. And wrap it in a DAO, for testing and re-use.
[21:19:19] danger of course is when everything uses the hard path
[21:19:52] brion: make common things fast, and uncommon things possible
[21:19:59] DanielK_WMDE_: ++
[21:20:02] so: generally the transition/work plan doesn't seem to scare people so far, except for the size of some of the tasks :)
[21:20:05] If the DAO is flexible enough, then it can't prevent bad things as was being proposed. If it's not, then I have to listen to people complain that it's a code smell that I didn't use the DAO.
[21:20:23] sometimes code smells because the use case smells
[21:20:32] cf "everybody poops" ;)
[21:20:49] just cause it's smelly doesn't mean it's always evil
[21:21:12] * anomie seems to only deal with the smelly code, because the easy code is too easy
[21:21:17] sounds like a good DAO API for revision queries is going to be The Fun One
[21:21:40] we'll also want good dumb-object APIs and fetchers/storers/formatters for actor & comment info
[21:21:42] A good first step would indeed be to 'fake it until you make it' - e.g. introduce interfaces first to reduce usage of direct queries in most places. E.g. aside from maintenance scripts, almost nothing should query revision directly.
[21:21:42] Yeah and you'll need something of that sort to transparently work with the different schemas
[21:21:49] since those'll get reused for logging, and eventually rc etc
[21:21:54] i think the art is to resist the urge to make it multi-option multi-purpose.
[21:21:59] single purpose, few options
[21:22:00] I'm curious if we can add the joins automatically though, given a Revision::select() method like Database::select().
[21:22:30] * DanielK_WMDE_ is going to write a blog post: The Way of the DAO.
[21:22:43] or maybe the DAO te Wiki
[21:22:54] some sort of abstraction layer was planned already, right brion?
[21:23:05] Krinkle: like exposing Revision::fetchFromConds() more generally?
[21:23:07] For MCR, yes
[21:23:10] you're not agreeing to a bunch of extra work because DanielK_WMDE_ wants it?
[21:23:34] TimStarling: yes, we need at least some abstraction :D
[21:23:46] if you have schema modes then obviously you need an abstraction layer
[21:23:57] as probably it'll help migrate stuff that's doing direct queries now
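A minimal sketch of how a schema-mode switch could look on the write path, following the "write both, read old" transitional state described earlier. The stage constants, the comment table, and the rev_comment_id column are placeholders for illustration, not the final design; only the IDatabase insert()/insertId() wrappers are existing API.

    use Wikimedia\Rdbms\IDatabase;

    class CommentWriter {
        // Hypothetical stages for the transition discussed above.
        const STAGE_OLD = 0;          // old schema only
        const STAGE_TRANSITIONAL = 1; // write both, read old
        const STAGE_NEW = 2;          // new schema only

        /** @var int */
        private $stage;

        public function __construct( $stage ) {
            $this->stage = $stage;
        }

        /**
         * Record an edit summary for a revision under the current schema mode.
         *
         * @param IDatabase $dbw
         * @param array &$revRow revision row being assembled by the caller
         * @param string $comment
         */
        public function writeComment( IDatabase $dbw, array &$revRow, $comment ) {
            if ( $this->stage <= self::STAGE_TRANSITIONAL ) {
                // Old column, still read by all existing code.
                $revRow['rev_comment'] = $comment;
            }
            if ( $this->stage >= self::STAGE_TRANSITIONAL ) {
                // New normalized table (hypothetical layout).
                $dbw->insert( 'comment', [ 'comment_text' => $comment ], __METHOD__ );
                $revRow['rev_comment_id'] = $dbw->insertId();
            }
        }
    }

Keeping the switch inside one small service like this is what makes it possible to flip a wiki from old-schema to transitional to new-schema without touching every caller.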
[21:24:16] brion: Perhaps yeah. I mean, I'd rather have Revision::select() than Revision::byUser(), Revision::byX(), and a whole bunch more methods, which just encourages querying too much and filtering in PHP instead.
[21:24:24] but we don't have to do a complete rework immediately, can land it in pieces.
[21:24:33] Also, we need not just batching, but also pagination.
[21:24:34] unless the schema modes can be wholly implemented in SQL with triggers and views
[21:24:46] Imagine something simple like Special:Contributions
[21:24:52] just 1 condition.
[21:25:05] at least with the abstraction approach, we know what we are doing
[21:25:06] But does need "the whole works" in terms of current revision schema
[21:25:19] Krinkle: *nod* though access to fields doesn't solve everything, eg rev_user vs rev_actor
[21:25:28] would still have to be manually tweaked
[21:25:32] or else abstracted
[21:25:34] brion: Yeah, it can be more abstract than that.
[21:25:38] 'user'
[21:25:42] userId
[21:25:48] A few narrowly supported conditions
[21:25:54] and ranges (timestamps)
[21:25:57] or offsets
[21:26:47] yeah we'll probably want some consistency of the query apis between revision and logging etc
[21:26:51] so this'll bear some thought.
[21:26:55] Re access to fields, you can write queries like SELECT actor_userid AS rev_user, ... FROM revision JOIN actor ....
[21:27:03] i.e. query the new schema in a way that fakes the old one
[21:27:17] The downside of that is that eventually the old schema is gonna be gone and now we have all these weird aliases that don't make sense any more
[21:27:27] RoanKattouw: hmm, clever
[21:28:24] #info Roan ponders SQL field compatibility using 'AS' aliases in the query to allow consistent where clauses. consider options for back-compat API
[21:28:44] apergos: got any questions about how this will affect dumps? :D
[21:29:06] brion, not yet, I am expecting to have to do a bunch of work
[21:29:23] between the transitional and the final stage of the migration
[21:29:38] hm... as long as we don't introduce MCR, XML dumps should not be affected at all, right?
[21:29:56] SQL dumps would change a lot. And stuff on labs is going to break.
[21:30:07] #info some consideration of exposing a $db->select() like interface for adding where clauses & joins to a query: vs complete DAO method-per-lookup-method
[21:30:23] apergos: *nod*
[21:31:02] DanielK_WMDE_: *most* of the xml dumps logic should stay the same, though I can worry about performance anew :-P
[21:31:07] apergos: that reminds me we keep going back and forth on whether to keep sha1 hash on rev content (or whether to change it for multi-content, or drop it entirely)
[21:31:19] if we kill it, that'd change one visible thing in dumps
[21:31:25] brion: I noticed that in your docs
[21:31:30] FTR, I'm not a friend of complex configurable magic DAOs. I like to wrap queries in a class, with a handful of parameters.
[21:32:16] * brion once read a recommendation that all SQL queries in your app should be constant strings in a single PHP file you can replace to support other DBMSs. this was a looooong time ago though ;)
[21:32:18] brion: oh, on a related note - if we kill content_format from the database, we'd still want it in XML, right?
[21:32:31] DanielK_WMDE_: ah, for back-compat.... prolly
[21:32:45] though since i believe it's optional on the xml schema we could just drop it too
[21:32:49] but i don't know if that'd break things
[21:32:51] also for consumers that know mime types, but don't care about content models
[21:33:02] * brion hmms
[21:33:11] is it optional? i don't think it is.
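Going back to the aliasing idea above ([21:26:55]): a minimal sketch of a query-info helper that fakes the old rev_user / rev_user_text field names on top of a new actor join, so existing field lists and WHERE clauses keep working during the transition. The actor column and foreign-key names are assumptions for illustration.

    /**
     * Tables, fields and join conditions for $db->select(), depending on
     * which schema mode is active.
     *
     * @param bool $newSchema true once revision rows point at the actor table
     * @return array [ 'tables' => ..., 'fields' => ..., 'joins' => ... ]
     */
    function getRevisionUserQueryInfo( $newSchema ) {
        if ( !$newSchema ) {
            return [
                'tables' => [ 'revision' ],
                'fields' => [ 'rev_id', 'rev_user', 'rev_user_text' ],
                'joins' => [],
            ];
        }
        return [
            'tables' => [ 'revision', 'actor' ],
            'fields' => [
                'rev_id',
                // Alias the new columns to the old names, so callers
                // keep seeing the row shape they expect.
                'rev_user' => 'actor_user',
                'rev_user_text' => 'actor_name',
            ],
            'joins' => [
                'actor' => [ 'JOIN', 'actor_id = rev_actor' ], // hypothetical FK
            ],
        ];
    }

A caller would pass these straight through, e.g. $db->select( $info['tables'], $info['fields'], $conds, __METHOD__, $options, $info['joins'] ); the downside noted above still applies: once the old schema is gone, the aliases are just legacy naming.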
[21:33:48] DanielK_WMDE_: you're right it's not optional in export-0.9.xsd
[21:33:54] basically, in an XML file that contains serialized blobs, it's a good idea to annotate these blobs with a mime type that allows them to be deserialized
[21:33:55] One idea there is to keep content formats in the DB, but only as part of a content_models_and_formats table that maps ID => (model, format). Most models would still only have one row.
[21:34:09] brion, as to the sha field, we don't use it for integrity checks directly, preferring the much quicker byte length for that. but it is a popular field with researchers
[21:34:38] anomie: yea, that can work. or just use the default format provided by the handler class.
[21:34:41] apergos: would it be more useful to have explicit revert/undo-tracking info though?
[21:34:47] i don't think anything uses a non-default serialization format anyway
[21:35:18] If nothing uses different formats, sure. Drop support entirely for anything other than the default format.
[21:35:34] #info considering merging content format and model pairs into a single table reference, since things don't seem to use different formats for a model in practice
[21:35:48] I think content model and format are mandatory on 0.10 as well
[21:35:51] brion: +1 for explicit revert/undo tracking.
[21:36:18] DanielK_WMDE_: On revision or slot level?
[21:36:31] #info explicit tracking of reverts/undos may be preferable to hashes
[21:36:33] brion: what would explicit revert/undo tracking look like? potentially that would be more helpful but I'd need to hear more
[21:36:34] so for revert tracking, i believe RoanKattouw will be working on that sort of thing and we plan to do it in a separate tracking table, so that'll get done on its own schedule
[21:36:37] #info Or drop support for different formats entirely (if nothing uses it by now), always use the default format for a model.
[21:36:47] James_F: revert/undo? Revision. These are edits. Edit = Revision.
[21:37:18] apergos: basically a table that links up the revision of the action, the rev it affected, etc., and whatever other metadata can be placed in
[21:37:35] that might be better than hashes
[21:37:36] that could also be backfilled from logs and sha1s perhaps
[21:37:41] Yeah
[21:37:58] DanielK_WMDE_: Hmm. But if I undo your new-JSON-slot bit of your edit and add some extra "DON'T TOUCH THIS" to the wikitext-documentation-slot bit, is that a revert?
[21:38:11] The use case from my end is "I would like to filter recentchanges so that it doesn't show me edits that were already reverted"
[21:38:28] What the definition of "revert" is and how that changes in an MCR world are good questions
[21:38:51] James_F: no. the page is not the same as before. sure, you undid my change to one slot, but that's not a "revert" of the page.
[21:39:07] Hmm. OK.
[21:39:13] the researcher typically wants to understand why certain classes of reversions happen, or what user groups perform them, or which contributions are more likely to be reverted
[21:39:28] * apergos waves hand and makes sweeping generalizations
[21:39:38] We already have that sort of confusion without MCR. Click undo, but then add a "don't touch this" comment. Or don't click undo, but copy-paste an old revision into the edit box. Or click undo on an old revision, which gets merged with later revisions to give a different hash.
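A minimal sketch of anomie's models-and-formats mapping above ([21:33:55]): a tiny name-table service that hands out an integer ID for a (model, format) pair, inserting a row on first use. The table and column names are made up for illustration; selectField()/insert() are the existing IDatabase wrappers.

    use Wikimedia\Rdbms\IDatabase;

    class ModelFormatIds {
        /** @var IDatabase */
        private $dbw;
        /** @var int[] in-process cache: "model|format" => id */
        private $ids = [];

        public function __construct( IDatabase $dbw ) {
            $this->dbw = $dbw;
        }

        /**
         * @param string $model e.g. "wikitext"
         * @param string $format e.g. "text/x-wiki"
         * @return int ID to store on the content row
         */
        public function acquireId( $model, $format ) {
            $key = "$model|$format";
            if ( isset( $this->ids[$key] ) ) {
                return $this->ids[$key];
            }
            $conds = [ 'mf_model' => $model, 'mf_format' => $format ];
            $id = $this->dbw->selectField( 'content_models_and_formats', 'mf_id', $conds, __METHOD__ );
            if ( $id === false ) {
                // Most models will only ever get one row here.
                $this->dbw->insert( 'content_models_and_formats', $conds, __METHOD__, [ 'IGNORE' ] );
                $id = $this->dbw->selectField( 'content_models_and_formats', 'mf_id', $conds, __METHOD__ );
            }
            return $this->ids[$key] = (int)$id;
        }
    }

If the "drop non-default formats entirely" option wins instead, the table shrinks to a plain model-name registry and the format column disappears.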
[21:39:40] tracking 'undo's that aren't single-rev reverts similarly is tricky :)
[21:39:40] RoanKattouw: In my mind, the definition of "revert" should not change: it makes the *page* the same as it was.
[21:39:43] Right, so they want to know which revisions were reverted and for each reverted revision, which revision reverted them
[21:39:57] for that it's better to have the info about the rev explicitly instead of guessing from the hash (suppose there was an edit war with multiple revisions?)
[21:40:03] anomie: Indeed.
[21:40:15] *multiple reverts
[21:40:24] RoanKattouw: how do non-admins revert these days? undo? bot? manually?
[21:40:32] DanielK_WMDE_: All of those.
[21:40:33] undo I think
[21:40:36] For the most part
[21:40:51] James_F: can explicit tracking work for all of those?
[21:41:03] Only for the first, I'd imagine.
[21:41:06] Multi-rev reverts (or rollbacks) aren't too hard because they just put the page back how it was N revs ago where N>1. There'll still be a hash match
[21:41:08] explicit undo action easy
[21:41:12] manual... hard :D
[21:41:17] but possible perhaps
[21:41:18] Undoing an old rev is a bit more annoying
[21:41:19] yea.
[21:41:23] And something would need to evaluate on-save whether they'd fiddled with the content after pressing undo.
[21:41:23] through complex diff analysis :D
[21:41:27] hashes make detecting manual reverts easy
[21:41:39] Indeed.
[21:41:54] RoanKattouw: well if the person is reverting the other person's revert, you want to see that rather than "it's back to where it was at the start"
[21:41:58] #info hashes make detecting manual reverts easy. is that reason enough to have them? they are big.
[21:41:59] * brion refrains from refactoring per-revision data storage into diff actions ;)
[21:42:04] it's a smidge more info
[21:42:06] Good.
[21:42:08] Yeah good point
[21:42:18] brion: with MCR, we can do both at once \o/
[21:42:22] :D
[21:42:22] Anyway -- for the DB schema it doesn't matter terribly how we handle all these cases
[21:42:42] no, only the decision about keep/toss the hash
[21:42:56] Hashes would be helpful for backfilling past revert info
[21:43:02] But I don't think we'd use it for future reverts
[21:43:05] one thing they do....
[21:43:09] And whether we keep rev_hash, content_hash, or both
[21:43:23] is allow us to detect mw errors once in a blue moon
[21:43:23] it's not just keep/toss. For MCR, we'd have at least two hashes per revision
[21:43:46] apergos: has that ever been useful in practice?
[21:43:48] apergos: Errors in the DB or in the dumps?
[21:43:50] but I don't know if it's worth keeping them around for that
[21:43:51] Making a revision that changes 2 slots and reverting one would bring essentially an entirely new and unique state into being and not a revert.
[21:43:59] Cause for dump purposes, hashes can be recomputed on the fly
[21:44:00] On the other hand, given transclusion, a revert is never true anyway.
[21:44:17] for the dumps, no, we don't check; the length check is enough
[21:44:41] Krinkle: A "cherry-pick revert" (i.e. undo a non-latest revision, but the undo cherry-picks/rebases cleanly and applies) has the same property
[21:44:51] Indeed.
[21:44:58] 15 minute warning!
[21:45:06] change of topic?
[21:45:17] If you want content-based revert (e.g. the entire change a user made was reversed) you'd need transactions. Which will also allow you to insert text at the same time.
[21:45:18] ok, quick check -- anything else major anyone wanted to bring up today?
[21:45:20] brion: What more do you need from us?
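A minimal sketch of the hash-based manual-revert detection mentioned above ([21:41:27]), feeding a future revert-tracking table. The rev_page / rev_sha1 / rev_id columns are the current revision schema; the revert_tracking table and its columns are purely hypothetical, and a real implementation would presumably bound the lookback rather than scan the whole page history.

    use Wikimedia\Rdbms\IDatabase;

    /**
     * If the newly saved revision restores the exact content of an earlier
     * revision of the same page, record it as a (manual) revert.
     *
     * @param IDatabase $dbw
     * @param int $pageId
     * @param int $newRevId
     * @param string $newSha1 base-36 SHA-1 of the new revision's content
     */
    function recordRevertIfAny( IDatabase $dbw, $pageId, $newRevId, $newSha1 ) {
        $revertedToId = $dbw->selectField(
            'revision',
            'rev_id',
            [
                'rev_page' => $pageId,
                'rev_sha1' => $newSha1,
                'rev_id < ' . (int)$newRevId,
            ],
            __METHOD__,
            [ 'ORDER BY' => 'rev_id DESC' ]
        );
        if ( $revertedToId !== false ) {
            $dbw->insert(
                'revert_tracking', // hypothetical table
                [
                    'rt_rev' => $newRevId,          // the revision doing the reverting
                    'rt_restores' => $revertedToId, // the state it restores the page to
                ],
                __METHOD__
            );
        }
    }

The same comparison run over a dump is roughly how past reverts could be backfilled if the hashes are kept around long enough.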
[21:45:24] DanielK_WMDE_: I don't know if the hash has been used; I do know that once in a while we used to have revision corruption though it's been years since I've seen a bug of that kind
[21:45:24] I don't think we can enforce that at the revision level.
[21:46:04] James_F: any remaining major concerns about the schema, the transition model, or thoughts on internal APIs that'll help us convert
[21:46:08] or surprise me ;)
[21:46:16] "decide on slot role being in slots vs being in content (feels cleaner to keep it in slots, and relatively low cost)"
[21:46:26] ah yes!
[21:46:27] Is that decided as slots?
[21:46:44] I'd like to hear more of your thinking on the import changes, brion, but it doesn't have to be today
[21:46:49] For now you can keep the same logic as before: Check revision sha1/length or revision/text_id and if a previous one was re-used (as revert does) you can assume revert. it's a subset of all possible undos/reverts, but covers it well. I imagine new revert buttons might become extant after MCR to 'rollback' a particular slot only, similarly re-using the internal store reference.
[21:47:10] we considered an option to move the role integer from the slots association table over to content, since content items will (?) not get reused on different slots, probably
[21:47:31] this could make queries for specific slots less efficient, but saves a few bytes here and there
[21:47:44] i'm happy keeping it in the slots association table, it's small
[21:48:03] it feels better to have it there, yes
[21:48:05] more logical
[21:48:11] and only a tiny bit wasteful
[21:48:20] Sounds reasonable to me. Using e.g. an image JSON object as another revision's documentation object sounds unlikely.
[21:48:38] apergos: the internals of WikiImporter work with revisions directly, bypassing the Article class and friends, I don't know if we should refactor that
[21:48:40] if we drop the hash, we can add 10 more role IDs :)
[21:48:47] Makes sense
[21:48:58] and what happens if you import data of a content type not supported locally? potential... interesting stuff to handle
[21:49:14] i think there's already code checking for that
[21:49:56] ouch
[21:49:57] brion: about actor... i think it's important that we may want to support more than 2 types of actors.
[21:50:03] apergos: hey what's the state of the art for importing / dump processing tools?
[21:50:21] ...we already support three, really. and we may want more
[21:50:27] thinking in terms of 'what will need to be updated to handle multi content'
[21:50:31] well imesho it's convert to sql and shovel in
[21:50:38] rather than rely on MW import code
[21:50:53] 1) local user 2) IP 3) imported user 4) oauth user 5) maintenance script 6) ...
[21:50:59] DanielK_WMDE_: yeah right now we have basically 'user' and 'not-a-user' which are defined by presence or absence of a user id
[21:51:01] there are a few sets of converter tools, doesn't really matter which you use, they will all have the same basic problems to deal with
[21:51:21] i think we might want to think more about representing 'remote' users
[21:51:27] for imports for instance :)
[21:51:30] and maint & all that
[21:51:45] yea. if we know the import source, that would be easier...
[21:51:55] apergos: cool, i'll have to dig out mwdumper if people still use that
[21:51:58] I still say "oauth user" and "maintenance script" are not the same type of type.
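Back on the slot-role question above ([21:46:16]-[21:48:20]): a minimal sketch of writing one slot of a new revision with the role kept in the slots association table rather than on content. All table and column names here are placeholders for illustration, not the final MCR schema.

    use Wikimedia\Rdbms\IDatabase;

    /**
     * Store one slot of a revision: a content row (reusable blob metadata)
     * plus a slots row binding (revision, role) to that content.
     *
     * @param IDatabase $dbw
     * @param int $revId
     * @param int $roleId   e.g. the ID for "main"
     * @param int $modelId  ID from the content models table
     * @param string $address blob address in storage
     * @param int $size
     * @param string $sha1
     */
    function insertSlot( IDatabase $dbw, $revId, $roleId, $modelId, $address, $size, $sha1 ) {
        $dbw->insert( 'content', [
            'content_model' => $modelId,
            'content_address' => $address,
            'content_size' => $size,
            'content_sha1' => $sha1,
            // Note: no role here; content rows stay role-agnostic, so the
            // same content row could in principle be reused anywhere.
        ], __METHOD__ );

        $dbw->insert( 'slots', [
            'slot_revision' => $revId,
            'slot_role' => $roleId,           // role lives in the association table
            'slot_content' => $dbw->insertId(),
        ], __METHOD__ );
    }

Querying "the main slot of revision X" then filters on the slots table by (slot_revision, slot_role) and joins content, which is the slightly-less-efficient-but-more-logical trade-off noted above.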
[21:52:27] there's some use of that, and there are similar tools (I have a set of little icky c programs for example)
[21:52:30] anomie: can both apply at once?
[21:52:38] Yeah imported users cause some trouble already, some code does not expect rows where rev_user=0 but rev_user_text is not an IP
[21:52:51] #info need to consider more deeply how to represent user types beyond the local user & anon binary (remote imports, maint scripts, oauth, etc). may affect details of actor table
[21:52:52] DanielK_WMDE_: Both "oauth user" and "maintenance script" are probably going to be "local user"
[21:53:05] apergos: that makes me think of ward cunningham's way of "reading" wikipedia
[21:53:21] #info (compat issues already exist with imports that aren't IPs but have user id 0)
[21:53:26] anomie: but with no user id?
[21:53:38] DanielK_WMDE_: which way was that?
[21:53:40] DanielK_WMDE_: Why would an oauth user or a maintenance script not have a user ID?
[21:53:53] i'm inclined to keep actor bare for now, but we can expand it
[21:53:54] They always do currently.
[21:54:07] maint scripts sometimes do not
[21:54:11] or at least used to
[21:54:17] sounds good
[21:54:21] anomie: he has a web interface to a parser generator, which he uses to write a grammar that extracts what he wants, compiles it via C, and runs it over a dump.
[21:54:30] apergos: --^
[21:54:33] heh
[21:54:36] oh my
[21:54:38] the actor table will not be so large, we can alter it later
[21:54:50] *nod*
[21:55:13] It would also be nice not to lose as much semantic/meta data upon export/import.
[21:55:25] But I suppose some of that is inevitable.
[21:55:52] ok, we're almost out of time
[21:56:01] anomie: it's kind of awkward to give them "fake" accounts. making them actors but not users makes more sense in my mind. but it's just a thought. and yea, "change everything at once" is never a good idea.
[21:56:15] but i'm all for "talk about what we may want to change later"
[21:56:19] :D
[21:56:23] 5 minute warning
[21:56:28] 4, actually
[21:56:29] what concerns me more is the dependence on wikidata, when setting up a local copy; at this point in order to render certain projects, setting up a local wikidata copy (with aaaaaalll those revisions) is a requirement, even for some little bitty projects
[21:56:33] but that's very off topic here
[21:57:04] apergos: that's something we should def consider for sustainability but yeah, for later :D
[21:57:07] as long as we can get the same table info out, good enough
[21:57:08] DanielK_WMDE_: OAuth won't be fake accounts, to use OAuth it needs a local account. Maintenance scripts... meh, it probably breaks fewer assumptions to just User::newSystemUser() for your user.
[21:57:16] for the current work, I mean
[21:57:30] i like the idea of bot sub-accounts that work for you though :D
[21:57:52] you like it until they demand minimum wage
[21:57:57] * brion puts his fallout 4 settlers to work on wiki
[21:57:59] apergos: I very much want InstantWikidata. It would work like InstantCommons. No need to set up a copy.
[21:58:12] apergos: it wouldn't be horribly hard to do. but it's on no one's road map right now...
[21:58:18] DanielK_WMDE_: yes but offline readers (though I agree I want that too)
[21:58:40] ok, sounds like we've got a general agreement on a few things, and a few other things to consider more for later!
[21:58:47] apergos: it would have a caching layer. for offline reading, you'd render each page once, and keep the cache
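Relating to the actor discussion above ([21:49:57]-[21:54:38]): a minimal sketch of finding-or-creating an actor row for a revision's old-schema user fields, including the awkward imported case (rev_user = 0 but rev_user_text not an IP). The actor column names are assumptions, and the extra "type" idea from the #info is shown only as a comment, since the table is being kept bare for now.

    use Wikimedia\Rdbms\IDatabase;

    /**
     * Find or create the actor row for a (user ID, user text) pair as found
     * on an old-schema revision row.
     *
     * @param IDatabase $dbw
     * @param int $userId rev_user (0 for anons and imported edits)
     * @param string $userText rev_user_text (IP, local name, or imported name)
     * @return int actor ID
     */
    function acquireActorId( IDatabase $dbw, $userId, $userText ) {
        $conds = $userId
            ? [ 'actor_user' => (int)$userId ]
            : [ 'actor_user' => null, 'actor_name' => $userText ];

        $id = $dbw->selectField( 'actor', 'actor_id', $conds, __METHOD__ );
        if ( $id === false ) {
            $dbw->insert( 'actor', [
                'actor_user' => $userId ? (int)$userId : null,
                'actor_name' => $userText,
                // A later actor_type column could distinguish local users,
                // IPs, imported names, maintenance scripts, etc. (see the
                // #info above); omitted while the table stays bare.
            ], __METHOD__ );
            $id = $dbw->insertId();
        }
        return (int)$id;
    }

Run over the whole revision table by the background updater, this is the shape of the "migrate user/ip actor info" step mentioned earlier in the transition plan.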
[21:59:11] #action migrate off rev hashes towards future revert/undo tracking tables
[21:59:17] #action start thinking about DAO for revision
[21:59:34] #action keep Actor table simple for now but consider expanding it with type info later
[21:59:52] \o
[21:59:55] o/
[21:59:57] \o/
[22:00:04] DanielK_WMDE_: at that point you might as well just use the rendered html from restbase (dumps of which to be coming soon)... anyhoo
[22:00:39] i'm all for it :)
[22:00:45] ok, time is up!
[22:00:47] thanks everyone!
[22:00:50] #endmeeting
[22:00:50] Great developments
[22:00:50] Meeting ended Wed May 10 22:00:50 2017 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
[22:00:50] Minutes: https://tools.wmflabs.org/meetbot/wikimedia-office/2017/wikimedia-office.2017-05-10-21.02.html
[22:00:51] Minutes (text): https://tools.wmflabs.org/meetbot/wikimedia-office/2017/wikimedia-office.2017-05-10-21.02.txt
[22:00:51] Minutes (wiki): https://tools.wmflabs.org/meetbot/wikimedia-office/2017/wikimedia-office.2017-05-10-21.02.wiki
[22:00:51] Log: https://tools.wmflabs.org/meetbot/wikimedia-office/2017/wikimedia-office.2017-05-10-21.02.log.html
[22:00:56] thanks all
[22:01:12] :D
[22:01:45] woohoo
[22:01:47] later all
[22:01:55] yep bedtime soon-ish
[22:01:57] 1 am already
[22:01:59] thanks DanielK_WMDE_ for chairing :)
[22:02:18] i think i'm starting to get the hang of this ;)
[22:03:00] see you in the other channels
[22:04:17] TimStarling: thanks for asking brion about me making life hard for him, btw. it's not my intention, but it can easily end up that way.
[22:04:30] hehe
[22:05:43] someone has to think about project management
[22:06:02] all of you are like "let's do ALL the things"
[22:08:04] how about: let's make a list of all the things we want to do, and then discuss dependencies and priorities...
[22:08:48] anyway. we'll need abstraction to handle the schema modes. finding the right level of abstraction to minimize effort in the short, mid, and long term - that's a real challenge.
[22:10:53] :)
[22:10:57] yep :)