[21:00:25] #startmeeting RFC meeting
[21:00:25] Meeting started Wed Oct 7 21:00:25 2015 UTC and is due to finish in 60 minutes. The chair is TimStarling. Information about MeetBot at http://wiki.debian.org/MeetBot.
[21:00:25] Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
[21:00:25] The meeting name has been set to 'rfc_meeting'
[21:00:50] #topic Overhaul Interwiki map, unify with Sites and WikiMap | Wikimedia meetings channel | Please note: Channel is logged and publicly posted (DO NOT REMOVE THIS NOTE) | Logs: http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-office/
[21:01:13] o/
[21:02:07] * DanielK_WMDE wibbles
[21:02:39] so, anyone here to talk about overhauling the Interwiki class?
[21:02:46] yes
[21:02:51] #link https://phabricator.wikimedia.org/T113034
[21:02:55] excellent :)
[21:03:11] thanks Marybelle, you beat me to it!
[21:03:24] o/
[21:03:35] I wonder if configs from other (local, of course) wikis, aka SiteConfiguration::getConfig(), are in scope
[21:03:36] ?
[21:03:42] I restructured my proposal a bit a few hours ago, and split it into parts to discuss:
[21:04:11] 1) The refactor 2) introduction of a file-based backend 3) performance bits
[21:04:57] SMalyshev: possibly... what do you want to get the config for? what bits of the config do you actually need?
[21:05:26] DanielK_WMDE: I needed it for search (i.e. searching Russian wiki from English wiki) but there may be other use cases
[21:06:03] yea... that touches on wgConf, which is a whole different can of worms. I don't want to go into that too deeply today.
[21:06:05] I like the separation of WikiMap from $wgConf
[21:06:07] DanielK_WMDE: it's ok if it's not in scope, it's a complicated matter
[21:06:19] er, that we are separating the two
[21:06:24] DanielK_WMDE: ok, fair enough
[21:06:35] but the whole purpose of the Sites class is "expose information about other sites/wikis so we can use it locally"
[21:07:02] What information specifically? API endpoint URL? Localized site name? Other things?
[21:07:18] DanielK_WMDE: would that information include wiki classification - i.e. project (wikipedia, wikinews, sources, etc.)? It's not easy to get this info now
[21:07:36] legoktm: at the moment, we are not really separating them. But WikiMap no longer relies on wgConf to be there. It can now also work based on Sites.
[21:08:15] Marybelle: API endpoint, article URL, database name, content language, that sort of stuff
[21:08:43] SMalyshev: yes. "Allow sites to be a member of multiple groups (e.g. "wikipedia" and "english" for enwiki)."
[21:09:13] The group concept is pretty important, i think. We'd need to extend it beyond what SiteStore currently supports (one group per site)
[21:09:23] So each site would have a series of configurable properties, then? family = 'wikipedia'; lang = 'en';
[21:09:52] DanielK_WMDE: cool, that would be very useful. Also, a ctor class that allows instantiating a Site or a list of Sites by a variety of things (ID, family/language, interwiki prefix)
[21:09:54] Marybelle: yes. multiple groups, global id, interwiki prefix, etc
[21:10:15] s/ctor class/factory class/ I guess :)
[21:10:21] SMalyshev: that's what SiteLookup should become, yes
[21:11:02] legoktm: I could actually imagine having a SiteLookup implementation based on wgConf. Could be useful
[21:11:14] #info SiteLookup implementation based on wgConf could be useful
[21:11:15] DanielK_WMDE: cool. right now we're doing some ridiculous things to map between interwiki prefix and internal wiki name.
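
As an illustration of the factory idea floated above, a SiteLookup extended along these lines might end up looking roughly like the sketch below. The method names are hypothetical and do not reflect the existing interface in includes/site.

    <?php
    // Hypothetical sketch only: these method names are illustrative and are not
    // the current SiteLookup interface in includes/site.
    interface SiteLookupSketch {
        /** Site identified by its global ID (on the WMF cluster, the database name). */
        public function getSite( $globalId );

        /** Site a local interwiki prefix points to (prefixes are per-wiki, not global). */
        public function getSiteByInterwikiPrefix( $prefix );

        /** Site matching all given groups, e.g. [ 'wiktionary', 'ja' ] for jawiktionary. */
        public function getSiteByGroups( array $groups );

        /** All sites in a group, e.g. every 'wikipedia' wiki or every 'english' wiki. */
        public function getSitesInGroup( $group );
    }
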
[21:11:39] that should be simple and straightforward (both directions)
[21:11:50] Marybelle: SiteStore actually already has that. it's just not easy to get anything in there right now.
[21:12:40] DanielK_WMDE: that would just end up being what we already have in WikiMap then?
[21:13:20] DanielK_WMDE: also, it would be nice to 1) have a script that creates a new wiki from family/language (provided all extensions etc. are there) and 2) have the SiteLookup system automatically pick up the new site once it is created by whatever means
[21:13:20] legoktm: it would probably overlap with WikiMap so much that WikiMap would only be needed for B/C, yes.
[21:13:25] is that possible?
[21:13:28] legoktm: actually, that is already the case.
[21:13:32] I'm just reading the RfC now (late to the party, I know). Re: > This proposal assumes that reading (and caching) local files is faster than loading from a database server (or memcached). -- yes!
[21:14:22] SMalyshev: like - define new wiki using JSON, run magic script, add JSON description to Sites? Sounds good :)
[21:14:37] #info DanielK_WMDE: also, it would be nice to 1) have a script that creates a new wiki from family/language (provided all extensions etc. are there) and 2) have the SiteLookup system automatically pick up the new site once it is created by whatever means
[21:14:53] ori: is that true even if the local file is json? (faster than memcached)
[21:15:02] yes
[21:15:03] DanielK_WMDE: yes, basically. So I can write code based on the Site system and be sure it always works
[21:15:04] #info Re: > This proposal assumes that reading (and caching) local files is faster than loading from a database server (or memcached). -- yes!
[21:15:08] \o/
[21:15:28] DanielK_WMDE: also, should we have some link between Title and Site?
[21:15:37] ori: do you think it is a problem that with JSON, we have to read the whole shebang, instead of just bits with CDB or SQL?
[21:16:05] A .php file would be better than JSON. Reading it in one go is not an issue.
[21:16:09] SMalyshev: link between Title and Site?... Not beyond defining interwiki prefixes in Site, I think
[21:16:25] when it comes to caching something small like this, i have a heavy preference for APC
[21:16:27] ori: that's what i thought. thanks
[21:16:34] #info A .php file would be better than JSON. Reading it in one go is not an issue.
[21:16:43] if it were many megabytes it could go elsewhere, but for what is likely less than 1MB of data the json should be loaded and cached in APC
[21:17:09] ebernhardson: a PHP file would be cached in the accelerator cache automatically. no need to do that manually.
[21:17:15] DanielK_WMDE: well, I guess if we have a factory that would allow getting a Site by interwiki prefix, it's fine.
[21:17:16] ori: assuming the blobs aren't huge
[21:17:23] I'd imagine they'd be smallish though
[21:17:29] DanielK_WMDE: i guess i'm confused where json fits in then? i was worried about marshalling/unmarshalling and such
[21:17:31] SMalyshev: SiteLookup already supports that
[21:17:33] probably APC worthy ;)
[21:17:45] oh i missed ori's comment )
[21:17:54] ori: do you remember what size we measured for the Sites stuff coming from memcached? 300k or something?
[21:17:59] ebernhardson: APC has some issues (especially in HHVM) based on eviction. APC does not evict old items when full like memcached does; instead it rejects new entries. For HHVM, the APC storage size is currently unbounded and can OOM the whole FCGI process
[21:18:15] maybe sites was 300k compressed
[21:18:23] ebernhardson: the proposal includes the option to generate PHP files from JSON, and reading them instead of the JSON, for performance
[21:18:24] not a show stopper but contrary to intuition for many who are used to memcached
[21:18:37] TimStarling: sounds about right
[21:18:43] DanielK_WMDE: one of the issues with the current implementation is the proliferation of abstractions in includes/site -- most of the code there is not good. Would you be removing some / a lot / all of it as part of this work?
[21:18:44] DanielK_WMDE: so interwiki prefix and global ID are the same thing? It wasn't clear. But OK.
[21:18:48] bd808: interesting, i didn't know that
[21:19:10] interwiki prefixes are not global
[21:19:43] 300kb sounds right, but how much of that was actual data and how much was bloat added by serialization of instances of a dizzyingly-complicated and customized class hierarchy, I don't know
[21:19:46] ori: anomie cut it loose from the abomination that is ORMTable yesterday. That was already a very good step in that direction.
[21:20:13] for example en: may go to english wikipedia or english wiktionary depending on where you are
[21:20:32] ori: I think things can be simplified without losing much flexibility, yes.
[21:20:50] TimStarling: yes, that's why the sites stuff is per-wiki.
[21:21:05] #info DanielK_WMDE: one of the issues with the current implementation is the proliferation of abstractions in includes/site -- most of the code there is not good.
[21:21:14] TimStarling: yes, so there's our problem - when we try to do multi-wiki searches, we need some IDs to put in titles
[21:21:37] domain name?
[21:21:46] TimStarling: for the file-based approach, my idea was to read two or three files: a global one, a family one, and a local one, each overriding and extending the info in the earlier ones.
[21:21:47] TimStarling: which I guess are interwiki IDs. But to get configs etc. we need internal wiki IDs. and the link between those is not working well now
[21:22:00] that way, we can maintain global defaults and local (or family) overrides
[21:22:27] you do multi-wiki searches across the whole of WMF?
[21:22:56] SMalyshev: wikidata uses the database names as internal IDs. Well, Site has its own idea of a "global ID", but on the cluster, this is always the database name.
[21:22:57] TimStarling: not all but eventually multiple wikis, yes. That's what we're trying to get working, at least
[21:23:07] there isn't actually any interwiki prefix going diagonally across the matrix, e.g. from fr.wikipedia.org to ja.wiktionary.org
[21:23:20] we use a redirect hack to allow such links with double prefixes
[21:23:25] DanielK_WMDE: so the idea is that we need to have some unified interface to be able to convert between all those IDs
[21:23:27] SMalyshev: SiteLookup already provides a mapping between internal ID and domain name. but only one way, at the moment.
[21:23:58] SMalyshev: like this? https://phabricator.wikimedia.org/T114772
[21:24:11] bd808: hhvm apc is a lot better than it was afaik
[21:24:31] #link https://phabricator.wikimedia.org/T114772
[21:24:50] TimStarling: so what do you do if you are on fr.wikipedia.org and you need to construct a link/Title going to ja.wiktionary.org?
[21:24:56] AaronSchulz: We were having OOM problems until we put a global expiration on items recently.
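
The layered-file idea mentioned above ("a global one, a family one, and a local one, each overriding and extending the info in the earlier ones") could be as simple as the following sketch. The file names, locations, and merge strategy are assumptions for illustration, not part of the proposal.

    <?php
    // Sketch: later layers override and extend earlier ones. The paths and the
    // use of $wgDBname (the local wiki's database name) are assumptions only.
    $layers = [
        '/srv/mediawiki/sites/global.json',             // defaults shared by all wikis
        '/srv/mediawiki/sites/wikipedia.json',          // overrides for the family
        '/srv/mediawiki/sites/' . $wgDBname . '.json',  // overrides for this wiki
    ];

    $siteInfo = [];
    foreach ( $layers as $file ) {
        if ( is_readable( $file ) ) {
            $layer = json_decode( file_get_contents( $file ), true );
            $siteInfo = array_replace_recursive( $siteInfo, $layer ?: [] );
        }
    }
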
[21:25:11] DanielK_WMDE: am I right to assume that none of the changes under discussion would affect the information exposed by the siteinfo endpoint?
[21:25:14] because hhvm doesn't have segment size limits for it like Zend does
[21:25:34] if we use php files we shouldn't need apc
[21:25:35] SMalyshev: you can do wikt:ja: or ja:wikt:
[21:25:38] it will get cached as bytecode
[21:25:57] that's what we do with wikiversions nowadays
[21:25:58] yeah, raw php arrays in files could work well
[21:26:06] DanielK_WMDE: maybe, I'll need to study it for a bit to see if it covers all use cases but it looks like a good direction
[21:26:07] gwicke: siteinfo is only about the local wiki, right? so, right, it wouldn't have any impact. we could expose some extra info there (like the local wiki's global id)
[21:26:33] #info yeah, raw php arrays in files could work well
[21:26:43] DanielK_WMDE: yes, it's per-wiki; sitematrix provides the grouping & project listing
[21:26:53] those are chained interwiki prefixes; to find those automatically, you would have to search the interwiki map for wikis with links to the wiki in question
[21:26:54] #info gwicke: siteinfo is only about the local wiki, right? so, right, it wouldn't have any impact. we could expose some extra info there (like the local wiki's global id)
[21:26:58] like a kind of graph search
[21:27:08] gwicke: sitematrix should be overhauled to work with the new system, i think
[21:27:10] bd808: it would also depend on whether it would be 1-3 blobs or a bunch of high-TTL dynamically generated ones. I wouldn't be comfortable with the latter in APC.
[21:27:15] #info sitematrix should be overhauled to work with the new system
[21:27:25] TimStarling: ok, so we'd need some convenient way to generate those and link them to Site APIs so it'd be easy to say "get me configs from Japanese Wiktionary, search it and then generate proper links to whatever is found"
[21:27:51] or, more predictably, we could expose the underlying logic of lateral and language links
[21:27:53] AaronSchulz: i expect 3 blobs
[21:28:04] #info Parsoid relies on siteinfo and sitematrix to get projects and per-project interwiki data, so important to make sure that it still gets the info it needs.
[21:28:33] dumpInterwiki.php has this concept of lateral links (to other sites in the same language)
[21:28:55] DanielK_WMDE: per wiki or globally?
[21:29:21] AaronSchulz: one blob per wiki, one blob shared per wiki family, and one blob shared by all.
[21:29:27] so if you wanted a predictable, simple algorithm for getting to ja.wiktionary you would probably want to first follow the appropriate lateral link, then follow the language link
[21:29:31] TimStarling: basically, the person configuring the search should be able to say "if the request comes in Russian, search Russian Wikipedia and Russian Wiktionary".
[21:29:50] AaronSchulz: maybe another blob shared per lateral group (all english wikis)
[21:30:15] TimStarling: currently it involves configuring both interwiki prefixes and db names, but it looks like duplication of effort and inconvenience
[21:30:20] SMalyshev: so you want language+family -> wiki-id?
[21:30:33] DanielK_WMDE: that would be useful, yes
[21:30:49] #info SMalyshev wants a mapping language+family -> wiki-id
[21:30:54] DanielK_WMDE: and wiki-id => whatever is needed to generate a Title for that wiki
[21:30:54] SMalyshev: (low priority) and for bonus points, also search the Russian-language pages on multilingual wikis meta-wiki and mw.org.
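
To make "raw php arrays in files" concrete: one of the generated per-wiki blobs discussed above could simply be a PHP file that returns an array, so that require-ing it is served straight from the opcode cache with no unserialization step. The field names below are invented for the example, not a spec.

    <?php
    // Illustrative only: the field names and structure are assumptions.
    // Being plain PHP, a require of this file is served from the opcode cache.
    return [
        'enwiki' => [
            'groups'   => [ 'wikipedia', 'english' ],
            'language' => 'en',
            'pagePath' => 'https://en.wikipedia.org/wiki/$1',
            'apiPath'  => 'https://en.wikipedia.org/w/api.php',
        ],
        // ... one entry per site
    ];

Loading it would then be just $sites = require 'sites.php'; with no APC bookkeeping needed.
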
[21:31:19] SMalyshev: from the wiki-id you get everything - api path, database name, etc.
[21:31:38] FWIW, Fred Emott of Facebook experimented with using PHP files with arrays for the localization cache, and found that it was substantially faster than using CDB files. The reason we couldn't adopt it is that language data was stored separately for each branch, which meant every new branch increased HHVM's memory utilization by 1 GB. We won't have this issue with the site group.
[21:31:41] spagewmf: yes, good point, mediawiki.org and wikidata.org should be somewhere in the picture too
[21:31:41] So, we have been talking about use cases, and about backend performance
[21:31:49] what are your thoughts about the refactoring steps?
[21:32:17] DanielK_WMDE: will the information returned indicate if it's a multilingual wiki?
[21:32:20] are there objections to making Interwiki a wrapper around a configurable singleton service object?
[21:32:38] are there objections to providing an implementation of Interwiki based on Sites?
[21:32:46] and should that become the default?
[21:33:00] you mean replacing dumpInterwiki.php?
[21:33:10] spagewmf: that would be very useful, yes. though MediaWiki currently has no notion of multilingual wikis.
[21:33:15] SMalyshev: like loadDBFromSite() in MWMultiVersion?
[21:33:36] I thought we were talking about having a CDB backend which would use the files generated by dumpInterwiki.php
[21:33:43] although that doesn't meet SMalyshev's goals
[21:33:56] TimStarling: that's not really what we are discussing, no...
[21:34:04] DanielK_WMDE: I imagine yes, but if I could just do something like Title::makeTitle($string, $site) that would be even better ;)
[21:34:05] though the idea is similar
[21:34:12] and i'm not opposed to a CDB backend
[21:34:44] SMalyshev: re mapping domain to the wiki id, that info is actually already in sitematrix; parsoid uses that to support retrieval by domain
[21:34:46] I have no objection to singleton service objects
[21:34:49] though even that won't let you get new mwmultiversion instances that way, just an internal helper that gets the dbname
[21:34:50] let's say "a local file backend" instead of CDB, because I am opposed to CDB but not to using a local file, and we shouldn't get mired in that
[21:34:56] #info TimStarling wants a CDB backend which would use the files generated by dumpInterwiki.php
[21:35:04] ehh
[21:35:07] AaronSchulz: that may be helpful too
[21:35:07] TimStarling: are you actually advocating for CDB?
[21:35:16] that's not what I heard
[21:35:18] TimStarling: dumpInterwiki.php misses a lot of info that we want/need for the Sites interface, e.g. API paths
[21:35:28] I suspect TimStarling and DanielK_WMDE are talking past each other a little
[21:35:35] heh, the only arbitrary new instance method is newFromDBName() ;)
[21:35:56] #info TimStarling would love someone to rewrite dumpInterwiki.php, embracing its gritty complexity and bringing it kicking and screaming into the 2010s
[21:36:11] TimStarling: sorry, maybe i'm too quick with the #info, correct me if i'm...
[21:36:13] yea, like that :)
[21:36:45] so, that would be fine, and also just using it as is would be fine
[21:36:50] TimStarling: "Make a maintenance script that generates a PHP file with site definitions from a list of JSON (and PHP) files."
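
In essence, the maintenance script quoted above could boil down to something like this sketch (only a single input file is shown; per the quote, the real script would take a list of JSON and PHP sources and merge them). File names are illustrative only.

    <?php
    // Bare-bones sketch: decode a JSON site-definition file and write it back
    // out as a var_export()-ed PHP array that can be require'd at runtime.
    $sites = json_decode( file_get_contents( 'sites.json' ), true );
    file_put_contents(
        'sites.php',
        "<?php\nreturn " . var_export( $sites, true ) . ";\n"
    );
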
[21:37:02] that's the bit of my proposal that comes closest to dumpInterwiki.php
[21:37:05] the only thing that is not fine is trying to rewrite it and failing to maintain b/c and breaking all the links and titles
[21:37:43] #info don't break all the links and titles
[21:37:46] ;)
[21:37:58] gwicke: the question is a) is sitematrix always up to date with the actual wikis deployed and b) are there any good APIs (i.e. not asking the DB directly but high-level) to query it?
[21:38:29] TimStarling: of course. that's one of the reasons I want the current code as one backend implementation. If the migration goes wrong, we just switch back to that backend, and use exactly what we use now.
[21:38:42] is it just historical that WikimediaMaintenance/WMFSite.php exists as well as includes/site/Site.php?
[21:38:42] the reason it is a script rather than plain data is to make it easier to change
[21:39:07] e.g. you can add languages without having to add 10 different links scattered around a file
[21:39:20] SMalyshev: yes, there is an api for it
[21:39:21] I think we should have a task for rewriting dumpInterwiki.php, if that's a goal.
[21:39:25] spagewmf: WikimediaMaintenance is not intended for use inside extensions. Site is.
[21:40:12] TimStarling: what does it mean to "create a language"?
[21:40:15] gwicke: which one? Maybe we should try using that one...
[21:40:21] SMalyshev: re up-to-date-ness, I'm not aware of the information exposed by sitematrix differing from the config; my assumption is that it's just providing an API for the config
[21:40:55] gwicke: that would be cool but I didn't find how to get that directly from the config (which may be just because I'm a newbie)
[21:41:46] let me get you a url
[21:41:49] I mean adding a wiki to the system which is in a language which was not previously known to exist
[21:41:55] waiting for my shell to come back to life..
[21:42:00] Marybelle, TimStarling: Am I correct that dumpInterwiki.php's main task is to put what is in the interwiki table into a CDB file?
[21:42:17] SMalyshev: https://en.wikipedia.org/w/api.php?action=sitematrix&format=json
[21:42:57] spagewmf: WMFSite.php is an interwiki-record'ish thing used by dumpInterwiki.php; Site is a completely different system that is mostly used today by Wikibase (AFAIK)
[21:43:02] gwicke: Ah, I meant the internal API, not api.php, but that's fine; I can look at what this one is doing and see if it's useful for me, thanks
[21:43:03] no, dumpInterwiki.php generates CDB ab initio
[21:43:13] the interwiki table is not used
[21:43:57] gwicke: one thing I don't see there is the interwiki prefix, which is needed to create a Title...
[21:43:57] TimStarling: ah, right. In the case of a JSON-based setup, you'd just add the section for that wiki to the global JSON file. And a section giving an interlanguage shorthand to the wiki's family JSON.
[21:44:15] TimStarling: ah, thanks for clearing that up!
[21:44:16] dumpInterwiki.php would be better named generateInterwikiCDB.php
[21:44:31] yeah, blame domas
[21:44:35] :)
[21:44:40] he wasn't very good at english at the time ;)
[21:44:45] funny, ha ha
[21:44:51] hi domas!
[21:44:55] hey Tim
[21:45:13] there's a chance that script did something else once upon a time?!!?!?
[21:45:14] tbh, i didn't think about generateInterwikiCDB.php when writing the proposal. to make it work with The New System (tm), it would probably need to be made to emit JSON instead of CDB. Shouldn't be too hard.
[21:45:20] SMalyshev: the link is actually an interwiki, which differs depending on the target wiki
[21:45:43] you can get the set of interwikis per wiki from siteinfo
[21:45:44] #info dumpInterwiki.php would probably need to be made to emit JSON instead of CDB
[21:46:33] so, no objections to porting the old Interwiki stuff to use Sites as its backend?
[21:46:55] SMalyshev, this is the info that parsoid gets from the API: https://github.com/wikimedia/parsoid/blob/master/lib/baseconfig/enwiki.json
[21:47:06] how about going for a JSON (+PHP file) system, and not using the database at all (at least by default, and on the WMF cluster)?
[21:47:18] SMalyshev: interwikimap starting at https://github.com/wikimedia/parsoid/blob/master/lib/baseconfig/enwiki.json#L1332
[21:47:25] the history is: originally interwiki.sql was a big SQL file checked in to CVS with all the interwiki links in it
[21:48:13] gwicke: on my local install, the sitematrix API does not produce any interwikimap
[21:48:15] eventually I wrote rebuildInterwiki.php to generate the SQL file from arrays and langlist etc.
[21:48:16] TimStarling: my idea is similar, actually. one of my goals was to allow us to maintain interwiki info by making patches in gerrit.
[21:48:24] SMalyshev: as TimStarling says, interwiki prefixes are only unique per project
[21:48:42] then domas copied my whole file, changed a few things to make it generate CDB instead of SQL, and called it dumpInterwiki.php
[21:48:42] SMalyshev: that output is from siteinfo
[21:48:54] then eventually rebuildInterwiki.php was deleted
[21:49:03] https://en.wikipedia.org/w/api.php?action=query&meta=siteinfo&siprop=interwikimap
[21:49:23] TimStarling: I suppose much of what dumpInterwiki does now would be replaced by combining the info from three or four JSON files on the fly.
[21:49:47] gwicke: well, there must be some way to produce a Title from a string and wiki ID, right? so I thought it's an interwiki link, since that's what newFromText accepts
[21:49:55] hehe, domas left :P
[21:50:05] scratch that, makeTitle() accepts
[21:50:22] DanielK_WMDE: I guess so
[21:51:16] #todo clarify relationship of Sites-based proposal to dumpInterwiki.php
[21:51:22] gwicke: and also some way to return titles in api.php search results
[21:51:45] DanielK_WMDE: I think a pure config backend is a good idea. Much like the current interwiki CDB backend that we use for the WMF cluster
[21:52:20] bd808: what do you think of replacing CDB with JSON resp. generated PHP files?
[21:52:22] the interwiki and wiki id business is tricky; in APIs we have mostly moved to domains in order to avoid that issue, but it's certainly hard to avoid when you are linking
[21:52:49] gwicke: i wrote this yesterday: https://phabricator.wikimedia.org/T114772
[21:52:58] gwicke: right. so I wonder if we can't use this opportunity to un-tricky them a bit :)
[21:53:10] gwicke: i proposed that for Wikibase, but perhaps it should be in core
[21:53:22] DanielK_WMDE: it *should* work and if it doesn't scale it would be easy enough to generate cdb instead of php from some maintenance script
[21:53:54] DanielK_WMDE: +1 for making domains more prominent
[21:54:03] DanielK_WMDE: I think making interwiki better is way beyond Wikibase. Search will definitely need it as soon as we try to search more than one wiki
[21:54:09] and we do want to do that
[21:54:16] but really this data set should be small(ish) right and the ops code cache should hold it just fine for the whole WMF prod farm
[21:54:26] s/ops/op/
[21:54:54] bd808: yes, that was the idea
[21:55:38] * gwicke really needs to get food, bbiab
[21:55:47] ok, 5 minutes to go, let's wrap up
[21:55:49] Unifying the two systems (interwiki and Site) would be nice for mw-vagrant. My attempts to fully set up wikidata there have hit this problem of needing both before
[21:56:22] bd808: ...and the JSON bit would get rid of the need to fiddle setup into the database.
[21:56:30] please write only summaries and action items
[21:57:14] #action daniel to update the rfc based on today's notes
[21:57:29] #action daniel to clarify relationship of Sites-based proposal to dumpInterwiki.php
[21:58:22] #action daniel to shanghai some poor soul into implementing all the nice things we discussed here today
[21:58:26] it sounds like there's plenty of uncontroversial work to do
[21:58:35] \ó
[21:58:45] cool :) thanks everyone!
[21:59:06] don't hesitate to add more comments on phabricator!
[21:59:33] next week...
[21:59:52] it was https://phabricator.wikimedia.org/T114443 right?
[22:00:10] DanielK_WMDE: thanks for taking this on
[22:00:18] TimStarling: yes, i think so
[22:00:23] and for soliciting feedback
[22:00:58] ori: let's hope i also manage to actually get this done ;)
[22:00:59] ok, next week we have "EventBus MVP" which is about kafka and related fun things
[22:01:25] #endmeeting
[22:01:25] Meeting ended Wed Oct 7 22:01:25 2015 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
[22:01:25] Minutes: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-07-21.00.html
[22:01:26] Minutes (text): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-07-21.00.txt
[22:01:26] Minutes (wiki): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-07-21.00.wiki
[22:01:26] Log: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-07-21.00.log.html
[22:09:11] DanielK_WMDE (if you're still up), what does "resp." mean in "replacing CDB with JSON resp. generated PHP files"?
[22:10:01] DanielK_WMDE: http://ell.stackexchange.com/questions/6491/what-does-resp-mean-in-these-sentences is inconclusive :)
[22:10:30] spagewmf: "respectively". the idea is to use JSON files by default, but allow PHP files to be generated from the JSON, so they can be read in more quickly.
[22:11:46] spagewmf: so we would use JSON or optionally PHP files instead of CDB.
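
For what it's worth, the read path implied by that last answer could look roughly like the sketch below: prefer the generated PHP file when it is at least as fresh as the JSON, otherwise fall back to decoding the JSON. All names here are assumptions for illustration, not part of the RFC.

    <?php
    // Sketch: use the generated PHP file when it is at least as new as the JSON
    // source, otherwise fall back to decoding the JSON. Names are assumptions.
    function loadSiteDefinitions( $jsonFile, $phpFile ) {
        if ( is_readable( $phpFile ) && filemtime( $phpFile ) >= filemtime( $jsonFile ) ) {
            return require $phpFile;
        }
        $data = json_decode( file_get_contents( $jsonFile ), true );
        return $data ?: [];
    }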