[06:48:22] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[06:52:45] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui) >>! In T267090#6633378, @Marostegui wrote: > So, enwiki transfer from db1124:3311 to clouddb1013 and clouddb1017 finished. And as soon as I started mysql on them,...
[08:05:45] 10DBA, 10Wikimedia-General-or-Unknown, 10Security: Move private wikis to a dedicated cluster - https://phabricator.wikimedia.org/T101915 (10Aklapper) 05Stalled→03Declined Thanks! Boldly declining.
[08:21:16] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10Marostegui)
[08:21:32] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[08:21:34] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10Marostegui)
[08:21:45] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10Marostegui) p:05Triage→03Medium
[08:23:12] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10Marostegui) @Bstorm we should start with this "early" (meaning: before all the hosts are ready, in case we find issues). Next week I will deploy the user, ro...
[09:24:13] 10DBA, 10Operations, 10ops-eqiad: db1139 memory errors on boot (issue continues after board change) 2020-08-27 - https://phabricator.wikimedia.org/T261405 (10jcrespo) Still happening. Log from last night: ` DIMM Initialization Error - Processor 2 Channel 1. The identified memory channel could not be proper...
[09:31:54] 10DBA, 10Operations, 10Orchestrator: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Kormat)
[09:32:33] kormat: we have this: T141968
[09:32:33] T141968: Display lag on grafana (prometheus) and dbtree from pt-heartbeat instead (or in addition) of Seconds_Behind_Master - https://phabricator.wikimedia.org/T141968
[09:32:46] not sure if it is worth merging both
[09:33:49] marostegui: i think they're separate things. the actual sql query will probably be the same, but one is run by orchestrator, the other will be run by some custom metrics exporter
[09:34:20] Ah yeah, I didn't notice it was about orchestrator indeed :)
[09:35:35] marostegui: i wonder if we should be doing this: https://github.com/openark/orchestrator/blob/master/docs/configuration-failure-detection.md#mysql-configuration
[09:36:07] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10LSobanski) Cheeky question, should we go with a "clouddb" user and role to be consistent or would that be too much work to untangle from the current set up?
[09:37:12] kormat: worth testing yeah
[09:38:40] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10Marostegui) That's a good point - up to #cloud-services-team, as they'd need to change their scripts. From our side it doesn't make much difference. But prob...
[09:42:38] 10DBA, 10Operations, 10Orchestrator: Configure mariadb to notice/recover from replication issues quicker - https://phabricator.wikimedia.org/T268320 (10Kormat)
[09:47:08] marostegui: oh, that's very cute. when you provide hook commands to orchestrator, it checks to see if they end with ` &`. if they do, they are run async (and errors are ignored), otherwise they are run sync.
[11:09:05] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui) clouddb1013:3313 and clouddb1017:3313 have been cloned from db1124:3313. - Root passwords changed - Triggers removed on all the wikis - Configuration: ` # for i in...
[11:09:21] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[12:00:48] 10DBA, 10Operations, 10CAS-SSO, 10User-jbond: Request new database for idp.wikimedia.org - https://phabricator.wikimedia.org/T268327 (10jbond) p:05Triage→03Medium
[12:12:43] 10DBA, 10Operations, 10Puppet, 10User-jbond: Request new database for pki.discovery.wmnet - https://phabricator.wikimedia.org/T268329 (10jbond) p:05Triage→03Medium
[12:24:33] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[12:27:33] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[12:31:07] 10DBA, 10Operations, 10Puppet, 10User-jbond: Request new database for pki.discovery.wmnet - https://phabricator.wikimedia.org/T268329 (10LSobanski) @jbond What is your preferred delivery date for this?
[12:31:47] 10DBA, 10Operations, 10CAS-SSO, 10User-jbond: Request new database for idp.wikimedia.org - https://phabricator.wikimedia.org/T268327 (10LSobanski) @jbond What is your preferred delivery date for this?
[12:34:07] 10DBA, 10Operations, 10Puppet, 10User-jbond: Request new database for pki.discovery.wmnet - https://phabricator.wikimedia.org/T268329 (10jbond) >>! In T268329#6636560, @LSobanski wrote: > @jbond What is your preferred delivery date for this? for this one ~1 week would be nice however it wont really becom...
[12:35:05] 10DBA, 10Operations, 10CAS-SSO, 10User-jbond: Request new database for idp.wikimedia.org - https://phabricator.wikimedia.org/T268327 (10jbond) >>! In T268327#6636565, @LSobanski wrote: > @jbond What is your preferred delivery date for this? This is an improvement to a current service so there are no block...
[12:43:06] 10DBA, 10Operations, 10CAS-SSO, 10User-jbond: Request new database for idp.wikimedia.org - https://phabricator.wikimedia.org/T268327 (10LSobanski)
[12:43:26] 10DBA, 10Operations, 10Puppet, 10User-jbond: Request new database for pki.discovery.wmnet - https://phabricator.wikimedia.org/T268329 (10LSobanski)
[13:13:32] 10DBA, 10Operations: Cleanup heartbeat.heartbeat on all production instances - https://phabricator.wikimedia.org/T268336 (10Kormat)
[13:15:46] 10DBA, 10Operations: Cleanup heartbeat.heartbeat on all production instances - https://phabricator.wikimedia.org/T268336 (10jcrespo) Let's backup them just in case- the reason why this was not done before is because the heartbeat table helps track past replication history in case a replication problem (e.g. an...
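For reference, the cleanup being discussed in T268336 amounts to removing heartbeat rows that no active master updates any more, after taking a backup as suggested above. A minimal sketch only, assuming the stock pt-heartbeat schema where `ts` is a UTC timestamp and `server_id` identifies the writer; the 30-day cutoff is an arbitrary illustrative value, not an agreed threshold:

    -- List candidate stale rows first (rows still being written sort last):
    SELECT server_id, ts
    FROM heartbeat.heartbeat
    ORDER BY ts ASC;

    -- Then, on the section's primary master (so the delete replicates down),
    -- remove rows that have not been touched for a long time:
    DELETE FROM heartbeat.heartbeat
    WHERE TIMESTAMPDIFF(DAY, ts, UTC_TIMESTAMP()) > 30;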
[13:33:45] 10DBA, 10Operations: Cleanup heartbeat.heartbeat on all production instances - https://phabricator.wikimedia.org/T268336 (10jcrespo) I have created a dump at: `dbprov1003:/srv/backups/dumps/latest/heartbeat_tables.2020-11-20.tar.gz`, so that is no longer a blocker.
[13:39:56] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Kormat)
[13:39:59] 10DBA, 10Operations: Cleanup heartbeat.heartbeat on all production instances - https://phabricator.wikimedia.org/T268336 (10Kormat)
[13:40:38] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Kormat) Trying to do this in a reasonable fashion doesn't seem possible without {T268336} being done first.
[13:43:33] 10DBA, 10Operations: Cleanup heartbeat.heartbeat on all production instances - https://phabricator.wikimedia.org/T268336 (10Kormat)
[13:48:44] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10jcrespo) You can check both the [[ https://phabricator.wikimedia.org/source/operations-puppet/browse/production/modules/icinga/files/check_mariadb.pl$182 | rep...
[13:54:46] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Kormat) @jcrespo: Orchestrator only supports a single query for all instances. This means we can't supply per-DC/-section/-etc parameters. It also means we can...
[13:56:58] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10jcrespo) > @jcrespo: Orchestrator only supports a single query for all instances. This means we can't supply per-DC/-section/-etc parameters. :'-( Then indee...
[14:01:29] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Kormat) Considerations: - A non-primary master instance may be running heartbeat (e.g. all parsercache nodes, section masters in backup DC) -- Ignoring any hea...
[14:03:06] see my updated comment- I give you a "best effort" option, but pt-heartbeat without parameters will not work for multisource :-(
[14:04:21] we may want to stick to seconds_behind_master, and ignore it as a metric
[14:04:45] jynus: we won't have multisource anymore hopefully
[14:04:50] that's true
[14:05:29] so just trying to help, as I set up the original pt-heartbeat system, but it needed additions for mw particularities
[14:05:35] jynus: that query produces 1 result per row. one of the difficulties is selecting a row
[14:05:47] 1 result per row?
[14:05:48] actually _the_ difficulty is selecting a row
[14:06:02] oh, maybe I am lacking a group by
[14:06:13] you're missing a LIMIT
[14:06:18] ah, that
[14:06:19] sorry
[14:06:23] amending
[14:06:31] but yeah, it is not technically correct
[14:07:10] I edited it, kormat
[14:07:27] jynus: ok. in this form, it will never ever show lag on secondary masters
[14:07:28] it is just a "best effort"
[14:07:32] because they run their own heartbeat
[14:07:34] yeah
[14:07:52] though maybe that's also something we should look at changing
[14:08:01] hmmm
[14:08:14] marostegui: if we made only _primary_ masters run heartbeat, then we get a very simple query again
[14:08:31] so e.g. for pc1 we'd only have it running on pc1007
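To illustrate the point just made: if pt-heartbeat only ran on each section's primary master (and stale rows had been cleaned out), every replica would see a single live heartbeat row, and the check collapses to something like the sketch below. This is only an illustration of the idea, not an agreed-on query; it assumes the standard pt-heartbeat `ts` column written in UTC.

    -- With exactly one live row per instance, "the row" simply is the lag;
    -- the ORDER BY/LIMIT is just belt and braces.
    SELECT TIMESTAMPDIFF(MICROSECOND, ts, UTC_TIMESTAMP(6)) / 1000000.0 AS lag_seconds
    FROM heartbeat.heartbeat
    ORDER BY ts DESC
    LIMIT 1;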
[14:08:43] we're overloading the "is_master" concept
[14:09:05] so there are a few things here, you had me until you mentioned pc1007
[14:09:11] pc2007 is active
[14:09:31] jynus: hmm? eqiad is the active DC
[14:09:40] yes, but writes can happen on codfw too
[14:09:46] it pcs are write-write
[14:09:50] *pcs are
[14:09:58] that's what i'm talking about re: overloading
[14:10:04] we're conflating 'is_master' with 'is_writeable'
[14:10:13] what do you mean by master?
[14:10:15] being a master should be purely about replication
[14:10:22] for me master: the location where you write :-D
[14:10:27] or primary if you prefer
[14:10:36] but we replicate from both, it is circular
[14:10:41] i'm happy to use other terms
[14:10:56] but the point is there's a concept of "here's a node that only has replicas, no masters"
[14:10:58] both are primary and replicate to the other dc, at least in theory
[14:11:01] and "here's a node which accepts writes"
[14:11:14] (ignoring circular replication for a minute)
[14:12:22] but you cannot ignore circular replication- it is a primary-primary topology, another thing is whether mediawiki itself is passive/read only
[14:12:40] a different thing is if you want to change that, and that is ok
[14:12:51] and be fully active-passive
[14:12:53] for circular replication we'd have both nodes set as master-for-replication
[14:12:56] and run heartbeat on both
[14:13:03] which is what we do
[14:13:20] we replicate from eqiad to codfw and from codfw to eqiad, and run heartbeat on both
[14:13:47] in my time here we've only done that just before DC switchovers
[14:14:01] it's not the case the rest of the time
[14:15:09] otherwise there's a ton of maintenance we couldn't do
[14:15:45] so what I described is the documented design: https://phabricator.wikimedia.org/T111266
[14:15:58] it is ok if we want to change it, I am just describing it
[14:17:59] https://phabricator.wikimedia.org/T111266#2322024 in particular
[14:19:56] are you referring specifically to running pt-heartbeat on secondary masters? or something else?
[14:20:25] both
[14:22:00] note that was done in preparation for the active-active work
[14:23:29] so the query I suggested satisfies all your considerations, even if not ideal
[14:23:41] but I was just trying to help with context
[14:24:05] it doesn't satisfy the first consideration
[14:24:24] as above, it would never detect lag on a secondary master
[14:24:53] I see, so you want lag from the immediate master, except itself?
[14:25:15] but then some cannot return any results
[14:25:17] i'm saying that if the secondary master is lagging behind the primary master, we need to be able to see that
[14:25:48] not possible with pt-heartbeat and without parameters
[14:26:07] ok. so you're saying your solution _does not_ satisfy the considerations
[14:26:19] no, I hadn't read the first one properly
[14:26:24] no, it doesn't
[14:26:29] ok.
[14:26:52] but this is a defect in the tool, not the architecture
[14:27:10] I would either go with seconds_behind_master
[14:27:25] or hack the source code to allow some parameters
[14:27:38] actually
[14:27:46] don't we have the local dc in a variable?
[14:27:52] we could maybe use that?
[14:28:18] not really, because we need mw_primary, not local dc
[14:28:37] consideration 3 specifically says we should not use mw_primary.
[14:28:49] yeah, misc_primary
[14:29:03] it was a writing shorthand
[14:29:17] "mw" is shorthand for "misc"? :)
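For contrast, this is roughly what a parameterised check (like the MediaWiki-side one mentioned in this discussion) can do and a single, global orchestrator query cannot: pick out the heartbeat row written by one specific master. The `shard` and `datacenter` columns and the 's1'/'eqiad' values are illustrative assumptions about the WMF heartbeat table layout, not a confirmed schema:

    -- Sketch of a per-section, per-DC lag check; the WHERE values would have to
    -- be substituted per instance, which orchestrator's single query cannot do.
    SELECT TIMESTAMPDIFF(MICROSECOND, ts, UTC_TIMESTAMP(6)) / 1000000.0 AS lag_seconds
    FROM heartbeat.heartbeat
    WHERE shard = 's1' AND datacenter = 'eqiad'
    ORDER BY ts DESC
    LIMIT 1;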
[14:29:46] ah, come on, don't give me such a hard time, I think you understood me :-D
[14:30:01] I am trying to help here
[14:30:08] i actually don't know what you're suggesting
[14:30:33] so we need the primary dc/active master host to ignore it
[14:30:49] on a per-section basis, which we can't do
[14:30:58] we have the local dc in a variable, but not the primary one
[14:31:04] without external parameters
[14:32:11] that is why I send a working query- which is imperfect
[14:32:14] *sent
[14:33:00] ok, if your suggestion is to stop heartbeat on codfw misc, I am ok
[14:33:14] it is the mw part that is more complicated
[14:34:19] as we control how we manage misc, but mw is something I had to discuss with performance
[14:34:34] that should make it work?
[14:34:48] not perfectly, but better
[14:35:23] alternatively, not using pt-heartbeat
[14:35:34] and going to seconds_behind_master
[14:36:14] i don't think that's a realistic option unless we're planning to change mw to use that too
[14:36:44] we should be consistent in how we measure latency
[14:37:08] I agree, so we should hack the orchestrator source code?
[14:37:11] :-)
[14:37:25] i don't think that's realistic either in this case
[14:37:32] the logic would be significantly non-trivial
[14:37:46] i see two reasonable options:
[14:37:55] a) clean up heartbeat table, and keep it clean
[14:37:55] yes, but the alternative is to change the mediawiki architecture and code?
[14:38:06] b) stop running pt-heartbeat on secondary masters
[14:38:09] I have no problems with a)
[14:38:29] neither of which should require mw changes, but we'd check anyway to make sure.
[14:38:47] yes, it does
[14:38:59] mw does the check based on parameters
[14:39:30] i'm sure it does, but right now i don't see how that would be a problem
[14:39:38] and cleaning it up doesn't fix the problem
[14:39:59] you will still have 2 rows- one for codfw and one for eqiad on both dcs
[14:40:15] if we go with a)? sure. that's fine.
[14:40:30] but that doesn't satisfy your conditions
[14:40:36] it does
[14:40:44] last line in https://phabricator.wikimedia.org/T268316#6636717
[14:40:50] no, if master <-> replica breaks, it won't alert
[14:41:03] it will. please read what i wrote in the task.
[14:41:33] how they will show lag, that I don't understand
[14:42:05] can you paste me the query in the last line of the comment? i added it as an edit about 40 mins ago, so maybe you don't see it.
[14:42:32] "Otherwise if master in the secondary DC is lagging, _all_ instances in that DC will show lag, which just adds a lot of noise for no information gain."
[14:42:39] ah. please reload the page.
[14:43:24] wait
[14:43:36] but "If there are no obsolete entries in the heartbeat table, then we can simply do MAX(NOW()-ts) (roughly)." is literally my query
[14:43:49] no
[14:43:54] not literally
[14:43:59] your query finds the latest entry
[14:44:02] that's not what we want in that case
[14:44:05] order by ts
[14:44:05] we want to find the _oldest_ entry
[14:44:10] ahhhhh
[14:44:20] that is the part I didn't understand
[14:44:25] because that's the one that's no longer being updated, and therefore is the real latency
[14:44:38] sorry, that part was unclear
[14:44:41] to me
[14:44:47] (`order by ts asc limit 1` would also work fine)
[14:44:50] yeah
[14:45:11] we can't do it right now because the stale entries would always be the result
[14:45:22] ok, that part wasn't clear to me based on the comment
[14:45:37] now it is
[14:45:41] grand. i'm glad we could clear it up :)
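To make the conclusion above concrete: once the table only contains rows that are genuinely being updated, the oldest row is the one carrying the real lag, so the check can be a single unparameterised query. A sketch only, assuming pt-heartbeat's UTC `ts` column:

    -- MAX(now - ts) is equivalent to taking the row from ORDER BY ts ASC LIMIT 1.
    -- Only valid once stale rows are gone (T268336); otherwise a leftover row
    -- dominates the result forever.
    SELECT MAX(TIMESTAMPDIFF(MICROSECOND, ts, UTC_TIMESTAMP(6))) / 1000000.0 AS lag_seconds
    FROM heartbeat.heartbeat;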
[14:45:55] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10Bstorm) >>! In T268312#6635950, @Marostegui wrote: > @Bstorm we should start with this "early" (meaning: before all the hosts are ready, in case we find issu...
[14:45:57] I thought you wanted to redo the whole architecture
[14:46:02] without considering other options
[14:46:08] like that, for example
[14:46:21] and I am not saying the current arch is perfect
[14:46:32] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10Marostegui) >>! In T268312#6636808, @Bstorm wrote: >>>! In T268312#6635950, @Marostegui wrote: >> @Bstorm we should start with this "early" (meaning: before...
[14:46:36] but it was going to be more work to change how mw checks lag
[14:46:54] the only sane way for mw to check lag would be to do the query based on the active mw_primary dc
[14:47:01] and it was prepared for active-active
[14:47:03] so i would be very surprised if that's not what it does
[14:47:13] (or the _other_ dc in an active-active situation)
[14:47:19] well, the sane way is to play with parameters :-D
[14:47:58] I added complexity because at the time it was unclear if we were going to do write-read or even write-write on eqiad and codfw
[14:48:10] so increased complexity for more flexibility
[14:48:47] also, to clarify, I wasn't against T268336, I just mentioned why it was kept
[14:48:48] T268336: Cleanup heartbeat.heartbeat on all production instances - https://phabricator.wikimedia.org/T268336
[14:48:57] ah hah. just figured something out
[14:49:12] solution b) would work fine without a) so long as we're not doing active-active (or at least circular replication)
[14:49:16] but once I did a backup, there was no need to keep it anymore
[14:49:21] but once you add circular replication, you're back in trouble
[14:49:43] sorry, can we put a) and b) on the ticket, or I will get lost
[14:49:59] just copy and paste, so I don't lose track of your thought
[14:50:06] sure. i'll write another summary comment
[14:50:34] that would make sure I don't miss what you mean (my fault, not yours)
[14:50:42] I think that applies to a bunch of comments in this discussion. I got a bit lost a while ago :)
[14:54:37] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Kormat) Ways to achieve this: 1. Clean up the heartbeat table so that it only contains entries that are supposed to be there ({T268336}) -- With this, we can u...
[14:54:38] https://phabricator.wikimedia.org/T268316#6636831
[14:56:55] so, summarizing my position: 1) I don't see problems with this, 2) I would only do this on misc, as we have more decision power there (vs mw)
[14:57:08] I can put it in a comment too
[14:57:52] the confusion was that I didn't understand at first that you wanted to check the "oldest" ts
[14:58:28] is there a situation in which the ts from the remote dc would be higher than the local one?
[14:58:39] higher as in, higher lag
[14:58:56] that is the expected case
[14:59:02] sorry, lower
[14:59:05] the unexpected case
[14:59:15] not unless heartbeat was stopped locally
[14:59:17] I am thinking of flaws
[14:59:31] or clock drift
[14:59:42] but that should be ok to alert on
[14:59:51] yeah. and the docs would mention to check these cases
[15:00:08] is heartbeat running locally? is the local clock out of sync? etc
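Both caveats just mentioned can be checked from the instance itself; a rough sketch, assuming the standard pt-heartbeat `server_id` column:

    -- Is pt-heartbeat alive locally? On a master running it, its own row
    -- (server_id = @@server_id) should only be a second or two old.
    SELECT server_id,
           TIMESTAMPDIFF(SECOND, ts, UTC_TIMESTAMP(6)) AS age_seconds
    FROM heartbeat.heartbeat
    WHERE server_id = @@server_id;
    -- Clock drift would show up as negative or implausible ages on rows
    -- replicated from other hosts; comparing against NTP status catches it too.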
[15:00:14] so short term, my vote is for 1
[15:00:42] it's a Lot more work, but yeah, me too :)
[15:00:42] long term, I would like to set up a service or something that worked as a proper measure
[15:01:01] to avoid 3 methods of measuring lag
[15:01:25] it feels bad to have that
[15:01:30] with 1) in place, we can convert the other 2 methods to use the same as orchestrator
[15:01:42] or heartbeat in a proxy that fails over with mw
[15:03:45] the main issue for the long term is what mw arch decisions will be taken
[15:04:00] as in, will we go active-active at some point?
[15:06:02] now we need to make sure that no extra heartbeat entries go anywhere, because if they do- the whole dc will alert
[15:07:11] 10DBA, 10Operations: Cleanup heartbeat.heartbeat on all production instances - https://phabricator.wikimedia.org/T268336 (10herron) p:05Triage→03Medium
[15:07:31] 10DBA, 10Operations, 10Orchestrator: Configure mariadb to notice/recover from replication issues quicker - https://phabricator.wikimedia.org/T268320 (10herron) p:05Triage→03Medium
[15:08:21] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10herron) p:05Triage→03Medium
[15:09:58] kormat: so look at the https://phabricator.wikimedia.org/T268336 description by itself and you'll see the reason for my confusion
[15:10:33] without the `MAX(NOW()-ts)` that made no sense
[15:10:46] to me; now, with the context, it makes a lot of sense
[15:11:20] +orchestrator not allowing different queries per server
[15:12:20] 10DBA, 10Operations: Cleanup heartbeat.heartbeat on all production instances - https://phabricator.wikimedia.org/T268336 (10Kormat)
[15:12:24] sorry for my confusion
[15:13:14] i dunno, to me that reads pretty clearly on its own. it doesn't cover the _motivation_ for fixing it, but i think it points out the cruft, and the conceptual problem with it
[15:13:33] (that you can't tell the difference between what should be ignored and what's relevant)
[15:14:10] fair, I was just justifying my comment on the other ticket with "It can be ignored with this query"
[15:15:16] also because, technically, all hosts should have 2 rows, it just doesn't happen normally lately
[15:16:59] you understood my comment about the value of old records?
[15:17:27] they don't have to be live, ofc, just as a log
[15:17:46] after being archived they can be deleted
[17:57:38] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10Bstorm)
[17:58:47] 10DBA, 10Data-Services, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10Bstorm) @Marostegui Random question: where does centralauth live in this setup? We are so far planning on keeping meta_p on s7 for historical reasons (or po...
[18:05:40] ^bstorm: on production it lives on s7
[18:05:52] Thanks!
[18:06:13] jynus: ^
[18:06:36] Hopefully that also means it will be on the replicas?
[18:06:38] you're welcome, he is normally away at this hour
[18:06:52] I figured I'd find out much later :)
[18:06:57] or monday
[18:06:58] bstorm: I can check, but I am guessing yes
[18:07:10] at least I think it is on sanitarium
[18:07:18] I'll proceed as though that is true for scripting purposes for now. I can always fix it later
[18:08:16] it is on sanitarium on s7
[18:08:51] I don't think there is yet a clouddb with s7, but I cannot see a reason why it shouldn't be there
[18:16:39] cool
[19:01:53] marostegui: green light for the pending decom, whenever you want :)
[21:42:38] bstorm: you never saw me, etc, but just fyi there's a cr open that I'd like your +1 on before merging
[21:43:31] https://gerrit.wikimedia.org/r/c/operations/puppet/+/641986
[21:43:42] (not urgent)
[21:43:48] 👀
[21:46:27] +1
[21:59:22] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Deploy labsdbuser and views to new clouddb hosts - https://phabricator.wikimedia.org/T268312 (10bd808) >>! In T268312#6637396, @Bstorm wrote: > @Marostegui Random question: where does centralauth live in this setup? We are so far...
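A quick way to verify the guess above once an s7 replica is at hand (sanitarium today, a clouddb s7 instance later) is to check that the centralauth schema, and the `_p` view counterpart used on the client-facing wiki replicas, are present. The database names here follow the existing replica naming convention and should be treated as assumptions:

    -- Run against the instance that carries s7:
    SHOW DATABASES LIKE 'centralauth%';
    -- Expected (roughly): centralauth on the sanitarium layer, plus
    -- centralauth_p (the sanitized views) on the client-facing replica hosts.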