[05:10:41] 10DBA, 10Operations: Predictive failures on disk S.M.A.R.T. status - https://phabricator.wikimedia.org/T208323 (10Marostegui) [05:11:09] 10DBA, 10Operations: Predictive failures on disk S.M.A.R.T. status - https://phabricator.wikimedia.org/T208323 (10Marostegui) root@db2070:~# hpssacli controller all show config Smart Array P420i in Slot 0 (Embedded) (sn: 0014380337FADD0) Port Name: 1I Port Name: 2I Gen8 ServBP 12+2 at Port 1I,... [05:59:05] 10DBA, 10Data-Services, 10Operations, 10cloud-services-team: Prepare and check storage layer for hywwiki - https://phabricator.wikimedia.org/T212625 (10Marostegui) This wiki is triggering some false positives on our labs private data checking methods, even if it is correctly sanitized (T212625#5062038) it... [06:36:44] 10DBA, 10Wikimedia-Site-requests: Global rename of Foundling → Expósito: supervision needed - https://phabricator.wikimedia.org/T219693 (10Marostegui) p:05Triage→03Normal When do you want to do this? [06:51:08] 10DBA, 10Wikimedia-Site-requests: Global rename of Foundling → Expósito: supervision needed - https://phabricator.wikimedia.org/T219693 (101997kB) If you have time we can do it right now. [06:53:12] 10DBA, 10Wikimedia-Site-requests: Global rename of Foundling → Expósito: supervision needed - https://phabricator.wikimedia.org/T219693 (10Marostegui) go for it!
[07:00:52] 10DBA, 10Wikimedia-Site-requests: Global rename of Foundling → Expósito: supervision needed - https://phabricator.wikimedia.org/T219693 (101997kB) https://meta.wikimedia.org/wiki/Special:GlobalRenameProgress/Exp%C3%B3sito [07:01:19] 10DBA, 10Wikimedia-Site-requests: Global rename of Foundling → Expósito: supervision needed - https://phabricator.wikimedia.org/T219693 (10Marostegui) Thanks for the URL [07:01:32] 10Blocked-on-schema-change, 10MediaWiki-Database, 10MW-1.32-notes (WMF-deploy-2018-07-17 (1.32.0-wmf.13)), 10Schema-change: Add index log_type_action - https://phabricator.wikimedia.org/T51199 (10Marostegui) [07:01:38] 10DBA, 10Data-Services: Discrepancies with logging table on different wikis - https://phabricator.wikimedia.org/T71127 (10Marostegui) [07:01:57] 10DBA, 10Data-Services: Discrepancies with logging table on different wikis - https://phabricator.wikimedia.org/T71127 (10Marostegui) 05Open→03Resolved All done! One less drift between production and tables.sql [07:02:05] 10DBA, 10Schema-change, 10Tracking: [DO NOT USE] Schema changes for Wikimedia wikis (tracking) [superseded by #Blocked-on-schema-change] - https://phabricator.wikimedia.org/T51188 (10Marostegui) [07:02:08] 10Blocked-on-schema-change, 10MediaWiki-Database, 10MW-1.32-notes (WMF-deploy-2018-07-17 (1.32.0-wmf.13)), 10Schema-change: Add index log_type_action - https://phabricator.wikimedia.org/T51199 (10Marostegui) 05Open→03Resolved All done! One less drift between production and tables.sql [07:13:59] 10Blocked-on-schema-change, 10Growth-Team, 10Notifications, 10Patch-For-Review, 10Schema-change: Add index on event_page_id - https://phabricator.wikimedia.org/T143961 (10Marostegui) a:03Marostegui [07:14:48] 10DBA, 10Wikimedia-Site-requests: Global rename of Foundling → Expósito: supervision needed - https://phabricator.wikimedia.org/T219693 (10Marostegui) All done, right? 
[07:15:49] 10DBA, 10Wikimedia-Site-requests: Global rename of Foundling → Expósito: supervision needed - https://phabricator.wikimedia.org/T219693 (101997kB) yeah successfully renamed. Thanks! [07:16:38] 10DBA, 10Wikimedia-Site-requests: Global rename of Foundling → Expósito: supervision needed - https://phabricator.wikimedia.org/T219693 (101997kB) 05Open→03Resolved a:031997kB [07:16:46] 10Blocked-on-schema-change, 10Growth-Team, 10Notifications, 10Patch-For-Review, 10Schema-change: Add index on event_page_id - https://phabricator.wikimedia.org/T143961 (10Marostegui) x1 hosts [] dbstore2002 [] dbstore1005 [] dbstore1001 [] db2096 [] db2069 [] db2034 [] db2033 [] db1120 [] db1064 [] db1069 [07:18:48] 10Blocked-on-schema-change, 10Growth-Team, 10Notifications, 10Patch-For-Review, 10Schema-change: Add index on event_page_id - https://phabricator.wikimedia.org/T143961 (10Marostegui) x1 is running ROW based, as this is only changing the indexes and not touching, it should not affect replication - I will... [07:22:29] jynus: o/ dbstore1001 is almost full on /srv; I saw there are 3T in snapshots/ongoing [07:22:38] I didn't want to delete anything just in case you were testing stuff [07:22:49] 3T? [07:23:01] nothing should go to dbstore about snapshots [07:23:09] Almost 4T actually :) [07:24:35] for some reason, snapshots are still being sent there [07:24:40] ah, dbstore1001 [07:24:50] not dbstore2001 [07:25:05] Yeah, 1001 is what I said [07:26:51] [22:40:03]: ERROR - The compression process failed [07:27:03] probably due to lack of space [07:27:32] and then the other transfer failed, too [07:27:42] ERROR - Problem while trying to find the backup files at /srv/backups/snapshots/ongoing [07:29:53] Ah right [07:33:34] did that affect replication?
[07:34:20] nope [07:34:26] (that I saw) [07:35:42] the backups of x1 on codfw seem to be going well [07:35:54] but we will need a decision about the solid state disk usage [07:39:39] yeah [07:39:47] I was thinking about that during the weekend [07:39:59] Do we have a list of sections that (for now) would fit on the SSDs? [07:40:55] almost all would fit, but only for some time, and never if combined with other types of backups or other snapshots [07:41:20] dumps should fit all even if combined, because they are stored already compressed [07:41:45] yeah, probably for now we should try not to combine dumps+snapshots just for cleanliness I think [07:41:48] s1 is 1.1TB + 400GB compressed, barely fitting on 1.6TB [07:42:21] sure, that is easy, but then you either have to send snapshots to the spinning disks [07:42:37] or something else [07:43:50] you mean to be able to do more than 1 snapshot at a time? [07:44:00] no, to even do only 1 [07:44:23] for example, I don't think s1 will fit for a long time [07:44:31] s8 will likely never fit [07:44:39] and neither will m2 [07:45:02] for others it is just a question of growth/time [07:47:02] I think for now I would just leave them for the dumps [07:47:11] this will complicate things more, but maybe we can do some sort of mechanism to allow/deny some hosts to use that directory or something? [07:47:46] I don't think that is ok, sure it will work for some time [07:48:01] but then data will grow, and things will fail [07:48:16] and this creates one every day, we cannot micromanage them [07:48:26] have you got any idea? [07:48:34] I think for now I would just leave them for the dumps [07:48:56] them == the 2 solid state disks [07:48:57] yeah, I mean long term [07:49:06] the same :-) [07:49:13] or purchase larger disks [07:49:17] haha [07:49:33] are those 2.5"?
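The per-section fit question being discussed can be sketched in shell. The sizes are the ones quoted in the chat (s1: ~1.1TB snapshot plus ~400GB compressed copy, against a 1.6TB SSD); the helper itself is illustrative, not the actual backup tooling:

```shell
# Minimal sketch of the "does one section's backup fit on one SSD" check
# discussed above. Sizes in GB come from the conversation (s1: ~1100 GB
# raw snapshot + ~400 GB compressed copy); 1600 GB is the 1.6TB SSD
# mentioned. The function name and layout are illustrative only.
section_fits() {
  # $1: raw snapshot GB, $2: compressed copy GB, $3: disk capacity GB
  [ $(( $1 + $2 )) -le "$3" ]
}

if section_fits 1100 400 1600; then
  echo "s1 fits (barely)"
else
  echo "s1 does not fit"
fi
```

With larger sections (s8, or a grown s1) the same check fails, which matches the conclusion that the snapshots should go to the spinning disks for now.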
[07:49:41] if so we can re-use those on other DBs anyways (as spares) [07:50:19] we would need (1.5+0.5) * 4 TB of SSDs on 2 disks [07:50:52] or half of it if we do one copy at a time, but I think that may take longer than 24 hours [07:51:21] 1.5+0.5? [07:51:23] actually, (1.5+0.5) * 4 TB + 200GB for the dumps, if we do them in the same location [07:51:27] I don't get that [07:51:42] 1.5TB for a backup + 500GB for the compressed backup [07:51:47] ah ok :) [07:52:13] so 2 1.6TB SSD disks would work [07:52:13] no? [07:52:44] I don't think so, more like 1.9TB [07:53:00] and that is being on the short side [07:53:19] the good thing is that those 800GB SSD ones can be re-used on our core hosts [07:53:50] actually yes [07:54:00] 1.36TB [07:54:10] anything on top of that [07:54:40] do we have more slots available on the chassis? [07:54:44] no [07:54:45] I don't think so [07:55:00] Yeah, then we can replace those 800GB, get them to spare for the core hosts and buy bigger ones when needed [07:55:20] we should wait, however [07:55:33] as needs may change with incrementals and binary logs [07:55:40] sure, not saying we have to do it now, but it is good that we can re-use the 800GB ones [07:55:43] or as we see what our best workflow is [07:56:10] for now, if you don't disagree, I would like to send them to the spinning disks, and start creating more at the same time [07:56:43] note the network for the sources is 1Gb [07:56:53] so that is already a big bottleneck [07:57:09] sure! let's do that [07:57:13] ssds, however, will be happy with the many files on the dumps [07:58:20] 10Blocked-on-schema-change, 10Growth-Team, 10Notifications, 10Patch-For-Review, 10Schema-change: Add index on event_page_id on echo_event table - https://phabricator.wikimedia.org/T143961 (10Marostegui) [08:21:57] 10Blocked-on-schema-change, 10Growth-Team, 10Notifications, 10Patch-For-Review, 10Schema-change: Add index on event_page_id on echo_event table - https://phabricator.wikimedia.org/T143961 (10Marostegui) >>!
In T143961#5072975, @Marostegui wrote: > x1 is running ROW based, as this is only changing the ind... [10:28:26] marostegui: I might give it another try today. Tell me when you're around to drop the database and do it again from scratch if possible [10:28:41] (I can do another wiki first to check) [10:28:52] Amir1: I will ping you after 3pm or so, that ok? [10:28:59] sure [10:29:38] Amir1: will ping you to drop the DB then [10:29:39] thanks [10:29:57] <3 [10:49:58] Amir1: So dropping a wiki isn't that easy, as it implies dropping it on s3, x1, es etc... [10:50:22] Amir1: Can you try to run the process for the same wiki and see what you get? (without me dropping the database) [10:50:37] yeah, I know, that's my plan for now [10:51:57] cool, let's stick to that for now [10:52:07] marostegui: just to double check, we have the wiki on s3 and es but not on x1? [10:52:10] without dropping or creating a new wiki [10:52:15] Amir1: let me recheck [10:52:27] (hywwiki, two ww) [10:52:44] Amir1: correct, hywwiki exists on s3, and on es but not on x1 [10:55:37] marostegui: how can I check if it's on x1 or not? [10:56:00] (so when running the script, I won't bother you again) [10:58:31] you can try to connect to an x1 DB and do: use hywwiki; show tables; [11:02:46] Amir1: db1064 for instance [11:04:26] thanks! [11:09:25] marostegui: btw. We are deploying UrlShortener this week, its table on x1 is going to grow quickly (at least at first) [11:09:27] Is it fine? [11:13:22] what, table on x1? [11:24:23] jynus: it's already there. It's already deployed (it's read-only now) [11:24:32] that is bad [11:24:33] let me get the name [11:24:35] that is really bad [11:25:33] wikishared database [11:25:49] table urlshortcodes [11:25:58] can you disable the extension?
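The "is this wiki on x1" check described a few messages earlier (connect to an x1 replica and try `use hywwiki; show tables;`) can be sketched as a small shell helper. The `mysql` invocation in the comment is illustrative (db1064 is the x1 replica named in the chat); the function itself just filters a database listing from stdin, so the logic runs standalone:

```shell
# Sketch of the check discussed above: does a wiki database exist on a
# section host? In production the listing would come from something like
#   mysql -h db1064 -BN -e 'SHOW DATABASES'
# (db1064 being the x1 replica mentioned in the chat); here the names are
# read from stdin so the logic is testable on its own.
wiki_on_section() {
  # $1: wiki db name; stdin: one database name per line
  if grep -qxF "$1"; then echo present; else echo absent; fi
}

printf 'enwiki\nwikishared\n' | wiki_on_section hywwiki   # prints "absent"
```

`grep -qxF` matches the whole line literally, so a wiki like `hywiki` does not false-positive against `hywwiki`.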
[11:26:07] it's already disabled [11:26:12] ok [11:26:31] send a task [11:26:48] jynus: https://phabricator.wikimedia.org/T108557 [11:27:31] no, that is not a dba task, and neither manuel nor me are reviewers [11:28:53] your irc comment is the first time I noticed that was going to be done [11:29:57] I sent an email to ops around two weeks ago [11:31:10] jynus: I just made one: https://phabricator.wikimedia.org/T219777 [11:31:12] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Ladsgroup) [11:38:00] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10jcrespo) Do you know if the table is public (could be 100% published as is with all its columns)? [11:39:16] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Ladsgroup) >>! In T219777#5073721, @jcrespo wrote: > Do you know if the table is public (could be 100% published as is with all its columns)? The content should be public unless usc_deleted is... [11:52:51] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10jcrespo) I will talk to @Marostegui to see what is the best way to proceed. I will meanwhile review the queries and structure. If you have more info on the predicted usage of the table (it is o... [12:37:09] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Ladsgroup) First, sorry for not informing you directly, the table has already been in production for years now. I thought it went through DBA review. >>! In T219777#5073761, @jcrespo wrote: > Number o... [12:51:15] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10jcrespo) Would making the table fully private be ok for now?
We can review later and change it, but it would speed up the process for now (filtering per rows may involve creating custom views, e... [12:56:32] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Ladsgroup) >>! In T219777#5073892, @jcrespo wrote: > Would making the table fully private be ok for now? We can review later and change it, but it would speed up the process for now (filtering p... [13:01:11] marostegui: I found out why the echo cluster doesn't exist, it's in the main database now. Some config issue. Can we drop echo tables from the hywwiki in s3? [13:03:16] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10jcrespo) Just to be clear, aside from a sanity review, there are 3 things we make sure of: * wikireplicas filtering is in place * backups are produced so we can recover the data in case they ge... [13:08:21] 10DBA, 10Performance-Team, 10TechCom, 10TechCom-RFC (TechCom-Approved): RFC: Re-establish the development policies - https://phabricator.wikimedia.org/T190379 (10mark) There has been some concern from our DBAs that the archiving of the old policy will make it even harder for developers to find out about what da... [13:10:40] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10jcrespo) Unrelated, but FYI:{F28547460} (missing schema change on beta) [13:12:31] 10DBA, 10Performance-Team, 10TechCom, 10TechCom-RFC (TechCom-Approved): RFC: Re-establish the development policies - https://phabricator.wikimedia.org/T190379 (10jcrespo) I commented some options at https://www.mediawiki.org/w/index.php?title=Topic:Uvtmdzoq11pbeunp&topic_showPostId=ux1cwnyqc09tnuwx#flow-po... [13:13:05] @marostegui @jynus hi. I've updated https://gerrit.wikimedia.org/r/c/mediawiki/extensions/Wikibase/+/499142 according to the recent discussion + comments that were on the patch..
I'd very much appreciate it if you could have a final look (and maybe a +1 from your side if that's something you'd usually do to patches on DB level :) [13:14:00] alaa_wmde: can you add me as reviewer and/or send me a mention on the phab ticket [13:14:11] otherwise I may forget 0:-) [13:14:25] yes right away! [13:14:59] or a mention on the gerrit, so if we don't answer, you can demonstrate that :-D [13:15:23] we have like too much ongoing stuff to keep track of everything on irc [13:15:33] 10DBA, 10Multi-Content-Revisions, 10SDC General, 10Wikidata, and 3 others: MCR schema migration stage 0: create tables - https://phabricator.wikimedia.org/T183486 (10CCicalese_WMF) [13:15:35] 10DBA, 10Multi-Content-Revisions, 10Multimedia, 10SDC Engineering, and 6 others: Deploy MCR storage layer - https://phabricator.wikimedia.org/T174044 (10CCicalese_WMF) [13:16:08] oh, I remember what that is, I have a general concern about the plans [13:16:16] done :) yes I had only marostegui as reviewer before .. now you are as well! [13:16:23] but didn't have time to mention it [13:16:59] there is a lot of stuff in flight right now (MCR migration, actor migration) and that has increased the overall temporary size of all databases [13:17:40] I was starting to worry about having one more that may double the size of another large table for some time [13:18:20] nothing against the process, more like about timing [13:19:08] technically doesn't affect the new schema, just when the copy happens [13:21:08] and of course only on very large wikis, don't know if you follow what I am trying to say, alaa_wmde [13:21:42] is it more or less known when a good time would be to go ahead with the same plan? [13:21:42] I'm preparing some phab ticket to share with you our migration plan so far (https://phabricator.wikimedia.org/T219145). it is still conceptual. [13:22:10] I was asking the same question, if you had any idea of when you planned to do the copy jobs?
[13:22:13] :-D [13:22:17] yeap I guess I'm still following enough :) just want to understand eventually how we should plan it differently according to the situation [13:22:27] :D [13:22:44] then I can go to the MCR and actor people and ask what is their planning and coordinate a bit [13:22:54] basically we have two stages. one quite small and one is big and might really be problematic with the situation you just mentioned [13:24:38] this is not a hard blocker; one thing that would make it easier is if you have any numbers on new schema + old schema size [13:24:43] the first stage will only copy property-related terms.. those are quite small and probably have a much slower growth rate.. we want to begin with them to also check our migration work end-to-end before we begin with the monstrous part, aka item terms [13:24:55] makes sense [13:25:04] definitely only a soft blocker for the items part [13:25:39] yeap I can get some numbers (we might even have them already) .. will check that and append to migration plan ticket and mention you there [13:25:44] maybe by that time the other changes will be done [13:26:09] let me ask the other people to see what is their status/schedule [13:26:17] 👍 [13:26:19] thanks [13:26:27] sorry about creating burdens [13:26:45] but MCR and actor made the db grow (we know temporarily) by 50% [13:27:20] maybe a bit less [13:27:48] so even if it is only temporary we need to be prepared in hw, not only in live dbs, but in backups, etc. [13:29:14] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Ladsgroup) >>! In T219777#5073906, @jcrespo wrote: > That is why we want to be part of the "review" process, if that makes sense, not to be a burden I totally understand it, I thought it went t...
[13:31:34] 10DBA, 10MediaWiki-extensions-UrlShortener: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Marostegui) +1 to filter the table for now so we don't have to worry about any possible issues. We don't replicate x1 to labs for now anyways, so maybe putting a note to review this table linki... [13:32:34] @jynus all good that's why we wanted to make sure we make our plans as visible and clear here as possible, even before we start working on the details if possible [13:33:15] Amir1: no need to apologize, I was just like surprised this was going to go (or technically already went) on x1, and I didn't know about it [13:34:08] It went there around four years ago. So :/ [13:34:42] lol [13:35:34] technically it is my fault for not reading all emails, but please never assume we have read everything [13:36:09] do like alaa_wmde and ping if in doubt, we are short on people and large on backlog :-P [13:38:49] alaa_wmde: yeah, not many concerns with the schema from my side, more on what jynus already mentioned [13:39:50] @marostegui that's great then we can already merge that change once it gets enough +1s .. thanks so much. re migration plan, I'll add/mention you in that ticket and attach as much info on sizes as possible [13:40:03] *merge that patch [13:40:24] alaa_wmde: definitely coordinate with us for that, yes, please :) [13:40:38] alaa_wmde: I will give it another look before the +1 [13:44:51] I am going away for lunch [13:45:00] enjoy! [13:45:54] 10DBA, 10Operations, 10ops-eqiad: db1078 s3 primary DB master BBU pre-failure - https://phabricator.wikimedia.org/T219115 (10Marostegui) Any update from HP? [14:38:35] jynus: fyi, I'm upgrading db1114 to latest buster [15:18:24] 10DBA, 10Data-Services, 10Operations, 10cloud-services-team: Prepare and check storage layer for hywwiki - https://phabricator.wikimedia.org/T212625 (10Marostegui) >>!
In T212625#5072845, @Marostegui wrote: > This wiki is triggering some false positives on our labs private data checking methods, even if it... [15:24:37] 10DBA, 10Data-Services, 10Operations, 10cloud-services-team: Prepare and check storage layer for hywwiki - https://phabricator.wikimedia.org/T212625 (10Marostegui) #cloud-services-team this is ready for the views creation. I have added the usual GRANT so the views can be created for this wiki: Please rem... [15:25:59] 10DBA, 10Data-Services, 10Operations, 10cloud-services-team (Kanban): Prepare and check storage layer for hywwiki - https://phabricator.wikimedia.org/T212625 (10bd808) [15:26:46] 10DBA, 10Data-Services, 10Operations, 10cloud-services-team (Kanban): Prepare and check storage layer for hywwiki - https://phabricator.wikimedia.org/T212625 (10Marostegui) a:05Marostegui→03None [15:41:22] jynus: should we put line 210 of the SRE etherpad on the backups section? [15:45:03] let me see [15:46:15] it probably changed to 212, if it is that one, not part of the goal [15:46:42] ah yeah, hehe it changed [15:46:54] yep, that disk shelf is the one I meant we might want to get moved to the backups section [15:47:08] it is part of database backups, but not in the scope of the goal [15:47:22] but in theory should be bought this Q [15:47:24] ah I thought it was part of the HW procurement key point [15:48:01] it could be, but it is more bacula [15:48:10] sure, np [15:51:45] could you mention that specifically [15:52:11] as he no longer has an excuse (kinda) [15:58:57] yeah [15:59:08] i will mention it [16:20:08] 10DBA, 10MediaWiki-extensions-UrlShortener, 10Patch-For-Review: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Ladsgroup) It's not added here: https://gerrit.wikimedia.org/r/c/mediawiki/extensions/UrlShortener/+/500480 [16:24:54] 10DBA, 10MediaWiki-extensions-UrlShortener, 10Patch-For-Review: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Ladsgroup) @jcrespo
@Marostegui: What is the character limit for the blob that you find acceptable? [16:43:30] 10DBA, 10Operations, 10ops-eqiad: db1078 s3 primary DB master BBU pre-failure - https://phabricator.wikimedia.org/T219115 (10Marostegui) Thanks for the update @Cmjohnson! Are the HP hosts able to have the BBU changed with no disruption, or should we plan a failover for this host? Thanks! [16:57:15] 10DBA, 10MediaWiki-extensions-UrlShortener, 10Patch-For-Review: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10jcrespo) So the issue with the length is for ~4GB, reading https://stackoverflow.com/questions/417142/what-is-the-maximum-length-of-a-url-in-different-browsers 8000 bytes... [17:14:14] 10DBA, 10MediaWiki-extensions-UrlShortener, 10Patch-For-Review: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10jcrespo) I guess x1 will have to do for now, but let's reevaluate if this ends up being a huge bottleneck in terms of reads and writes. Other than that I do not have furth... [17:45:27] 10DBA, 10MediaWiki-extensions-UrlShortener, 10Patch-For-Review: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Marostegui) Any objection to merge: https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/500470/ then? [17:52:43] 10DBA, 10MediaWiki-extensions-UrlShortener, 10Patch-For-Review: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Marostegui) +1 for Innodb compression! Good idea! [18:29:32] 10DBA, 10MediaWiki-extensions-UrlShortener, 10Patch-For-Review: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Ladsgroup) >>! In T219777#5075124, @jcrespo wrote: > I guess x1 will have to do for now, but let's reevaluate if this ends up being a huge bottleneck in terms of reads and...
[18:31:35] 10DBA, 10MediaWiki-extensions-UrlShortener, 10Patch-For-Review: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Marostegui) It is not compressed (I only checked the master): ` root@db1069:~# mysql wikishared -e "show create table urlshortcodes\G" *************************** 1. row... [18:32:57] 10DBA, 10MediaWiki-extensions-UrlShortener, 10Patch-For-Review: DBA review of UrlShortener - https://phabricator.wikimedia.org/T219777 (10Ladsgroup) I'm okay with its being compressed. [18:54:40] 10DBA, 10Analytics: Proposal: Make centralauth db replicate to all the analytics dbstores - https://phabricator.wikimedia.org/T219827 (10Bawolff) [18:58:50] 10DBA, 10Analytics: Proposal: Make centralauth db replicate to all the analytics dbstores - https://phabricator.wikimedia.org/T219827 (10Bawolff) > (e.g. Checking what percentage of enwiki admins have 2FA enabled). Another use case I recently encountered, is I wanted to see skin statistics for active users on... [19:08:06] 10DBA, 10Analytics: Proposal: Make centralauth db replicate to all the analytics dbstores - https://phabricator.wikimedia.org/T219827 (10Marostegui) That's not possible cause it would require setting up multi-source replication on all the sections that are not s7 (where centralauth database lives). We are not... [20:43:18] 10DBA, 10Analytics: Proposal: Make centralauth db replicate to all the analytics dbstores - https://phabricator.wikimedia.org/T219827 (10Milimetric) @Bawolff I ran into the same issue, and sadly as Manuel pointed out we can't do that going forward. Our team's proposal is to replicate mysql to Hadoop via a sol...
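The compression check near the end (inspecting `SHOW CREATE TABLE` output for `urlshortcodes` and finding it not compressed) can be sketched as follows. The sample DDL below is a hypothetical stand-in, since the real output is truncated in the log; in production the text would come from the mysql command quoted above:

```shell
# Sketch of the check above: decide from SHOW CREATE TABLE output whether
# a table uses InnoDB compression (ROW_FORMAT=COMPRESSED). The sample DDL
# is hypothetical; the real text would come from e.g.
#   mysql wikishared -e "show create table urlshortcodes\G"
is_compressed() {
  # stdin: SHOW CREATE TABLE output
  if grep -q 'ROW_FORMAT=COMPRESSED'; then echo compressed; else echo "not compressed"; fi
}

ddl='CREATE TABLE urlshortcodes (usc_id int) ENGINE=InnoDB DEFAULT CHARSET=binary'
printf '%s\n' "$ddl" | is_compressed   # prints "not compressed"
```

This matches the finding in the log: the table options carry no `ROW_FORMAT=COMPRESSED` clause, hence the follow-up agreement to compress it.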