[03:08:44] 10DBA, 10Performance-Team, 10conftool: #dbctl: manage 'externalLoads' data - https://phabricator.wikimedia.org/T229686 (10aaron) >>! In T229686#5454294, @CDanis wrote: > In db-eqiad/codfw.php we currently provide IP addresses as the values of the keys in `externalLoads` instead of using `hostsByName` to tran...
[05:05:05] 10DBA: Change PK and remove partitions from the logging table - https://phabricator.wikimedia.org/T233625 (10Marostegui)
[05:15:15] 10DBA: Remove ar_comment from sanitarium triggers - https://phabricator.wikimedia.org/T234704 (10Marostegui)
[05:16:55] 10Blocked-on-schema-change, 10DBA, 10Core Platform Team: Schema change for refactored actor and comment storage - https://phabricator.wikimedia.org/T233135 (10Marostegui) s2 eqiad progress [] labsdb1012 [] labsdb1011 [] labsdb1010 [] labsdb1009 [] dbstore1004 [] db1129 [] db1125 [] db1122 [] db1105 [] db110...
[05:16:58] 10DBA: Remove ar_comment from sanitarium triggers - https://phabricator.wikimedia.org/T234704 (10Marostegui) s2 eqiad progress [] labsdb1012 [] labsdb1011 [] labsdb1010 [] labsdb1009 [] dbstore1004 [] db1129 [] db1125 [] db1122 [] db1105 [] db1103 [] db1095 [] db1090 [] db1076 [] db1074
[05:17:11] 10Blocked-on-schema-change, 10DBA: Schema change to rename user_newtalk indexes - https://phabricator.wikimedia.org/T234066 (10Marostegui) s2 eqiad progress [] labsdb1012 [] labsdb1011 [] labsdb1010 [] labsdb1009 [] dbstore1004 [] db1129 [] db1125 [] db1122 [] db1105 [] db1103 [] db1095 [] db1090 [] db1076 []...
[05:56:59] 10Blocked-on-schema-change, 10DBA, 10Core Platform Team: Schema change for refactored actor and comment storage - https://phabricator.wikimedia.org/T233135 (10Marostegui)
[05:57:54] 10Blocked-on-schema-change, 10DBA: Schema change to rename user_newtalk indexes - https://phabricator.wikimedia.org/T234066 (10Marostegui)
[07:14:19] 10DBA: Recompress special slaves across eqiad and codfw - https://phabricator.wikimedia.org/T235599 (10Marostegui)
[07:14:33] 10DBA: Recompress special slaves across eqiad and codfw - https://phabricator.wikimedia.org/T235599 (10Marostegui) p:05Triage→03Normal a:03Marostegui
[10:21:28] 10DBA: Remove ar_comment from sanitarium triggers - https://phabricator.wikimedia.org/T234704 (10Marostegui)
[10:51:57] 10DBA, 10Operations, 10serviceops, 10Goal, 10Patch-For-Review: Strengthen backup infrastructure and support - https://phabricator.wikimedia.org/T229209 (10jcrespo) Reminder: ` # TODO The IPv6 IP should be converted into a DNS AAAA resolve once we # enabled the DNS record on the director `
[12:18:31] 10DBA: Change PK and remove partitions from the logging table - https://phabricator.wikimedia.org/T233625 (10Marostegui)
[12:18:43] 10DBA: Change PK and remove partitions from the logging table - https://phabricator.wikimedia.org/T233625 (10Marostegui)
[12:19:12] 10DBA: Change PK and remove partitions from the logging table - https://phabricator.wikimedia.org/T233625 (10Marostegui) 05Open→03Resolved All partitions removed from the `logging` table of all the special slaves.
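The TODO quoted at 10:51 concerns the firewall hole each bacula-fd opens for the director (the "iptables hole" picked up again at 14:25 below): the director's IPv6 address is currently hard-coded and should become a DNS AAAA lookup. A minimal sketch of the shape such a rule takes with the ferm::service define used in operations/puppet; the resource title, the srange and the backup1001.eqiad.wmnet name are assumptions here, only the AAAA-resolve idea comes from the quoted TODO.

    # Hedged sketch, not the actual rule in operations/puppet: allow the new
    # director to reach bacula-fd (TCP 9102) on each client, resolving both
    # A and AAAA records instead of hard-coding the IPv6 address.
    ferm::service { 'bacula-fd-from-director':
        proto  => 'tcp',
        port   => '9102',
        srange => '(@resolve((backup1001.eqiad.wmnet)) @resolve((backup1001.eqiad.wmnet), AAAA))',
    }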
[12:19:14] 10Blocked-on-schema-change, 10DBA, 10Core Platform Team: Schema change for refactored actor and comment storage - https://phabricator.wikimedia.org/T233135 (10Marostegui)
[12:19:16] 10DBA, 10Core Platform Team, 10MW-1.34-notes (1.34.0-wmf.24; 2019-09-24), 10Performance Issue, 10mariadb-optimizer-bug: Review special replica partitioning of certain tables by `xx_user` - https://phabricator.wikimedia.org/T223151 (10Marostegui)
[12:24:08] 10DBA, 10Core Platform Team, 10MW-1.34-notes (1.34.0-wmf.24; 2019-09-24), 10Performance Issue, 10mariadb-optimizer-bug: Review special replica partitioning of certain tables by `xx_user` - https://phabricator.wikimedia.org/T223151 (10Marostegui) So we've got rid of all the partitions on `logging` at T23...
[12:57:12] transfer.py --type=file <3
[13:29:53] 10DBA, 10Operations, 10serviceops, 10Goal, 10Patch-For-Review: Strengthen backup infrastructure and support - https://phabricator.wikimedia.org/T229209 (10akosiaris) Sorry I missed that, thanks for pinging me on T234900. >>! In T229209#5565968, @jcrespo wrote: > @akosiaris We have reached an impass. We...
[13:30:55] jynus: so, for https://phabricator.wikimedia.org/T229209#5565968. Let's do both. We can run puppet with the new permissions and fix whatever shows up.
[13:31:09] I'll have a look at the TODO you mentioned. Should be easy enough (TM)
[13:58:18] that was for me
[13:58:32] I would prefer if you had a look at things like +1 https://gerrit.wikimedia.org/r/c/operations/puppet/+/541523
[13:59:34] bacula seems to be running file, but I will check the logs more deeply
[13:59:38] *fine
[14:00:51] just to be clear I am not asking you not to do certain things, I am happy to get help; it's just that there are certain ones I am not confident to do on my own (mostly the migration tasks)
[14:02:24] there is a scheduled backup in one hour on cobalt, that will confirm the deploy is ok
[14:02:38] akosiaris: do you have time now or later to discuss next steps?
[14:02:48] btw, tomorrow the PDU maintenance is on helium's rack, just fyi
[14:03:14] marostegui: thanks for the heads up, I wouldn't have thought to check
[14:03:37] This is the task: https://phabricator.wikimedia.org/T227133
[14:03:43] there is helium and helium-array
[14:04:00] elukey: BTW, that should have unblocked your deploys^
[14:05:00] jynus: I have time now
[14:05:18] not sure if this is on topic here or if it's better I pm you
[14:05:46] I'd say let's keep it public so that others may spot errors in our way of thinking
[14:05:51] ok
[14:05:54] thanks for the ping!
[14:06:00] but if that's not the best channel, we can switch to another one
[14:06:14] so I believe you want to have 2 directors
[14:06:17] we could also flesh it out quickly in an etherpad
[14:06:30] I have the working doc
[14:06:44] then I can paste a summary on the ticket
[14:06:56] ok, so. 2 directors... I don't think that's really feasible
[14:07:00] I would like to have only 1 because I don't think it works well
[14:07:19] ok agreed then ;-)
[14:07:50] so what I wanted is to see what is missing to do the migration, then keep helium around as it will be useful for another 3 months at least
[14:08:32] I checked and the db migration would be trivial
[14:08:41] that's nice
[14:08:46] but I want to change pool names
[14:08:51] I guess we will need to rsync over the archive
[14:08:57] and other stuff, so I would prefer to not do it in place
[14:09:15] hmm pool names are in the configuration IIRC, lemme doublecheck
[14:09:48] so this is a conversation still, but I was thinking of restoring everything and backing it up
[14:10:20] restoring everything?
[14:10:27] on archive
[14:10:46] obviously doesn't have to happen all at the same time
[14:10:49] I think you will lose the associations with hosts
[14:10:56] mmmm
[14:11:34] ok, then I guess it should be copied away and migrated logically, is that supported?
[14:11:42] wouldn't the keys be different?
[14:12:22] keys/certificates
[14:12:45] I am open to any solution, especially if you say "I know this works"
[14:13:58] the keys are the puppetmaster's keys so they wouldn't change
[14:14:17] em, I mean the master key is the puppetmaster's public key
[14:14:34] the backups are also encrypted with the host's keys as well but for archive this is irrelevant
[14:14:43] all hosts in archive have long been decommissioned
[14:14:53] I have to ask what we keep the archive around for, btw
[14:14:59] we have restored from that like ... twice?
[14:15:01] ok, but then, and I am not pushing for that
[14:15:07] but anyway, I digress
[14:15:21] your initial statement "losing association with hosts" is not too strong either
[14:15:26] :-)
[14:15:39] yeah we could probably work around it after all
[14:15:49] especially after all this time
[14:15:49] anyway, let's focus on steps
[14:15:56] first production
[14:16:03] as in production storage
[14:16:14] I just wanted to not migrate anything
[14:16:38] stop puppet on the old one so nothing changes
[14:16:46] start backing up on the new one
[14:16:51] and keep both around
[14:17:07] but we can also recover from the old one
[14:17:20] the reason is that the pool name changes and structure in general
[14:17:31] make things hard to maintain beyond the storage daemon
[14:17:49] new path, new pool names, etc
[14:17:49] yup, the director is also aware of them as those are the pools
[14:17:55] yes
[14:18:12] so technically not both directors, but both systems at the same time
[14:18:19] so we don't touch the old one
[14:19:05] I could even have tested it already, but bacula doesn't start if you don't have any config files
[14:19:47] ok, so we need a couple of changes. one thing that is important is that bacula-fd authenticates the director when it connects to it
[14:20:07] there is also something weird
[14:20:14] definitely via a password and IIRC via TLS client auth
[14:20:23] because the new filestorages have been created on the director
[14:20:50] the other thing is I don't understand how remotearchive works
[14:21:00] archive on heze
[14:21:10] because I don't see it configured on puppet
[14:21:59] IIRC we don't have it enabled
[14:22:11] ok. in my todo
[14:22:17] don't worry about it
[14:22:42] but the idea is https://www.bacula.org/9.2.x-manuals/en/main/Migration_Copy.html
[14:23:06] essentially copy jobs running that copy stuff from one sd to the other sd
[14:23:24] that required bacula 7.x though, IIRC, so we did not enable it
[14:23:56] wait, are you talking about the helium->backup1001 migration or archive?
[14:24:06] ah, archive
[14:24:24] helium -> heze
[14:24:59] so, back to taking the 1st backup successfully
[14:25:18] I think we need a) to fix that TODO you mentioned in order to have the iptables hole opened
[14:25:39] b) to figure out how to switch an FD's director from helium to backup1001
[14:25:51] so that it authenticates the director and accepts it fine
[14:26:03] I've sent you the url of the doc
[14:26:49] go to the end
[14:27:01] I will paste the steps for the actual migration there
[14:28:06] I think the migration is going to be gradual btw
[14:28:12] lol
[14:28:19] host by host, role by role, cluster by cluster :(
[14:28:36] or at least a number of hosts first
[14:28:37] and then everything?
[14:28:44] then keep helium/heze around for those 3 months
[14:28:55] note that we will not be able to really restore from those easily though
[14:29:19] so you want to keep the old pools and write to the new ones?
[14:29:29] just to be sure I follow you
[14:29:31] good q
[14:29:58] yeah, there is a lot of choice here, that is why I wanted to talk to you
[14:30:46] I just don't know how many of those are feasible, and relatively easy/worth it, and safe
[14:31:07] the other thing we can do is prove with a couple of hosts that it works
[14:31:12] rsync the volumes to the new hosts
[14:31:17] (will take a couple of days)
[14:31:27] and just switchover everything
[14:31:37] uf, not a fan of rsync if they are being written to
[14:31:51] well, I guess it only writes mostly to one at a time
[14:31:54] ok we can stop that for a couple of days
[14:31:56] so we can pause
[14:32:10] let's go a bit higher level
[14:32:11] but yes it only writes to one mostly
[14:32:14] and we know which
[14:32:28] because I want to know how you would do it in general
[14:32:37] note that the best point in time for this is the middle of the month
[14:32:43] and then I can prove/test it works, etc.
[14:32:47] mostly cause all the backups are really cheap there
[14:33:02] yeah I am thinking about the pros/cons of the various approaches
[14:33:23] so one approach you are describing is moving the pools physically
[14:33:32] then reconnect them with the new director
[14:33:56] yup. Reconnect being "just place them in the correct directory"
[14:33:58] risks: config, keys or something else getting confused
[14:34:13] the problem with that is the pools and paths have changed
[14:34:19] so it is not 1:1
[14:34:32] not saying it is not doable, just that it is a risk
[14:34:45] then we test we can recover
[14:34:52] and disconnect the old hosts, etc
[14:35:04] is backup1001 still using the "new" bacula db?
[14:35:07] we can also maybe start writing to the new pools?
[14:35:07] I guess so?
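For the a)/b) list at 14:25 above, and the "switch over 1 host to backup1001" test that comes up just below, a hypothetical sketch of the puppet side of b): one hiera-overridable value deciding which director a client's fd trusts, so a single test host can be pointed at backup1001 while the rest stay on helium. The class, parameter and hiera key names are illustrative, not the real ones in operations/puppet; only the idea of a (partly) hiera-overridable director comes from the discussion itself (see 14:45 below).

    # Hypothetical sketch only; names are illustrative. The point is a single
    # lookup()-driven value deciding which director this host's bacula-fd
    # trusts, overridable per host in hiera.
    class profile::backup::host (
        # default stays on the old director; one test host overrides this
        # key in its hiera data to 'backup1001.eqiad.wmnet'
        String $director = lookup('profile::backup::host::director',
                                  { 'default_value' => 'helium.eqiad.wmnet' }),
    ) {
        class { '::bacula::client':
            director => $director,
        }
    }

Switching this value alone is not enough: as noted at 14:19-14:25, the fd also has to accept the new director's password/TLS identity and the firewall hole for it, which is why the test depends on the TODO from a) being fixed first.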
[14:35:11] yes
[14:35:16] I didn't want to touch the old bacula
[14:35:21] until now
[14:35:33] when puppet ran and did some trivial changes, mostly permissions
[14:35:34] I would then amend puppet to allow us to switch over 1 host to backup1001 and test everything is working as expected
[14:35:41] backup/restore etc
[14:35:47] that is the problem
[14:35:56] as a fully new client that is
[14:35:57] I configured backup1001
[14:36:01] to be its own director
[14:36:08] but it created the pools on helium
[14:36:30] also manually to back up itself
[14:36:35] but it didn't work
[14:36:42] ?
[14:37:05] puppet on backup1001 is disabled
[14:37:20] so I manually copied the backup-of-itself config to it
[14:37:37] but it doesn't connect to itself
[14:37:52] as an sd
[14:38:08] maybe it will now that puppet ran on helium
[14:38:21] is it even doing that? I don't recall the backup director backing up itself
[14:38:31] unless you create the config specifically for that
[14:38:38] let me remember what I did
[14:39:34] yes, I manually created one for a backup of itself
[14:40:14] ah, but it doesn't have a job configured
[14:40:18] anyway, ignore that
[14:40:33] I just did that as the safest way for bacula to run
[14:40:53] what I don't understand is why the pools were configured on the old one
[14:41:36] what do you mean by configured?
[14:41:42] check /etc/bacula/storages.d on helium vs backup1001
[14:41:57] ah, I see
[14:42:06] oh that must be the exported resources!
[14:42:08] when the director of backup1001
[14:42:11] lemme make sure
[14:42:15] is itself
[14:42:32] so I guess it assumes it takes all collected resources
[14:42:47] yeah, that makes sense
[14:42:57] hence that is why I could not test with only 1 host
[14:43:08] 55 # We export ourself to the director | 57 }
[14:43:08] 56 @@file { "/etc/bacula/storages.d/${::hostname}-${name}.conf":
[14:43:13] yup, that's it
[14:43:30] so we have to semi-manually set it up
[14:43:37] and 64 tag => "bacula-storage-${director}",
[14:43:47] so they are tagged to the specific configured director
[14:43:59] hence my comment about 2 directors being nigh impossible
[14:44:06] plus the protocol isn't all that crazy about that
[14:44:11] sure
[14:44:18] s/protocol/configuration scheme/ of bacula
[14:44:20] but not sure how to test then
[14:44:34] except purely manually
[14:45:07] 41 $director = $::bacula::storage::director
[14:45:08] sigh
[14:45:11] not easily it seems
[14:45:25] dammit the director is not really overrideable
[14:45:37] well it is, partly though
[14:45:48] yeah, I overrode it in hiera
[14:45:57] but only for the sd and non-aggregated resources
[14:46:06] anyway
[14:46:22] an issue, but I want to go back to your plan
[14:46:33] so I can understand
[14:46:34] it's an interesting issue since we can't really
[14:46:40] test my plan
[14:46:41] sigh
[14:47:12] can I at least suggest an alternative?
[14:47:16] damned puppet. All this code predates hiera by so much that we will need to make some changes to it
[14:47:22] sure, why not?
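The paste at 14:43 is the crux of why a second director cannot just be tested alongside: each storage device exports its configuration fragment tagged for the one director it was built for, and only that director collects it. A condensed sketch of the pattern; the @@file and its tag mirror the pasted lines, while the collector on the director side and the template name are assumptions about how the rest of the module fits together.

    # Condensed sketch of the exported-resource pattern quoted at 14:43-14:45.
    define bacula::storage::device () {
        $director = $::bacula::storage::director    # as pasted at 14:45:07

        # Each SD exports its own storages.d fragment, tagged for exactly
        # one director.
        @@file { "/etc/bacula/storages.d/${::hostname}-${name}.conf":
            content => template('bacula/storage-device.conf.erb'),  # hypothetical template name
            tag     => "bacula-storage-${director}",
        }
    }

    class bacula::director {
        # Assumption about the collecting side: the director realises every
        # fragment exported with its own tag, so a second director configured
        # in parallel never sees those storage definitions at all.
        File <<| tag == "bacula-storage-${::fqdn}" |>>
    }

This also lines up with the 14:45 remark that the hiera override only took effect for the sd and non-aggregated resources: the exported fragments were still tagged for helium, hence the "semi-manually set it up" conclusion for any one-host test.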
[14:47:38] so the point is to always be able to come back
[14:47:44] if something is wrong
[14:48:13] my suggestion is to freeze the old hosts, so that at any time we can revert puppet and nothing is lost
[14:48:33] but not migrate anything we don't have to
[14:48:50] forget about archive for now
[14:48:51] that does imply that we need to update puppet quite a bit to allow the revert to be simple and easy
[14:49:40] which is fine btw, I don't yet see a way around updating my old puppet code
[14:49:42] so right now it is relatively compatible
[14:49:58] we did the worst change (permissions) today
[14:50:06] we can keep monitoring to make sure nothing broke
[14:50:21] but at some time, we start from 0 (obviously we plan for archive first)
[14:50:35] new backups are on the full new stack
[14:51:08] restore? if new, new host, if old, we go to old bacula (with puppet disabled?)
[14:51:28] maybe we even set the old host as an empty role
[14:51:43] so no file is touched
[14:51:55] now, that is not free, we still need to migrate archive
[14:52:05] but it will give us more freedom to change puppet
[14:52:36] Should we have a procedure for easily migrating data? Yes
[14:52:52] so, essentially a big switchover from helium to backup1001
[14:53:05] big but on service
[14:53:07] not on data
[14:53:08] at some point we just globally turn the knob
[14:53:17] not on data?
[14:53:28] we keep data "safe" with no change to the old one
[14:53:41] ah you mean helium/heze is left untouched
[14:53:44] yes
[14:54:04] so my biggest concern is that we migrate both service and data at the same time
[14:54:14] 3 months with puppet disabled? that's gonna make other SREs happy ...
[14:54:21] not unheard of though
[14:54:25] we could live with it
[14:54:26] no, I said later with puppet role:none
[14:54:43] spare or old_backup with no code
[14:54:54] hoping that does not remove configs (it shouldn't, I guess)
[14:54:58] yeah
[14:55:08] but if at some point we make a mistake, we revert
[14:55:23] only hurt backups during the switch
[14:55:36] maybe some extra puppet code
[14:55:48] I don't know, I was just giving options
[14:55:56] not strongly suggesting that
[14:56:04] plus we still need to handle archive
[14:56:08] it's not without its merits as a plan
[14:56:18] it is less involved for sure
[14:56:29] it is about handling risks
[14:56:35] but there are the issues of easy revert and ofc archive
[14:56:39] we still need a migration procedure for next time
[14:56:43] which btw we can start a discussion about keeping
[14:56:46] 10DBA, 10Core Platform Team, 10MW-1.34-notes (1.34.0-wmf.24; 2019-09-24), 10Performance Issue, 10mariadb-optimizer-bug: Review special replica partitioning of certain tables by `xx_user` - https://phabricator.wikimedia.org/T223151 (10Marostegui) I have done a couple of tests, one on `enwiki` and another...
[14:56:54] last time anything was archived in there was back in
[14:57:05] ?
[14:57:05] 2015-10-08 14:55:4
[14:57:08] tell me more
[14:57:14] how was it migrated?
[14:57:26] so, my guess is that 4 years later NOBODY really cares
[14:57:38] oh, I didn't want to delete it
[14:57:53] I don't either, but it's a valid question to ask
[14:57:54] but it would be nice to review it
[14:58:09] if 90% is "keeping it only just in case"
[14:58:22] I think it's exactly that
[14:58:32] hysterical raisins
[14:58:45] yeah, but probably there are things to be archived in the future
[14:58:51] *for the future
[14:59:07] I am not sure I want to do a review right now
[14:59:21] I'd prefer to get it out of helium
[14:59:40] note btw that the archive could just be rsynced. It's almost readonly as you just saw
[14:59:55] just keep a 1TB LV there for it
[15:00:05] just in case we want to archive something in the future
[15:00:11] the procedure you mentioned
[15:00:13] which we have for 4 years now
[15:00:16] haven't*
[15:00:17] could it be used for migration?
[15:00:29] which procedure?
[15:00:33] instead of copying physically
[15:00:43] migrate it from one pool to another?
[15:00:47] no, I think not. Mostly because it never worked ok in bacula 5
[15:00:58] ah, so it doesn't work in the CURRENT version
[15:01:05] sorry, I misunderstood
[15:01:11] for one definition of "current"
[15:01:16] sorry
[15:01:22] in the one that is on helium
[15:01:23] for the other definition (7.x+) it works ok
[15:01:41] so I see 3 options
[15:01:42] that being said, it can be used from heze to backup1001
[15:01:46] * akosiaris doublechecking
[15:01:59] physically copy the device
[15:02:00] nope, scratch that
[15:02:10] make it attach to backup1001
[15:02:17] the device :P. It's literally 2 files :P
[15:02:26] 2) try to copy it logically, which I think you mean is not possible
[15:02:33] or 3) restore and back it up again
[15:02:47] on restore and backup we can do a review
[15:02:58] yeah, I think that the only one that we won't waste too much time on is 1)
[15:03:22] 2) is useful to do in general, but it won't work for this specific case
[15:03:35] 3) is probably too much of a nuisance
[15:03:37] 1) worries me because you seem to think it is easy to reattach a "device" on a new install
[15:04:03] it's literally 2 files of a specific format
[15:04:20] sure, compressed and encrypted
[15:04:25] I've done it already multiple times
[15:04:29] good thing it is the easiest to test
[15:04:34] that's why you hear me saying it again and again
[15:04:36] we copy, then make it work and restore
[15:04:40] rsyncing them works
[15:04:42] I trust you
[15:04:57] but I have fears if I have not done it ever :-D
[15:05:13] skeptical about all vendors promising easy migration :-D
[15:05:24] especially between versions
[15:05:45] good news is I don't have to trust you
[15:05:56] we copy it, fix it, restore it
[15:06:04] and if it works, it works :-D
[15:06:08] :D
[15:06:30] so let me summarize the archive thing, which I would do first
[15:06:30] the issue with the archive is that we haven't restored anything from it in a really long time
[15:06:36] I don't even remember since when
[15:06:48] up to now it has literally been a black hole
[15:06:50] yeah, but that is separate from the migration
[15:06:55] data goes in... never comes out
[15:07:04] the plan is to set up regular restores
[15:07:12] for testing, but not short term
[15:07:32] also maybe a dashboard for better visibility
[15:07:58] but that is for later
[15:08:18] so we agree the plan is to copy, migrate director, then reconnect?
[15:08:41] not sure if it has to be in that order
[15:08:53] but I would copy the files out ASAP
[15:08:56] I was starting to be sold on your plan
[15:09:00] lol
[15:09:11] let's copy the files anyway
[15:09:20] so if case the worst thing happens
[15:09:23] *in
[15:09:29] we have a copy elsewhere
[15:09:42] then we migrate production, then we see
[15:09:50] sure, if we have the space (I think we do, right?)
[15:09:56] yeah
[15:10:06] we should have more disk than before
[15:10:11] note I am talking about archive
[15:10:23] aah, ok thanks for clarifying it
[15:10:37] cause I thought it was the full plan
[15:10:45] I don't think it is worth copying the full production
[15:10:55] so I am leaning towards proceeding forward with your plan
[15:11:02] well, it wasn't mine
[15:11:08] It was 50/50
[15:11:21] but first I would like to make sure we can perform 1 single backup/restore from backup1001
[15:11:25] yes
[15:11:28] let me see
[15:11:48] 1. manually set up a backup1001 schedule for a host
[15:12:17] 0. Move archive files into backup1001/2001
[15:12:41] 2. change bacula role on helium to a noop
[15:12:54] 3. set up backup1001 as the new backup role for all hosts
[15:13:19] 4. update references to helium all over puppet
[15:13:21] 4. check backups run as expected and they can be recovered (also maybe try recovering from helium)
[15:13:43] yeah that helium part on 4. will not work
[15:13:46] yep, lots of work there
[15:13:48] what?
[15:14:00] restore will not work if it doesn't have the right role?
[15:14:04] cause with the puppet change the fds will no longer trust it
[15:14:15] I see
[15:14:25] but will they trust it if puppet runs again?
[15:14:33] after a revert? yes
[15:14:41] yeah, I guess lots of things change, firewall
[15:14:47] "allowed hosts"
[15:14:47] yup
[15:14:49] that is fair
[15:14:52] I can live with that
[15:15:05] my worry is about any permanent change
[15:15:07] there is one thing we can do as well. Keep it around as an sd for 3 months
[15:15:10] and we can reevaluate
[15:15:21] well, if no migration is done that is for sure
[15:15:33] an inactive one but still one that can be used for restores
[15:15:37] ah, you mean to add it as an extra sd
[15:15:49] yup, same as heze is essentially today
[15:15:56] then no need to do anything really special
[15:16:29] as in, that looks ok, if it doesn't complain about the new configuration, etc.
[15:16:46] I think we should reevaluate all steps as we move along
[15:16:54] I think it should not, but I can double check
[15:17:07] 5. reattach old archive to new hosts as well
[15:17:19] in fact maybe 2. should be change bacula role on helium to a bacula::sd
[15:17:22] old archive on new hosts (new sd)
[15:17:40] will puppet easily know about it?
[15:17:41] I have to run for an errand in 10' btw
[15:17:48] yes it will
[15:17:56] I guess with minimal puppet changes, if any
[15:17:57] the sd does not really have special firewall rules
[15:18:04] not authenticating rules of any kind
[15:18:07] nor*
[15:18:10] ok, I will write this down on the doc
[15:18:19] cause it does not really connect to anything. Everything connects to it
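Step 2 above ends up being "change bacula role on helium to a bacula::sd" (15:17): keep helium as an inactive storage daemon that the new director can still restore from, same as heze is today, rather than a pure no-op role. A hypothetical sketch of what such a role could look like; the role name and exact class layout are assumptions, only the keep-the-SD / drop-the-director idea comes from the discussion.

    # Hypothetical role sketch for step 2; names are illustrative, not the
    # real operations/puppet roles. helium keeps its storage daemon and its
    # existing volumes so pre-migration backups stay restorable, but it is
    # no longer a director and nothing new gets written to it.
    class role::backup::offline_storage {
        include ::bacula::storage    # the SD and its volumes stay untouched

        # Intentionally no ::bacula::director and no client schedules here:
        # backup1001 is the only director from this point on.
    }

As noted at 15:17-15:18, this is cheap to keep around: the sd has no special firewall or authentication rules of its own because it never initiates connections, everything connects to it.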
[15:18:22] will ask you for an ok later
[15:18:30] we can maybe follow up tomorrow
[15:18:32] ok
[15:18:37] yeah, I'd like to sleep on it
[15:18:43] maybe some epiphany will show up
[15:18:50] thanks, this was very helpful
[15:18:59] I know you are very busy, sorry if I stressed out
[15:19:00] thanks as well
[15:19:05] but I needed help
[15:19:18] *stressed you
[15:19:24] I am sorry for missing that ping
[16:44:27] 10DBA, 10Operations, 10serviceops, 10Goal, 10Patch-For-Review: Strengthen backup infrastructure and support - https://phabricator.wikimedia.org/T229209 (10jcrespo) I have discussed with alex a plan, there is a preliminary, but timid suggestion of steps on the design (more like diary) document. For now I...
[17:33:10] 10DBA, 10Data-Services, 10Security-Team, 10cloud-services-team, and 3 others: Totally exclude 'abusefilterprivatedetails' from 'logging' in public replicas - https://phabricator.wikimedia.org/T187455 (10sbassett) p:05Triage→03Normal
[17:46:01] 10DBA, 10Data-Services, 10Security-Team, 10cloud-services-team, and 2 others: Totally exclude 'abusefilterprivatedetails' from 'logging' in public replicas - https://phabricator.wikimedia.org/T187455 (10sbassett)
[21:18:51] 10DBA, 10Machine vision, 10Product-Infrastructure-Team-Backlog: DBA review for the MachineVision extension - https://phabricator.wikimedia.org/T227355 (10Mholloway) Hi @jcrespo, just checking in on this. Our goal is to deploy to production the last week of this month. Have you had a chance to look this ove...
[23:17:19] 10DBA, 10Operations: Decommission db2043-db2069 - https://phabricator.wikimedia.org/T228258 (10Papaul)
[23:18:32] 10DBA, 10Operations: Decommission db2043-db2069 - https://phabricator.wikimedia.org/T228258 (10Papaul)
[23:19:25] 10DBA, 10Operations: Decommission db2043-db2069 - https://phabricator.wikimedia.org/T228258 (10Papaul)