[04:31:32] marostegui: hi, you disgusting morning person
[04:32:34] isn't this the best time of the day!
[04:32:41] all quiet and peaceful
[04:32:47] no :P
[04:32:47] hi
[04:33:01] Jaime! o/
[04:33:25] kormat: so right now if you check the tree for S4 you'll see what's happening
[04:33:42] all the slaves are being moved under the new master
[04:34:27] so at 07:00 we set the read only and promote that new master, which already has all the slaves, to master, and the current master becomes a slave
[04:35:52] How we normally organize is: one humanoid does all the CLI stuff and another one does monitoring and helps confirm RO=ON and RO=OFF
[04:36:55] ack
[04:37:18] the three dashboards I sent you yesterday are what we also use to make sure we are good
[04:37:34] especially when the failover is done and we need to monitor that everything works back to normal
[04:38:01] Another good thing is to check the recentchanges page of the affected wiki: https://commons.wikimedia.org/wiki/Special:RecentChanges
[04:39:20] The banner on commons has the wrong date
[04:39:28] I have commented it here https://phabricator.wikimedia.org/T253825#6175447
[04:39:43] Anyways, topology changes done
[05:08:09] 10DBA, 10Operations, 10ops-eqiad, 10Patch-For-Review, 10Wikimedia-Incident: db1138 (s4 master) crashed due to memory issues - https://phabricator.wikimedia.org/T253808 (10Marostegui)
[05:09:04] marostegui: that was super smooth (from the outside at least)
[05:09:18] i'm finally beginning to believe you might know what you're doing ;)
[05:09:49] 10DBA, 10Operations, 10ops-eqiad, 10Patch-For-Review, 10Wikimedia-Incident: db1138 (s4 master) crashed due to memory issues - https://phabricator.wikimedia.org/T253808 (10Marostegui) The master failover was done successfully. This was done successfully RO started at 05:01:54 RO stopped at 05:02:25 Total...
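The read-only window quoted in the task update above (RO started at 05:01:54, stopped at 05:02:25) can be checked with a quick calculation. A minimal illustrative snippet (the function name is made up, not from switchover.py):

```python
from datetime import datetime

def ro_window_seconds(started: str, stopped: str) -> int:
    """Return the length of a read-only window given HH:MM:SS timestamps."""
    fmt = "%H:%M:%S"
    delta = datetime.strptime(stopped, fmt) - datetime.strptime(started, fmt)
    return int(delta.total_seconds())

# Timestamps from the s4 failover above
print(ro_window_seconds("05:01:54", "05:02:25"))  # → 31
```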
[05:10:05] it used to be a lot more complicated, but jaime worked a lot on switchover.py and it is now super fast and less error prone
[05:10:21] however, there are still steps that can be automated and integrated into switchover.py
[05:10:25] * marostegui looks at kormat
[05:10:34] * kormat grins
[05:12:59] if we think everything is done I will say so on the village pump
[05:13:06] the old master is depooled right?
[05:13:07] yep, all done
[05:13:10] yep
[05:13:21] I am doing the clean up tasks
[05:13:23] because the only strange thing I saw is a lower amount of open connections
[05:13:38] which would be explained by a lower number of servers pooled
[05:19:52] 10DBA, 10Schema-change: Remove image.img_deleted column from production - https://phabricator.wikimedia.org/T250055 (10Marostegui) 05Stalled→03Open db1138 is no longer a master, so I am going to run the schema change there so we can get this over with!
[05:19:59] 10DBA, 10Datasets-General-or-Unknown, 10Patch-For-Review, 10Sustainability (Incident Prevention), 10WorkType-NewFunctionality: Automate the check and fix of object, schema and data drifts between mediawiki HEAD, production masters and slaves - https://phabricator.wikimedia.org/T104459 (10Marostegui)
[05:22:02] 10DBA: Compress new Wikibase tables - https://phabricator.wikimedia.org/T232446 (10Marostegui) db1138 is no longer s4 master, so this can be done as well there
[05:46:32] jynus: re: https://gerrit.wikimedia.org/r/c/operations/puppet/+/599596, that change on its own won't work, as the path for mbstream is still hard-coded
[05:46:56] so i think it's better that i do that change as part of my CR
[05:47:52] we can put mbstream on PATH
[05:48:56] we don't need to upgrade all the packages, we can just deploy a 1-line puppet patch for now
[05:49:44] ok, how do you want to proceed? you do that in your CR?
[05:50:15] send an update to the package script
[05:50:23] to put mbstream on path
[05:50:51] I will send a puppet change to do it based on the installed package as an immediate change
[06:04:12] jynus: do we bump the package version when we make a change like this?
[06:05:11] oh, I was just suggesting to implement it, because we cannot just install it everywhere. I wasn't going to create a new package until the next mariadb version, and just use the puppet shortcut
[06:05:32] but if we were to create a new package we would increase the -patch version
[06:06:07] ah, i'm a bit confused. i asked "how do you want to proceed?" you said "update the package"
[06:06:14] yes
[06:06:20] update the packaging repo
[06:06:36] literally I said "send an update to the package script"
[06:06:48] we can create a new package
[06:07:32] but puppet will take care of the hardcoding for now
[06:07:39] i feel like i'm missing something here... tell you what, i'll send the CR, and you can respond there.
[06:13:28] check my latest description to see if that helps clarify: https://gerrit.wikimedia.org/r/c/operations/puppet/+/599596
[06:14:59] that part i understand fine
[06:15:31] see the comment on packages_wmf.pp
[06:15:40] which clarifies the package versioning
[06:15:57] i've seen it, yes
[06:16:40] it will be 10.1.44-2 or 10.1.45-1, whatever happens first
[06:19:23] so while the patch is correct, I am not sure we still maintain the jessie packages
[06:19:43] or the 10.2 or 10.3 branch, we could delete those
[06:20:23] but the general idea is that I would merge that only if you were to build the new packages
[06:20:46] do you want to do that?
[06:21:45] please review https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/599596/ for me
[06:21:48] so I unblock you
[06:22:05] and then we can create an updated package if you have the time
[06:22:51] we may not update 10.2, 10.3 and probably not even 10.1, depending on what manuel thinks
[06:23:19] i'm happy to build new packages, that's a workflow i know nothing about,
[06:23:35] so we can do that, it is just more involved
[06:23:36] my original transfer.py CR isn't blocking me fwiw. it's just something i noticed.
[06:23:50] yeah, it is kinda my fault because I had this fix lined up
[06:24:03] but then I forgot
[06:24:15] and I hope you understand my worry about added complexity
[06:24:25] plus I forgot about mbstream
[06:24:29] hardcoded path
[06:24:43] which worry? i might have missed that comment
[06:24:58] more lines to transfer.py
[06:25:06] vs a cleaner package
[06:25:26] do you agree with doing the version check for compatibility purposes?
[06:25:27] xtrabackup should just be on PATH so transfer works the same on any version
[06:25:45] we can think about that at a later time
[06:25:55] i initially ran into this when i tried to use xtrabackup from a 10.1 host to a 10.4 host
[06:26:16] so there are some additional worries
[06:26:20] about version upgrades
[06:26:29] it failed because of paths, but manuel was also concerned about the version diff
[06:26:43] exactly
[06:27:30] the thing is transfer.py is supposed to work with any package
[06:28:01] and the package installed is not necessarily the package running
[06:29:06] on the other side, we may want to stream to a host with a different version, assuming we don't want to install it there
[06:29:23] so I am not saying we shouldn't have a check
[06:29:36] I am saying we have to think carefully about it
[06:29:43] mm.
so if we're not doing `--prepare` on the destination host, the mariabackup version is irrelevant
[06:29:52] (and, in fact, it may not even be installed)
[06:30:02] well, it has to be installed for streaming
[06:30:16] oh that's not netcat? ok
[06:30:21] but yes, just for receiving we don't care
[06:30:33] I would add such a check to the automatic provisioning
[06:30:38] so outside of transfer
[06:30:49] or maybe
[06:30:55] to the prepare stuff
[06:31:00] store the original running version
[06:31:08] and make sure we prepare with the same or a later version
[06:31:17] I say keep the patch
[06:31:30] because we are likely to reuse it
[06:31:38] but maybe on a different codebase?
[06:31:53] e.g. on backup_mariadb.py
[06:32:25] I would prefer to keep transfer.py as dumb and non-wmf-specific as possible
[06:35:39] I am all for checks, but when we are in an outage and transfer.py doesn't work because of a version mismatch you will not want those checks in an emergency
[06:35:50] I would move them to a higher level tool
[06:36:22] so keep the patch, and do you want to upgrade the packages?
[06:36:39] or we leave that for next week?
[06:38:05] we could package 10.1.45-1 instead of 10.1.44-2
[06:39:55] let's wait for manuel to come back and he gets to decide what we update and what we don't
[06:41:43] let's deploy https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/599596/ with puppet disabled everywhere ?
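The "prepare with the same or a later version" rule discussed above could be sketched roughly like this (illustrative only; the function names are made up and not from transfer.py or backup_mariadb.py):

```python
def parse_version(version: str) -> tuple:
    """Turn a MariaDB version string like '10.4.13' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def can_prepare(backup_version: str, prepare_version: str) -> bool:
    """A backup must be prepared with the same or a later mariabackup version."""
    return parse_version(prepare_version) >= parse_version(backup_version)

print(can_prepare("10.1.44", "10.4.13"))  # → True: preparing with a later version is fine
print(can_prepare("10.4.13", "10.1.44"))  # → False: preparing with an older version is not
```

Tuple comparison gives the right ordering here without any external dependency, which matters if such a check ever runs during an outage.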
[06:47:50] an actual example of why https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/599343/ would break backups is misc backups
[06:48:18] currently 10.4 backups are sent to dbprov, with stretch/10.1
[06:48:45] we need that to continue working; what we don't want is a 10.1 backup being restored on a 10.4 final host
[06:49:01] * marostegui reading backlog
[06:50:47] marostegui: save yourself while you still can
[06:53:30] mmm, I packaged 10.1.44, it is actually on the repo
[06:53:52] As far as I know 10.1.45 isn't out
[06:54:19] Oh
[06:54:22] They just released it
[06:54:32] Why didn't I get the email for that? :-/
[06:55:22] in any case, we can decide to upgrade the packages at a later time, let's unblock 10.4 xtrabackups
[06:55:46] jynus: i've +1'd your CR
[06:55:59] do you want to deploy the patch, so you are familiar with puppet-wide dangerous changes?
[06:56:12] disable puppet/test/enable puppet?
[06:56:30] I think we should keep your patch, but probably for backup_mariadb.py
[06:56:39] or remote_backup_mariadb.py
[06:57:01] or maybe --decompress?
[06:57:25] I will wait, ofc, for manuel's ok
[06:57:42] so both CRs are independent, no?
[06:58:06] I think 599343 needs more work
[06:58:33] but 599596 would unblock kormat's work now
[06:58:41] jynus: i'm not blocked by this
[06:59:08] ok, let me put it like this: I am blocked by it, as I was about to reimage a source mysql host into buster :-D
[06:59:52] T250602#6155640
[06:59:53] T250602: db1140 (backup source) crashed - https://phabricator.wikimedia.org/T250602
[07:00:16] otherwise backups won't work there
[07:01:45] that should fix the path issue and then we can see how/where to do the version check
[07:02:15] I think https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/599596/ is safe to go, no?
[07:02:27] I would do it disabling puppet
[07:02:30] but yeah
[07:02:32] yeah sure
[07:02:45] that is why I proposed kormat deploy it after your ok
[07:02:56] that doesn't invalidate his initial patch at all
[07:03:28] yeah, I think both patches complement each other
[07:03:49] I think he told me you were worried about version checking, and I agree
[07:03:58] not sure if on transfer, though
[07:04:20] yeah, I am worried about doing an xtrabackup from a host that has 10.1 installed to one with 10.4
[07:04:31] marostegui: well, except jynus has pointed out that the version check i've added is.. suboptimal, as there are more considerations involved
[07:04:34] well, that should be allowed
[07:04:46] what we don't want is a) running prepare with a lower version
[07:04:53] b) recovering to a higher version
[07:05:01] transfer just moves files, it doesn't prepare
[07:05:02] yeah, that is my point
[07:05:27] so I would move the check to the backup/recover logic, not low-level transfer?
[07:05:55] jynus: the issue there is that it wouldn't catch the thing i tried to do yesterday
[07:06:02] what was it?
[07:06:10] xtrabackup from 10.1 to 10.4
[07:07:08] that is why I say it needs more thinking
[07:07:27] we may want to do that in an emergency, if we just want to store the file temporarily
[07:07:34] think labsdb issues
[07:07:38] to summarize: we should merge and deploy jynus' puppet change. i should close my transfer.py change, for now.
[07:07:46] don't close it
[07:07:56] we should do that change too
[07:08:06] but in a different way
[07:08:28] that still sounds to me like "close it, we'll do it better/differently later" :)
[07:08:30] 1- checking the actual version running
[07:08:37] not the package installed
[07:08:51] 2- checking on --prepare and recover, not transfer
[07:08:52] so jynus' patch would fix the issue kormat hit yesterday, but would still have allowed recovering a 10.1 host with a 10.4 package
[07:09:02] yep
[07:09:40] again, we need that to work, even if it is not desired
[07:09:47] So I think checking for the same version when attempting a --prepare is definitely needed; where, that's up for discussion
[07:10:03] ok, so transfer.py doesn't prepare
[07:10:10] that is my main point :-D
[07:10:31] i'll open a task to try and track this issue
[07:10:43] I think we can do a check between the metadata
[07:10:50] and whatever runs prepare
[07:11:00] let me see if the version is in the metadata
[07:12:22] 10DBA: Care needed with mariabackup versions - https://phabricator.wikimedia.org/T253959 (10Kormat)
[07:12:29] ^ rough notes
[07:13:21] so this is the main misunderstanding: "-versions of mariabackup on src and dest hosts when streaming a backup."
[07:13:35] "+version originally used and version recovered to"
[07:13:49] that's point 3
[07:13:51] and yes, that is more difficult to check, I know
[07:14:06] no, we normally do prepare on a different host
[07:14:19] define "normally" :P
[07:14:26] our snapshots do prepare on dbprov hosts
[07:14:34] yesterday i was following https://wikitech.wikimedia.org/wiki/MariaDB/Backups#Copy_data_from_a_backup_source_to_a_host
[07:14:42] so we do 20 prepares a day :-D
[07:14:57] kormat: lol
[07:15:04] you should not do that
[07:15:07] I mean, normally
[07:15:09] * kormat flips table
[07:15:19] jynus: why not?
[07:15:38] he should "Provision a precompressed and prepared snapshot"
[07:16:03] there is nothing bad about it, but it is much slower
[07:16:13] why not use the precompressed package?
[07:16:18] and pre-prepared
[07:16:21] Yeah, but as far as I know he's trying all the methods
[07:16:24] He already did that
[07:16:25] ok
[07:16:31] it is a valid method
[07:16:35] it is just slow
[07:16:42] that is what I meant
[07:16:44] 0:-D
[07:17:09] it is not the primary intended method
[07:17:30] 14:11:06 you can clone it with no problem
[07:17:30] 14:11:14 from db2099
[07:17:37] you can
[07:17:37] Again, he already tried "Provision a precompressed and prepared snapshot"
[07:17:39] sure
[07:17:51] I think trying the other should be fine too
[07:17:55] If not, please let's delete it from there
[07:18:00] don't disagree
[07:18:08] well you just disagreed above
[07:18:27] https://wikitech.wikimedia.org/w/index.php?title=MariaDB/Backups&diff=1825386&oldid=1819873
[07:18:31] ^you wrote it
[07:18:42] I wrote it and it was agreed
[07:18:48] Ok, I am going to delete it now
[07:19:13] keep it
[07:19:30] I think it is interesting IF a precompressed snapshot cannot be used
[07:19:42] I am sorry, but this is very confusing, and I wrote that cheatsheet because there was no other cheatsheet, please review it and if it is not intended, delete it
[07:19:55] This is the primary intended method https://wikitech.wikimedia.org/wiki/MariaDB/Backups#Provision_a_precompressed_and_prepared_snapshot
[07:20:06] Which is what I wrote too
[07:20:31] I have deleted the other method
[07:20:37] yeah, the first thing is correct, just much slower
[07:20:48] jynus: don't you see that you are saying both things now?
[07:21:06] [09:14:57] kormat: lol
[07:21:06] [09:15:04] you should not do that
[07:21:10] Anyways, I have deleted it
[07:21:15] I am going to go out for a few hours
[07:21:16] bye
[08:57:50] marostegui kormat: nice job for the failover!
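Earlier in the discussion the point was to check the version recorded in the backup's own metadata (the xtrabackup_info file the backup tool leaves in the backup directory) rather than the installed package. A rough sketch, assuming a `tool_version = …` line; the sample content and function name are illustrative, so check the real file format before relying on it:

```python
import re

# Example content of an xtrabackup_info-style metadata file
# (format assumed for illustration)
SAMPLE_INFO = """\
tool_name = mariabackup
tool_version = 10.4.13
server_version = 10.4.13-MariaDB-log
"""

def tool_version(info_text: str) -> str:
    """Extract the tool_version field from xtrabackup_info-style metadata."""
    match = re.search(r"^tool_version\s*=\s*(\S+)", info_text, re.MULTILINE)
    if match is None:
        raise ValueError("no tool_version found in metadata")
    return match.group(1)

print(tool_version(SAMPLE_INFO))  # → 10.4.13
```

Whatever runs `--prepare` could then compare this recorded version against its own, instead of trusting the package that happens to be installed.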
[09:01:07] as much as i'd like to take credit, all i did was sit there and be amazed :)
[09:22:38] 10DBA: Care needed with mariabackup versions - https://phabricator.wikimedia.org/T253959 (10Kormat) Related: https://gerrit.wikimedia.org/r/599343
[09:25:39] jynus: can i get a review of https://gerrit.wikimedia.org/r/c/operations/puppet/+/599739 please?
[09:26:00] let me check its current status
[09:26:09] please do :)
[09:26:29] seems all green to me, is it on grafana too?
[09:26:37] it is
[09:28:00] buffer pool full
[09:29:34] did you restart the hosts after the upgrade?
[09:29:48] crap, no.
[09:30:01] for a minor upgrade it is not a huge issue
[09:30:24] in most cases for a major one it may work, but I think it is safer to restart once after mysql_upgrade
[09:30:31] not sure if that is documented
[09:30:52] on start it says "Created with MariaDB 100143, now running 100413. Please use mysql_upgrade to fix this error"
[09:31:09] it would be nice to see that not happening after a new start, just out of caution
[09:31:34] right
[09:31:44] manuel told me they used to always restart at $JOB-1, and I thought it was wise, even if only to make sure things were ok
[09:32:30] yeah makes sense
[09:32:36] It may not be needed, but given we are early in the upgrade process it would be nice just as a sanity check
[09:32:47] just an opinion, eh?
[09:33:06] regarding the other patch, I didn't want to put it down, I think the idea was good
[09:33:47] sure, i get that
[09:33:56] but HEAD is like 20 patches above what there is on puppet
[09:34:04] but it's clear it needs to be solved a different way, in a different place
[09:34:09] plus I think it should be done
[09:34:12] yeah
[09:34:51] so for a few weeks I would suggest to ask: you don't have to ask for permission to do stuff
[09:35:18] but the thing is I hate to make you work in the wrong direction
[09:35:48] I do ask manuel "hey, should I do X?" many times still
[09:36:51] for transfer I can advise you, as I am mentoring a student right now to improve it
[09:36:56] i figured the amount of work involved was so small that i might as well send a patch as a way of having the discussion :)
[09:37:08] oh, no problem on sending the patch
[09:37:13] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10daniel) Just so we don't pass on an op...
[09:37:17] I just didn't want you to feel bad about it
[09:37:26] that is why also I said not to abandon it
[09:37:27] nono, that's fine
[09:37:44] because it would be quite similar to the patch elsewhere
[09:38:00] plus you didn't know that we had been preparing the path thing for some months already
[09:40:57] q: the current transfer.py on cumin1001 is from the puppet repo, right? but the plan is the next 'release' will be from the wmfdbapy(?) repo as a .deb, and drop transfer.py from puppet. is that correct?
[09:41:40] *wmfmariadbpy
[09:42:12] correct, with 1 exception
[09:42:20] we just requested a new repo
[09:42:36] operations/software/transferpy
[09:42:39] which is empty
[09:42:40] ah hah
[09:42:46] good to know
[09:42:59] head literally now (maybe not tomorrow) is at wmfmariadbpy
[09:43:26] we are splitting it so the student has an easier time
[09:43:46] if you want to patch transfer.py now
[09:44:31] it would be here: https://gerrit.wikimedia.org/g/operations/software/wmfmariadbpy
[09:45:39] once that is in a deb, we could integrate Transferer into spicerack or whatever
[09:46:58] which was also why I wanted to avoid any non-trivial change in puppet
[09:47:04] gotcha
[09:47:40] you couldn't know that, I mean
[09:48:21] 10DBA, 10Patch-For-Review: Productionize db213[6-9] and db2140 - https://phabricator.wikimedia.org/T252985 (10Kormat)
[09:49:13] so for db2140.yaml to be definitive
[09:49:21] a master switchover will be needed
[09:49:32] which is the exact same thing that we did this morning, but on codfw
[09:49:40] * kormat nods
[09:49:48] so it can happen at a civilized hour? :)
[09:49:53] indeed
[09:49:57] for context
[09:50:05] the aim is to remove those hiera keys
[09:50:11] and have everything in a dynamic store
[09:50:28] e.g. tendril "source of truth"
[09:50:35] or whatever substitutes it
[09:50:40] as it is not static configuration
[09:51:51] I just saw another thing at: https://gerrit.wikimedia.org/r/c/operations/puppet/+/599746
[10:00:33] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10Marostegui) >>! In T238966#6176187, @d...
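The startup warning quoted earlier ("Created with MariaDB 100143, now running 100413") uses MariaDB/MySQL's packed numeric version id, major*10000 + minor*100 + patch. A small decoder sketch (the function name is illustrative):

```python
def decode_mysql_version_id(version_id: int) -> str:
    """Decode a packed MariaDB/MySQL version id (major*10000 + minor*100 + patch)."""
    major, rest = divmod(version_id, 10000)
    minor, patch = divmod(rest, 100)
    return f"{major}.{minor}.{patch}"

# The ids from the startup warning above
print(decode_mysql_version_id(100143))  # → 10.1.43
print(decode_mysql_version_id(100413))  # → 10.4.13
```

So the warning says the datadir was created by 10.1.43 and is now running under 10.4.13, which is exactly the cross-major jump the mysql_upgrade-then-restart advice is about.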
[10:01:26] 10DBA, 10Schema-change: Remove image.img_deleted column from production - https://phabricator.wikimedia.org/T250055 (10Marostegui)
[10:01:42] 10DBA, 10Schema-change: Remove image.img_deleted column from production - https://phabricator.wikimedia.org/T250055 (10Marostegui) 05Open→03Resolved
[10:01:50] 10DBA, 10Datasets-General-or-Unknown, 10Patch-For-Review, 10Sustainability (Incident Prevention), 10WorkType-NewFunctionality: Automate the check and fix of object, schema and data drifts between mediawiki HEAD, production masters and slaves - https://phabricator.wikimedia.org/T104459 (10Marostegui)
[11:22:48] 10DBA: Failover DB masters in row D - https://phabricator.wikimedia.org/T186188 (10Marostegui)
[12:08:35] 10DBA: Investigate possible memory leak on db1115 - https://phabricator.wikimedia.org/T231769 (10Marostegui) >>! In T231769#6120789, @Marostegui wrote: > This seems very similar to what we see: https://jira.percona.com/browse/PS-6961 (also affects 5.6, 5.7). I will try to run the same procedure they describe the...
[12:10:26] 10DBA: Care needed with mariabackup versions - https://phabricator.wikimedia.org/T253959 (10Marostegui) p:05Triage→03Medium
[12:11:20] 10DBA, 10Parsoid, 10Parsoid-Tests: testreduce_vd database in m5 still in use? - https://phabricator.wikimedia.org/T245408 (10Marostegui) @ssastry were you able to test the GRANTs?
[12:19:15] 10DBA, 10MediaWiki-API, 10Pywikibot, 10Wikidata, and 3 others: Wikidata API fails with timeout when asking for 5 RC redirects - https://phabricator.wikimedia.org/T245989 (10Marostegui) On 10.4 the optimizer keeps choosing the wrong plan, so we need to fix this in code with an `ignore index (page_redirect_n...
[12:27:55] 10DBA: Convert Tendril TokuDB tables to InnoDB - https://phabricator.wikimedia.org/T249085 (10Marostegui) Just for the record `global_status_log` and `global_status_log_5m ` are now under control in terms of growth: {T252331} `general_log_sampled` is not used, I have moved it to InnoDB and removed its partitions.
[12:36:39] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10daniel) >>! In T238966#6176276, @Maros...
[12:50:32] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10Marostegui) Thanks - I will try both q...
[12:52:38] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10daniel) By the way, all usages of the...
[12:57:51] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10daniel) >>! In T238966#6176804, @Maros...
[12:59:27] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10Marostegui) Excellent! To recap on the...
[13:00:09] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10daniel) Heh - you are too qick, I just...
[13:00:37] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10daniel) I just want to make extra sur...
[13:01:40] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10Marostegui) Sure, if you can provide m...
[13:04:31] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10daniel) Will try to dig that up later...
[13:05:35] 10DBA, 10Cloud-Services, 10CPT Initiatives (MCR Schema Migration), 10Core Platform Team Workboards (Clinic Duty Team), 10Schema-change: Apply updates for MCR, actor migration, and content migration, to production wikis. - https://phabricator.wikimedia.org/T238966 (10Marostegui) Thank you - much appreciated
[13:16:36] 10DBA, 10Data-Services, 10Quarry: Quarry: Lost connection to MySQL server during query - https://phabricator.wikimedia.org/T246970 (10Marostegui) So this query takes around 25 minutes to execute on an idle host, so on a normal loaded hosts it is perfectly possible that it would take more, so the reason it is...
[13:35:19] 10DBA, 10Parsoid, 10Parsoid-Tests: testreduce_vd database in m5 still in use? - https://phabricator.wikimedia.org/T245408 (10ssastry) Hey yes. it worked and we are now working with the new test database.
[13:36:51] 10DBA, 10Parsoid, 10Parsoid-Tests: testreduce_vd database in m5 still in use? - https://phabricator.wikimedia.org/T245408 (10ssastry) In a week's time, you can drop the old tesreduce_0715 database and resolve this "repurposed" ticket :)
[13:39:08] 10DBA, 10Parsoid, 10Parsoid-Tests: testreduce_vd database in m5 still in use? - https://phabricator.wikimedia.org/T245408 (10Marostegui) Cool! So can I drop `testreduce_vd` and `testreduce_0715` databases? Nothing seems to be writing: ` root@db1133:/srv/sqldata/testreduce_vd# ls -lhSrh *.ibd -rw-rw---- 1 m...
[13:39:43] 10DBA, 10Parsoid, 10Parsoid-Tests: testreduce_vd database in m5 still in use? - https://phabricator.wikimedia.org/T245408 (10Marostegui) a:03Marostegui >>! In T245408#6176963, @ssastry wrote: > In a week's time, you can drop the old tesreduce_0715 database and resolve this "repurposed" ticket :) Haha than...
[13:48:16] 10DBA, 10Parsoid, 10Parsoid-Tests: testreduce_vd database in m5 still in use? - https://phabricator.wikimedia.org/T245408 (10ssastry) Please do not drop the testreduce_vd database! :) Only testreduce_0715. We are yet to make a decision where and how we will run our visual diff test runs ( T252483 )
[14:08:42] 10DBA, 10Parsoid, 10Parsoid-Tests: testreduce_vd database in m5 still in use? - https://phabricator.wikimedia.org/T245408 (10Marostegui) Cool, thank you!
[21:12:03] 10DBA: Drop wb_terms in production from s4 (commonswiki, testcommonswiki), s3 (testwikidatawiki), s8 (wikidatawiki) - https://phabricator.wikimedia.org/T248086 (10Bstorm)