[06:10:22] 10DBA, 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Banyek: Migrate dbstore1002 to a multi instance setup on dbstore100[3-5] - https://phabricator.wikimedia.org/T210478 (10Marostegui)
[06:35:19] 10DBA, 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Banyek: Migrate dbstore1002 to a multi instance setup on dbstore100[3-5] - https://phabricator.wikimedia.org/T210478 (10Marostegui)
[07:48:03] 10Blocked-on-schema-change, 10DBA, 10Patch-For-Review, 10Schema-change, 10User-Banyek: Dropping user.user_options on wmf databases - https://phabricator.wikimedia.org/T85757 (10Marostegui)
[08:08:13] 10Blocked-on-schema-change, 10DBA, 10Patch-For-Review, 10Schema-change, 10User-Banyek: Dropping user.user_options on wmf databases - https://phabricator.wikimedia.org/T85757 (10Marostegui)
[08:08:26] 10Blocked-on-schema-change, 10DBA, 10Patch-For-Review, 10Schema-change, 10User-Banyek: Dropping user.user_options on wmf databases - https://phabricator.wikimedia.org/T85757 (10Marostegui)
[08:08:36] 10DBA, 10Schema-change, 10Tracking: [DO NOT USE] Schema changes for Wikimedia wikis (tracking) [superseded by #Blocked-on-schema-change] - https://phabricator.wikimedia.org/T51188 (10Marostegui)
[08:08:39] 10Blocked-on-schema-change, 10DBA, 10Patch-For-Review, 10Schema-change, 10User-Banyek: Dropping user.user_options on wmf databases - https://phabricator.wikimedia.org/T85757 (10Marostegui) 05Open→03Resolved All done
[08:21:10] 10DBA, 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Banyek: Migrate dbstore1002 to a multi instance setup on dbstore100[3-5] - https://phabricator.wikimedia.org/T210478 (10Marostegui)
[11:10:12] 10Blocked-on-schema-change, 10MediaWiki-Change-tagging, 10Patch-For-Review, 10User-Ladsgroup: Drop change_tag.ct_tag column in production - https://phabricator.wikimedia.org/T210713 (10Marostegui) db1098:3316 has some differences on the change_tag table compared with the rest of the hosts on the section. I...
[12:21:32] 10DBA, 10Patch-For-Review: BBU issues on codfw - https://phabricator.wikimedia.org/T214264 (10jcrespo) a:05jcrespo→03None
[12:33:29] I am running mariabackup on dbstore1001; on a single thread and uncompressed it is going to be a slow process, we will see how mariadb responds
[13:16:09] jynus: so I better not stop replication on dbstore1001:3316, right?
[13:16:34] I wanted to reimport frwiki.change_tag from there into db1098:3316, but I can take another host, no worries
[13:33:20] why the worries on 6?
[13:33:36] I don't want to mess up your work
[13:33:38] there would be some extra iops
[13:33:57] You ok if I stop it then?
[13:34:00] but a logical copy, which I assume is what you want, on a different instance wouldn't be a problem
[13:34:07] if only s6, yes
[13:34:11] yes, only s6
[13:34:29] the backup is still ongoing
[13:34:45] so ssd for those hosts is more than operationally justified
[13:35:05] hehe
[13:35:11] ok, I will stop it in sync then
[13:35:19] thanks
[13:42:58] I have been working on the replication python library, I will soon give you new toys
[13:43:09] <3<3
[13:54:40] I am done with dbstore1001:3316
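(For reference, the logical copy of frwiki.change_tag agreed on above might look roughly like the sketch below. Only the source instance dbstore1001:3316, the target db1098:3316, the table, and the fact that replication on the source was stopped first come from the conversation; the fully-qualified hostnames, the --single-transaction flag, and credential handling, assumed to come from a defaults file, are illustrative.)

    # Stop replication on the source s6 instance so the table stays static.
    mysql -h dbstore1001.eqiad.wmnet -P 3316 -e "STOP SLAVE;"

    # Take a logical dump of the single table.
    mysqldump -h dbstore1001.eqiad.wmnet -P 3316 \
        --single-transaction frwiki change_tag > frwiki.change_tag.sql

    # Load it into the target instance; the dump drops and recreates the
    # table by default, replacing the differing rows on db1098:3316.
    # (Handling of binary logging on the target is not covered here.)
    mysql -h db1098.eqiad.wmnet -P 3316 frwiki < frwiki.change_tag.sql

    # Resume replication on the source.
    mysql -h dbstore1001.eqiad.wmnet -P 3316 -e "START SLAVE;"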
[13:59:40] the backup is going too slow
[14:00:24] from where to where?
[14:00:29] or just locally?
[14:00:29] locally
[14:00:49] it has already written 3GB to the log
[14:01:05] I didn't start it in parallel
[14:01:16] but we may be lacking iops on that host
[14:01:19] so maybe we do need the ssds on the recovery/provisioning hosts
[14:03:24] or maybe it is the process stopping when there are temporary tables, etc
[14:13:09] I think it is blocked, I am going to kill it
[14:13:27] do you want to try a real case scenario?
[14:13:38] that was it :-)
[14:13:54] I mean from a provisioning host to another host, not locally!
[14:14:25] that doesn't exist
[14:14:56] I was going to offer you dbstore1004, I need to populate s2, s3 and s4, if you want to try any of those?
[14:15:14] that functionality doesn't exist
[14:15:20] ah ok
[14:22:30] I've enabled --close-files and --parallel=8
[14:29:39] there seems to be more going on now
[14:30:14] :)
[15:44:02] 10DBA, 10Cognate, 10MediaWiki-Database: populateCognatePages.php query keeps timing out while waiting for replecation - https://phabricator.wikimedia.org/T214402 (10Addshore)
[15:44:12] 10DBA, 10Cognate, 10MediaWiki-Database: populateCognatePages.php query keeps timing out while waiting for replecation - https://phabricator.wikimedia.org/T214402 (10Addshore)
[15:44:51] 10DBA, 10Cognate, 10MediaWiki-Database: populateCognatePages.php query keeps timing out while waiting for replication - https://phabricator.wikimedia.org/T214402 (10Addshore)
[15:45:04] o/, not sure if ^^ is a DBA thing or a mediawiki db abstraction thing
[15:47:12] 10DBA, 10Cognate, 10MediaWiki-Database: populateCognatePages.php query keeps timing out while waiting for replication - https://phabricator.wikimedia.org/T214402 (10Addshore)
[15:59:31] 10DBA, 10Cognate, 10MediaWiki-Database: populateCognatePages.php query keeps timing out while waiting for replication - https://phabricator.wikimedia.org/T214402 (10Addshore) p:05Triage→03High Marking as high as without this working again we can't have sitelinks to yuewiktionary from other wiktionaries w...
[16:06:09] addshore: it is never a dba thing
[16:07:01] addshore: see https://phabricator.wikimedia.org/P8014
[16:23:34] so the backup took 2 hours
[16:23:58] a bit less, 1:35 / 1:40
[16:27:10] and a 4GB logfile
[16:42:59] nice
[16:43:04] not bad!
[18:22:12] quick note for dbstore1002 - I am running a mysqldump in tmux for the staging db, which will be moved to dbstore1003. The /srv/ usage is ~91% now, I'll keep it monitored
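(For reference, the re-run of the dbstore1001 backup with the options from 14:22:30 and the dbstore1002 staging dump from 18:22:12 might look roughly like the sketch below. Only --parallel=8, --close-files, the staging database, and the use of tmux are stated in the log; the target directory, the output path, the session name, and the remaining flags are illustrative assumptions.)

    # Re-run of the local dbstore1001 backup, multi-threaded this time.
    # Target directory and connection defaults are assumed, not from the log.
    mariabackup --backup \
        --parallel=8 --close-files \
        --target-dir=/srv/backups/latest

    # dbstore1002: dump the staging db inside a detached tmux session so it
    # survives the SSH session; /srv was ~91% full, hence the monitoring note.
    tmux new-session -d -s staging_dump \
        "mysqldump --single-transaction staging > /srv/tmp/staging.sql"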