[12:25:58] whenever I need a place for temporary storage, I use /srv/tmp, such as on T105713. I never delete anything, just move it there.
[15:24:53] just changed the password on m1 for racktables at godog's request; the new one is already in the repo
[15:26:23] I have also (temporarily?) gotten rid of the s5 lag on dbstore1002
[15:27:28] the issue was that the delete performed a full table scan even when an analyze was run (I assume toku + FIFO had something to do with it)
[15:28:13] ^I thought I had to convert wikidatawiki.wb_changes to InnoDB, but just recreating it as TokuDB works
[15:29:33] ^not sure for how long; if it starts again, we can put that on cron/events, as the table is relatively small (2:30 min to reconstruct it)
[15:32:00] context (private for ops): https://phabricator.wikimedia.org/P1008
[15:40:01] the user change on db1001 broke replication with db1016, as the user did not exist there before
[15:41:44] now fixed by creating the user
[15:42:12] note: reviewing user grants on the m1 shard
[16:30:46] sanitarium:3313 replication broke yesterday
[16:34:23] Error 'Incorrect key file for table 'user_properties'; try to repair it' on query. Default database: 'incubatorwiki'. Query: 'DELETE /* User::saveOptions */ FROM `user_properties` WHERE up_user = '' AND up_property = 'uls-preferences''
[16:35:30] alter table user_properties engine=tokudb; was run on db1069:3313 to fix it
[16:48:03] (BTW, just for the record, I am not working today, I am starting tomorrow)
[17:46:13] So, current state: everything that should be running is running, but expect lag on db1069:s3 (and by extension, labs) to stay high for a few hours until everything catches up
[17:53:48] tendril's labsdb1002 monitoring is disabled; if you didn't touch anything when bringing it up, it may have been lost in the process; I will reset it if that is the case (unless you want to investigate)
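
If the wb_changes rebuild mentioned at 15:29 does end up going on cron/events, a minimal sketch using a MariaDB event could look like the following; the event name and the daily schedule are assumptions for illustration, not something that was actually deployed:

    -- assumes the event scheduler is enabled (SET GLOBAL event_scheduler = ON)
    -- hypothetical event name and schedule; rebuilds the table in TokuDB,
    -- same as the manual fix described above (~2:30 min per run)
    CREATE EVENT IF NOT EXISTS rebuild_wb_changes
        ON SCHEDULE EVERY 1 DAY
        DO
            ALTER TABLE wikidatawiki.wb_changes ENGINE=TokuDB;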
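
For the db1016 breakage at 15:40, the usual pattern when a replica stops on a statement that references a missing account is to create that account on the replica and restart the SQL thread, which then retries the failed statement. A sketch only; the account name, host and password below are placeholders, since the real ones were not recorded in the log:

    -- run on the replica (db1016); account details are placeholders
    CREATE USER 'example_user'@'10.%' IDENTIFIED BY 'placeholder';
    START SLAVE;   -- the SQL thread retries the statement that stopped replication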