[07:11:59] in tendril, I think db1042 does not currently identify its master and vice-versa because I used CHANGE MASTER with the ip and not the host name
[10:27:06] I am detecting some peaks in connection aborts on db1036
[10:27:31] could be physical issues, but probably related to its "watchlist, recentchanges, contributions, logpager" role
[10:28:10] keeping an eye on it, the current state is ok
[10:31:06] import on db2001 was successful, importing db2002 now
[10:31:53] I will keep its engine as InnoDB for now; I may try InnoDB compression on them or convert them back to TokuDB, I have not made up my mind yet
[13:05:58] re: tendril, it was exactly that, using MASTER_HOST='ip' instead of the domain name; I do not like to issue unnecessary CHANGE MASTERs, but a host not appearing on the dbtree could be more dangerous
[13:13:44] "Error Incorrect key file for table user_newtalk: try to repair it on query."
[13:14:45] ^if this is tokudb, I will have to take extreme measures, like writing a strongly-worded letter to someone
[13:19:20] effectively, it was TokuDB. Which means one of 2 things: either we are using a buggy version, or we should not use it in production at all
[13:27:18] T109069 and this is why I refuse to try it on External Storage
[13:28:10] InnoDB compression is shitty, but at least reliable
[14:49:40] so, right now ignore any lag on dbstore2002 (I've put a message on icinga and phabricator); the import is still ongoing
[14:50:14] I'm wondering how inconsistent the labswiki DB is with the real production databases
[14:51:57] I've fixed many drifts this week
[14:52:24] but there are some issues that I think are due to filtering
[14:54:20] there are several insert...select statements that propagate problems, as they do not guarantee consistency
[14:54:48] do not worry, we are on the way to checking all the servers automatically
[14:59:20] even including silver?
[14:59:28] silver?
[14:59:46] is that replicated to labs?
[14:59:47] the wikitech server, it runs mysql and obviously it has a mediawiki install
[14:59:52] not at the moment
[15:00:04] so, nothing to check there
[15:00:17] afaik it has no replicas, only backups
[15:00:19] sorry, I was unclear.
[15:00:24] I meant consistency between the production database schema and the one on labswiki
[15:00:28] not replica consistency
[15:00:48] ah, so schema consistency
[15:01:00] that is actually part of the script
[15:01:22] https://phabricator.wikimedia.org/T104459
[15:01:39] T104459#1417843
[15:01:59] it is just a bit complex due to orphaned tables
[15:02:53] and to be fair, labswiki is a bit unmaintained
[15:03:05] I understand the need to keep it separate
[15:03:23] but we could put it on s3 and have a replica offsite
[15:03:37] but not my call
[15:29:54] I try to keep labswiki at least working
[15:30:00] and occasionally try to fix issues with it
[15:30:30] I suspect the issue with that idea would be making it dependent on normal production servers
[16:40:31] and again another tokudb table crashed
[16:40:37] converted to innodb
[17:20:33] labswiki is missing archive.ar_content_model
[17:21:11] Oh, nope, found it. Guess I should have ordered my dump of column data
[17:27:22] https://github.com/wikimedia/mediawiki-extensions-CentralAuth/blob/master/central-auth.sql#L118-L121 Is this supposed to be like this?
[17:28:03] I mean, primary keys are indexed by default, right?
[17:29:50] wait, the order is different
[17:30:40] but still
[17:30:45] is that right in general?
[17:34:09] Krenair: legoktm ^
[18:13:37] hi
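
For reference, a minimal sketch of re-pointing a replica at its master by host name rather than IP, as discussed at 07:11 and 13:05; the FQDN and binlog coordinates below are placeholders, not the real db1042 values:

  STOP SLAVE;
  CHANGE MASTER TO
    MASTER_HOST='db1040.eqiad.wmnet',     -- placeholder FQDN; the host name is what tendril matches against
    MASTER_LOG_FILE='db1040-bin.001234',  -- Relay_Master_Log_File from SHOW SLAVE STATUS (placeholder)
    MASTER_LOG_POS=98765;                 -- Exec_Master_Log_Pos from SHOW SLAVE STATUS (placeholder)
  START SLAVE;

Passing the coordinates explicitly matters: a CHANGE MASTER that sets MASTER_HOST without MASTER_LOG_FILE/MASTER_LOG_POS resets the replication position.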
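
A rough sketch of the TokuDB-to-InnoDB conversion mentioned at 16:40, with the compressed variant weighed at 10:31 and 13:28; the table name comes from the 13:13 error, and the KEY_BLOCK_SIZE is only an example:

  -- plain conversion back to InnoDB
  ALTER TABLE user_newtalk ENGINE=InnoDB;
  -- or with InnoDB compression (needs innodb_file_per_table=1 and the Barracuda file format)
  ALTER TABLE user_newtalk ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;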
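
On the 14:54 point about insert...select not guaranteeing consistency, an illustration with made-up tables of how such a statement can drift under statement-based replication:

  CREATE TABLE src (id INT PRIMARY KEY, val VARBINARY(32));
  CREATE TABLE dst (id INT AUTO_INCREMENT PRIMARY KEY, val VARBINARY(32));
  -- Replayed as text on the replica; without ORDER BY the SELECT's row order is not
  -- guaranteed, so the auto_increment ids assigned in dst can differ per host.
  INSERT INTO dst (val) SELECT val FROM src;
  -- A deterministic ordering avoids that particular source of drift.
  INSERT INTO dst (val) SELECT val FROM src ORDER BY id;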
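
A minimal sketch of the kind of schema comparison discussed around 15:00 (and of the ordered column dump wished for at 17:21); this is not the actual T104459 script, and the database name is only an example. Run it on each host and diff the output:

  SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE, COLUMN_KEY
  FROM information_schema.COLUMNS
  WHERE TABLE_SCHEMA = 'labswiki'
  ORDER BY TABLE_NAME, COLUMN_NAME;  -- stable ordering makes missing columns obvious in a diff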
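
On the 17:28 question about indexes duplicating a primary key: an illustrative table (not the actual central-auth.sql definition, all names hypothetical) showing when a secondary key is redundant and when a different column order makes it useful:

  CREATE TABLE example_wiki_users (
    eu_user_id INT UNSIGNED NOT NULL,
    eu_wiki VARBINARY(255) NOT NULL,
    PRIMARY KEY (eu_user_id, eu_wiki),
    KEY eu_user (eu_user_id),          -- redundant: the PK already covers lookups on its leading column
    KEY eu_wiki (eu_wiki, eu_user_id)  -- not redundant: a different leading column serves lookups by wiki
  );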