[09:23:53] during the reimport, db1069 and dbstore1002 will stop their replication multiple times
[09:24:32] ack
[09:25:28] horrible one liner: https://phabricator.wikimedia.org/P2779
[09:26:42] it's a 10 liner ;)
[09:27:54] it got out of hand
[14:06:39] grants on db1011 (tendril db) don't match the local /etc/mysql/production-grants.sql on the host, do we run them manually? I cannot find where in puppet we execute the file :)
[14:07:17] for context, I was trying to add db2008 to tendril monitoring from neodymium
[15:11:37] what is missing?
[15:12:00] access from tendril to db2008?
[15:12:57] ah, yes, I may have missed tendril
[15:13:20] I applied the grants on all masters and forgot tendril is a master
[15:23:51] no access from neodymium
[15:23:56] to add the host to tendril
[15:24:33] but potentially others :)
[15:27:11] please add it, it should be just one of the few missing ones; I focused on production first, not on misc or support, like tendril.
[15:27:34] granting requires a refactoring, the current system is horrible
[15:28:13] something like, maybe not automatic granting, but automatic checking
[15:28:22] I saw it is not optimal, hence I was wondering if we manually apply what is in /etc/mysql/production-grants.sql on the host, which has the grant I needed :)
[17:02:39] the "old hardware" is funny, db2009 is just as old :-)
[17:05:28] lol
[17:05:37] nice https://tendril.wikimedia.org/host/view/db2008.codfw.wmnet/3306
[17:05:52] you are a dream as a partner
[17:07:15] and you are too kind :-P
[17:07:36] I just provide fair feedback
[17:09:14] those should be replaced at some point, but being the redundancy of the redundancy of the redundancy, there is not a lot of pressure
[17:10:32] actually, 8 servers would have to crash (including backups) before those became a problem
[17:11:55] in 2 different DCs
[17:14:32] "WHAT COULD POSSIBLY GO WRONG?"
[17:15:27] * jynus after returning from vacation https://45.media.tumblr.com/6e485dba7dc16743c769f850c19ac0f6/tumblr_nqofbeQLOy1twltp9o1_250.gif
[17:15:58] rotfl! hopefully not... :D
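
Below is a minimal sketch, in Python, of the "automatic checking" idea floated at 15:28: diff the GRANT statements a MySQL server actually reports against the statements in the host's /etc/mysql/production-grants.sql. This is not the actual WMF tooling; the connection parameters are assumptions, and the naive exact-string comparison glosses over the normalization (backtick quoting, privilege ordering, password hashes) a real checker would need.

```python
#!/usr/bin/env python3
# Hypothetical grant checker, not the real production tooling: report
# GRANT statements present in the grants file but missing on the server.
import pymysql

GRANTS_FILE = "/etc/mysql/production-grants.sql"  # path from the log above


def live_grants(conn):
    """Collect SHOW GRANTS output for every account on the server."""
    grants = set()
    with conn.cursor() as cur:
        cur.execute("SELECT user, host FROM mysql.user")
        for user, host in cur.fetchall():
            # pymysql quotes the interpolated strings, yielding
            # SHOW GRANTS FOR 'user'@'host', which is valid MySQL.
            cur.execute("SHOW GRANTS FOR %s@%s", (user, host))
            for (stmt,) in cur.fetchall():
                grants.add(stmt.rstrip(";"))
    return grants


def file_grants(path):
    """Naive line-based parse: assumes one GRANT statement per line."""
    grants = set()
    with open(path) as f:
        for line in f:
            line = line.strip().rstrip(";")
            if line.upper().startswith("GRANT"):
                grants.add(line)
    return grants


def main():
    # Connection details are assumptions; credentials would normally
    # come from a defaults file such as /root/.my.cnf.
    conn = pymysql.connect(host="localhost",
                           read_default_file="/root/.my.cnf")
    for stmt in sorted(file_grants(GRANTS_FILE) - live_grants(conn)):
        print("MISSING:", stmt)


if __name__ == "__main__":
    main()
```

As for the 14:06 question, applying the file by hand is presumably just a matter of feeding it to the client on the host, e.g. `mysql < /etc/mysql/production-grants.sql`; the discussion suggests puppet only ships the file and does not execute it.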