[09:05:50] es2001-es2004 will be decommissioned too?
[09:07:41] well, at least in software
[09:08:11] (salt, puppet, dns, icinga)
[09:08:53] ok
[09:09:20] the final word will be from datacenter ops/provisioning
[09:09:56] that was just about depooling vs. removing from the file; I'll remove them
[09:11:13] what do you mean, removing from the file?
[09:11:31] db-codfw.php?
[09:11:37] yes
[09:11:45] file a ticket
[09:11:58] there are always things left in puppet and other places
[09:12:29] sure
[09:13:20] I suppose T127330 can be closed now?
[09:14:08] I will close T119056
[09:14:32] oh, you claimed it, close both when you are ready
[09:14:38] with the commit that removes them from db-codfw.php I'll close it, and I'll open a new one for decommissioning
[09:14:46] great
[09:38:10] setting up cross-datacenter master-master replication on es2/3
[09:45:21] ok, do you want me to wait before merging the removal of the old ones? (unrelated, but...) :)
[09:45:46] no need, it was just a "soft !log"
[09:46:33] you do not need to respond - I use this for logging
[14:44:33] jynus: hi! any suggestions for trying to profile which MySQL queries are taking a long time?
[14:45:31] your own queries?
[14:46:26] labs or somewhere else?
[14:46:27] one sec
[14:51:08] jynus: hey... mmm, well, I was thinking of profiling on my local machine, to try to figure out which are the tardy queries for this task: https://phabricator.wikimedia.org/T128869
[14:51:58] However, if there's a way to profile on production, that'd be fun too!
[14:52:28] well, it is not that easy in production, as it has a performance impact
[14:52:40] Yeah, if I can start locally that'd be great
[14:52:50] but it can be done on beta, where performance would not be such an issue
[14:52:51] The code that should be redone on that task is really terrible. I can clean it up, but I want to make sure I'm targeting the right things
[14:52:53] but anyway
[14:52:56] Hmmm
[14:52:59] What would you recommend
[14:53:00] someone showed
[14:53:01] ?
[14:53:06] I would start locally
[14:53:11] K!
[14:53:29] that is easier because you have full control; only go nearer production if you see nothing
[14:53:37] yeah
[14:53:46] so, someone showed what I believe is a general log
[14:54:03] that is ok for writing *ALL* queries to a file
[14:54:10] but it does not do profiling
[14:54:24] the best way is to enable the slow query log
[14:54:33] and set the query time limit to 0 seconds
[14:54:44] I have documentation for that
[14:55:03] See: https://wikitech.wikimedia.org/wiki/MariaDB/query_performance
[14:55:42] however, being non-production, you can have SET GLOBAL log_slow_rate_limit = 1; (or leave it as is)
[14:56:20] that will create one file, /tmp/slow.log, with all queries and their query time, rows read, etc.
[14:56:55] if you need a summary, pt-query-digest from percona-toolkit can give you a nice one:
[14:56:55] jynus: ah fantastic! yeah that's what I need :)
[14:57:07] https://wikitech.wikimedia.org/wiki/MariaDB/query_performance/coreproduction-20151020
[14:57:15] Yeah I did get the general log, but didn't know how to get more
[14:57:19] that should be enough to find the slow queries
[14:57:40] there is another option, but I think that is the easiest way to start
[14:57:48] helpful?
[14:58:16] Yes, excellent!
[14:58:24] nice!
[14:58:27] Thanks a million!!
[14:58:32] :-)
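
A minimal sketch of the cross-datacenter master-master replication mentioned at 09:38:10, assuming MariaDB; the hostnames, replication account, and binlog coordinates below are hypothetical placeholders, not the actual production values:

    -- Prerequisites in my.cnf on both hosts: a distinct server_id,
    -- log_bin enabled, and log_slave_updates if changes must flow on.
    -- On the codfw host, point replication at the eqiad master:
    CHANGE MASTER TO
      MASTER_HOST = 'es1001.eqiad.wmnet',    -- hypothetical hostname
      MASTER_USER = 'repl',                  -- hypothetical account
      MASTER_PASSWORD = '...',
      MASTER_LOG_FILE = 'es1001-bin.000001', -- from SHOW MASTER STATUS
      MASTER_LOG_POS = 4;                    --   on the eqiad master
    START SLAVE;
    -- Repeat symmetrically on the eqiad host, pointing at the codfw
    -- master, to close the master-master loop.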
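
For reference, the general log mentioned around 14:53-14:54 records every statement but no timing data, which is why it cannot be used for profiling; a sketch, assuming MariaDB and a hypothetical output path:

    -- General log: captures *ALL* queries, but without execution times.
    SET GLOBAL general_log = 1;
    SET GLOBAL general_log_file = '/tmp/general.log';  -- hypothetical path
    -- Disable it again when finished:
    SET GLOBAL general_log = 0;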
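
A sketch of the slow-query-log setup discussed from 14:54:24 onwards (see also the wikitech page linked above), assuming MariaDB on a local or beta host; a 0-second threshold logs every query, so it is not something to leave enabled in production:

    -- Log every query together with its query time, rows read, etc.
    SET GLOBAL slow_query_log = 1;
    SET GLOBAL slow_query_log_file = '/tmp/slow.log';
    SET GLOBAL long_query_time = 0;      -- 0 seconds: capture everything
    SET GLOBAL log_slow_rate_limit = 1;  -- no sampling, log every query
    -- ... run the workload you want to profile ...
    SET GLOBAL slow_query_log = 0;       -- stop logging when done

The resulting /tmp/slow.log can then be summarized from a shell with percona-toolkit, e.g. "pt-query-digest /tmp/slow.log", which fingerprints the queries and ranks them by total execution time.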