[09:27:58] I've seen you depooling db1107 4 times this week, is there any issue with it?
[09:28:23] It is a 10.4 host, I am pooling it during the day and depooling it at the end of my day
[09:28:34] I want to be consistent with my email :)
[09:28:44] but no observed issue yet?
[09:28:47] nope
[09:28:50] all good
[09:29:16] ok, I got worried it could have a real issue rather than you just being careful
[09:29:26] no no, all is good
[09:29:37] let me know if you want to pool it for longer and I can depool it before I go
[09:29:46] will do, thanks :)
[09:30:00] I believe peak time is around 6
[09:30:25] yeah, not worried about that yet, but about query plans and unexpected issues with MW and GTID for now
[09:30:40] once I am fully sure about that, I will leave it pooled for longer with low weight and all that
[09:30:54] so this is relevant to the mw log conversation
[09:31:00] but if it is only one host
[09:31:06] there is this trick you can use
[09:31:19] which is flushing the p_s stats
[09:31:54] (it should be more reliable than Tendril)
[09:32:21] I am also capturing queries in the log, by enabling it
[09:32:24] and analyzing those
[09:34:12] I found it, sys.ps_truncate_all_tables()
[09:34:28] just giving you more options, there are many ways to do the same :-D
[09:34:36] sure :-)
[09:34:36] thanks
[09:41:32] interesting dashboard: more QPS doesn't necessarily mean higher latency: https://icinga.wikimedia.org/cgi-bin/icinga/status.cgi?search_string=MariaDB+read+only+s8
[09:42:29] we are mixing different kinds of traffic there too, so maybe less QPS but heavier queries
[09:42:34] yeah
[09:42:50] I was worried at first when I saw a host with 30K QPS
[09:43:01] it only does main, db1126
[09:43:03] but it turns out in some cases it is the least loaded one
[09:43:11] api is probably meaner to mysql
[12:35:08] I've just seen db1087 is at 88% /srv usage, but cannot see why it is different from the other hosts
[12:37:03] I am guessing row format
[12:58:32] db1087 is scheduled to be compressed, starting as soon as I am done with another alter on db1099
[12:59:43] sorry, I didn't mean to stress you, I just didn't know at first why
[13:00:05] I am surprised wikidata is now 3.2TB uncompressed
[13:00:48] so my "backing up 2TB databases uncompressed" is already outdated
[13:00:59] :-(
[13:04:27] most of the increase in growth rate happened during November
[13:23:47] https://i.imgur.com/aYYFcqQ.jpg
[13:24:08] XDD
[16:23:03] working to resolve a 2011 bug now :-D
[19:04:00] DBA, Cloud-Services: Prepare and check storage layer for ngwikimedia - https://phabricator.wikimedia.org/T240772 (Urbanecm) @Marostegui I've created the database now!
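
For reference, a minimal sketch of the performance_schema reset mentioned at 09:31–09:34, assuming the sys schema is available on the host (it ships with MySQL 5.7+ but is not bundled with MariaDB 10.4, so it would have been loaded separately):

```sql
-- Reset all performance_schema summary tables so that statement statistics
-- start accumulating from a clean slate on the freshly pooled host.
-- FALSE only suppresses per-table progress output; it does not change what is truncated.
CALL sys.ps_truncate_all_tables(FALSE);

-- After the reset, the digest summary reflects only post-reset traffic,
-- which makes it easier to compare query plans and load between hosts.
SELECT DIGEST_TEXT,
       COUNT_STAR,
       SUM_TIMER_WAIT / 1e12 AS total_latency_seconds   -- timers are in picoseconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 20;
```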
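The query capture mentioned at 09:32 ("capturing queries in the log, by enabling it") is not spelled out further; a sketch assuming the general query log is the one being toggled, with a placeholder file path:

```sql
-- Write the general query log to a file while sampling traffic on the host.
-- It adds noticeable overhead, so it is only left on for the capture window.
-- The path below is illustrative, not the one used in production.
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log_file = '/srv/tmp/query-capture.log';
SET GLOBAL general_log = 'ON';

-- ... let it run for the sampling window, then switch it back off ...
SET GLOBAL general_log = 'OFF';
```

The slow query log with long_query_time set to 0 would capture the same traffic with per-query timing attached; the log does not say which of the two was used.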
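The compression planned for db1087 (12:58) is, per the guess at 12:37, a row-format change. A representative sketch of such an alter; the table name, block size, and rollout order are illustrative only and not stated in the log:

```sql
-- Rebuild one large InnoDB table with compressed row format
-- (requires innodb_file_per_table; block size shown is just an example).
ALTER TABLE wikidatawiki.wb_terms
  ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```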