[20:10:19] jynus: any idea what might be up with https://logstash.wikimedia.org/#dashboard/temp/AVCq7AUwptxhN1Xava-q ?
[20:27:18] https://phabricator.wikimedia.org/P2247
[20:29:24] https://phabricator.wikimedia.org/T116557 is the bug for tracking this btw
[20:30:40] the inner query is trivial (limit/order) and the outer one only acts on those 100 rows
[20:32:26] at some point it must choose a poor plan
[20:34:23] it would be nice to have some workaround in place since this method is called on page save
[20:35:23] oh, but this is only happening 15 times an hour, right?
[20:35:30] every 12 hours
[20:35:56] I am saying this because I had a long day today
[20:36:22] I will start with this tomorrow, and another query by hoo
[20:36:33] it is getting late here
[20:36:52] these are the kind of things that performance_schema comes in very handy for
[20:36:56] No worries
[20:37:03] didn't see any user reports about it yet
[20:37:24] I really try; it is just that something else happens that is more urgent and I have to do it
[20:37:38] like an outage or something more urgent
[20:37:58] I also wonder if the method is slower than it should be in other cases (it takes a small but decent sliver of attemptSave at https://performance.wikimedia.org/xenon/svgs/daily/2015-10-24.index.svgz)
[20:38:30] jynus: how is the dba search going?
[20:39:07] at least our traffic/dba ratio is impressive ;)
[20:39:07] very well, but it takes even more time now :-)
[20:39:21] doing the interviews and so on
[20:39:49] funny thing is that performance and query optimization is my favourite topic
[20:40:07] but I am very behind because of infrastructure, setting up new servers, etc.
[20:40:41] where should I look on the graph?
[20:42:46] AbuseFilterParser::intEval
[20:42:56] I wish there was a way to link to specific levels via # or something
[20:43:13] yeah...
[20:43:58] but this makes no sense
[20:44:06] otherwise it's like performaction => submitaction => AbuseFilterParser::intEval
[20:44:09] what you have linked me is api calls
[20:44:48] does this only happen on api edits, or do all edits go through the api?
[20:44:49] it's like map instructions defined by landmarks (the big tree, the pond, the area with the pelicans)
[20:45:13] the index.php graph I linked was all non-API
[20:45:29] that one has a different graph... I should probably look at that too
[20:47:42] yes, the kibana span that you linked me only references api calls to api-only dbs
[20:48:13] either the others are not in the logs you sent me or they are not registered in the logs
[20:48:50] I do not know how much latency we are talking about
[20:54:11] it's around 1/5-1/6 of abusefilter hook time. Not the end of the world, but worth looking into.
[20:54:38] * AaronSchulz kind of wishes that extension didn't exist
[20:54:48] iirc even Werdna recognised that it sucked hideously
[20:55:31] https://tendril.wikimedia.org/report/slow_queries_checksum?checksum=0c7fc16ce171a09802a0281ca9aa95f1&host=^db&user=wikiuser&schema=wik&hours=24
[20:55:39] It is a new query, it seems
[20:58:27] it's been there in some form for years, but changed a few times for performance reasons
[20:59:18] so there are 2 things here
[20:59:35] on one side, api call timeouts
[20:59:59] on the other side, longer save times
[21:00:08] is that right?
[21:01:16] you are mostly concerned about the latter, right?
[21:03:42] if it is reproducible, it should be easy to debug
[21:36:10] mostly the latter, yeah
[21:36:24] though getting rid of timeouts is always good
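
For context on the [20:30:40] and [20:32:26] messages: the actual query is in the P2247 paste and is not quoted in this log, so the sketch below only illustrates the pattern being described there, a trivial inner ORDER BY/LIMIT subquery whose ~100 rows feed an outer query, and how one would inspect the chosen plan with EXPLAIN. All table and column names are placeholders, not the real schema.

```sql
-- Hypothetical sketch of the pattern discussed above; the real query is in P2247.
-- change_log, log_id and log_timestamp are placeholder names.
EXPLAIN
SELECT cl.*
FROM (
    SELECT log_id
    FROM change_log
    ORDER BY log_timestamp DESC   -- inner query: trivial order/limit
    LIMIT 100
) AS latest
JOIN change_log AS cl ON cl.log_id = latest.log_id;  -- outer query: only those 100 rows
-- If the optimizer stops using the index on log_timestamp for the derived table,
-- or stops using the primary key for the join, the row estimates in the EXPLAIN
-- output are the first place the "poor plan" shows up.
```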
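
For the [20:36:52] remark about performance_schema, the query below is a minimal sketch of the kind of lookup it enables, assuming MySQL/MariaDB 5.6+ with the statements-digest consumer enabled: per-digest latency aggregates, which is how a newly slow query like the one in the tendril report above tends to surface.

```sql
-- Top statement digests by total latency. Timer columns are in picoseconds,
-- hence the division to convert to seconds.
SELECT SCHEMA_NAME,
       DIGEST_TEXT,
       COUNT_STAR                      AS executions,
       ROUND(SUM_TIMER_WAIT / 1e12, 2) AS total_latency_s,
       ROUND(MAX_TIMER_WAIT / 1e12, 2) AS worst_latency_s
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;
```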