[12:58:34] JohnFLewis: https://phabricator.wikimedia.org/T102915#1538923 won't be slow, right?
[12:58:43] oh sorry
[12:58:45] jynus: ^
[12:58:46] jynus: ^
[12:58:52] :)
[12:59:15] with gu_name >= instead of =
[13:13:45] But >= is what caused the issue in the first place
[13:15:26] it's a form to search for names
[13:15:33] what can >= be replaced with?
[13:16:09] imagine there are 2 users
[13:16:24] Jaime and Jaime2
[13:16:37] I type "Jaime"
[13:17:15] what results do you expect on that for, assuming one is attached and the other isn't
[13:17:19] *form
[13:23:26] Jaime and Jaime2 listed
[13:23:41] well, now it doesn't
[13:23:50] I just submitted a patch
[13:23:53] It does SELECT gu_name,MAX(gu_id) AS gu_id,MAX(gu_locked) AS gu_locked,MAX(lu_attached_method) AS lu_attached_method,GROUP_CONCAT(gug_group SEPARATOR '|') AS gug_group FROM `globaluser` LEFT JOIN `localuser` ON ((gu_name = lu_name) AND lu_wiki = 'testwiki') LEFT JOIN `global_user_groups` ON ((gu_id = gug_user)) WHERE gu_hidden = '' AND (gu_name >= 'Glaisher') GROUP BY gu_name ORDER BY gu_name LIMIT 51
[13:24:09] I also looked at it on the labs replica
[13:24:13] and it seems to work ok
[13:24:19] https://gerrit.wikimedia.org/r/232492
[13:24:35] yes, the order should fix it
[13:24:56] jynus: it won't be slow on production, right?
[13:25:24] anyway, I have to go afk now
[13:25:31] will look at the scrollback when I return
[13:25:35] let me check the explain
[13:29:57] It does a full table scan looking for half the table, although in index order
[13:32:26] oh, no, explain is tricking us
[13:32:53] Handler_read_key = 105, so it is nice
[13:33:23] 51 rows in 0.24 seconds
[13:36:33] actually on the client side, we have even improved overall performance because of the saved roundtrip
[14:02:42] for the groups?
[14:03:06] yep
[14:03:45] for small queries it sometimes takes more time for the network trip than to access the rows (in memory)
[14:07:52] Glaisher, where is the code disallowing | ?
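[Editor's note: the query pattern discussed above, sketched as a minimal, runnable illustration. SQLite stands in for MySQL, and the toy schema and data are invented for the example; only the table/column names follow the CentralAuth query. The point is why `gu_name >= ?` is used instead of `=` (so "Jaime" also lists "Jaime2" and the predicate doubles as a pager offset under ORDER BY), and how GROUP_CONCAT folds group rows into one result, saving a roundtrip.]

```python
import sqlite3

# Toy schema mimicking the query's tables (data invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE globaluser (gu_id INTEGER, gu_name TEXT, gu_hidden TEXT DEFAULT '');
    CREATE TABLE global_user_groups (gug_user INTEGER, gug_group TEXT);
    INSERT INTO globaluser VALUES (1, 'Jaime', ''), (2, 'Jaime2', ''), (3, 'Zoe', '');
    INSERT INTO global_user_groups VALUES (1, 'steward'), (1, 'sysadmin');
""")

# gu_name >= ? (not =) so searching "Jaime" also returns Jaime2; with
# ORDER BY gu_name the same predicate works as a pager offset, and
# GROUP_CONCAT returns the groups in the same row, saving a second query.
rows = conn.execute("""
    SELECT gu_name, GROUP_CONCAT(gug_group, '|') AS gug_group
    FROM globaluser
    LEFT JOIN global_user_groups ON gu_id = gug_user
    WHERE gu_hidden = '' AND gu_name >= ?
    GROUP BY gu_name
    ORDER BY gu_name
    LIMIT 51
""", ("Jaime",)).fetchall()
# rows: Jaime (with its pipe-joined groups), Jaime2, Zoe; users with no
# groups get NULL for gug_group from the LEFT JOIN.
```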
[14:09:55] if that is problematic, you can always do a second query with a join
[14:10:31] I initially added the group_concat for debugging
[14:11:32] Running that query on me returns at least 8 accounts that belong to me :/
[14:11:51] is that good or bad?
[14:12:02] Well, generally you should only have 1
[14:12:06] (half of which I don't have the passwords to)
[14:12:39] normal account, (WMF) account, (Public) account, Bot account. Then I also have Test, broken, rename request and EchoTest
[14:13:12] oh, I had created several accounts with insults by vandals
[14:13:22] when I was an eswiki admin
[14:13:28] pure fun
[15:34:04] jynus: sometimes it'll also have gug_group in conditions... will that make it any slower?
[15:34:56] Krenair: Global group names go through Title::newFromTitle() in the form
[15:35:06] and | is not an allowed character for Titles
[15:35:22] you could try testing it on Special:GlobalGroupPermissions
[15:36:53] Glaisher, sorry, it's hard to switch between tasks, can you give me a quick example?
[15:37:08] just a sec
[15:37:12] sure
[15:37:48] use phab paste or the gerrit issue itself
[15:39:49] http://fpaste.org/256790/99877414/
[15:39:50] um
[15:39:55] I already used that :P
[15:40:12] jynus: that one has several variations of the query which it can run
[15:41:05] so it is the same one but with extra filtering on gug_group
[15:41:06] also, it could do ORDER BY DESCs too
[15:41:10] yep
[15:41:27] Glaisher, invalid name given, okay then
[15:41:32] then no problem; worst case scenario, it wouldn't use an index
[15:41:58] and the non-filtered one is the one that I tested before
[15:42:15] To be fair, I cannot guarantee that nothing will be wrong
[15:42:31] because of edge cases of the optimizer
[15:42:38] but I am pretty confident
[15:43:14] It won't use an gu_name index?
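[Editor's note: a small sketch of why `|` is a safe GROUP_CONCAT separator in the exchange above: group names are validated through Title normalization, which disallows `|`, so splitting the concatenated value can never cut a group name in half. Both functions below are hypothetical stand-ins for illustration, not actual MediaWiki code; the character set checked is only an illustrative subset of Title's rules.]

```python
import re

def is_valid_group_name(name: str) -> bool:
    # Hypothetical stand-in for the Title-based validation mentioned
    # above; checks only an illustrative subset of forbidden characters.
    return bool(name) and not re.search(r'[|#<>\[\]{}]', name)

def split_groups(concatenated: "str | None") -> list:
    # Turn the pipe-joined gug_group value from the query back into a
    # list; NULL (no groups, from the LEFT JOIN) becomes an empty list.
    return concatenated.split('|') if concatenated else []
```

Because every stored name must have passed validation, `split_groups` needs no escaping logic; a separator that can appear inside the values would have required a second query with a join instead.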
[15:43:18] -an
[15:43:57] worst case scenario, it won't use an index on gug_group
[15:44:07] if such a thing exists
[15:44:34] there is one on gug_group
[15:44:47] https://github.com/wikimedia/mediawiki-extensions-CentralAuth/blob/master/central-auth.sql
[15:45:33] global user groups
[15:45:58] which I suppose is a 20-row table or so
[15:46:12] I think it would be more than that...
[15:46:30] but probably less than 100 or so
[15:46:43] that is the same for mysql: constant time
[15:46:54] so do not worry! :-)
[15:47:01] I am manually checking your examples
[15:48:46] the last one takes 1.48 seconds
[15:49:45] but only because of cold buffers: it still read only 1005 keys.
[15:49:50] ah, that's the one with 501 rows
[15:50:09] I think 5000 is the maximum you could use in a Pager
[15:50:18] but people usually won't try that
[15:50:30] if this was a query to be done on every page reload
[15:50:47] I would require it to always be in milliseconds
[15:51:02] but I can assure you we have way worse queries
[15:51:16] and this is the best thing we can do without reducing functionality
[15:51:42] you have my virtual +1
[15:51:43] ok cool
[15:51:51] thanks for all the help :-)
[15:52:07] I am here for exactly that :-)
[15:52:42] ;)
[15:52:45] hope I also have a couple of believers in a stronger sql_mode
[16:21:45] Glaisher, is jenkins broken? :/
[16:21:47] is the mysql install failing on CI?
[16:22:00] looks like it
[16:22:19] ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
[16:22:21] asking in -releng
[16:23:03] yes, I may not be able to help much with that
[16:39:50] Glaisher, the "globalGroups = array()" is now redundant
[16:40:35] ok, will amend that
[16:55:06] Krenair, did you find out what it was, in case it happens again?
[16:55:19] the CI issue?
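[Editor's note: the index check discussed above ("explain is tricking us", `Handler_read_key = 105`), sketched as a runnable illustration. SQLite's `EXPLAIN QUERY PLAN` stands in for MySQL's `EXPLAIN` and handler counters; the schema is a toy one with invented names. The point is that an index on the sorted column lets the `gu_name >= ...` range scan run in index order, so `ORDER BY ... LIMIT` needs no full scan and no sort.]

```python
import sqlite3

# Toy table plus an index on the column used for both the range
# predicate and the ORDER BY (names invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE globaluser (gu_id INTEGER, gu_name TEXT)")
conn.execute("CREATE INDEX gu_name_idx ON globaluser (gu_name)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT gu_name FROM globaluser "
    "WHERE gu_name >= ? ORDER BY gu_name LIMIT 51",
    ("Glaisher",),
).fetchall()
# The last column of each plan row describes the step: it should report
# a SEARCH using gu_name_idx rather than a SCAN of the whole table,
# because the index is already in gu_name order.
```

On MySQL the analogous check is `EXPLAIN` plus the `Handler_read_key` status counter, which is what revealed above that only ~105 keys were actually read despite the scary-looking plan.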
[16:55:23] yes
[16:56:15] I don't fully understand it; something to do with elasticsearch and puppet
[16:56:39] The releng guys seem to, and have run into it in the past
[16:56:49] so, "ask releng" is the answer :-)
[16:57:29] "apt-get install --reinstall elasticsearch" and a manual puppet run
[16:57:52] on the integration-slave-precise-* machines
[16:58:39] thanks; whenever the error has mysql in it, I get asked, that is why I was interested
[16:59:37] some details in http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-releng/20150819.txt
[17:00:14] and the day before apparently