[00:56:58] notconfusing: https://wikitech.wikimedia.org/wiki/User:Legoktm/pywikibot_on_tools_lab
[02:33:52] [bz] (RESOLVED - created by: Maarten Dammers, priority: Immediate - critical) [Bug 54847] Data leakage user table "new" databases like wikidatawiki_p and the wikivoyage databases - https://bugzilla.wikimedia.org/show_bug.cgi?id=54847
[04:35:10] @hmel
[04:35:16] @help
[04:35:17] I am running http://meta.wikimedia.org/wiki/WM-Bot version wikimedia bot v. 1.20.2.0 my source code is licensed under GPL and located at https://github.com/benapetr/wikimedia-bot I will be very happy if you fix my bugs or implement new features
[04:37:31] @labs-user Gabrielchihonglee
[04:37:31] Gabrielchihonglee is member of 1 projects: Bastion,
[05:21:10] @labs-user gry
[05:21:11] That user is not a member of any project
[05:21:14] @labs-user Gryllida
[05:21:14] Gryllida is member of 2 projects: Bastion, Tools,
[07:33:29] @labs-user rschen7754
[07:33:29] That user is not a member of any project
[07:33:35] @labs-user Rschen7754
[07:33:35] Rschen7754 is member of 3 projects: Bastion, Bots, Tools,
[07:33:42] case sensitive :S
[08:13:41] @labs-user Hym411
[08:13:41] Hym411 is member of 1 projects: Bastion,
[08:13:45] :P
[10:25:19] (PS1) Nemo bis: Add gerritfeed-wm for a full feed relayed to IRC [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/87663
[10:28:55] (PS1) Nemo bis: Move TwnMainPage too to #mediawiki-i18n [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/87664
[10:30:19] (PS2) Nemo bis: Move TwnMainPage too to #mediawiki-i18n [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/87664
[10:39:31] (CR) Yuvipanda: [C: 2 V: 2] Move TwnMainPage too to #mediawiki-i18n [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/87664 (owner: Nemo bis)
[13:18:58] @replag
[13:18:58] Replication lag is approximately 1.05:29:11.3349390
[13:24:45] Coren: dunno what you were talking about yesterday, but 20131004074947 isn't "darn up to date"
[13:24:45] MariaDB [enwiki_p]> SELECT UNIX_TIMESTAMP() - UNIX_TIMESTAMP(rev_timestamp) FROM revision ORDER BY rev_timestamp DESC LIMIT 1;
[13:24:46] 106480.000000
[13:25:45] Earwig: That's YYYYMMDDHHMMSS, not a "real" timestamp. :-)
[13:26:22] so? UNIX_TIMESTAMP converts it fine.
[13:27:38] It does, my calculation skillz were, clearly, at 0.
[13:27:45] * Coren facepalms.
[13:27:48] it's okay
[13:28:24] * Coren goes check.
[13:29:48] Hmm. The slaves are okay, it's the master not sending. Looks like Sean turned replication off while he was doing the fixes and didn't turn it back on.
[13:29:55] ah hah
[13:29:57] How uncouth.
[13:30:13] * valhallasw always gets itchy when 'approximately' and numbers with a lot of decimals are combined
[13:30:32] I'm surprised more people haven't noticed this. Thought I was going crazy.
[13:35:55] Hm. The master state isn't the the short list of "things in mysql replication I am confident I can fix without breaking it". Lemme poke Sean.
[13:53:36] Sean poked.
[13:53:36] !ping
[13:53:36] wm-bot seems to be locked up..
[13:53:36] ^ping
[13:53:36] @infp
[13:53:36] @info
[13:53:37] !pong
[13:53:38] http://bots.wmflabs.org/~wm-bot/dump/%23wikimedia-labs.htm
[13:54:17] T13|: Apparently just before its first coffee.
[13:55:05] Really bad lag..
[13:55:07] !coffee wm-bot
[13:55:08] * Coren returns to his.
[14:42:53] T13| you need toooo much coffe :P
[14:43:27] ???
[15:05:20] T13|: coffee + coffe = T13
[15:05:25] :)
[15:10:29] T13|needsApartment
[15:10:38] Wouldn't fit..
[15:23:07] o_O
[17:46:17] Coren, ping
[21:00:43] Coren: you around?
[21:17:23] @replag
[21:17:23] Replication lag is approximately 1.13:27:36.5018670
[21:20:35] database is borked again
[21:26:36] petan: poke
[21:46:01] !petanping
[21:46:11] !pingpetan
[21:46:11] don't say petan: ping EVER!! If you need anything, say petan: , saying just "ping" is totaly useless
[21:46:16] Ha.
[21:58:52] which db server is @replag reporting?
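[Editor's note: the lag check discussed above subtracts the newest replicated revision's timestamp from the current time; MediaWiki stores `rev_timestamp` as a `YYYYMMDDHHMMSS` string (UTC), which MySQL's `UNIX_TIMESTAMP()` converts directly. A minimal Python sketch of the same calculation, with hypothetical helper names:]

```python
from datetime import datetime, timezone

def mw_ts_to_epoch(ts: str) -> int:
    """Convert a MediaWiki YYYYMMDDHHMMSS timestamp (assumed UTC) to Unix epoch seconds."""
    return int(datetime.strptime(ts, "%Y%m%d%H%M%S")
               .replace(tzinfo=timezone.utc)
               .timestamp())

def replication_lag_seconds(latest_rev_ts: str, now_epoch: int) -> int:
    """Lag = wall-clock time minus the timestamp of the newest replicated revision."""
    return now_epoch - mw_ts_to_epoch(latest_rev_ts)
```

[This mirrors the `UNIX_TIMESTAMP() - UNIX_TIMESTAMP(rev_timestamp)` query from the log; the ~106480-second result there corresponds to a bit over a day of lag.]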
[21:58:59] @replag
[21:58:59] Replication lag is approximately 1.14:09:12.4884380
[22:00:28] I reported that s2 replica broke yesterday - https://bugzilla.wikimedia.org/show_bug.cgi?id=54934 , now the issue is spreading to the whole cluster
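[Editor's note: the `1.14:09:12.4884380` figure wm-bot prints appears to be .NET's default TimeSpan rendering, `d.hh:mm:ss.fffffff` (wm-bot is a C# bot). A hedged sketch, with a hypothetical helper name, of converting that format to seconds:]

```python
import re

def timespan_to_seconds(s: str) -> float:
    """Parse a .NET-style TimeSpan string 'd.hh:mm:ss.fffffff' (days optional)
    and return the total duration in seconds."""
    m = re.fullmatch(r"(?:(\d+)\.)?(\d+):(\d+):(\d+(?:\.\d+)?)", s)
    if m is None:
        raise ValueError(f"not a TimeSpan string: {s!r}")
    days, hours, minutes, seconds = m.groups()
    return (int(days or 0) * 86400
            + int(hours) * 3600
            + int(minutes) * 60
            + float(seconds))
```

[So `1.14:09:12.4884380` would be roughly 1 day 14 hours of replication lag, consistent with the bug report above.]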