[01:47:57] 06Labs, 10Labs-Infrastructure, 06Operations: investigate slapd memory leak - https://phabricator.wikimedia.org/T130593#2226160 (10Andrew) Seaborgium went OOM briefly just now and threw some errors.
[01:54:11] 06Labs, 10Labs-Infrastructure, 06Operations: investigate slapd memory leak - https://phabricator.wikimedia.org/T130593#2140296 (10Dzahn) seems like we had one again today. serpens and seaborgium were reported by Icinga as having various issues. then serpens recovered itself. seaborgium showed puppet errors....
[01:58:11] Hi. Is there anyone around willing to help me figure out why a script isn't finding the shared pywikibot? (Please excuse the labs noob.)
[02:12:56] 06Labs: archive_userindex is not filled in eswiki_p - https://phabricator.wikimedia.org/T133251#2226170 (10Superzerocool)
[02:15:30] 06Labs, 10Tool-Labs, 06Community-Tech-Tool-Labs, 10Diffusion: Create application to manage Diffusion repositories for a Tool Labs project - https://phabricator.wikimedia.org/T133252#2226182 (10bd808)
[02:23:14] 06Labs, 10Horizon, 07Tracking: Make OpenStack Horizon useful for production labs - https://phabricator.wikimedia.org/T87279#2226198 (10bd808)
[02:50:11] 06Labs, 10Tool-Labs, 06Community-Tech-Tool-Labs, 10Diffusion: Create application to manage Diffusion repositories for a Tool Labs project - https://phabricator.wikimedia.org/T133252#2226204 (10bd808) As I see it, there are 4 options for how to build this application: * Add this functionality to wikitech as...
[02:52:55] 06Labs, 10DBA: archive/archive_userindex is not filled in eswiki_p - https://phabricator.wikimedia.org/T133251#2226209 (10Krenair)
[02:53:20] 06Labs, 10DBA: archive/archive_userindex is not filled in eswiki_p - https://phabricator.wikimedia.org/T133251#2226170 (10Krenair) it's not just _userindex, the actual archive table has the same problem
[03:05:46] 06Labs, 10Tool-Labs, 06Community-Tech-Tool-Labs, 10Diffusion: Create application to manage Diffusion repositories for a Tool Labs project - https://phabricator.wikimedia.org/T133252#2226238 (10bd808) Assuming that a standalone application is chosen, python and django would be a reasonable choice for the de...
[03:06:14] 06Labs, 10Tool-Labs, 06Community-Tech-Tool-Labs, 10Diffusion, 15User-bd808: Create application to manage Diffusion repositories for a Tool Labs project - https://phabricator.wikimedia.org/T133252#2226241 (10bd808) a:03bd808
[05:09:03] RECOVERY - Puppet run on tools-exec-1219 is OK: OK: Less than 1.00% above the threshold [0.0]
[07:33:18] 06Labs: cronspam from labscontrol1001, labstore1001, labnet1002.eqiad.wmnet, labsdb1003.eqiad.wmnet - https://phabricator.wikimedia.org/T132422#2226451 (10elukey)
[09:27:39] PROBLEM - Puppet staleness on tools-bastion-10 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [43200.0]
[12:13:32] Reminder: The wikis will not be editable for ~30 minutes in a couple of hours.
[12:13:35] https://meta.wikimedia.org/wiki/Tech/Server_switch_2016
[13:17:32] RECOVERY - Puppet staleness on tools-bastion-10 is OK: OK: Less than 1.00% above the threshold [3600.0]
[14:26:52] andrewbogott, I have a question. What could cause the CPU usage by the system to unexpectedly go to, and stay at, 30%?
[14:27:29] Cyberpower678: Use the idiot method: reboot the machine :D
[14:27:53] Luke081515, I know that, but I want to know how to figure out the cause, so I can fix it.
[14:28:01] ok :)
[14:28:39] I guess a running job started by cron or other maybe?
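Before reaching for a reboot, the user/system CPU split and the busiest processes can be read straight off the instance. A minimal sketch of that check, assuming ordinary shell access to the Labs instance; nothing here is specific to the cyberbot project:

    # user ("us") vs. system ("sy") CPU, sampled three times at 5-second intervals
    vmstat 5 3

    # one-shot snapshot of the busiest processes (top sorts by %CPU by default)
    top -b -n 1 | head -n 20

    # the same view via ps, including which user owns each process
    ps -eo pid,user,pcpu,pmem,comm --sort=-pcpu | head -n 15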
[14:28:42] Cyberpower678: no idea, other than 'something is using the CPU'
[14:28:50] 'top' should show you what's happening
[14:29:50] andrewbogott: What was the linux command to kill a process? I only remember windows :-/
[14:30:03] 'kill'
[14:30:06] ah ok
[14:30:08] thx
[14:30:25] Luke081515, Stuff run from the cron indicates user usage.
[14:30:44] But I also have an additional 30% from the system being used.
[14:30:58] grafana shows 19% user, 15% system
[14:31:00] I thought it would eventually die down, but it isn't.
[14:31:01] https://grafana.wikimedia.org/dashboard/db/labs-project-board?var-project=rcm&var-server=All&theme=dark
[14:31:18] hm, ok, 45%, 30%
[14:31:21] wrong graph
[14:31:24] Look at Cyberbot project
[14:31:44] yeah, I selected it, but the link didn't update
[14:32:05] https://grafana.wikimedia.org/dashboard/db/labs-project-board?var-project=cyberbot&var-server=All&theme=dark
[14:32:14] is the correct link
[14:32:40] I would take a look with top
[14:32:48] if you find nothing, reboot the instance
[14:34:07] Luke081515, can I assume root is the "system"?
[14:34:15] yeah
[14:34:28] don't kill the wrong things ;)
[14:34:58] The heaviest root application currently running is the scheduler, at a whopping 2%
[14:35:10] Nothing else.
[14:35:14] hm
[14:36:01] I do note that one of the PHP tasks is skyrocketing the CPU to 100%
[14:36:07] Momentarily
[14:36:10] Cyberpower678: Did you scroll down?
[14:36:21] But that would show up as user usage.
[14:36:53] There's nothing to scroll
[14:37:48] Cyberpower678: http://tools.wmflabs.org/nagf/?project=cyberbot <= weird
[14:38:15] zhuyifei1999_: use https://grafana.wikimedia.org/dashboard/db/labs-project-board?var-project=cyberbot&var-server=All&theme=dark
[14:38:37] yeah ik, but I'm not so used to grafana :P
[14:39:16] I killed the heavy PHP scripts, whatever they were.
[14:39:25] Too bad I can't see which scripts those were.
[14:40:03] Cyberpower678: try top, get the PID then ps -Af?
[14:40:20] seems like usage goes down
[14:40:21] actually
[14:41:12] Ummm...
[14:41:13] Dashboard init failed
[14:41:13] have to go now, good luck ceroning your instance, Cyberpower ;)
[14:41:13] Template variables could not be initialized: Cannot read property 'message' of null
[14:41:14] bye
[14:41:20] try reload
[14:41:23] I did
[14:41:24] bye
[14:41:26] hm
[14:41:30] Grafana just broke
[14:41:32] bye
[15:07:22] so I rebooted, and the problem is just as bad or even worse now.
[15:07:28] andrewbogott, zhuyifei1999_ ^
[15:10:04] Cyberpower678: this is on a private instance, right? Just your stuff running there?
[15:10:26] yes.
[15:10:39] I'm going to try something real quick
[15:12:21] 06Labs, 10Tool-Labs: Web requests fail after a period of time - https://phabricator.wikimedia.org/T133090#2227439 (10Nettrom) I haven't seen many 503s over the past couple of days, so I regard this particular task as resolved. At the same time, I logged 71 restarts of the web service yesterday (see below), so...
[15:20:37] andrewbogott, there we go. I got it down now
[15:25:29] so what was the troublemaker?
[15:25:39] andrewbogott, so one of my scripts was completely out of control. I patched it
[15:25:49] Cyberpower678: "system" usage means the kernel doing things. Generally that's either disk/network io or waiting on io. If you are for instance using multicurl you will see lots of system as it waits on the requests
[15:26:40] bd808, I figured. The script was just running wild, and persistently flooding the log needlessly.
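Once top has turned up a suspect PID, its full command line and disk activity can be checked before it is killed, which is what points at log flooding as the culprit in a case like this. A hedged sketch, assuming 12345 stands in for the real PID and that the iotop package happens to be available on the instance:

    PID=12345                      # hypothetical PID taken from top

    ps -Af | grep "$PID"           # full command line, as suggested above
    cat /proc/$PID/io              # cumulative bytes read/written by that process

    # live per-process disk IO (needs sudo and the iotop package)
    sudo iotop -o -b -n 3

    # terminate gently first, then forcefully if it is still alive after 10s
    kill "$PID"
    sleep 10
    kill -0 "$PID" 2>/dev/null && kill -9 "$PID"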
[15:26:56] I'm assuming the massive logging is what spiked the system usage.
[15:27:10] *nod* probably lots of disk io
[15:27:39] Anyways the patch freed up 50% CPU
[15:27:58] :D
[15:28:38] moar patches like that one :)
[15:28:50] wow
[15:29:17] * zhuyifei1999_ never thought disk io used up so much cpu
[15:29:19] bd808, I wish
[15:29:45] * Cyberpower678 patches his scripts to use 0 CPU
[15:30:09] CPU? They don't run on CPU. They run on magic. :p
[15:30:27] o.O
[15:31:01] zhuyifei1999_, if you look on the IABot instance for Cyberbot, there are actually bots running on it.
[15:32:31] ~0 ram ~0 cpu usage
[15:32:47] I've built IABot to be extremely efficient
[15:33:00] It's the best PHP bot I've written so far.
[15:33:34] And with that, we will have plenty of resources to globalize this bot.
[15:34:08] just wondering, I heard the bot uses a lot of database queries. What else makes it efficient?
[15:34:29] Oh yea. That's one problem I'm still trying to address.
[15:35:31] So the bot is efficient in that, if the links already exist in the DB, Cyberbot's API queries per article are only 1 or 2.
[15:36:21] Cyberbot's heaviest API operation is the binary scan of the page history over the API, to get access times of URLs.
[15:36:48] Which is usually slow and heavy. So Cyberbot saves that information in the DB for future use.
[15:36:48] so there's like multiple workers doing different tasks and sharing data in db?
[15:36:58] Yes
[15:38:17] As well as its dead state, archive URLs being used for it, and other information that would otherwise require several cURL requests. By caching data like that, network bandwidth is kept down too, leaving it free for the checkIfDead class.
[15:38:45] Which limits its cURLing to HEAD requests, to keep mem usage and network bandwidth down.
[15:39:02] But we have discovered some issues with that method.
[15:41:19] you mean when a url is used on two pages, the first page gets whether the url still exists or not, and on the second page the cached results are used directly?
[15:41:35] In a manner yes.
[16:08:51] 06Labs, 10Tool-Labs: Web requests fail after a period of time - https://phabricator.wikimedia.org/T133090#2227623 (10bd808) >>! In T133090#2227439, @Nettrom wrote: > I haven't seen many 503s over the past couple of days, so I regard this particular task as resolved. At the same time, I logged 71 restarts of th...
[16:46:20] 06Labs, 10Tool-Labs: Web requests fail after a period of time - https://phabricator.wikimedia.org/T133090#2227801 (10Nettrom) There are 71 "server stopped" patterns in ~/error.log as well. All of them have timestamps within 6–12s (mean 7.89s) after each of the timestamps in the service log. So yes, it's the sa...
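For context on the checkIfDead approach described above: restricting the checker to HEAD requests means no response bodies are downloaded, which is what keeps memory and bandwidth low. The sketch below only illustrates that idea in shell, it is not IABot's actual PHP implementation, and the URL is a placeholder:

    url="https://example.org/some/page"   # placeholder

    # -I sends a HEAD request, -L follows redirects, -o /dev/null discards
    # the headers, and -w prints only the final HTTP status code.
    status=$(curl -sIL -o /dev/null -w '%{http_code}' --max-time 30 "$url")

    case "$status" in
        2*|3*) echo "$url looks alive (HTTP $status)" ;;
        000)   echo "$url did not respond (timeout or DNS failure)" ;;
        *)     echo "$url looks dead (HTTP $status)" ;;
    esac

Some servers answer HEAD requests incorrectly even though a GET would succeed, which is consistent with the issues mentioned above; falling back to a GET for suspicious results would be the obvious refinement.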
[17:20:55] PROBLEM - Host tools-worker-1011 is DOWN: PING CRITICAL - Packet loss = 100%
[17:59:32] 06Labs, 10Beta-Cluster-Infrastructure, 07Puppet: /etc/puppet/puppet.conf keeps getting double content - first for labs-wide puppetmaster, then for the correct puppetmaster - https://phabricator.wikimedia.org/T132689#2228059 (10Krenair) this just happened twice more :/
[18:01:08] (03CR) 10Merlijn van Deen: [C: 032] Added #wikimedia-interactive [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/284613 (owner: 10Yurik)
[18:03:27] (03Merged) 10jenkins-bot: Added #wikimedia-interactive [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/284613 (owner: 10Yurik)
[18:04:03] !log tools.wikibugs Updated channels.yaml to: 68d22345224b2e64698527a65b21c1a9cba9f84f Added #wikimedia-interactive
[18:04:05] 06Labs, 10Beta-Cluster-Infrastructure, 07Puppet: /etc/puppet/puppet.conf keeps getting double content - first for labs-wide puppetmaster, then for the correct puppetmaster - https://phabricator.wikimedia.org/T132689#2228070 (10hashar) On 21/04/2016 at 19:59, Krenair wrote: > this just happened twice more :/...
[18:04:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikibugs/SAL, Master
[18:05:43] 06Labs, 10Tool-Labs, 10DBA: labsdb1001 short on available space - https://phabricator.wikimedia.org/T132431#2198300 (10RobH) So this just paged again on icinga/sms/irc: PROBLEM - MariaDB disk space on labsdb1001 is CRITICAL: DISK CRITICAL - free space: /srv 179614 MB (5% inode=99%) Just FYI.
[18:06:01] 06Labs, 10Tool-Labs, 10DBA: labsdb1001 short on available space - https://phabricator.wikimedia.org/T132431#2228088 (10RobH) p:05Triage>03High
[18:23:22] 06Labs, 10Tool-Labs, 10DBA: labsdb1001 short on available space - https://phabricator.wikimedia.org/T132431#2228138 (10Cyberpower678) Woah. Why is xtools on the list? It shouldn't be using that much DB space. It should be using almost nothing.
[18:25:30] 06Labs, 10Tool-Labs, 10DBA, 10xTools-on-Labs: `s51187__xtools_tmp` database using 272G on labsdb1001 - https://phabricator.wikimedia.org/T133321#2228140 (10valhallasw)
[18:26:32] 10Tool-Labs-tools-Other, 06Commons, 06Multimedia: Commons API fails (413 error) to upload file within 100MB threshold - https://phabricator.wikimedia.org/T86436#2228157 (10Nemo_bis)
[18:26:40] 10Tool-Labs-tools-Other, 06Commons, 06Multimedia: Commons API fails (413 error) to upload file within 100MB threshold - https://phabricator.wikimedia.org/T86436#968269 (10Nemo_bis) a:05BBlack>03None
[18:26:43] 06Labs, 10Tool-Labs, 10DBA: u3532__ (=marcmiquel) table using 64G on labsdb1001 - https://phabricator.wikimedia.org/T133322#2228158 (10valhallasw)
[18:27:23] 06Labs, 10Tool-Labs, 10DBA: labsdb1001 short on available space - https://phabricator.wikimedia.org/T132431#2228175 (10MusikAnimal) Where does `s51187__xtools_tmp` live? I can't seem to use it with `sql local`: ``` MariaDB [(none)]> USE s51187__xtools_tmp; ERROR 1049 (42000): Unknown database 's51187__xtools...
[18:28:28] 06Labs, 10Tool-Labs, 10DBA: labsdb1001 short on available space - https://phabricator.wikimedia.org/T132431#2198300 (10jcrespo) > Where does s51187__xtools_tmp live? It is the enwiki/s1 host.
[18:30:14] 06Labs, 10Tool-Labs, 10DBA: u2815__old_p (dispenser) database using 20G on labsdb1001 (enwiki) - https://phabricator.wikimedia.org/T133323#2228182 (10valhallasw)
[18:30:23] Unfortunately I don't know who p50380g50816 is...
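The space usage behind these tickets can be checked by the tool account itself by querying information_schema on the replica host. A minimal sketch, assuming a Tool Labs account with the usual ~/replica.my.cnf credentials and that enwiki.labsdb is the right alias for the enwiki/s1 host jcrespo mentions; the database name is the one from T133321:

    mysql --defaults-file="$HOME/replica.my.cnf" -h enwiki.labsdb -e "
        SELECT table_schema,
               COUNT(*) AS tables,
               ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS size_gb
        FROM information_schema.tables
        WHERE table_schema = 's51187__xtools_tmp'
        GROUP BY table_schema;"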
[18:31:44] 06Labs, 10Tool-Labs, 10DBA: s51127__dewiki_lists (merlbot) database using 13G on labsdb1001 (enwiki) - https://phabricator.wikimedia.org/T133325#2228208 (10valhallasw)
[18:32:31] valhallasw`cloud, I'd guess magnus, but that's just a guess
[18:32:37] valhallasw`cloud, thanks for this
[18:32:44] no, it's tsreports
[18:32:53] i.e. me >_<
[18:32:57] 06Labs, 10Tool-Labs, 10DBA: u3532__ (=marcmiquel) table using 64G on labsdb1001 - https://phabricator.wikimedia.org/T133322#2228222 (10marcmiquel) Thanks for the message. I created new files a few days ago, in particular a very big one from the English Wiki. I will free it as soon as I can. I hope next week I...
[18:33:13] if for any reason they cannot be deleted, at least moving them to another host would be enough
[18:33:24] at least until we get more hard
[18:34:19] oh, no, it's not. I was comparing the pXXXXX id, but I should compare the uXXXXX id
[18:38:13] 06Labs, 10Beta-Cluster-Infrastructure, 07Puppet: /etc/puppet/puppet.conf keeps getting double content - first for labs-wide puppetmaster, then for the correct puppetmaster - https://phabricator.wikimedia.org/T132689#2228226 (10Krenair) cache-text04 again, I'll look into it
[18:38:27] valhallasw`cloud: It can take a bit until T133325 gets completed; merl has been inactive since the first of February. But his bot is still running, I guess he needs a big part of the data
[18:38:28] T133325: s51127__dewiki_lists (merlbot) database using 13G on labsdb1001 (enwiki) - https://phabricator.wikimedia.org/T133325
[18:38:41] see https://de.wikipedia.org/w/index.php?limit=100&title=Spezial%3ABeitr%C3%A4ge&contribs=user&target=MerlBot&namespace=&tagfilter=&year=2016&month=-1
[18:38:53] Luke081515: that's ok -- if the data is necessary, it's fine to use the space
[18:39:00] ok :)
[18:41:16] 06Labs, 10Tool-Labs, 10DBA: p50380g50943__cache (???) using 53G on labsdb1001 (enwiki) - https://phabricator.wikimedia.org/T133326#2228235 (10valhallasw)
[18:45:30] 06Labs, 10Tool-Labs, 10DBA: p50380g50943__cache (???) using 53G on labsdb1001 (enwiki) - https://phabricator.wikimedia.org/T133326#2228250 (10valhallasw)
[18:49:30] 06Labs, 10Tool-Labs, 10DBA: p50380g50816__pop_stats (popularpages) using 53G on labsdb1001 (enwiki) - https://phabricator.wikimedia.org/T133326#2228275 (10valhallasw)
[19:01:43] valhallasw`cloud: Are all of those bugs getting assigned to the responsible tool owners?
[19:02:50] andrewbogott: they are all cced
[19:03:01] valhallasw`cloud: thanks!
[19:03:55] 06Labs, 10Tool-Labs, 10DBA, 10xTools-on-Labs: `s51187__xtools_tmp` database using 272G on labsdb1001 - https://phabricator.wikimedia.org/T133321#2228317 (10Matthewrbowker) Over 10,000 tables in that database right now. http://pastebin.com/TpLU4RqH
[19:20:01] PROBLEM - Puppet run on tools-bastion-10 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[19:22:02] ^ Notice: Skipping run of Puppet configuration client; administratively disabled (Reason: 'reason not specified');
[19:24:01] valhallasw`cloud: that's me, sorry, I'll drop a note :)
[19:35:08] RECOVERY - Puppet run on tools-bastion-10 is OK: OK: Less than 1.00% above the threshold [0.0]
[19:36:13] 06Labs, 10Tool-Labs, 10DBA, 10xTools-on-Labs: `s51187__xtools_tmp` database using 272G on labsdb1001 - https://phabricator.wikimedia.org/T133321#2228388 (10Cyberpower678) >>! In T133321#2228317, @Matthewrbowker wrote: > Over 10,000 tables in that database right now. > > http://pastebin.com/TpLU4RqH WTF??...
[20:06:14] valhallasw`cloud: Around?
[20:06:40] multichill: yes
[20:06:44] ish.
[20:06:48] https://tools.wmflabs.org/autodesc/ seems to be a bit brokenish
[20:08:00] * valhallasw`cloud qmod -rj's it
[20:10:16] 06Labs, 10Tool-Labs, 06Community-Tech-Tool-Labs: Collect and display basic metrics for all tools (service groups) - https://phabricator.wikimedia.org/T129630#2228450 (10bd808) One use of this would be proactively monitoring for large databases like the ones that are being looked at in {T132431}.
[20:10:30] valhallasw`cloud: are you doing the log or shall I do it?
[20:10:46] !log tools.autodesc qmod -rj'ed webservice as it was hanging
[20:10:55] * valhallasw`cloud is never sure if people actually read the SAL
[20:11:11] first!
[20:11:43] there's a bug in morebots that makes it not say that it worked on the first write to a new SAL
[20:11:50] RECOVERY - Puppet run on tools-webgrid-lighttpd-1407 is OK: OK: Less than 1.00% above the threshold [0.0]
[20:11:59] I'm used to working in operations so documenting changes is part of my routine.
[20:12:22] and the world thanks you for that multichill :)
[20:12:27] log all the things
[20:12:42] What is the SAL?
[20:12:47] server admin log
[20:12:50] server admin log
[20:13:17] That thing the bot logs to?
[20:13:23] When you do !log
[20:13:35] tom29739: there is a bot that watches for !log ... messages and writes to pages like https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[20:13:53] https://wikitech.wikimedia.org/wiki/Category:SAL
[20:15:06] in this channel you write !log tools.<toolname> and it will be written to /Nova_Resource:Tools.<toolname>/SAL
[20:15:59] I see what you mean.
[20:16:06] And it creates those logs.
[20:31:05] 06Labs, 10Tool-Labs, 10DBA, 10xTools-on-Labs: `s51187__xtools_tmp` database using 272G on labsdb1001 - https://phabricator.wikimedia.org/T133321#2228547 (10MusikAnimal) When we grepped for `xtools_tmp` the things that stood out were `./public_html/autoblock/core` and `./public_html/pages/core` which are bo...
[20:33:00] valhallasw`cloud: ^ should have freed up 200GB or something
[20:33:27] lol, well, someone really called some binary 'core' ?
[20:33:47] yeah I dunno hah
[20:34:28] MusikAnimal: 'core' is a core dump
[20:34:42] (Process was killed and memory dumped)
[20:34:51] dah
[20:34:57] Thanks for freeing up that space!
[20:35:02] no problem
[20:35:29] So probably php processes in this case? Not sure
[20:36:41] so you're saying when these php processes die they are dumped into these core files?
[20:36:51] xtools is restarted maybe 6-8 times a day
[20:37:10] not sure if that has anything to do with it
[20:37:36] a core dump is generated by the linux kernel when a process crashes with a segmentation fault or similar exceptional event
[20:38:09] it's a copy of the process memory at the time the crash occurred
[20:38:19] right
[20:38:33] a tool like gdb can be used to inspect what was happening then
[20:39:00] that being said, we should probably have core dumps disabled on the grid normally
[20:44:06] 06Labs, 10Tool-Labs, 10DBA, 10xTools-on-Labs: `s51187__xtools_tmp` database using 272G on labsdb1001 - https://phabricator.wikimedia.org/T133321#2228645 (10bd808) >>! In T133321#2228317, @Matthewrbowker wrote: > Over 10,000 tables in that database right now. > > http://pastebin.com/TpLU4RqH Obligatory [[...
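Following up on the 'core' files and valhallasw's explanation above, the sketch below shows how such dumps can be located, identified, and suppressed. It assumes gdb is installed on the host and that /usr/bin/php was the crashing binary (both assumptions); the paths under public_html are the ones named in T133321:

    # find large core dumps under the tool's home directory
    find ~ -type f -name core -size +10M -exec ls -lh {} \;

    # confirm which binary produced a dump, then look at the crash backtrace
    file ~/public_html/pages/core
    gdb -batch -ex bt /usr/bin/php ~/public_html/pages/core

    # stop future dumps for anything started from this shell or job script
    ulimit -c 0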
[20:53:09] 06Labs, 10Tool-Labs, 10DBA, 10xTools-on-Labs: `s51187__xtools_tmp` database using 272G on labsdb1001 - https://phabricator.wikimedia.org/T133321#2228695 (10MusikAnimal) Thanks for the info :) Looks like it's the counter that's creating the tables: https://github.com/x-tools/xtools/blob/master/modules/Count...
[20:58:19] 06Labs, 10Tool-Labs, 10DBA, 10xTools-on-Labs: `s51187__xtools_tmp` database using 272G on labsdb1001 - https://phabricator.wikimedia.org/T133321#2228733 (10Cyberpower678) I didn't even know the core did that. I guess I'm really outdated with the code, being familiar with the old original xTools that I mov...
[21:20:23] Matthew_: Just a tip: Phabricator has pastes too: https://phabricator.wikimedia.org/paste/edit/form/14/ ;)
[21:24:43] Luke081515: thanks :3
[21:25:10] I'll switch it over to phab when I can get back to a computer.
[21:27:25] Matthew_: Np ;) Phab has the advantages of comments and custom visibility options ;)
[21:37:36] Yeah.
[21:37:43] I'll transfer it from PasteBin.
[21:37:54] I just had it pastebinned already to share on IRC :)
[21:43:05] aww http://tools.wmflabs.org/replag/
[21:45:25] Nemo_bis: ouch
[21:47:59] Nemo_bis: I pinged in the #-databases channel. Not sure if that's related to the disk utilization stuff that is going on on labsdb1001 or not
[21:58:07] 06Labs, 10Tool-Labs, 10DBA: labsdb1001 short on available space - https://phabricator.wikimedia.org/T132431#2229152 (10Volans) It actually went out of space during the spike: ``` Thu Apr 21 18:01:55 2016 TokuFT file system space is really low and access is restricted 160421 18:01:55 [ERROR] Master 's5': Sl...
[21:59:00] Nemo_bis: looks like it's related to the labsdb1001 issues. ^
[22:02:41] quickly getting better now
[22:09:24] \o/ DBA heroes
[22:19:11] might just be me, but suddenly i'm not getting any response from bastion1.eqiad.wmflabs
[22:19:33] oh i bet it's controlmaster or some such ... checking
[22:31:38] 06Labs, 10Tool-Labs, 10DBA, 10xTools-on-Labs: `s51187__xtools_tmp` database using 272G on labsdb1001 - https://phabricator.wikimedia.org/T133321#2229356 (10Matthewrbowker) p:05Triage>03High a:03MusikAnimal Assigning to MusikAnimal as he's investigating further.
[22:39:10] ok, s4 has replag 0 again now
[22:39:20] s2 has 20 minutes left, but it's decreasing
[22:42:47] replag is gone :)
[22:42:48] 06Labs: Console log broken in new Jessie labs base images - https://phabricator.wikimedia.org/T133363#2229378 (10Andrew)
[22:53:47] 10Wikibugs, 10xTools-on-Labs: Add wikibugs to the Wikimedia XTools IRC Channel - https://phabricator.wikimedia.org/T133364#2229415 (10Matthewrbowker)
[22:57:42] 10Wikibugs, 10xTools-on-Labs: Add wikibugs to the Wikimedia XTools IRC Channel - https://phabricator.wikimedia.org/T133364#2229415 (10Legoktm) See https://www.mediawiki.org/wiki/Wikibugs#Configuring_channels for how to add your channel, patches welcome!
[23:02:53] 10Wikibugs, 10xTools-on-Labs: Add wikibugs to the Wikimedia XTools IRC Channel - https://phabricator.wikimedia.org/T133364#2229453 (10Matthewrbowker) a:03Matthewrbowker @Legoktm Thank you for the information. I'll make a patch soon.
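Since the counter keeps creating tables in s51187__xtools_tmp, periodically dropping stale ones would be the obvious stopgap while the code is fixed. This is a hedged sketch only, not xtools' actual tooling: it assumes the same replica.my.cnf credentials and host alias as above, that create_time is populated for these tables, and that the 7-day threshold is safe to use (the threshold is invented for illustration; review the generated statements before applying them):

    # generate DROP statements for tables older than 7 days
    mysql --defaults-file="$HOME/replica.my.cnf" -h enwiki.labsdb -N -e "
        SELECT CONCAT('DROP TABLE \`s51187__xtools_tmp\`.\`', table_name, '\`;')
        FROM information_schema.tables
        WHERE table_schema = 's51187__xtools_tmp'
          AND create_time < NOW() - INTERVAL 7 DAY;" > /tmp/drop_stale_tables.sql

    # review first, then apply
    less /tmp/drop_stale_tables.sql
    mysql --defaults-file="$HOME/replica.my.cnf" -h enwiki.labsdb < /tmp/drop_stale_tables.sql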
[23:36:24] 06Labs, 10Tool-Labs, 10DBA: labsdb1001 short on available space - https://phabricator.wikimedia.org/T132431#2229516 (10Dispenser)
[23:36:26] 06Labs, 10Tool-Labs, 10DBA: u2815__old_p (dispenser) database using 20G on labsdb1001 (enwiki) - https://phabricator.wikimedia.org/T133323#2229513 (10Dispenser) 05Open>03Resolved a:03Dispenser I (eventually) found the script for creating and populating those tables, so I've dropped them.