[07:48:33] hello
[08:56:15] Wikimedia Labs / tools: Add some of the missing tables in wikidatawiki_f_p - https://bugzilla.wikimedia.org/59682#c7 (Silke Meyer (WMDE)) Hey Marc-André, for merl this is a crucial thing for migrating. Please give this a high priority. Thanks!!
[09:17:16] !log tsreports webservice randomly broken: 2014-06-06 10:51:20: (server.c.1512) server stopped by UID = 0 PID = 12087
[09:17:19] 2014-06-06 10:51:20: (server.c.1502) unlink failed for: /var/run/lighttpd/tsreports.pid 2 No such file or directory
[09:17:21] Logged the message, Master
[09:19:16] Coren, can something be done about the 'webserver randomly dying syndrome'? A warning e-mail 'your webserver died' would be useful, at the very least.
[09:20:37] !log tsreports wtf? job=487745 gives start_time Fri May 23 14:30:17 2014 end_time Fri Jun 6 10:51:21 2014 exit_status 0 - so not killed? However, maxvmem is 3.973G, so probably OOM.
[09:20:39] Logged the message, Master
[09:20:48] !log tsreports restarting, hoping it will stay up. No clue how to debug that OOM.
[09:20:49] Logged the message, Master
[09:35:06] why is puppetd not found on my instance now?
[09:35:20] (PS1) Faidon Liambotis: Add dummy passwords::mysql::otrs [labs/private] - https://gerrit.wikimedia.org/r/138558
[09:35:38] (CR) Faidon Liambotis: [C: +2] Add dummy passwords::mysql::otrs [labs/private] - https://gerrit.wikimedia.org/r/138558 (owner: Faidon Liambotis)
[09:35:58] (CR) Faidon Liambotis: [V: +2] Add dummy passwords::mysql::otrs [labs/private] - https://gerrit.wikimedia.org/r/138558 (owner: Faidon Liambotis)
[09:37:00] Wikimedia Labs / tools: Provide user_slot resource in grid - https://bugzilla.wikimedia.org/52976#c4 (Silke Meyer (WMDE)) Hi Marc-André, not a blocker, but can you please make this happen for merl nevertheless?
Best, Silke
[09:37:59] Wikimedia Labs / tools: Provide namespace IDs and names in the databases similar to toolserver.namespace - https://bugzilla.wikimedia.org/48625#c36 (Silke Meyer (WMDE)) Nosy is on it. Long term maintenance of this tool is still an open question though.
[10:11:40] aude: maybe it was renamed with puppet 3? that's the only thing I've seen on puppet on the mailing list
[10:11:55] no idea
[10:12:04] i can wait for andrewbogott_afk etc
[10:18:06] aude puppet agent apply or something
[10:18:13] hmmm, ok
[10:18:32] aude: it was on labs-l from hashar
[10:19:12] puppet agent -tv
[10:19:23] so much for reading my mails :)
[10:35:56] Wikimedia Labs / deployment-prep (beta): Improper deletion of fiwiki from beta.wmflabs.org - https://bugzilla.wikimedia.org/66401 (Andre Klapper)
[10:37:23] valhallasw, agreed we need a mail when the web server OOMs; besides that, when I got that, it was a leak in my script, I leaked a python thread ;(
[10:40:49] I don't think it's safe to try to add a webservice start as a cron job
[11:43:25] hi. I'm getting an error while running a python script on tool-labs. ValueError: You can only send PreparedRequests. does anybody know about it? (works on my local pc)
[11:56:10] phe: yeah, but imo this should be fixed on the TL level, not on the project level...
[12:11:01] Wikimedia Labs / tools: Move wiki.toolserver.org to WMF - https://bugzilla.wikimedia.org/60220 (Silke Meyer (WMDE)) p:Unprio>High a:Marc A. Pelletier>Sam Reed (reedy)
[12:19:38] aude: "puppetd" was deprecated in puppet 2, removed in 3. The correct usage now is "puppet agent"; a manual puppet run is thus "puppet agent -tv"
[12:27:13] Coren: thanks
[12:34:45] Wikimedia Labs / tools: Add some of the missing tables in wikidatawiki_f_p - https://bugzilla.wikimedia.org/59682#c10 (Marc A. Pelletier) PATC>RESO/FIX Views added.
[13:22:41] Coren: can you take care of https://bugzilla.wikimedia.org/show_bug.cgi?id=66344 when you get a second?
[13:23:25] Betacommand: I'll take a look before lunch.
[13:23:40] Coren: thanks
[13:25:27] Coren: one suggestion to https://www.mediawiki.org/wiki/Wikimedia_Engineering/2014-15_Goals#Wikimedia_Labs is creating a backup process
[13:35:50] Oh, hm. I thought that made it to the draft; I guess it got lost in the transcription.
[13:37:27] (added it)
[14:03:18] hi. I want to add a folder to the PYTHONPATH env in tools-lab. how can it be done permanently..?
[14:10:44] Wikimedia Labs: Upgrade mwparserfromhell - https://bugzilla.wikimedia.org/66344#c1 (Marc A. Pelletier) NEW>RESO/FIX Version updated to (local) 0.3.3-1; the next puppet run should upgrade.
[14:13:17] Betacommand: Next puppet run will push up to 3.3
[14:13:21] 0.3.3*
[14:15:13] Coren: thanks
[14:26:31] Coren: Before I dive back in again… do you have any immediate thoughts about my weird vmbuilder issue?
[14:26:54] Seems like some tools are starting w/in the chroot?
[14:27:29] andrewbogott: Not offhand; I was poking some other opsen about other pressing issues and you were next on my list. Have you looked at an lsof to see what made the mount busy?
[14:28:11] Coren: Yes, that's in the email I think. Ruby stuff is held open by… ruby things. e.g. puppet agent
[14:30:07] andrewbogott: Ah, sorry, like I said I haven't had time to look at it yet. I'll do so now if you want, provided you answer a q of mine. :-)
[14:30:24] Sure. But if you're doing time-critical things then it can wait
[14:30:35] Specifically: how confident are you atm about being able to migrate instances from one brick to another?
[14:31:01] If you don't mind some downtime it's a pretty safe process. There's a script...
[14:31:12] are you looking at the docs already? If not I'll find you a link
[14:35:23] well… still searching for docs. But the short answer is 'cold-migrate' on virt1000
[14:35:42] It does everything except for deleting the old VM. So even if things go badly you can recover
[14:42:26] well… Coren, I can't find you a doc link.
But the instructions are only a few steps anyway. Shut down the instance (optional), cold-migrate, verify that the new instance is up and working, delete image files on the old host.
[14:42:56] andrewbogott: Alright, lemme take a peek at this now.
[14:47:50] andrewbogott: I think I might have an idea what might be going on; clearly there's a puppet run going on ("puppet agent"); that used to be "puppetd" and there might have been a guard against it.
[14:48:28] andrewbogott: I'm thinking one of the build scripts might have a "puppetd --disable" in it that is now broken because of 3.
[14:52:52] Coren: that's roughly what I was thinking too. Lemme look for that again...
[14:54:01] I'm pretty sure we can expect the problem to be caused by 2.7->3
[14:54:20] * andrewbogott nods
[14:58:03] Cyberpower678: you free?
[14:59:51] Coren: Well, grepping for 'puppet' in our puppet manifest or the vmbuilder source turns up nothing. So it's something less specific than that I guess
[15:00:12] * Coren ponders.
[15:00:52] I'm going to keep brain cycles on it in the background while I work on my NFS bandwidth issue.
[15:14:10] Pratyya, not really.
[15:15:06] Will you pls look at https://bn.wikipedia.org/wiki/%E0%A6%AC%E0%A7%8D%E0%A6%AF%E0%A6%AC%E0%A6%B9%E0%A6%BE%E0%A6%B0%E0%A6%95%E0%A6%BE%E0%A6%B0%E0%A7%80:Pratyya_Ghosh/Userboxes/My_Time, it is the translation of https://en.wikipedia.org/wiki/User:Pratyya_Ghosh/Userboxes/My_Time
[15:15:12] Cyberpower678: ^
[15:21:53] Coren: I found some similar issues in forum posts, and generally the suggested solutions are 'remove that package' or 'add some sleep right before the unmount'
[15:21:58] So I'm trying the latter, dumb as that is...
[15:22:16] Oh, eeew.
[15:25:27] Coren: amusingly, most people encounter the problem with a stray cron process. And folks are, like, just remove cron from the base image and install it later.
[15:25:34] Because it's just cron, no one uses that anyway?
[15:25:44] * Coren facepalms.
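rohit-dua's earlier question about adding a folder to PYTHONPATH permanently never gets answered in this log. One common approach, sketched below with an entirely hypothetical `~/mylibs` path (not a real Tool Labs convention): prepend the folder to `sys.path` at the top of the script, which works the same under cron, the grid, and interactive shells without touching shell config.

```python
import os
import sys

# Hypothetical extra-library folder; substitute your own path.
EXTRA_LIBS = os.path.expanduser("~/mylibs")

# Prepend so it wins over system-wide packages; skip if already present.
if EXTRA_LIBS not in sys.path:
    sys.path.insert(0, EXTRA_LIBS)

# The shell-level equivalent, placed in ~/.profile for permanence, would be:
#   export PYTHONPATH="$HOME/mylibs:$PYTHONPATH"
```

The in-script variant is more robust for grid jobs, since jsub-launched jobs may not source the same shell startup files as a login shell.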
[15:28:56] andrewbogott: there is "chronos" and "hcron" to replace them :p
[15:29:08] that doesn't sound like fun to do though
[15:29:56] Remind me why http://*.tools.wmflabs.org/ doesn't redirect to tools.wmflabs.org/*/ ?
[15:30:33] a930913: because it's problematic for https, I think.
[15:31:29] valhallasw: Cert gets signed for *tools.wmflabs.org?
[15:34:09] petan: Is there a command for wm-bot to cancel a @notify request?
[15:35:44] a930913: yea, can't have *.*.wmflabs.org cert
[15:35:49] just one level of *
[15:39:35] a930913: basically, *. certificates are expensive; there is an *.wmflabs.org certificate, and no direct need for project.tools.wmflabs.org
[15:40:43] Oh, *. is only one level of subdomainness?
[15:41:13] a930913: Yep.
[15:41:40] Krinkle: matanya: Need one of you (or anyone else administering the cvn project).
[15:42:57] Coren: What's up?
[15:43:40] anomie: no but it will automatically expire in a few days :P
[15:43:45] Krinkle: cvn-app4 is eating 95% of NFS bandwidth. We need to talk efficiency dude. :-)
[15:44:39] I bet they use python for something :P
[15:45:08] Coren: Hm.. I'll give it a reboot to verify it's deterministic and consistent (not something that happens after the bots run for a week).
[15:45:15] sudo apt-get remove python = ultimate solution for all labs performance problems
[15:45:21] Coren: I suspect it is the I/O from the sqlite databases
[15:45:24] Krinkle: kk, worth a try.
[15:45:38] ... wait; you have a database on the NFS partition?
[15:46:24] Coren: What do you mean by NFS here, just to verify? I don't know how it is implemented on the backend, I just know that any data I care for should be on /data/project
[15:46:54] * Coren headdesk.
[15:47:15] You're welcome to put copies there, but for the love of the Network Gods do not run a database on a networked filesystem! :-)
[15:48:06] shouldn't that be 'don't use sqlite for performance-sensitive tasks'?
[15:48:20] The web server (cvn.wmflabs.org/api.php) also runs its php files from a directory that is the docroot, which is in /data/project
[15:48:47] both the headless irc bots db and web server static files
[15:49:51] Coren: OK, ever more desperate measures… if I want to 'killall' things like this:
[15:49:51] root 7740 1 0 00:23 ? 00:00:01 /usr/bin/ruby /usr/bin/puppet agent
[15:49:57] Krinkle: Web server traffic is normally okay.
[15:50:04] How do I do that? When I do $killall "/usr/bin/ruby /usr/bin/puppet agent"
[15:50:18] I get… 'No such file or directory' which I don't understand at all
[15:50:24] andrewbogott: afaik, you have no way of doing a killall on arguments, only program names.
[15:50:33] andrewbogott: 'killall ruby' is the best you could do.
[15:50:36] Coren: Hm.. I'm slightly surprised/confused though. Is this actually NFS? I mean, they're virtual instances.
[15:50:37] you'd have to parse the output of ps
[15:50:38] that says no such process
[15:50:45] and get the PIDs
[15:50:57] Krinkle: /data/project and /home is NFS
[15:50:58] andrewbogott: I can send you some script, sec
[15:51:13] But I guess the instance host (the labs virt nodes) have more than one, so it needs actual NFS still to communicate between instances on different virt nodes
[15:51:33] mutante, petan, yes, I can think of how to script it. But why does 'killall' not work if I just want to Destroy All Ruby?
[15:51:56] kill -9 `ps -ef | grep |awk '{print $2}' | xargs`
[15:52:16] andrewbogott: killall ruby doesn't work?
[15:52:17] Krinkle: Nope; "local" disk is local.
[15:52:18] because ruby is a pain in the ass :P
[15:53:21] Krinkle: Running a database on a remote filesystem for anything but extremely lightweight/trivial use is a recipe for disaster.
[15:53:30] petan: what's your favorite language? :)
[15:53:38] Krinkle: Please tell me you aren't also using that database concurrently from more than one instance? :-)
[15:53:39] andrewbogott: possibly killall -9 instead?
not sure if ruby catches other kill signals
[15:53:55] The issue is that killall says 'no such process'
[15:55:01] petan: I don't suppose you know offhand how to quote that command so I can pass it to os.system in python? quoting hell!
[15:55:09] andrewbogott: killall /usr/bin/ruby?
[15:55:13] valhallasw: same
[15:55:25] andrewbogott: don't use os.system, use subprocess.Popen(['command', 'arg1', '...'])
[15:55:25] Coren: Nope, no concurrency
[15:55:35] or subprocess.check_output et al
[15:55:36] I don't know how to quote it in python, but I can shell quote it
[15:55:57] * andrewbogott always winds up spending all day whenever he uses subprocess :(
[15:56:25] However we do use /data/ as transport between instances (the app instance makes a copy every minute from /data/project/----/app/db/ to /data/project/----/web/api/data (simplified)) so that the web server isn't accessing the same file
[15:56:33] andrewbogott: os.system maps to subprocess.call(...), iirc (returns return code, output goes to stdout)
[15:56:38] which wouldn't just be foolish but actually doesn't work
[15:56:42] because sqlite locks
[15:56:42] valhallasw: lol, that kind of kills the functionality, doesn't it? :D
[15:57:18] Krinkle: Alright; turn on the role::labs::lvm::srv puppet class, that'll give you local storage in /srv and please move the sqlite database in there.
[15:57:19] valhallasw: the command is kill -9 `ps -ef | grep |awk '{print $2}' | xargs` you /can't/ use subprocess.Popen for that
[15:57:39] with piping
[15:57:45] subprocess.check_output, then.
[15:58:21] Krinkle: As a side effect, your /own/ performance will increase by an order of magnitude. :-)
[15:58:33] also: why are you piping to xargs?
[15:58:37] oh wait, killall isn't failing, the problem is that the tool is in a chroot
[15:58:41] ps -ef | grep |awk '{print $2}' | xargs kill -9 makes more sense
[15:58:45] but it shows up in ps with an absolute path? Weird
[15:59:38] Coren: I assume that's instance-specific, right?
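valhallasw's advice in the exchange above — do the `ps | grep | awk | xargs kill` pipeline in Python instead of quoting it for `os.system` — can be sketched like this. The helper `pids_matching` is hypothetical (not an existing API); it is written as a pure function over `ps -ef` text so the parsing is testable without killing anything.

```python
import subprocess

def pids_matching(pattern, ps_output):
    """Return PIDs from `ps -ef` text whose command line contains `pattern`.

    `ps -ef` columns are: UID PID PPID C STIME TTY TIME CMD; splitting with
    maxsplit=7 keeps the full command line (spaces and all) in the last field.
    """
    pids = []
    for line in ps_output.splitlines():
        fields = line.split(None, 7)
        if len(fields) == 8 and fields[1].isdigit() and pattern in fields[7]:
            pids.append(int(fields[1]))
    return pids

# In real use (hedged: assumes a Linux ps and sufficient privileges):
#   out = subprocess.check_output(["ps", "-ef"], text=True)
#   for pid in pids_matching("/usr/bin/ruby /usr/bin/puppet agent", out):
#       os.kill(pid, signal.SIGKILL)
```

Passing argument lists to `subprocess` sidesteps the quoting hell entirely, since no shell ever parses the command line.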
[15:59:49] Krinkle: Yes, hence /local/ disk.
[16:00:07] Krinkle: You're welcome to make regular dumps of the database in /data/project though, for backup.
[16:00:18] Krinkle: Just don't actually have a live database on it. :-)
[16:00:21] Coren: The storage assigned to an instance (e.g. 40, 80 or 160GB), is that not mounted by default?
[16:00:39] Krinkle: No, that's what the role::labs::lvm::* classes do.
[16:00:51] Ah, that explains /tmp being so small
[16:00:54] So you get to pick where.
[16:01:05] (or / rather)
[16:01:33] (You're welcome to partition differently, role::labs::lvm::srv is just a commonly used shortcut for the typical use case)
[16:03:58] Coren: So.. this is the first time since 2007 that CVN stuff runs all in 1 environment instead of scattered across a fleet of private, free, corporate and toolserver stuffies. And the first time it's gotten any feedback from the environment's operator. The software we run is almost entirely unmaintained and on its way out. I can't make this change right away as it would break stuff, but I'll try to do it
[16:03:58] during the next maintenance window so that the bots stay up.
[16:04:20] which is in a couple days
[16:05:23] Alright, but I reserve the right to choke your bandwidth a bit if things get worse.
[16:07:58] Krinkle: What does all the software do?
[16:11:09] Coren: Sure, by all means.
[16:11:16] The bots have never run this fast, to be honest.
[16:12:01] a930913: Eh.. ask me tomorrow. It's too big an infrastructure to explain in a few words. In a way, without it Wikipedia would sink under a flood of vandalism.
[16:12:52] Of course that's a large exaggeration; there are two more labs projects that do a similar thing, so you'd have to take out all three if you want to do serious damage.
[16:13:22] Krinkle: Vada, CBNG, Huggle, STiki?
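Coren's recommendation above — keep the live SQLite file on instance-local disk and only push snapshot copies to /data/project — fits SQLite's online backup API, exposed in Python as `sqlite3.Connection.backup` (Python 3.7+). The paths below are hypothetical illustrations, not real CVN layout.

```python
import sqlite3

# Hypothetical paths: live DB on local disk (/srv via role::labs::lvm::srv),
# backup copy on the NFS share.
LIVE_DB = "/srv/cvn/bots.db"
BACKUP_DB = "/data/project/cvn/backups/bots.db"

def dump_to_nfs(live_path, backup_path):
    """Copy a consistent snapshot of the live database to the backup path.

    The backup API takes its own read lock per step, so this is safe to run
    while writers are active, unlike a plain file copy of the .db file.
    """
    src = sqlite3.connect(live_path)
    dst = sqlite3.connect(backup_path)
    try:
        with dst:                 # wrap the copy in one transaction on dst
            src.backup(dst)
    finally:
        dst.close()
        src.close()
```

A cron job calling `dump_to_nfs(LIVE_DB, BACKUP_DB)` would give the regular NFS backups Coren suggests without ever running live queries over the network filesystem.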
[16:13:53] a930913: there's an entire "academy" for this :p
[16:13:56] https://en.wikipedia.org/wiki/Wikipedia:CVUA
[16:14:07] Huggle and STiki do have servers but they mostly don't "run" somewhere, they're local standalone executables that connect to irc.wikimedia.org for the recent changes feed
[16:14:21] This is for CVNBot and SWMT
[16:14:29] see their respective meta.wikimedia.org pages
[16:14:46] as well as the CVN database which has a JSON API used by various other bots and gadgets.
[16:15:05] So it's your typical SPOF upside-down pyramid that happens when things grow organically over years and years.
[16:20:04] Coren: I don't know if this is the kind of thing Ganglia would show, but just saying.. :)
[16:21:01] And thanks for the feedback, I'm sure you're doing it for the benefit of the project and not me, but still appreciated. I have nothing to defend, for I inherited it all and have the fine duty to keep it running.
[16:24:44] https://github.com/countervandalism/infrastructure/issues/4
[17:34:43] Krinkle: so, basically, take out labs and vandalism will reign free? :-p
[17:37:10] valhallasw: That's why decentralised where possible is best.
[17:40:13] STiki is central, as is Igloo. Huggle3 seems to be centralising more, but I think it still could fall back. AWB and Vada don't have a centralised core, but can use third party additions.
[18:03:45] Any ideas on how to have a number of lists, where the elements expire a set time after being added?
[18:04:00] (Well, presumably one list, replicated a number of times.)
[18:07:27] a930913: in redis?
[18:09:00] a930913: I don't think redis supports it by default, but you can maybe just add an expiration timestamp to each item
[18:09:16] and check just after the BLPOP?
[18:12:50] valhallasw: Is there a better way than redis?
[18:13:11] a930913: I'm not sure what your 'list' context is
[18:13:26] but the 'add an expiration timestamp to each item and check when getting an item' method always works
[18:15:42] valhallasw: No way to remove the double stage from the application level?
[18:16:33] * a930913 might as well just make a DB for that.
[18:17:24] a930913: again, I'm not sure what the context is
[18:17:50] as you're not providing any :-p
[18:18:40] valhallasw: Read the RC redis feed and log certain ones.
[18:18:51] But I only want to keep them for 24 hours.
[18:19:13] valhallasw: I have http://tools.wmflabs.org/cluestuff/cgi-bin/vada/history.cgi
[18:19:23] But that just crons a clear script at midnight.
[18:19:44] right. but you already have a time column there :-p
[18:19:55] so displaying just the last 24 hours should be easy
[18:20:04] valhallasw: It's a flat file.
[18:20:22] It's stored in the HTML you see there :p
[18:20:58] oh, right :-p
[18:20:59] echo $head; cat table.txt; echo $foot;
[18:22:15] then: yes, use a database :-p
[18:22:33] combined with a query that checks for date > ...
[18:22:41] Hmm, I could use one of them trigger things, no?
[18:22:42] and a daily-or-so cron that prunes the table
[18:23:01] I guess you could also trigger a prune on insert, yes
[18:47:35] Er... table created first time. That never happens :/
[19:16:30] hey, nodebot people?
[19:16:55] wikichanges-* and wikistream-* bots are running somewhere from here it seems
[19:17:07] but on irc.wikimedia.org they just keep joining and quit flooding
[19:17:33] one is called wikistream-wmflabs [~nodebot@anonymous.user
[20:01:25] my grid job on tools-lab gets killed with signal SIGKILL automatically while downloading images..? why is that so?
[20:02:03] is that due to memory?
[20:06:28] andrewbogott: who should I ask about an IP address?
[20:06:47] gwicke: me, probably :) Do you just need web access or the whole deal?
[20:06:55] the whole deal
[20:06:59] what project?
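The expiring-list pattern valhallasw recommends to a930913 earlier — tag each item with a timestamp and prune anything older than the TTL on insert or read — can be sketched without redis at all. The class below is illustrative, not an existing library; the clock is injectable so the 24-hour behaviour is testable without waiting.

```python
import time
from collections import deque

class ExpiringList:
    """Keep items for `ttl` seconds, pruning lazily on insert and read."""

    def __init__(self, ttl, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._items = deque()  # (timestamp, item) pairs, oldest first

    def _prune(self):
        cutoff = self.clock() - self.ttl
        # Items are appended in time order, so expired ones are at the front.
        while self._items and self._items[0][0] < cutoff:
            self._items.popleft()

    def add(self, item):
        self._prune()
        self._items.append((self.clock(), item))

    def items(self):
        self._prune()
        return [item for _, item in self._items]
```

With redis as the store, the same idea maps onto a sorted set keyed by timestamp (`ZADD` on insert, `ZREMRANGEBYSCORE` to prune), but the prune-on-access logic is identical.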
[20:07:05] visualeditor
[20:07:17] it's a web service that will collect wikitext lint data
[20:08:00] technically it might be possible to do all the communication through the instance proxy
[20:08:20] ok, you should have a second one now
[20:08:30] but an IP would make that easier as we can use separate ports
[20:08:30] andrewbogott: can you help me with SIGKILL in grid?
[20:08:36] andrewbogott: cool, thanks!
[20:09:08] rohit-dua: …maybe? What do you mean?
[20:09:34] something is killing his image downloads
[20:10:00] andrewbogott: well does the grid issue SIGKILL after a fixed time? I am downloading something from the job.. how to prevent SIGKILL?
[20:10:15] Not after a fixed time. But it does it your process eats too much memory.
[20:10:25] *if your process
[20:10:52] I believe that Coren can allocate more memory? But if you're downloading an image entirely into memory, that would probably do it...
[20:11:31] well i'm downloading several images and then converting to 1 pdf...
[20:12:03] images are downloaded individually and then written to file
[20:24:22] rohit-dua: jsub -mem 1024m *
[20:25:14] andrewbogott: I am not able to log into the new instance, getting Permission denied (publickey)
[20:25:29] instance name is lintbridge, project visualeditor
[20:25:47] I chose trusty
[20:26:00] how long ago did you start it?
[20:26:28] the first instance was up for about 10 minutes
[20:26:38] then re-created the instance, but still no dice
[20:26:51] that was maybe 3 minutes ago
[20:28:24] a930913: thank you.
[20:31:29] Wikimedia Labs / tools: Provide namespace IDs and names in the databases similar to toolserver.namespace - https://bugzilla.wikimedia.org/48625#c37 (nosy) It's done so far.
[20:34:00] gwicke: I'm looking, not sure what's happening yet.
[20:37:15] andrewbogott: just retried, still no login
[20:52:55] gwicke: I think I found the problem, looking for a fix now
[20:54:57] andrewbogott: great
[21:08:11] gwicke: I'm in the process of fixing existing instances (atm puppet is broken on every single one…)
[21:08:29] but if you build a new one it should be fine. Want to give Trusty (testing) a shot while you're at it? I'm right about to promote that image anyway.
[21:08:51] what's the difference between just 'trusty' and 'trusty (testing)'?
[21:12:03] trusty (testing) is a new image that I built yesterday
[21:12:09] and am auditioning to replace trusty
[21:12:21] It has puppet3 pre-installed. The old image will upgrade puppet first thing on boot… a bit tedious
[21:12:35] It has a bunch of other upgraded packages as well.
[21:12:47] It should just work and be better, but I need a guinea pig: you :)
[21:13:13] andrewbogott: still no dice with (testing)
[21:13:41] I'm trying to log in from bastion.wmflabs.org
[21:15:11] is login on new instances working for you?
[21:15:36] I don't know yet.
[21:15:39] Let me check again
[21:16:13] Why does my cgi not work when I "import cgi"?
[21:22:12] gwicke: I just built a fresh instance that I can log into
[21:22:53] oh wow, now it works for me as well
[21:23:02] On several Tools hosts, /var has less than 100 MByte free. Apparently, something's filling up /var/log/diamond. What's that?
[21:23:08] andrewbogott: thanks!
[21:23:43] gwicke: thanks for noticing -- would've been bad to leave puppet broken everywhere :)
[21:24:14] andrewbogott: you are welcome ;)
[21:24:43] andrewbogott: it could be nice to create a test for this
[21:24:52] diamond 19637 0.0 0.3 379320 15152 ? Ssl May13 39:29 /usr/bin/python /usr/bin/diamond --foreground --skip-change-user --skip-fork --skip-pidfile
[21:25:02] create a new instance, try to log in after five minutes or so, tear it down
[21:25:55] gwicke: yep.
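gwicke's proposed test — create an instance, try to log in after five minutes or so, tear it down — is at its core a poll-until-deadline loop. A minimal sketch, with the probe and timings injectable (the real probe would be an SSH or port-22 connection attempt; nothing here is an existing labs API):

```python
import time

def wait_until(probe, timeout, interval,
               clock=time.monotonic, sleep=time.sleep):
    """Poll `probe()` until it returns True or `timeout` seconds elapse.

    Returns True on success, False if the deadline passes first. `clock`
    and `sleep` are injectable so the loop can be tested without waiting.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if probe():
            return True
        sleep(interval)
    return False
```

A nightly canary would then be roughly: boot an instance, `wait_until(ssh_probe, 300, 10)`, alert on False, delete the instance either way.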
[21:26:17] gwicke: so, once you're 20 minutes in or so can you email me that the (testing) image is working? And then I'll remove its stigma.
[21:26:47] andrewbogott: I just told our gsoc student to start working on it
[21:26:59] I'll let you know if he runs into any issues
[21:27:27] he just logged in, so that's a good first step
[21:27:39] thx
[21:33:14] gwicke: I need to go for ~ an hour… are you currently unblocked?
[21:33:56] can a continuous job call another finite job via jsub?
[21:33:59] andrewbogott: yes, thanks!
[21:36:41] !log tools Deleted /var/log/diamond/diamond.log on all Tools hosts to free up space on /var
[21:36:44] Logged the message, Master
[21:37:36] rohit-dua: jsub can't be called from the grid :(
[21:38:55] a930913: that's sad :(. well can we have pool/threads in the grid?
[21:40:07] And it's not a matter of jsub being available. The hosts themselves are not allowed to submit jobs, so you can't circumvent that.
[21:41:45] hm, that gives me an idea
[21:51:32] scfc_de: why are they not allowed to?
[21:51:33] !log tools Restarted diamond service on all Tools hosts to actually free the disk space :-)
[21:51:36] Logged the message, Master
[21:51:50] also: what prevents me from doing 'ssh submit jsub xyz'? :-p
[21:52:35] valhallasw: a) I don't know. b) Nothing :-).
[21:53:05] rohit-dua / a930913 ^ as a workaround, you can try that ssh trick
[21:53:53] … great
[21:54:28] except tools-submit does not accept connections from tools-exec-*
[21:55:05] so you'd have to ssh to tools-login, and you need a local ssh key for that
[21:55:07] Hmmm. Can you get from there to -login?
[21:55:21] so that's rather painful, too.
[21:55:34] (there's no host-based auth from -exec to -login)
[21:55:43] valhallasw: that would sloowww down the submit
[21:56:21] rohit-dua: it will be queued for a while anyway
[21:56:35] I think the biggest wait occurs in the SGE handshake; sshing shouldn't be that long.
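The ssh workaround discussed above — a grid job submitting another job by running `jsub` on a login host over ssh — has one subtle trap: ssh joins its remote-command arguments with spaces before the remote shell re-parses them, so each jsub argument needs quoting. A sketch, with the helper name and host purely illustrative (and per the discussion, this assumes you have an ssh key set up, since there is no host-based auth from the exec hosts):

```python
import shlex
import subprocess

def ssh_jsub_argv(login_host, jsub_args):
    """Build the argv for submitting a grid job via ssh from an exec host.

    Each remote argument is shell-quoted so spaces and metacharacters
    survive the remote shell's re-parsing.
    """
    remote = " ".join(shlex.quote(a) for a in ["jsub"] + list(jsub_args))
    return ["ssh", login_host, remote]

# Actual submission would then be (hedged: untested against a real grid):
#   subprocess.check_call(ssh_jsub_argv("tools-login", ["-mem", "1g", "task.sh"]))
```

Building the argv as a list and only quoting the remote half keeps the local side free of shell parsing entirely, echoing valhallasw's earlier subprocess advice.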
[22:40:14] Wikimedia Labs: Service diamond creates 500+ MByte /var/log/diamond/diamond.log - https://bugzilla.wikimedia.org/66458 (Tim Landscheidt) NEW p:Unprio s:normal a:None I had to delete those files and restart the diamond service on the Tools host because they were clogging up /var/log. Chase exp...
[23:28:42] gwicke: still happy with that instance?
[23:37:27] andrewbogott, it's working fine :)
[23:37:49] hardikj: great. I just made that image official, so if you make a new instance you can just select 'Trusty'
[23:37:57] as 'Trusty (testing)' isn't there anymore
[23:38:15] ok :)
[23:39:32] andrewbogott: thanks again!
[23:57:55] (CR) Catrope: [C: +2] Added JsonConfig to mobile channel [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/138101 (owner: Yurik)
[23:57:58] (Merged) jenkins-bot: Added JsonConfig to mobile channel [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/138101 (owner: Yurik)