[15:02:17] ottomata: ping
[15:02:27] ottomata: Daily Scrum time :-)
[15:03:11] baah thank you
[17:36:57] (PS1) Milimetric: Documenting through Diagrams [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/99677
[17:37:13] (CR) Milimetric: [C: 2 V: 2] Documenting through Diagrams [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/99677 (owner: Milimetric)
[18:54:27] DarTar, two questions
[18:54:34] howdy
[18:54:52] (1) is that query run limit still enforced on 67, and if so, is it present on s1? (2) stat1002.eqiad.wmnet is where we have the sampled request logs, right?
[18:55:22] (1) there's no query limit on db67 or s1/db1047
[18:55:37] hrm. I wonder why it kept disconnecting me, then.
[18:55:39] it actually turns out springle is ok with running long queries on other slaves too
[18:55:44] (I said it)
[18:56:20] hmm, disconnecting or killing queries?
[18:56:42] it'd be good to ping him if you wish to troubleshoot
[18:57:09] (2) yes, and a bunch of other datasets
[18:57:25] sweet
[18:57:30] and you can get at stat1002 through bastion?
[18:57:47] I was running into some odd issues and was trying to debug before defaulting to 'nobody bothered to give me access, stick an RT ticket in'
[19:08:26] Ironholds: yes, I had to ask ottomata to help me with the setup
[19:09:21] hrm
[19:09:32] okay, I'll try again and fling in an RT ticket if it's still ignoring my key. Thanks :)
[19:10:08] hio
[19:10:13] whasss happening?
[19:10:41] ottomata: stat1002 doesn't like me. I don't know why, I've never done anything to it.
[19:11:00] Except for that unfortunate dinner party, but how was I to know its cousin was allergic to mushrooms?
[19:12:18] ha
[19:12:21] yeah, I think it's a lack of permissions
[19:12:23] yeah you shoulda known
[19:12:24] * Ironholds wanders over to RT.
[19:12:26] what's the problem?
[19:12:52] just not accepting my key - I suspect I was just never given access.
[19:12:55] oh yeah
[19:12:57] we need to do this less piecemeal next time ;p
[19:12:59] you do not have an account there
[19:13:07] RT for sure
[19:13:17] were you supposed to have been given an account there?
[19:14:00] who knows! I asked DarTar/Christian for some data, Christian pointed me at stat1002 (sampled request logs)
[19:14:16] I'll throw a ticket in.
[19:15:19] Ironholds: copy your boss for approval
[19:15:37] yeah, I shall
[19:15:43] as soon as I remember my RT user/pass combination.
[19:16:51] got it
[19:17:17] access-requests, I assume?
[20:08:06] hey milimetric
[20:08:13] howdy
[20:08:25] i'm making some modifications to my jsonlogster… not sure how to run my tests properly
[20:08:35] i'll jump in the hangout
[20:08:48] k
[20:52:51] ottomata: what was the solution to https://gist.github.com/ottomata/7753319 ?
[20:53:13] i got the same error on logstash nodes, googled it, and your gist came up :)
[20:53:32] oh, somehow the value for net.core.rmem_max that we have puppet install isn't being respected
[20:53:37] i'm not sure why
[20:53:41] you can reload it
[20:53:42] um
[20:54:02] sysctl -p /etc/sysctl.d/10-wikimedia-base.conf
[20:54:02] restart sysctl?
[20:54:07] that might work too
[20:54:10] then
[20:54:18] service ganglia-monitor start
[20:54:30] ottomata: worked! thanks
[20:54:33] yup!
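
A minimal sketch of the check behind the exchange above: comparing the running kernel value of net.core.rmem_max against what the puppet-managed sysctl file declares. Only the file path and the key come from the conversation; the script itself is illustrative, not something referenced in the log.

# compare a live sysctl value with the value declared in a sysctl.d file
SYSCTL_FILE = '/etc/sysctl.d/10-wikimedia-base.conf'
KEY = 'net.core.rmem_max'

def running_value(key):
    # the kernel exposes sysctl keys under /proc/sys, with dots as slashes
    path = '/proc/sys/' + key.replace('.', '/')
    with open(path) as f:
        return f.read().strip()

def declared_value(path, key):
    with open(path) as f:
        for line in f:
            line = line.split('#', 1)[0].strip()
            if '=' in line:
                k, v = (part.strip() for part in line.split('=', 1))
                if k == key:
                    return v
    return None

if __name__ == '__main__':
    running = running_value(KEY)
    declared = declared_value(SYSCTL_FILE, KEY)
    if running != declared:
        # the fix applied above: sysctl -p /etc/sysctl.d/10-wikimedia-base.conf
        print('{0}: running={1}, declared={2}; reload with sysctl -p {3}'.format(
            KEY, running, declared, SYSCTL_FILE))
    else:
        print('{0} is in sync ({1})'.format(KEY, running))
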
[20:54:38] that's a weird thing though
[20:54:40] that shouldn't happen
[20:55:51] might be my fault, i rewrote the sysctl module a while back
[20:55:56] i might've gotten the refresh logic wrong
[21:01:47] aye hmm
[21:01:57] i'll take a look
[21:01:57] ah, milimetric, i just accidentally pushed something manually, rather than submitting for review
[21:02:02] i wanted you to review it first :/
[21:02:08] sok
[21:02:25] post review please?
[21:02:26] https://github.com/wikimedia/operations-debs-logster/commit/48610fc6dd5a2ec012f6c3c642960d1312e6e6ce
[21:02:46] no prob, looking
[21:02:49] particularly the changes to JsonLogster
[21:03:24] ah, '%s' % someString is discouraged as far as I know
[21:03:38] I'd use '{0}'.format(someString) instead
[21:03:56] for multiple values, '{0} {1}'.format(one, two)
[21:05:09] oh, ottomata, did you mean just write it in github?
[21:05:09] :)
[21:05:35] no, here is fine
[21:05:44] oh really?
[21:05:44] hm
[21:06:04] well i won't change that in bin/logster, that is Etsy code
[21:06:41] milimetric:
[21:06:43] https://gist.github.com/ottomata/7831961
[21:06:47] that's my subclass of JsonLogster
[21:06:51] would appreciate comments there too
[21:06:54] i'll change the %s there
[21:07:20] oh ew, etsy uses that
[21:07:40] coin flip between consistency and doing it "right"
[21:09:34] anything else :)?
[21:09:45] yeah
[21:09:53] i've got some questions about a couple of these things
[21:09:55] hangout?
[21:12:03] k
[21:27:44] ottomata: how late are you going to be around today, and will you have a few mins to chat?
[21:29:27] i've got another 30 mins or so
[21:29:28] and yeah
[21:29:31] i'm chatting with dan right now
[21:29:49] what's up?
[21:30:31] is it now possible to set up a varnishkafka instance to consume a request URL pattern into a specific topic?
[21:30:54] if you remember, a while back i wanted to set up an endpoint for frontend perf metrics collection (https://gerrit.wikimedia.org/r/#/c/89359/)
[21:31:13] mark was fine with the idea but because varnishncsa had been giving the bits boxes grief he wanted to hold off for the varnishkafka deployment
[21:31:51] the volume of requests would be very low (less than 50 per sec) and i could consume them directly from kafka with the python-kafka package in place (thanks again for that, btw)
[21:33:29] ottomata: ^
[21:34:14] ori-l, yes! but you might want to wait just a bit
[21:34:26] i deployed varnishkafka officially for the first time to just 3 mobile hosts yesterday
[21:34:30] i'm getting the monitoring puppetization stuff all set
[21:34:37] hoping to get that set today / monday
[21:34:40] and then deploy to more mobiles next week
[21:34:56] well, there'd be nothing depending on it at first, i would just be building stuff on top
[21:34:57] so, it's a bit in flux right now, and the fewer places i have to fix weirdness the better
[21:35:06] but, the answer is yes :)
[21:35:13] varnishkafka is in apt, and there is a varnishkafka puppet module
[21:35:26] you'd have to create the topic on the kafka brokers, but I can show you how to do that, it's real easy
[21:35:36] would i be complicating things for you if i were to do that?
[21:38:48] well, i'll give it a shot
[21:39:00] worst case scenario, i submit a puppet patch and you ask me to hold off because you need to change things
[21:39:05] which would be fine
[21:49:57] yeah sure
[21:49:59] go right ahead :)
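
A short illustration of the string-formatting style discussed in the JsonLogster review above (around 21:03). The variable names are placeholders, not identifiers from the logster code.

# old-style formatting, as found in the Etsy logster code
metric = 'requests'
count = 42
line = '%s: %s' % (metric, count)

# str.format, the style suggested in the review
line = '{0}: {1}'.format(metric, count)

# for multiple values the positions map the same way
assert '{0} {1}'.format(metric, count) == '%s %s' % (metric, count)
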
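
A rough sketch of what consuming the low-volume perf-metrics topic with python-kafka might look like, as ori-l describes above. The topic name, broker address, and the use of the KafkaConsumer API from a reasonably recent kafka-python are all assumptions for illustration; none of them are specified in the conversation.

from kafka import KafkaConsumer  # assumes a recent kafka-python release

# hypothetical names; the real topic and broker list would come from the
# varnishkafka/puppet configuration discussed above
TOPIC = 'frontend-perf-metrics'
BROKERS = ['kafka-broker.example.org:9092']

consumer = KafkaConsumer(TOPIC, bootstrap_servers=BROKERS)

# at under 50 requests/sec a single consumer loop is plenty
for message in consumer:
    # varnishkafka emits one log line per matching request; process it here
    print(message.value.decode('utf-8', errors='replace'))

Creating the topic itself happens on the brokers with Kafka's own admin tooling, which is the step ottomata offers to walk through above.
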