[00:08:21] so what do i do to get permits to stat1? [00:11:04] * Ironholds kicks DarTar again [00:11:12] did you stick a stat1 ticket in for nuria and I just can't find it, or...? [00:11:25] let me check [00:12:01] nuria, weren't you talking to ottomata last week about opening an RT ticket? [00:12:14] I didn't take any action because you guys were already sharing SSH keys and what not [00:14:07] nuria, if that didn't happen let me know [00:14:50] you basically need (1) a preferred shell username, (2) your public SSH key on the office wiki, (3) the name of your manager [00:40:14] Sorry DarTar [00:40:21] I did not open a ticket [00:40:34] I have my user name (nuria) [00:40:42] and keys [00:41:05] let me know whether i can open the ticket myself or somebody else needs to do it [00:45:04] nuria: I can get this done in 10, start by creating a page on the office wiki called https://office.wikimedia.org/wiki/User:Nuria/key [00:45:33] with your public key [00:46:08] oh wait [00:46:30] it turns out that your username is NRuiz (WMF) [00:48:15] nuria: whatever, you can still create a page using the link above :) [01:37:20] * Ironholds sighs [01:37:26] so stat1002 has no way to actually talk to the internet? [01:37:49] probably not something we want to change. [01:50:56] marktraceur: we finished implementing that bar chart functionality [01:51:12] as soon as it's merged (tomorrow or so) we'll let you know [01:51:20] do you have a limn instance somewhere or are we making you a new one? [01:51:49] Uh [01:51:59] I have a local one, but I could also reasonably(ish) set one up on labs [01:54:47] Bah!1!! You can't join tables with multi-column keys that include NULL values. [01:54:54] * halfak shakes fist at the sky [01:55:00] * halfak wants his 4 hours back [04:23:31] (CR) Dzahn: "will this ever be merge-able?
it was meant to be just a little helper fix and now it's in my gerrit review forever :p" [analytics/wikistats] - https://gerrit.wikimedia.org/r/88978 (owner: Dzahn) [09:44:24] hey qchris [09:44:26] :D [09:44:32] Hi average :-) [09:45:06] qchris: Coren solved the problem I had yesterday with labs [09:45:12] Awesome! [09:45:12] qchris: apparently glusterfs was the cause [09:45:25] glusterfs seems to be a problem often :-( [09:45:35] They onec had to unblock it for me as well. [09:45:42] s/once/onec/ [09:45:51] I see. I thought they used like plain NFS [09:45:55] Damn you glusterfs! [09:46:12] I have no clue how labs work :-D [09:46:40] But as long as you are able to log in again ... awesome! [09:47:30] yeah, that's pretty cool. I wanna try out that idea with ramdisks. I'm gonna put a hadoop in ramdisk, like 4 VMs inside it [09:47:37] I just did a test on speed of ramdisk using dd [09:47:47] it's damn fast [09:48:45] :-) [09:53:11] qchris: have you played with glib-json ? [09:53:23] No. [09:53:29] ok [09:54:23] qchris: do you have maybe a recommendation for an easy to use C library for json ? I tried using glib-json to serialize output for mwprof and ended up with this https://gerrit.wikimedia.org/r/#/c/101793/6/json-output.c [09:54:48] qchris: I think glib-json works ok, except for the fact that the interface is like... you have to do a lot of manual work to serialize .. [09:55:44] C is manual work :-) [09:55:51] Let me take a look at the code [09:55:52] * qchris looks [09:57:33] average: The json-output.c does not look too bad. [09:57:45] What would you want to have more condensed? 
[09:58:17] like, perhaps objects you would fill with data, instead of this procedural api that glib-json offers [09:58:41] in test_schema here you can see how the json would look like after it's being serialized https://gerrit.wikimedia.org/r/#/c/101793/6/test.py [09:59:08] eventually I was aiming to get a json from json-output.c and be able to test it against the schema in test.py [09:59:35] You mean like creating/instantiating structs from the objects and passing them to json-glib? [09:59:42] something like that yes [10:00:16] You're thinking python not C :-) [10:00:26] No but seriously ... [10:00:33] I think your code looks mostly fine. [10:01:46] https://developer.gnome.org/json-glib/0.16/json-glib-GObject-Serialization.html#json-gobject-serialize [10:02:02] It seems you can use ^ to achieve what you want. [10:02:15] But as it's glib ... everything needs to be a GObject. [10:02:38] ah yes, that's what I was looking for. I'll look for some example code too [10:02:45] And creating a GObject manually to get the json easy, just to avoid building the json manually directly ... :-) [10:02:59] yeah, it's kind of overkill [10:02:59] Honestly, I think your current code is better. [10:03:02] Yes. [10:03:40] The current json-output.c reads like straight forward C. [10:03:54] (But I did not verify your code) [10:06:39] (CR) Hashar: "Apparently some changes have been pushed directly to the repository AND broke the tests. So they got to be fixed somehow, then Jenkins wi" [analytics/wikistats] - https://gerrit.wikimedia.org/r/88978 (owner: Dzahn) [13:17:51] (CR) QChris: "Filed bug 58576" [analytics/wikistats] - https://gerrit.wikimedia.org/r/88978 (owner: Dzahn) [15:05:15] Again we're having a meeting scheduled ... [15:05:28] Again I am sitting in the hangout alone for 5 minutes ... [15:05:33] And let me guess: [15:06:05] Again we're cancelling it (for whatever reason) and no one bothered to let us know or remove the event :-( [15:06:40] Mhmmm... 
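The trade-off qchris and average weigh above — json-glib's procedural builder API versus filling an object with data and serializing it in one call (what json_gobject_serialize does for GObjects) — is easiest to see side by side. This is a minimal Python sketch of the two styles, not code from mwprof; the `Record` type and field names are made up for illustration:

```python
import json
from dataclasses import dataclass, asdict

# Style 1: procedural -- assemble the structure piece by piece,
# roughly the shape the json-glib JsonBuilder API forces on C code.
def build_procedurally(name, count):
    obj = {}
    obj["name"] = name
    obj["count"] = count
    return json.dumps(obj)

# Style 2: declarative -- fill a typed object, then serialize it in
# one call, the analogue of json_gobject_serialize().
@dataclass
class Record:
    name: str
    count: int

def build_declaratively(record):
    return json.dumps(asdict(record))

print(build_procedurally("mwprof", 3))           # {"name": "mwprof", "count": 3}
print(build_declaratively(Record("mwprof", 3)))  # same output
```

The conversation's conclusion holds here too: the declarative style only pays off once there are enough fields that defining the type (or, in C, the GObject boilerplate) is cheaper than the builder calls it replaces.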
[15:07:13] average, milimetric, ottomata: Do we have the "Staff meeting" today? [15:07:19] Anyone heard of tnegrin? [15:07:39] I haven't seen him online qchris [15:07:54] @seen tnegrin [15:08:10] it's scheduled, but I guess it's not happening? [15:08:37] wm-bot2 ignores @seen? [15:08:49] milimetric: Ja. I think so too :-( [15:09:05] Then I'll vent frustration by eating :-)) [15:09:20] Off to dinner. See you later. [15:09:53] bon apetit qchris_away [15:10:33] oh i thought staff meeting was just after standup [15:10:34] once a week [15:19:52] my hardware assembly skills suck [15:20:03] anyone dealing with hardware in WMF ? [17:00:41] hiooo standup time? [17:23:19] ottomata, qchris: where do i go to see the live stream of requests? [17:23:43] dr0ptp4kt: No clue. Never did that ... [17:23:48] which requests? the full firehose? [17:24:03] the udp2log webrequest stream you mean? [17:24:08] qchris, ottomata - yeah, the firehose...or even the sampled firehouse [17:24:13] i mean firehose :) [17:24:15] sampled is easiest, stat1002 [17:24:21] * qchris looks at https://wikitech.wikimedia.org/wiki/Udp2log [17:24:23] but it isn't streaming [17:27:44] milimetric: left yet? [17:30:12] qchris, ottomata, so there's nothing for viewing a live stream, correct? in other words, i could tail -f the debug logs, but not the general logs? just trying to view some live requests to see whether x-cs headers are coming through. think a configuration file might be out of sync somewhere. [17:31:32] dr0ptp4kt: Not sure. [17:32:04] dr0ptp4kt: Ask ops ... It looks like you could spin up another udp2log client and pass your own config file there. [17:32:20] dr0ptp4kt: But I'd not try with the "ok" from ops. [17:33:04] s/with/without/ :-) [17:34:26] Sorry. I did not say the above. [17:34:45] hehe [17:36:36] mwah [19:04:54] YES [19:04:55] finally! [19:04:55] hide [19:04:57] oops [19:05:00] http://ganglia.wikimedia.org/latest/?r=hour&cs=&ce=&tab=v&vn=varnishkafka&hide-hf=true [19:05:02] YES! 
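dr0ptp4kt's check earlier — whether X-CS headers are coming through the sampled webrequest stream on stat1002 — boils down to filtering log lines for the header. A minimal Python sketch, assuming only that `x-cs` appears literally somewhere in a matching line; the real udp2log field layout should be checked against the Udp2log page on wikitech, and the sample lines below are invented:

```python
def lines_with_xcs(lines):
    """Yield only the log lines that mention an X-CS header (case-insensitive)."""
    for line in lines:
        if "x-cs" in line.lower():
            yield line

# Invented sample records standing in for sampled webrequest lines.
sample = [
    "seq1 2013-12-17 GET /wiki/Foo x-cs=250-99",
    "seq2 2013-12-17 GET /wiki/Bar",
]
print(list(lines_with_xcs(sample)))  # only the first line survives the filter
```

In practice the same generator works over an open file handle of the sampled log, so nothing needs to be loaded into memory to spot-check whether the header is present at all.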
[19:08:10] TADAAAAAA!!!! [19:10:19] Whooooaaaaaaaaa \o/ [19:18:44] brb lunchy [19:30:39] marktraceur: sample ordinal plot: http://debugging.wmflabs.org/dashboards/sample#ordinal-graphs-tab [19:30:49] Oooh. [19:30:59] and anytime you'd like, we can set up a new instance for you [19:31:02] "debugging" seems like a super generic hostname for this but OK [19:31:04] or put it on an existing one [19:31:06] Oh, interesting [19:31:11] oh no, that's just for showing people things [19:31:14] I was thinking it would be on an existing instance [19:31:18] up to you [19:31:24] the most logical one might be... ee? [19:31:45] we have ee, mobile-reportcard, and that's about it [19:31:45] Ehhhhhhh [19:31:52] Sigh. Maybe a new one then. [19:31:59] I have the script basically-ready [19:32:00] hey no worries, that should be fairly easy [19:32:11] (not your script, the new instance) [19:32:12] I need to add support for adding to existing data for the ordinal datasources [19:32:16] it's puppetized and all that [19:32:20] Like, instead of appending new values for jpg/png etc. [19:32:41] oh, gotcha [19:32:58] brb tho, lunch and a bunch of meetings coming up [19:33:05] *nod* [19:33:13] I should probably add a config file too. Agh, so much work. [19:59:03] baack [20:02:08] hey ottomata [20:02:21] hiya [20:02:26] average: hi [20:02:48] can I run vagrant on labs ? [20:03:04] hm [20:03:06] probably [20:03:10] dunno never tried it [20:03:58] (PS1) QChris: Make squid tests time-independent [analytics/wikistats] - https://gerrit.wikimedia.org/r/102292 [20:06:18] (CR) QChris: "recheck" [analytics/wikistats] - https://gerrit.wikimedia.org/r/102292 (owner: QChris) [20:32:11] (CR) QChris: "recheck" [analytics/wikistats] - https://gerrit.wikimedia.org/r/102292 (owner: QChris) [20:33:00] Jenkins/Zuul does not like me :-( [20:33:37] Why don't you check the patch set? [20:36:14] qchris: you can manually force zuul to run the tests [20:36:26] Tried twice :-) [20:36:32] See "recheck" above. 
[20:36:45] And I tried "Build Now" from within Jenkins. [20:36:56] I am just writing an email to hashar :-) [20:39:45] cool [20:45:19] (PS2) QChris: Make squid tests time-independent [analytics/wikistats] - https://gerrit.wikimedia.org/r/102292 [20:45:23] (PS1) QChris: Ignore sampled-1000 testdata generated when running tests [analytics/wikistats] - https://gerrit.wikimedia.org/r/102298 [20:50:58] (PS1) QChris: Use WORKSPACE variable to determine $__CODE_BASE in fallback [analytics/wikistats] - https://gerrit.wikimedia.org/r/102299 [20:51:05] hashar: ^^ [20:51:17] I just joined ! [20:51:23] hashar: ok ok [20:51:35] :-D [20:52:23] average: aren't you in europe ? [20:52:36] we can talk about Zuul tomorrow during europe business day :D [20:52:43] got a bit busy right now catching up with 3 teams in SF :( [20:53:56] hashar: ok, I myself don't have a problem with zuul right now [20:54:21] the analytics jobs need to be reworked i guess [20:54:32] I haven't really looked at them since we have set them up a few months ago [20:54:33] hashar: qchris is submitting patchsets to gerrit but zuul won't trigger the jobs [20:54:43] ahh [20:54:48] Meh. No problem. [20:54:53] yeah that issue is happening everyday around 9pm UTC [20:55:00] I sent email to hashar some minutes ago. [20:55:08] Let's discuss it tomorrow. [20:55:09] zuul can't keep up with all the jobs posted by europe + SF + l10n bot [20:55:15] that is traffic jam [20:55:17] :-D [20:55:23] the job will definitely be run [20:55:31] but there is up to 20 minutes of delay [20:55:36] that is been p***ing off tons of people [20:56:00] Good to hear that gerrit is not the only problem for people ;-) [20:56:01] should get resolved when I upgrade Zuul in january. Previous attempt end of november didn't work :( [20:56:18] well Gerrit is kind of stable nowadays [20:56:29] and I guess it fulfills our needs pretty well ! [20:56:36] But there is an upgrade pending ... 
:-D [20:57:02] Let's discuss after we upgraded and people toyed around with a new change screen :-( [20:59:00] haha that is going to be fun [20:59:03] when is it scheduled ? [20:59:12] you might want to announce the UI change beforehand [20:59:21] maybe a wiki page Gerrit/2.8 with a few screenshots [21:02:37] ^ d asked about it a few days ago. [21:02:52] So I assume that it will not take too long until we're 2.8 users. [21:03:35] If it only be me ... I'd just stick with the old gerrit. I am too old for changes :-D [21:07:00] (PS2) QChris: Ignore sampled-1000 testdata generated when running tests [analytics/wikistats] - https://gerrit.wikimedia.org/r/102298 [21:07:01] (PS3) QChris: Make squid tests time-independent [analytics/wikistats] - https://gerrit.wikimedia.org/r/102292 [21:07:25] (PS2) QChris: Use WORKSPACE variable to determine $__CODE_BASE in fallback [analytics/wikistats] - https://gerrit.wikimedia.org/r/102299 [21:23:52] milimetric: you there ? [21:27:07] hi average [21:27:11] yes, but i'm in between meetings [21:27:20] anything urgent? [21:27:32] nope [21:29:05] wha whaaaa [21:29:05] http://f.cl.ly/items/1U2q281V1p2l2G3S0M0h/Screen%20Shot%202013-12-17%20at%204.28.37%20PM.png [22:03:12] qchris: replied to your email, do NOT read tonight :-) it is too long. [22:03:16] I am off! see you tomorrow! [22:03:25] Good night! [22:03:29] :-) [22:28:10] I have edited teh analytics onboarding wiki adding stuff here & there: https://www.mediawiki.org/wiki/Analytics/Onboarding [22:28:50] Hey folks, I think I tried to ask before but got a weird answer: Is there a git repo for data-analysis scripts? [22:32:46] This is the library on top of d3 that druid team used to make their dashboard. I will lay a little with this today. [22:32:49] *play [22:37:34] * marktraceur looks at milimetric or DarTar [22:37:44] marktraceur: what answer did you get previously ? why was it not satisfactory ? what kind of data-analysis are you refering to ? 
there are many kinds of data-analysis projects going on in WMF Analytics. Have you looked into these ? https://github.com/search?q=%40wikimedia+analytics . How about these ? https://git.wikimedia.org/project/?p=analytics [22:38:17] average: I think someone said "They're in $SOMEDIRECTORY on stat1" [22:38:31] marktraceur: and what was your reply to that ? [22:38:56] Well, I was disappointed because I was looking for a git repo I could push to [22:39:08] marktraceur: we haven't consolidated the scripts we use for data analysis into a single repo, it's been very ad-hoc until now :( [22:39:11] But a subdirectory of git.wm.o/analytics.git should work [22:39:30] I'll request the repo from ^d now [22:39:37] sounds good [22:39:52] ping me when you're done pls [22:39:55] *nod* [22:40:29] DarTar: Would you like me to request analytics/multimedia-stat-scripts or analytics/stat-scripts/multimedia? Or some other preference? [22:40:52] the former sounds good to me [22:41:01] or just drop stat [22:41:02] It has the downside of not being as nested [22:41:05] it's kind of implicit [22:41:07] Ah, yeah [22:41:17] and scripts too :p [22:41:26] how about analytics/multimedia [22:41:26] Oh, hm. I guess so. [22:41:30] :) [22:41:52] Sure sure [22:43:14] And now, WE WAIT. [22:43:19] * marktraceur watches ^d intently [22:45:16] marktraceur: you may consider using github/bitbucket for your repo until it's created on git.wikimedia.org [22:46:39] github/bitbucket/repo.or.cz/unfuddle/code.google.com/launchpad/etc [22:51:27] Would it be so hard to suggest a free one? :P [22:51:34] I guess launchpad is "free" [22:53:04] I guess repo.or.cz maybe too, it looks like basically gitweb with hackiness on top [22:53:53] But I can put hackiness on top of gitweb on my own server too. :P [22:54:06] * marktraceur goes to gitorious [23:01:58] https://gitorious.org/analytics/multimedia exists now [23:17:50] Hrm [23:18:01] Are there memory limits on processes run as a user on stat1? 
[23:18:15] Because it *looks* like there's plenty of memory left, but I'm getting out-of-memory errors [23:23:18] It's weird because basically none of the swap is available but there's all sorts of free memory [23:24:06] Maybe halfak's scripts that are running are swapping? [23:26:23] DarTar, milimetric, any guidance? [23:28:21] Surely these processes haven't been running at 100% CPU for over 60k hours. [23:29:14] Oh, minutes maybe. Better. [23:29:50] Only a little though. [23:31:29] marktraceur: have you checked htop/top ? have you checked what ulimit -a says ? have you checked /proc/<pid>/limits ? [23:32:11] average: The process fails too quickly to look at /proc/whatever [23:32:21] average: ulimit -a says lots of things, what would be relevant? [23:32:45] NVM, http://dpaste.com/1510330/ [23:33:18] I was watching top, and it seemed OK, checking htop [23:33:50] It says about 1/3 of the memory is free, but all of swap is taken up [23:35:18] Hm, the file *is* pretty big, but not bigger than the available memory. I guess splitting it might be a Bad Plan. I can rewrite my script to maybe handle the thing line-by-line [23:35:30] marktraceur: have you checked vm.swappiness in /etc/sysctl.conf ? [23:36:04] I don't see that string in that file. [23:42:21] marktraceur: the fact that the process fails too quickly is not a showstopper. have you tried cryopid ? how about ./binary_you_wanna_run; kill -STOP <pid> ? how about modifying the source code to insert some pause before it segfaults ? [23:43:45] or how about debugging it with gdb and finding out why it segfaults in the first place [23:44:32] also, how does it fail ? does it segfault ? does it SIGSEGV ? does it not find a file ? [23:44:49] average: Honestly the solution is probably to not read in so much data at once [23:46:08] great
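marktraceur's conclusion — don't read in so much data at once — is the standard fix for the out-of-memory symptom discussed above: iterate over the file instead of slurping it. A minimal Python sketch, assuming a hypothetical tab-separated file whose first column we want to tally; the file name and column layout are invented for illustration:

```python
from collections import Counter

def count_first_column(lines):
    """Tally values in the first tab-separated column, streaming.

    Only one line is held in memory at a time, so this works on files
    larger than RAM, unlike data = f.read() followed by splitting.
    """
    counts = Counter()
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if fields and fields[0]:
            counts[fields[0]] += 1
    return counts

# A file handle is itself a lazy iterator of lines, so the same
# function streams a big file without loading it:
#   with open("big.tsv") as f:
#       counts = count_first_column(f)
print(count_first_column(["jpg\t1\n", "png\t2\n", "jpg\t3\n"]))
# Counter({'jpg': 2, 'png': 1})
```

Splitting the input file, as considered above, only helps if each piece is processed and discarded in turn; streaming line by line achieves the same effect without the extra bookkeeping.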