[00:01:55] I think I literally chose the worst way to design this script and now I'm really hating myself for it :P
[00:09:45] milimetric: It comes to mind that I may not have a way to install node mysql support on stat1, so *that's* hilarious. Will coordinate with opsen.
[00:10:19] hm
[00:10:30] I don't know how smart it would be to stick npm there
[00:10:47] i'm pretty sure they won't let you do that
[00:10:51] you'd have to git deploy anything
[00:10:53] and... yeah
[00:11:02] Yeah.
[00:11:30] I could probably just download the code for the mysql library, I think it's standalone
[00:11:39] It's just... that's not super ideal
[00:15:16] milimetric: I think I might go with a one-off local install of node-mysql, as imperfect as it is, just to avoid complication
[00:27:26] marktraceur: usually the shortest path to troubleshooting something like this would be to re-create the situation with minimal code. what that means is taking your script and throwing out everything you can as long as the problem persists
[00:27:43] average: It turns out I should be using mysql.
[00:27:49] It was dumb to do anything else.
[00:27:53] then you would usually give it to someone else and tell him "hey, look, I'm on this machine, with this user, and I have these 5 lines of code that totally fail, what can I do?"
[00:28:04] s/him/them/ yeah
[00:30:16] it's not dumb to not use mysql, it's not a panacea. but I tend to believe you since you know the situation better
[00:32:28] Well, I mean
[00:32:33] mysql has all the data
[00:32:42] And will not require pulling in too much data at once
[00:38:42] marktraceur: that seems ok, if it helps to get things done quickly. If you're still having trouble tomorrow, let's fork the mobile team's repo
[00:39:02] milimetric: It's not about having trouble, just getting off the ground :)
[00:39:10] Once the framework is set up sanely it should be pretty simple
[00:39:23] right, i mean trouble with installs and administrivia stuff
[00:39:34] DarTar: can you remind me where the fact that a user registers via mobile gets recorded?
[00:40:06] nm :)
[00:53:55] awjr: perhaps match against m.wikipedia.org/.*title=Special:UserLogin&action=submitlogin&type=signup in the URL field in logs, and only select those with a mobile UA?
[00:54:31] thanks average :)
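A minimal sketch of the filter average suggests above, assuming tab-separated request-log lines; the field positions and the mobile-UA heuristic are placeholders for the real log layout, not the actual format:

```python
import re

# Pattern suggested above for mobile signup submissions.
SIGNUP_RE = re.compile(
    r"m\.wikipedia\.org/.*title=Special:UserLogin"
    r"&action=submitlogin&type=signup"
)

# Crude mobile-UA heuristic for illustration only; real UA
# classification is considerably more involved.
MOBILE_UA_RE = re.compile(r"Mobile|Android|iPhone|Opera Mini", re.IGNORECASE)

def mobile_signups(log_lines, url_field=8, ua_field=13):
    """Yield lines whose URL matches the signup pattern and whose
    user agent looks mobile. The field positions are hypothetical;
    adjust them to the actual tab-separated log layout."""
    for line in log_lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) <= max(url_field, ua_field):
            continue
        if SIGNUP_RE.search(fields[url_field]) and MOBILE_UA_RE.search(fields[ua_field]):
            yield line
```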
[02:48:32] (PS1) Milimetric: Closing connection not dumb way [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/102379
[02:48:44] (CR) Milimetric: [C: 2 V: 2] Closing connection not dumb way [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/102379 (owner: Milimetric)
[03:24:12] (CR) Dzahn: "thank you, sounds cool. I'm just in "end of the year cleanup" mode" [analytics/wikistats] - https://gerrit.wikimedia.org/r/88978 (owner: Dzahn)
[09:53:33] (CR) QChris: "recheck" [analytics/wikistats] - https://gerrit.wikimedia.org/r/102292 (owner: QChris)
[10:09:52] (CR) QChris: "recheck" [analytics/wikistats] - https://gerrit.wikimedia.org/r/102292 (owner: QChris)
[16:35:51] hashar: Sorry, I was not around much in the morning. Could I steal some of your time today to get wikistats fixed, or would you prefer some other day in the European morning?
[16:43:45] qchris: i looked a bit overnight at wikistats, could not manage to get tests running :-D
[16:43:53] Ok.
[16:43:53] I was probably missing some configuration and gave up :/
[16:44:55] :-)
[16:45:17] Ok. Then I guess it's best we merge without Jenkins for now.
[16:46:00] or find out the commit that was still passing tests :-D
[16:46:04] Is there anything we have to do to get it fixed, or can we just sit & wait and let you fix Jenkins?
[16:46:15] The tests I fixed already.
[16:46:23] https://gerrit.wikimedia.org/r/#/c/102292/
[16:46:36] ahh that is your mail
[16:46:38] But I cannot get Jenkins to pick up that change.
[16:46:41] tests not being triggered hehe
[16:46:45] Yes.
[16:46:49] looking at it
[16:46:55] sorry missed that today
[16:46:59] np.
[16:47:06] It's not urgent :-)
[16:47:16] my brain somehow reached its context switch limit and drops tasks
[16:47:27] I just wanted to get the wikistats change off of dzahn's review queue.
[16:47:30] hopefully vacations will let me rest a bit and start again with an empty queue
[16:47:51] So I guess we'll just postpone wikistats and hand-merge things.
[16:48:02] ahh
[16:48:53] I fixed the repo URL to the new one in the meantime.
[16:49:03] So Jenkins gets the code again.
[16:49:13] so here's the deal
[16:49:43] analytics/wikistats has a single job, analytics-wikistats
[16:49:48] which executes code submitted by users
[16:50:09] to prevent untrusted people from running any code on jenkins boxes, the job is in the Zuul pipeline 'check'
[16:50:14] which is only triggered for trusted people
[16:50:18] and you are not trusted :-]
[16:50:22] Ha!
[16:50:29] Now I understand.
[16:50:36] an @wikimedia.org address is whitelisted by default
[16:50:42] will add your address to the Zuul config
[16:50:50] That'd be awesome!
[16:50:53] Thanks.
[16:52:45] https://gerrit.wikimedia.org/r/102460
[16:52:53] that whitelist is horrible
[16:54:52] :-)
[16:55:03] and Zuul is horrible right now as well :-D
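To illustrate the gating hashar describes above — this is a toy model, not Zuul's actual implementation or config format — the email-whitelist idea boils down to something like:

```python
# Toy model of the gating described above -- NOT Zuul's actual code or
# config format, just the email-whitelist idea. Patch sets from authors
# who fail this check never enter the 'check' pipeline, so their jobs
# (like analytics-wikistats) are never triggered.
TRUSTED_DOMAINS = {"wikimedia.org"}   # whitelisted by default
TRUSTED_ADDRESSES = set()             # per-user additions go here

def may_trigger_check(author_email: str) -> bool:
    """Return True if this author's uploads may run CI jobs."""
    domain = author_email.rsplit("@", 1)[-1].lower()
    return domain in TRUSTED_DOMAINS or author_email.lower() in TRUSTED_ADDRESSES

assert may_trigger_check("someone@wikimedia.org")
assert not may_trigger_check("stranger@example.com")
```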
[16:56:22] (PS1) Milimetric: making sure all session.close calls happen [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/102462
[16:57:35] (CR) Milimetric: [C: 2 V: 2] making sure all session.close calls happen [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/102462 (owner: Milimetric)
[17:54:04] (PS4) Hashar: Make squid tests time-independent [analytics/wikistats] - https://gerrit.wikimedia.org/r/102292 (owner: QChris)
[17:54:12] (CR) jenkins-bot: [V: -1] Make squid tests time-independent [analytics/wikistats] - https://gerrit.wikimedia.org/r/102292 (owner: QChris)
[18:07:55] (PS3) QChris: Use WORKSPACE variable to determine $__CODE_BASE in fallback [analytics/wikistats] - https://gerrit.wikimedia.org/r/102299
[18:18:09] (CR) QChris: "This commit unbreaks the test suite and allows the tests to pass" [analytics/wikistats] - https://gerrit.wikimedia.org/r/102292 (owner: QChris)
[18:19:36] (PS3) QChris: Ignore sampled-1000 testdata generated when running tests [analytics/wikistats] - https://gerrit.wikimedia.org/r/102298
[18:19:42] (CR) jenkins-bot: [V: -1] Ignore sampled-1000 testdata generated when running tests [analytics/wikistats] - https://gerrit.wikimedia.org/r/102298 (owner: QChris)
[18:24:02] (PS4) QChris: Ignore sampled-1000 testdata generated when running tests [analytics/wikistats] - https://gerrit.wikimedia.org/r/102298
[18:25:47] (PS3) QChris: use new wikivoyage logo on stats.wikimedia.org [analytics/wikistats] - https://gerrit.wikimedia.org/r/88978 (owner: Dzahn)
[18:26:32] (PS4) QChris: use new wikivoyage logo on stats.wikimedia.org [analytics/wikistats] - https://gerrit.wikimedia.org/r/88978 (owner: Dzahn)
[18:28:52] (PS2) QChris: Typofix in squids report [analytics/wikistats] - https://gerrit.wikimedia.org/r/100368 (owner: Nemo bis)
[18:30:18] (PS3) QChris: Add some more author aliases [analytics/wikistats] - https://gerrit.wikimedia.org/r/92069 (owner: Nemo bis)
[18:33:13] I like the smell of jenkins-bot verifying patch sets :-)
[18:33:46] (CR) QChris: [C: 1] Typofix in squids report [analytics/wikistats] - https://gerrit.wikimedia.org/r/100368 (owner: Nemo bis)
[18:34:34] (CR) QChris: [C: 1] use new wikivoyage logo on stats.wikimedia.org [analytics/wikistats] - https://gerrit.wikimedia.org/r/88978 (owner: Dzahn)
[19:04:01] qchris: hey
[19:04:23] Hi average
[19:04:32] reading the backlog
[19:04:46] qchris: so now tests pass on your machine but not on jenkins right ?
[19:04:52] no.
[19:04:56] They pass on both.
[19:05:01] https://gerrit.wikimedia.org/r/#/projects/analytics/wikistats,dashboards/default:open
[19:05:03] pass on both ! awesome !
[19:05:09] The V column on ^ is mostly green :-)
[19:14:13] running to cafe, back in a bit
[19:53:29] so haiiii milimetric
[19:53:33] you there?
[19:53:36] yes
[19:53:48] where were we with wikimetrics puppetization?
[19:54:06] i think you had it all working on ottometrics
[19:54:24] except
[19:54:26] uh...
[19:54:30] the queue
[19:54:37] yeah how do I tell
[19:54:40] the queue is running atm
[19:54:54] you can tail /var/log/upstart/jobname
[19:54:55] i see a ton of python procs in ps
[19:54:59] oh right
[19:55:14] or you can try uploading a cohort
[19:55:18] is ottometrics public?
[19:55:19] riight
[19:55:19] [2013-12-18 19:55:15,255: ERROR/MainProcess] Consumer: Connection Error: [Errno 111] Connection refused. Trying again in 4 seconds...
[19:55:24] yes, but i can't login
[19:55:29] because i don't have the correct auth stuff setup
[19:55:33] http://ottometrics.wmflabs.org/
[19:56:04] right
[19:56:24] you have auth set up, just the callback from google is localhost
[19:56:27] for that consumer
[19:56:29] oh
[19:56:31] so it's mismatching
[19:56:44] GOOGLE_REDIRECT_URI ?
[19:56:49] nono
[19:56:52] on google's side
[19:56:57] where i registered the consumer
[19:57:15] uoook?
[19:57:22] it's fine in other words
[19:57:23] for ottometrics?
[19:57:32] no need to do that, i'm sure it's fine
[19:57:36] ok
[19:57:38] sooo
[19:57:42] next steps: i'll look at ottometrics when i'm back from lunch
[19:57:44] ok the main problem is this i think
[19:57:45] [2013-12-18 19:57:25,412: ERROR/MainProcess] Consumer: Connection Error: [Errno 111] Connection refused. Trying again in 24 seconds...
[19:57:46] and double check
[19:57:52] oh, is redis up?
[19:58:08] hmm, nope
[19:58:18] that's something you could do
[19:58:21] yup
[19:58:22] sry, gotta run
[19:58:22] :)
[19:58:23] k
[19:58:27] i'll be back in an hour
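A quick way to confirm the "is redis up?" diagnosis behind the Celery "[Errno 111] Connection refused" errors above — a sketch assuming the queue's broker is a local redis on the default port; adjust host/port to the actual wikimetrics/ottometrics config:

```python
# See whether anything is listening where the Celery worker expects
# its broker. Host and port are assumptions (a local redis on the
# default 6379), not values taken from the puppetization.
import socket

def broker_is_up(host="127.0.0.1", port=6379, timeout=2.0):
    """Return True if a TCP connection to the broker succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if broker_is_up():
        print("broker reachable -- look elsewhere for the error")
    else:
        print("nothing listening -- start redis, then restart the queue")
```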
[20:01:09] milimetric: http://www.slideshare.net/planetcassandra/c-summit-2013-realtime-analytics-using-cassandra-spark-and-shark-by-evan-chan might also be interesting
[20:03:33] gwicke: cool
[20:04:24] druid seems to be able to use S3-compatible blob storage, just not sure how useful that data would be outside of druid
[20:08:18] gwicke: are we using S3 ?
[20:08:22] gwicke: do we plan to use S3 ?
[20:09:45] we are using swift currently
[20:09:52] that's fairly close to s3
[20:09:59] gwicke: who's we ?
[20:10:10] the storage service I'm working on will also have an interface that is very close to S3
[20:10:19] we as in foundation
[20:11:09] didn't know that
[20:11:28] well, good luck with s3/swift
[20:13:42] average: Aaron is taming that, I don't have anything to do with it directly
[21:20:10] fyi https://gerrit.wikimedia.org/r/#/c/102551/
[22:08:22] (PS1) Ottomata: Adding proxy command to ktunnel [analytics/kraken] - https://gerrit.wikimedia.org/r/102567
[22:08:23] (PS1) Ottomata: Adding output if there are no suitable tables found [analytics/kraken] - https://gerrit.wikimedia.org/r/102568
[22:08:31] (CR) Ottomata: [C: 2 V: 2] Adding proxy command to ktunnel [analytics/kraken] - https://gerrit.wikimedia.org/r/102567 (owner: Ottomata)
[22:08:43] (CR) Ottomata: [C: 2 V: 2] Adding output if there are no suitable tables found [analytics/kraken] - https://gerrit.wikimedia.org/r/102568 (owner: Ottomata)
[23:15:26] HEYYYYYYY
[23:15:35] WHO WANTS TO SEE SOMETHING COOL
[23:15:36] WHO!?
[23:15:41] Me!
[23:15:41] milimetric: ????????:):)'
[23:15:45] qchris!
[23:15:47] Ok... then not me.
[23:15:50] :)
[23:15:54] Oh. Yes.
[23:15:54] i'm in a meeting :(
[23:15:57] but I totally do
[23:15:57] PSH
[23:15:58] Show. Show. Show.
[23:16:15] hangout?
[23:16:37] batcave?
[23:16:41] i think my internet is ok...
[23:16:42] hmm
[23:16:56] booting google machine
[23:17:28] who can guess what I am going to show off?
[23:17:56] Fancy new jokes?
[23:18:02] haha
[23:18:03] nope
[23:18:23] a histogram !!!
[23:18:27] nope!
[23:18:36] omg, this is even more huge than I thought
[23:18:43] please do tell
[23:20:56] we are waiting ottomata
[23:20:57] :)
[23:21:04] Can I come too?
[23:22:54] drdee we are in the hangout
[23:22:57] batcave
[23:23:02] https://plus.google.com/hangouts/_/calendar/d2lraW1lZGlhLm9yZ19jYjM3bXU0OGNuaHRkN2hybmE4czI3b25hb0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t.c6j7qidqs491nhi7ovk9pi4h14?authuser=1
[23:23:50] Snaps:
[23:24:04] what were your throughput tests for librdkafka?
[23:24:11] you just had one broker, right?
[23:40:13] milimetric: is the meeting fun ? do you like it ?
[23:40:22] milimetric: you're in a meeting right ?
[23:40:27] ?
[23:40:34] yeah, talking to ori about event logging
[23:45:02] halfak: https://github.com/edenhill/librdkafka/blob/master/INTRODUCTION.md#performance-numbers
[23:45:15] we run with ack=1
[23:45:26] which is not one of Snaps' tests there
[23:45:36] but, ack=1 should have slightly lower latency than ack=2
[23:45:38] so
[23:45:53] when the full fire hose comes around, we run with
[23:46:25] ack=1
[23:46:25] 10 partitions (per topic, there will be 4 topics)
[23:46:26] 2 brokers (or more?)
[23:46:40] erroneously assuming that the data will be spread evenly across all topics (it won't)
[23:46:41] that is 40 partitions
[23:46:49] oh and snappy compression
[23:47:08] so, even with only 2 brokers, we should be able to do much better than 300000 msg/sec
[23:47:34] also, his numbers there are with only a single producer
[23:47:45] we have 1 producer per varnish server
[23:49:22] i enjoyed reading this wiki page btw: https://en.wikipedia.org/wiki/Byzantine_generals_problem
[23:49:40] That's great news. Also, this is some nice documentation.
[23:51:02] ok laters all!
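Making the arithmetic in ottomata's estimate explicit — a sketch only, using the numbers stated in the log and the even-spread assumption the log itself flags as erroneous:

```python
# Back-of-the-envelope numbers for the firehose setup described above:
# 4 topics x 10 partitions each on 2 brokers, ack=1, snappy compression.
# The 300,000 msg/sec figure is the single-producer baseline referenced
# from librdkafka's INTRODUCTION.md; treating the data as evenly spread
# across topics is the simplification the log calls erroneous.
TOPICS = 4
PARTITIONS_PER_TOPIC = 10
BROKERS = 2
SINGLE_PRODUCER_BASELINE = 300_000  # msg/sec

total_partitions = TOPICS * PARTITIONS_PER_TOPIC     # 40, as stated above
partitions_per_broker = total_partitions // BROKERS  # 20

print(f"{total_partitions} partitions total, {partitions_per_broker} per broker")
print(f"with one producer per varnish server, aggregate throughput should "
      f"comfortably exceed the {SINGLE_PRODUCER_BASELINE:,} msg/sec baseline")
```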