[12:48:43] hey drdee
[12:49:16] mooooorning
[13:03:48] good morning good folk
[13:07:09] ottomata: hiiiii
[13:07:31] sandy survivors!!!!! WELCOME BACK!!
[13:07:31] you good? were you in Brooklyn last night?
[13:08:06] how was it?
[13:08:53] hiiii!
[13:08:58] yeah! stayed in clinton hill
[13:09:11] wanted to slumber party in dumbo, got lazy and played board games
[13:10:55] and you milimetric……. how was front row?
[13:11:10] no problems here
[13:11:12] no liberty bell
[13:11:17] just a couple of branches fell
[13:11:42] cool, what board games
[13:14:30] i was going to make 3 stops last night
[13:14:34] 1. pick up friend
[13:14:44] sumanah, are you back on our fair coast?
[13:14:44] 2. walk to other friend's, pick them up
[13:14:44] I am
[13:14:44] if so, how'd you fare
[13:14:44] 3. walk to dumbo
[13:14:47] we only made it to 2
[13:14:52] so at stop 1: dominion :)
[13:14:58] stop 2: settlers
[13:15:01] nice
[13:15:14] you ever play dschoon in dominion?
[13:15:20] he's pretty ridiculous
[13:15:59] milimetric: I was on one of the last flights from SF to NYC, Sun night. Monday - was anxious a bit, but we actually never lost power or internet. But the winds were scary high of course, and I feel utter sorrow for those south and east of me
[13:16:11] distraction: http://adainitiative.org/2012/10/10000-matching-donation-challenge-from-sumana-harihareswara-and-leonard-richardson/
[13:17:15] yeah, there's something like 6.5 million people without power who are probably gonna stay that way for a while
[13:20:42] naw, i only recently started playing dominion
[13:24:01] is ottomata dry?
[13:25:29] cool. I also like settlers but I always feel like you're stuck after your initial setup. If you haven't tried Carcassonne, I recommend that. Once people get it, the scheming and backstabbing is quite fun
[13:25:32] dupe dry!
[13:25:37] sumanah: i think i count as south and east of you! dry here!
[13:25:47] i have, i like Carcassonne, it's a bit chiller
[13:26:01] me like it too
[13:26:01] i've played settlers so many times, i'm a wee bit tired of it, but I think only because I never play when there are no newbies
[13:26:24] sumanah: that's funny, jimmy also was somewhere else and got back to NYC around the same time as you
[13:26:43] i can bring it next time, I fashioned the box my phone came in into a travel case
[13:29:48] milimetric, what was the bug you found?
[13:30:09] ottomata, could you do one quick thing before taking pictures?
[13:30:25] and that's looking at https://gerrit.wikimedia.org/r/#/c/29779/
[13:30:29] i totally forgot I had rights to push reportcard-data directly into prod and pushed something for david to play with
[13:30:53] so in doing that, I mangled one of the graphs as far as "old" Limn was concerned
[13:31:06] so if someone had git pulled from reportcard2, prod would've died
[13:31:33] i made myself a nice branch on reportcard-data to work on so we don't have this problem
[13:31:53] cool
[13:31:57] yeah, i'm here for a few hours drdee
[13:32:01] but I feel like we should restrict our own ability to push to master
[13:32:23] and i would love to fix the Hue thing
[13:32:41] maybe we should try enabling MRv1 (i really really don't want to do that)
[13:32:45] yeah, i'm looking at cdh 4.1 right now
[13:32:51] but then at least we know whether it's a YARN issue or not
[13:33:05] let's try that first, esp since they advertise hue/oozie improvements, might as well try it first
[13:33:05] we want to do it anyway
[13:33:11] agree
[13:37:38] (I am a terrible daughter, I forgot to call my mom to say I am ok, she just called)
[13:38:06] jeremyb: glad you are ok
[13:41:18] average_drifter, around?
[13:44:07] ok, drdee, as far as I can tell (which isn't very far) upgrading to cdh 4.1.1 should just be an apt-get command
[13:45:05] gonna try it, hope things don't explode
[13:45:08] before you do that, you might want to copy hive-site.xml as that is the only conf file that is not yet fully puppetized
[13:45:14] oh ok
[13:45:28] well, it is puppetized, you are saying you made changes?
[13:45:30] not sure if our confs will be overwritten
[13:45:42] yes i made a ton of changes yesterday
[14:06:31] ok, drdee, upgraded to 4.1.1
[14:06:36] that was easy :)
[14:06:36] let's try hue
[14:06:56] cool
[14:08:29] haha, we are both running it
[14:08:49] :)
[14:08:57] it doesn't work
[14:09:11] rats, yeah, same deal
[14:09:14] ok welp, good to know
[14:09:20] ok you want to try MRv1, eh?
[14:09:56] is it a lot of work?
[14:10:30] not sure….actually it might be, i'd have to configure the mrv1 .conf files
[14:10:32] hmm, but wait
[14:10:36] drdee
[14:10:39] how could this be a mrv1 vs yarn problem?
[14:10:44] we can run this query with hive
[14:10:50] this is a hue problem
[14:11:08] i am looking at the log file
[14:12:58] it still seems to mix up the local vs distributed mode
[14:13:24] drdee: here
[14:13:43] average_drifter, we had some questions about the append_field function
[14:14:44] ok, so append_field only provides space for new fields... and makes all the setup needed
[14:15:03] right, shouldn't it take the field to append and append it?
[14:15:03] and it returns the index of the newly appended field so it can be written to
[14:15:17] why doesn't append_field just do that?
[14:15:17] i was thinking
[14:15:31] append_field(&fields, new_field);
[14:15:45] then fields[last_field] == new_field
[14:15:47] after that call returns
[14:16:27] yes, we can do that too
[14:16:30] basically array append
[14:17:11] alright I'll change it to append like that
[14:17:40] but we need 3 args, so append_field(&fields, &field_count, new_field);
[14:17:48] because it needs to increase field_count also
[14:19:45] that's fine, you have 3 args right now, right?
[14:19:46] also, what is fields_appended for?
[14:20:27] ottomata: well basically fields is an array of pointers
[14:20:37] ottomata: so by itself, it doesn't store any actual data, it just stores pointers to where actual data is
[14:20:55] right
[14:20:55] ottomata: so fields_appended is an array of strings, so I can put the appended strings into them
[14:21:10] ottomata: and then I tell fields that the new data is actually in fields_appended
[14:21:23] hmm, but you don't need to keep an extra array of the appended fields
[14:21:29] you can just malloc space for the new strings, and then set the pointer
[14:21:29] or
[14:21:37] in this case
[14:21:39] you actually already have the string allocated
[14:21:40] area
[14:21:51] so I don't think you need to allocate more space
[14:22:05] you can just add a pointer at the end of fields pointing at the address of area
[14:22:13] that's correct
[14:22:25] maybe you need to allocate a new pointer? dunno
[14:23:02] fields[field_count_this_line] = malloc(sizeof(int *));
[14:23:02] fields[field_count_this_line] = &new_field;
[14:23:02] field_count_this_line++;
[14:23:17] so
[14:23:57] so that entails that I'll have to allocate/free data for each line
[14:23:59] free?
[14:24:17] yes, well, every malloc must be followed by a free at some point right ?
[14:24:53] that was why I preferred to use memory on the stack, so I wouldn't have to call malloc/free for each line
[14:24:53] yeah, but I think that is being done.............
[14:25:14] you are allocating memory though, when you do snprintf, no?
[14:25:26] maybe not.
[14:25:41] ottomata: no because I'm sprintf-ing to a stack allocated array
[14:25:58] ahh i see
[14:26:18] but
[14:26:47] this bit:
[14:26:47] fields[*i-1]
[14:26:56] on line 532
[14:27:29] maybe you don't need a malloc if you can get away with that
[14:27:29] oh right
[14:27:37] because fields is already allocated on the stack too
[14:27:37] right
[14:27:58] yea
[14:27:58] yeah
[14:27:58] then you don't need a malloc either
[14:28:02] i was thinking fields was only 14 fields long at this point
[14:28:05] yeah then just
[14:28:13] fields[field_count_this_line] = &new_field…
[14:28:17] field_count_this_line++;
[14:28:18] that's it
[14:28:20] right?
[14:28:45] that's what's happening now yes
[14:28:59] except I'm using this additional array which I can get rid of
[14:29:15] well, right, but you are doing the assignment outside of append_fields
[14:29:19] that should be moved inside too
[14:29:28] yeah, alright
[14:29:31] I'll do the assignment inside
[14:29:48] cool, danke
[14:29:56] also, your spaces are still all weird :p
[14:30:39] looks like this in my editor:
[14:30:39] http://cl.ly/image/2z1Z2t2o3a26
[14:31:31] hmm, what are your editor settings ? you use vim ?
[14:31:53] what editor do you use ?
[14:31:58] I'll use the same editor
[14:32:13] i use textmate
[14:32:16] for mac
[14:32:20] but if you just set it to use tabs all the time
[14:32:21] instead of spaces
[14:32:25] you'll be fine
[14:32:34] ok, I'll try that
[15:06:34] ottomata: can I give you a preliminary file so you can plug it into textmate and tell me how it looks ?
[15:07:05] sure
[15:13:56] average_drifter, if you do this in vim, you can show your tabs
[15:13:59] :set list listchars=tab:\|\ 
[15:35:47] drdee, i'm trying to reproduce this on my local vm
[15:35:52] how can I import test data into hive?
[15:36:03] use snoopy :)
[15:36:07] i mean sqoopy
[15:36:21] clone github.com/wikimedia/sqoopy.git
[15:37:13] then run sqoopy.py; supply mysql credentials like: username, password, host, dbname, table
[15:37:27] it will generate the sqoop import command for you
[15:37:42] then copy paste the sqoop import command and run it
[15:37:42] you are all set
[15:53:17] ottomata, any luck?
[15:54:54] drdee: do you have textmate ?
[15:58:07] i use sublime text2
[16:06:17] dschoon, drdee: any suggestions on how to set up an rsync between stat1 and kripke?
[16:06:31] ottomata: any suggestions on how to set up an rsync between stat1 and kripke?
[16:06:53] specifically, just getting public key authentication to work
[16:06:54] have you tried creating a key and sticking it on kripke?
[16:06:57] heh
[16:07:08] yeah
[16:07:08] um. that's my cue to leave for the office
[16:07:22] i definitely need a coffee before troubleshooting fucking labs networking/auth issues.
[16:07:23] just curious if anyone had gone through this
[16:07:25] I'll figure it out
[16:08:33] hmm, dunno, that's probably what I would do
[16:09:30] cool
[16:13:39] erosen: depends on what account you are trying to sync
[16:13:58] does that account have access to both bastion and stat1?
[16:14:14] and then you need to do it with proxy support as you have to go through bastion
[16:14:18] i got it to work actually
[16:14:33] i just copied my .ssh/config to stat1
[16:14:35] yeah
[16:27:45] drdee, you hardcoded the path to mysql :p
[16:27:52] yeah i just noticed it
[16:27:55] stupid mistake
[16:28:04] just remove the path
[16:32:57] drdee:
[16:32:58] Must specify destination with --target-dir.
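[editor's note: the ".ssh/config through bastion" trick mentioned above is typically a ProxyCommand entry like the sketch below. Hostnames and the user name are placeholders, not the actual labs setup; this is only one common way to do it.]

```
# ~/.ssh/config sketch: reach a labs host by hopping through the bastion.
# "bastion.example.org", "kripke" and "youruser" are placeholders.
Host bastion.example.org
    User youruser
    IdentityFile ~/.ssh/id_rsa

Host kripke
    User youruser
    # Open a forwarded channel via the bastion; rsync/scp then work
    # transparently, e.g.: rsync -av data/ kripke:/srv/data/
    ProxyCommand ssh -W %h:%p bastion.example.org
```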
[16:33:06] also, --num-mappers printed out as 1.0
[16:33:11] instead of 1
[16:33:11] which made sqoop complain too
[16:33:39] the num-mappers i forgot, i'll fix that
[16:33:54] i forgot to mention that you could pass on the sqoop params to sqoopy
[16:34:00] like --target-dir
[16:34:14] and sqoopy will put them in the cmd line
[16:35:05] what should I put for target-dir
[16:35:05] ?
[16:37:09] drdee ^
[16:37:11] anything?
[16:37:16] any hdfs dir?
[16:41:08] do I need to tell it where the mysql driver is via classpath?
[16:42:13] --target-dir is hdfs path like /user/otto
[16:42:28] mysql driver depends on what machine you are running this
[16:42:40] it needs to find it somewhere
[16:42:43] on an01 it works
[16:42:49] but probably on your vm it doesn't
[16:43:28] i have it installed
[16:43:40] i used my puppet stuff the same way, and hive-metastore needs to use it to run
[16:44:04] Could not load db driver class: com.mysql.jdbc.Driver
[16:44:04] grrr
[16:44:10] i put the jar in my classpath too
[16:48:14] phew, got it
[16:48:42] did you manually put this in place?
[16:48:43] mysql-connector-java-5.1.22-bin.jar
[16:48:45] on an01?
[16:49:43] hmm, also, drdee, the queries sqoop is executing are this:
[16:49:43] Executing SQL statement: SELECT id, name FROM temp_table WHERE (1 = 0)
[16:49:52] WHERE (1 = 0)?
[16:49:56] that is correct
[16:49:58] that won't import anything, right?
[16:50:13] it is doing that to split the query over multiple mappers
[16:50:15] (which of my qs is correct? :) )
[16:50:25] WHERE(1=0) is correct
[16:50:45] the jar stuff is fixed?
[16:57:53] https://plus.google.com/hangouts/_/2e8127ccf7baae1df74153f25553c443bd351e90
[16:59:25] https://plus.google.com/hangouts/_/2e8127ccf7baae1df74153f25553c443bd351e90
[16:59:40] https://plus.google.com/hangouts/_/2e8127ccf7baae1df74153f25553c443bd351e90
[16:59:44] brb1s coffee
[17:00:37] running a few minutes late, i am attending product manager meeting
[17:00:37] drdee, jar stuff, sorta, I need to know if you manually put that jar in place on an01
[17:00:52] i symlinked that to the global one for other services (hive, oozie, etc.)
[17:01:15] i'm getting closer, but it doesn't quite work on my local yet
[17:02:09] i am trying to remember
[17:02:24] i think i just did apt-get install java-mysql
[17:02:29] or something like that
[17:03:09] hmmmm, i think you did something more
[17:03:33] there is a file in place on an01 that is not on my local after libmysql-java and sqoop install
[17:03:33] :)
[17:04:19] what file?
[17:04:52] /usr/lib/sqoop/lib/mysql-connector-java-5.1.22-bin.jar
[17:05:22] we are in hangout, btw
[17:24:30] (related) https://www.mediawiki.org/wiki/Analytics/Limn/GlobalDevFeedback
[17:54:12] dschoon: can I bug you about global-dev deploy options/
[17:54:12] ?
[17:54:57] totes.
[17:55:15] cool
[17:55:17] i'll be down in a bit
[18:50:15] hey louisdang, i am back in 15 minutes
[18:50:49] ok drdee
[20:29:26] ok
[20:29:28] drdee:
[20:29:34] drdee: hi
[20:29:34] yo
[20:29:35] drdee: solved the problems with tabs and stuff
[20:29:43] k
[20:29:44] drdee: I git reviewed, waiting for Andrew to review
[20:29:49] in the meantime
[20:29:55] got a big dataset of x-forwarded entries in logs on stat1
[20:29:58] got them locally
[20:30:03] he is not around, he will be back tomorrow
[20:30:08] I can see that there are multiple entries for some of them
[20:30:47] create a private gist and show some examples
[20:31:15] sent you a link in pm
[20:31:23] got it
[20:32:12] some of those ips are ipv6
[20:32:12] some are ipv4
[20:32:17] but libanon knows how to handle both
[20:32:26] from what I've seen
[20:39:49] looking now
[21:00:47] sweet jesus, brb lunch