[13:27:56] morning average_drifter
[13:28:09] milimetric awake as well :)
[13:28:10] hey drdee
[13:28:18] just contacted hashar about the jenkins
[13:28:23] yep :)
[13:28:24] just asked hashar to give us the logs
[13:28:26] cool
[13:28:38] morning milimetric, how was your weekend?
[13:28:43] cool he just answered
[13:28:52] great!
[13:28:52] spent the whole weekend thinking about the new framework, I'm super pumped
[13:29:13] excellent!
[13:29:25] but I talked to the meteor guys and I'm not going with that right now. They're planning some stuff in the future that would make it make much more sense. Right now they're too tied to MongoDB
[13:30:06] David's pumped too, we're still going at this in two different ways
[13:30:14] mongodb is not the way of the force
[13:30:19] I'm just going to do what I know - knockout
[13:30:38] drdee: average_drifter hey :-)
[13:30:48] would be easier for me if we all talk here :-]
[13:30:48] hey hashar
[13:30:48] totally
[13:31:10] average_drifter: there :D
[13:31:11] milimetric, meet hashar our jenkins / ci guy
[13:31:19] hey hashar
[13:31:24] oh I met him at the all hands
[13:31:31] cpp;
[13:31:31] so I just wanted to show you this link right https://integration.mediawiki.org/ci/view/Analytics/job/udp-filter/4/console
[13:31:31] cool
[13:31:32] he's one of my favorite wikimedians
[13:31:35] having some problems
[13:31:36] hi hashar
[13:31:37] I have met too many people there :-]
[13:31:46] milimetric: howdie :-]
[13:31:46] Dan Andreescu
[13:31:48] new analytics guy
[13:31:53] (not so new anymore now)
[13:32:19] a WMF month counts as a regular year for most :D
[13:32:43] soo
[13:32:43] https://integration.mediawiki.org/ci/view/Analytics/job/udp-filter/4/console
[13:32:46] shows that aclocal fails
[13:32:50] which is the very first command
[13:32:58] the reason is your workspace is empty()
[13:33:04] :D
[13:33:12] you can tell by going to the job page at https://integration.mediawiki.org/ci/view/Analytics/job/udp-filter/
[13:33:15] and clicking Workspace on the left
[13:33:17] we are total jenkins newbies
[13:33:20] https://integration.mediawiki.org/ci/view/Analytics/job/udp-filter/ws/
[13:33:29] the root cause is …. wait for it
[13:33:34] that jenkins does not git clone :-]]]]]]]]
[13:33:41] luckily that is an easy fix
[13:33:58] average_drifter was asking which account to use for git clone
[13:34:02] what is best practice?
[13:34:10] anonymous
[13:34:10] just your gerrit account?
[13:34:10] using the https URL
[13:34:13] ohhhhhhhhhhhhhhhhhh
[13:34:17] that might explain
[13:34:26] your gerrit account means you will need a passwordless ssh key to be uploaded on jenkins
[13:34:33] which also means that anyone would have access to it :-]
[13:34:39] right
[13:34:53] another thing is the job is only configured to react on a patchset created / draft published
[13:35:08] but there is nothing set to grab the code from git
[13:35:10] I think using the git plugin will be fine
[13:35:17] I will set it up for you
[13:36:01] average_drifter ^^
[13:36:23] yeah if we have one working job then we can use that as a template
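A minimal sketch of the anonymous checkout hashar recommends here; the exact anonymous-HTTP path for the repo under gerrit is an assumption:

    # Anonymous HTTPS clone: no credentials involved, so no passwordless SSH
    # key has to live on the Jenkins host (repo path under /r/p/ is assumed)
    git clone https://gerrit.wikimedia.org/r/p/analytics/udp-filters.git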
[13:36:23] hashar: what happens if I hit "build now"
[13:36:27] I should really write a doc about it :-]
[13:36:35] if the job is configured to run upon gerrit patchsets ?
[13:36:35] average_drifter: I have no idea ;-]
[13:36:37] you should :D
[13:36:39] hashar: so it's UB (undefined behaviour)
[13:36:39] probably some error
[13:36:44] since nothing is defined
[13:36:53] so here is the deal
[13:36:58] whenever a change is submitted in Gerrit
[13:37:09] hashar: can I set it to build from the git repo as well ? so I can at least debug it until I get it running ?
[13:37:19] Jenkins receives an event which contains metadata such as the project name, branch name, ref spec etc
[13:37:33] then I can hit "Build now" and see if everything ran smoothly
[13:37:36] the Gerrit Trigger Plugin converts them into environment variables which can then be used by other plugins
[13:37:54] the trick is to set up the Git plugin to react to the Gerrit events and to get the branch/commit from the global environment
[13:39:31] brb getting my monday morning coffee
[13:39:33] hashar: can you please suggest a way of doing that ? this is the current configuration garage-coding.com/Screenshot-41.png
[13:40:06] so I finished setting it up
[13:40:15] can you submit a patchset to the analytics/udp-filters repo ?
[13:40:21] that should trigger the change + fetch from git
[13:40:48] http://garage-coding.com/Screenshot-41.png
[13:43:41] hashar: moment, trying that
[13:46:01] ahh
[13:46:07] did a wrong config
[13:46:07] done
[13:46:07] trout again :-]
[13:46:10] wait
[13:46:11] tryout again
[13:46:26] GERRIT_REFSPEC was missing the leading $ :(
[13:47:29] hmm
[13:47:44] hashar: did you see the new review ?
[13:47:50] I made it just now to test
[13:47:53] link ?
[13:48:06] remote: https://gerrit.wikimedia.org/r/31827
[13:48:06] remote: https://gerrit.wikimedia.org/r/31828
[13:48:21] sorry for the double.. one of them at least should have triggered jenkins
[13:48:34] oh
[13:48:36] so I used https://gerrit.wikimedia.org/r/26701
[13:49:25] average_drifter: there was a minor configuration issue in the Gerrit project configuration
[13:49:39] hashar: oh, didn't know about that
[13:49:39] the filter for branch was set to plain: **
[13:49:50] which means it was expecting a branch named '**' which does not exist
[13:50:02] changed the filter type from plain to path
[13:50:02] hashar: I saw ** in a different project and I naively thought I could use it for this one too
[13:50:09] yeah that was fine :]
[13:50:17] except you also need to switch from plain type to path
[13:50:35] that is a newbie issue I ran into over and over when I started with jenkins
[13:50:36] :-D
[13:50:46] anyway
[13:50:46] https://integration.mediawiki.org/ci/job/udp-filter/6/
[13:51:16] that build got triggered from change https://gerrit.wikimedia.org/r/26701
[13:51:17] we got a problem now
[13:51:19] src/udp-filter.c:47:21: fatal error: libcidr.h: No such file or directory
[13:51:21] whenever you want to manually trigger a patchset, you can go to the Jenkins main page https://integration.mediawiki.org/ci/
[13:51:30] we have some libs we need installed on that jenkins machine
[13:51:35] click the "Query and Trigger Gerrit Patches" link on the left which brings you to https://integration.mediawiki.org/ci/gerrit_manual_trigger/?
[13:51:37] hashar: how can I install some libs on that jenkins ?
[13:52:01] oh libs
[13:52:08] that is a C program isn't it ?
[13:52:14] hashar: yes
[13:52:28] hashar: the libs we need are already packaged by drdee and ottomata
[13:52:47] hashar: can you please add an entry to the /etc/apt/sources.list on the jenkins machine and hit an aptitude install ?
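Roughly what the Gerrit Trigger + Git plugin combination hashar set up boils down to, expressed as the equivalent git commands; the plugin fields (refspec $GERRIT_REFSPEC, branch $GERRIT_BRANCH) are the usual Gerrit Trigger arrangement, stated here as an assumption rather than the exact job config:

    # What Jenkins effectively does once the Git plugin reads the Gerrit
    # Trigger environment variables for a patchset-created event
    git fetch https://gerrit.wikimedia.org/r/p/analytics/udp-filters.git "$GERRIT_REFSPEC"
    git checkout FETCH_HEAD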
[13:53:13] just a minute, searching for the sources.list entry
[13:53:37] Jenkins runs on a production server
[13:53:57] so any package addition must be added as a puppet change in operations/puppet.git and get validated by ops
[13:54:02] a third party source list is definitely out of the question :/
[13:54:16] deb http://apt.wikimedia.org/wikimedia precise-wikimedia main universe
[13:54:19] unless they are libs already available in Ubuntu Precise
[13:54:20] deb-src http://apt.wikimedia.org/wikimedia precise-wikimedia main universe
[13:54:27] it's not third party
[13:54:47] \O/
[13:54:50] we are saved!
[13:54:50] vagrant@wdev:~$ aptitude search libcidr
[13:54:55] i A libcidr0 - A library to handle manipulating CIDR netblocks in IPv4 and IPv6.
[13:54:56] i libcidr0-dev - A library to handle manipulating CIDR netblocks in IPv4 and IPv6.
[13:55:01] vagrant@wdev:~$ aptitude search libanon
[13:55:03] i A libanon0 - IP address anonymization functions.
[13:55:11] i libanon0-dev - IP address anonymization functions.
[13:55:13] the libs needed are these
[13:55:28] hashar: are you part of ops ?
[13:55:32] nope
[13:55:41] hashar: can we talk to someone from ops please ?
[13:56:23] so the two packages you need are libcidr0-dev and libanon0-dev right ?
[13:56:46] yes
[13:56:52] because those provide headers too
[13:57:11] so they need to be added to the continuous integration server
[13:57:21] correct
[13:57:21] that is done using puppet, a configuration management system
[13:57:25] all of the conf is in operations/puppet.git
[13:57:30] which is gated by the ops team
[13:57:36] hashar: so it's just two add_recipe lines right ?
[13:57:36] but anyone can submit a change there for them to review
[13:57:42] kind of, yeah
[13:57:51] have you ever made a change to operations/puppet ?
[13:57:51] good, I'll check out their puppet then
[13:58:05] no, but I'm about to make one (with your help if you can guide me please)
[13:58:11] sure :-]
[13:58:15] first clone it :-]
[13:58:35] git clone ssh://gerrit.wikimedia.org:29418/operations/puppet.git
[13:58:54] that needs your labs account
[13:59:28] cloning
[14:06:15] mooorning!
[14:07:50] hashar: I was thinking something like this https://github.com/activars/vagrant-java-webapp/blob/master/cookbooks/build-essential/recipes/default.rb#L35
[14:07:51] hey ottomata
[14:07:54] hiya
[14:08:07] is chef different from puppet ?
[14:08:10] do they serve the same purpose ?
[14:08:20] is chef using puppet or the other way around ?
[14:08:28] I have no idea what chef is :(
[14:10:39] moooorning OTTOMATA!!!!!!
[14:11:01] morning
[14:11:06] chef is an alternative to puppet
[14:11:08] they do similar things
[14:11:35] hashar: if I make a change to operations/puppet, how can I test it locally ?
[14:11:58] not sure you can :-D
[14:12:27] average_drifter
[14:12:27] you can!
[14:12:28] i do it
[14:12:32] but it takes a bit of setup :0
[14:12:50] happy to help
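For reference, the effect of the package change discussed above, written out as plain commands; the repo line is the one average_drifter pasted, which is wikimedia infrastructure rather than third party. As hashar explains, gallium is puppet-managed, so nobody actually runs this by hand there:

    # Manual equivalent of what the eventual puppet change will do
    echo 'deb http://apt.wikimedia.org/wikimedia precise-wikimedia main universe' | sudo tee -a /etc/apt/sources.list
    sudo aptitude update
    sudo aptitude install libcidr0-dev libanon0-dev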
[14:13:03] ottomata: I have the repo cloned
[14:13:13] ottomata: yes please, help needed
[14:13:18] ottomata: what can I do now ?
[14:13:31] first, set up puppetmaster
[14:13:33] not too hard
[14:13:40] I have some vagrant vms I can use as guinea pigs
[14:13:40] just install it
[14:13:43] ok yeah
[14:13:46] yeah install puppetmaster
[14:13:54] on a vm ?
[14:13:54] put your cloned repo at /etc/puppet
[14:13:54] yeah
[14:14:09] (you can symlink to the cloned repo if you want)
[14:15:11] average_drifter: what needs to happen to: https://gerrit.wikimedia.org/r/#/c/31827/
[14:16:36] drdee: should be abandoned, we used it only for debugging jenkins
[14:17:24] average_drifter: have you cloned the operations/puppet repo yet ?
[14:17:50] hashar: yes, I copied it to the vm
[14:17:55] hashar: and I'm putting it in /etc/puppet
[14:18:07] mm
[14:18:14] to add new packages that is really overkill :-]
[14:18:43] anyway, here's the dump of my brain to make your change.
[14:18:48] isn't there a simpler way of doing this than to actually install the debs?
[14:19:03] drdee: what do you mean ?
[14:19:42] average_drifter: https://integration.mediawiki.org/ is hosted on the gallium host. The puppet classes applied to that host are defined in manifests/site.pp under something like node "gallium.wikimedia.org" {}
[14:20:04] for example it includes the puppet class "misc::contint::test::packages"
[14:20:16] which itself is defined in manifests/misc/contint.pp
[14:20:38] ah sorry, my adium crashed
[14:20:45] hehe
[14:21:38] average_drifter: the syntax would be : package { ["libcidr0-dev", "libanon0-dev",]: ensure => present; }
[14:22:02] ideally adding a comment above the line for future reference, something like: # dev packages used for analytics/udp-filters compilation
[14:22:13] ottomata: did what you told me
[14:22:16] ottomata: /etc/puppet is now containing the stuff
[14:23:29] ok cool
[14:23:47] so start puppet master
[14:23:58] hashar, well installing our udp-filters debs on jenkins doesn't sound like a scalable solution
[14:23:59] sudo /etc/init.d/puppetmaster start
[14:24:45] (and now for some funky stuff we're going to have to work out)
[14:24:47] drdee: the other solution would be to have git submodules for libanon and libcidr and have those built inside our make process
[14:24:58] drdee: what does not scale ?
[14:25:04] what are you guys trying to do?
[14:25:19] ottomata: jenkinizing udp-filters
[14:25:39] having to write puppet changes to install dependencies to run jenkins tests
[14:26:08] well it just needs to be done once
[14:26:45] right, for udp-filters, but i can't imagine that this is the only project where this happens
[14:26:53] right
[14:27:31] normally someone should be able to ssh into the jenkins machine and just hit aptitude install
[14:27:38] you are the first ones to set up a C project I guess :-D
[14:27:53] it sounds like an anti-pattern
[14:27:53] we are having a continuous integration sprint this week in Utrecht, NL
[14:28:09] hopefully we'll have a system to be able to run jenkins in labs machines that would report back to the production jenkins
[14:28:13] that will nicely solve that kind of issue
[14:28:14] average_drifter, next up, run:
[14:28:22] puppetd --test --server=`hostname`
[14:28:23] tests in jenkins should be self-contained and not require installing deps on the jenkins machine to make it build a project
[14:28:25] it will fail
[14:28:27] in the sense you will have full root on the labs instance and can install whatever you want :-]
[14:28:29] but tell me what it says
[14:28:38] ottomata: moment, trying
[14:28:45] you can PM me
[14:29:26] drdee: must agree with you. Ori has been working a bit on vagrant, a VM system that would eventually let us run jenkins tests in an environment where the job can do whatever it wants (such as adding packages)
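Putting hashar's brain dump together with ottomata's test loop, a sketch; the exact placement inside misc::contint::test::packages is assumed, and the package resource mirrors the syntax hashar pasted above:

    # Hypothetical addition to manifests/misc/contint.pp, inside the
    # misc::contint::test::packages class hashar points at:
    #
    #     # dev packages used for analytics/udp-filters compilation
    #     package { [ "libcidr0-dev", "libanon0-dev", ]:
    #         ensure => present;
    #     }
    #
    # ...then, on the scratch VM with the clone at /etc/puppet, run a one-shot
    # agent pass against the local master to watch the change apply:
    sudo /etc/init.d/puppetmaster start
    puppetd --test --server=`hostname`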
[14:30:09] hashar: is it insane to do in a shell on jenkins@jenkins machine this "sudo -s aptitude install " ?
[14:30:26] you need to be root
[14:30:39] honestly, just do the puppet change, ask ops to merge it and we are done :-]
[14:30:44] hashar: ok
[14:30:49] I can do it myself if you want
[14:30:51] ha, I can do it too
[14:31:03] ok, you guys will be faster than me if you want to
[14:31:06] setting up a puppet env to test this is actually going to be pretty annoying
[14:31:09] not to install the packages
[14:31:18] but if you want to test all the jenkins related stuff
[14:31:19] that's going to be a pain, i'm sure
[14:31:26] ottomata: do you have merge access on sockpuppet and the ability to run puppet on gallium ?
[14:31:26] ok, lemme help you with that
[14:56:15] hashar, ottomata, average_drifter: how about this:
[14:56:36] make a separate project for libcidr and libanon in jenkins and have them build once
[14:56:44] then run the jenkins copy artifact plugin (http://wiki.hudson-ci.org/display/HUDSON/Copy+Artifact+Plugin)
[14:56:55] to the udp-filter workspace
[14:57:00] no puppet changes
[14:57:16] also if we ever upgrade libanon or whatever then we can just rebuild the artifact
[14:57:40] that sounds great
[14:57:46] libanon and libcidr could use some continuous integration of their own
[14:57:55] the libcidr guy has recently updated his own repo
[14:57:57] and I need to bring in the upstream changes
[14:57:59] yes totally
[14:58:10] but we also cut out the ops dependency
[14:58:15] and it just doesn't sound right
[14:59:17] hashar ^^
[14:59:32] oh i pasted the hudson plugin, not the jenkins one, let me google
[15:00:00] https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
[15:00:18] drdee: if libcidr and libanon are made by you, you indeed want to build them in jenkins jobs
[15:01:10] I thought they were some standard / known libraries
[15:01:45] average_drifter ^^
[15:01:55] reading
[15:03:29] that would effectively mean that we will also have to set up environment variables
[15:03:32] export LD_LIBRARY_PATH=$HOME/local/lib/lib:$LD_LIBRARY_PATH
[15:03:39] export PATH=$HOME/local/bin:$PATH
[15:03:39] export C_INCLUDE_PATH=$HOME/local/lib/include:$C_INCLUDE_PATH
[15:03:39] export LIBRARY_PATH=$HOME/local/lib/lib:$LIBRARY_PATH
[15:03:45] I mean.. not with those paths
[15:03:58] but we will need to set these variables so libanon and libcidr will be found in them
[15:04:39] can't we copy the artifacts to the right place using https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin ?
[15:05:15] drdee: oh you mean, copying them right into the build directory of udp-filters ?
[15:05:18] like having the .so-s there
[15:05:28] yes
[15:05:28] ok, re-reading that link
[15:05:57] is the build directory of udp-filters the same each time ?
[15:06:02] do we have multiple workers on the jenkins or just one ?
[15:06:12] ask hashar :)
[15:06:12] just one
[15:06:14] hashar: ok
[15:06:16] there is no slave yet
[15:06:56] as for your lib stuff, I would set up a job for each lib that reacts to gerrit changes made to them.
[15:07:11] they would do lint / compilation / tests if you have them
[15:07:33] then make those jobs trigger a run of the udp-filters latest master
[15:07:56] and provide them with the lib that got built
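A sketch of the build step drdee's Copy Artifact idea leads to, assuming the plugin is configured to drop the lib jobs' outputs into a deps/ directory of the udp-filter workspace; the directory name and the autotools invocation are assumptions, and $WORKSPACE is the variable Jenkins sets for the job:

    # Hypothetical udp-filter build step, after Copy Artifact has populated
    # $WORKSPACE/deps with the headers and shared objects from the lib jobs
    export C_INCLUDE_PATH="$WORKSPACE/deps/include:$C_INCLUDE_PATH"
    export LIBRARY_PATH="$WORKSPACE/deps/lib:$LIBRARY_PATH"
    export LD_LIBRARY_PATH="$WORKSPACE/deps/lib:$LD_LIBRARY_PATH"
    aclocal && autoconf && automake --add-missing
    ./configure && make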
[15:38:32] man it's chilly in here, i am changing locations, be back in just a few
[15:38:46] k
[15:40:52] average_drifter: maybe first set up a libcidr and libanon jenkins job?
[15:41:00] ottomata: you should write some documentation about the vagrant VM you did :-]
[15:41:30] ori might be interested in it, he has been working on a vagrant VM which provides a fully installed MediaWiki
[15:42:00] ottomata: average_drifter: also for testing changes you could use wmflabs. Create an instance and, once it's installed, apply the puppetmaster::self class https://labsconsole.wikimedia.org/wiki/Help:Self-hosted_puppetmaster
[15:42:15] might take a bit longer than simply duplicating a vagrant vm though
[15:44:47] drdee: yeah
[15:51:38] off to get my daughter back home. Will be back later this evening
[15:54:37] hashar, mine wasn't vagrant
[15:54:46] but I should do a real nice one, I probably should take Ori's and add puppetization to it
[15:55:09] be back in a bit
[15:58:00] see you tonight!
[17:28:38] not having internet at home is pretty lamesauce.
[17:28:57] getyoself a jailbroke iphone and then tether!
[17:29:05] yeah yeah.
[17:29:15] on the plus side, it means i basically read all the source to meteor, milimetric
[17:29:36] lol
[17:29:38] ok i got a surprise for you
[17:29:43] it'll have to wait until the standup
[17:30:14] i have no doubt you reimplemented everything in knockout.
[17:31:34] anything for me to pull?
[17:34:39] not yet, sorry, going to grab lunch
[17:34:46] sounds good
[17:53:05] https://plus.google.com/hangouts/_/2e8127ccf7baae1df74153f25553c443bd351e90
[18:00:14] drdee
[18:00:18] we wait for uuuu
[18:00:36] i am in the hangout
[18:00:47] no, you definitely are not.
[18:00:47] i pasted the hangout
[18:00:47] haha
[18:00:47] https://plus.google.com/hangouts/_/2e8127ccf7baae1df74153f25553c443bd351e90
[18:00:51] we are all in it
[18:00:53] from your pasted link
[18:00:53] click the link you pasted.
[18:01:04] https://plus.google.com/hangouts/_/2e8127ccf7baae1df74153f25553c443bd351e90
[18:01:04] it's the smaame link
[18:01:04] haha
[18:01:04] weird
[18:01:07] maybe re join?
[18:01:20] https://plus.google.com/hangouts/_/2e8127ccf7baae1df74153f25553c443bd351e90
[18:01:24] yup we're all here
[18:19:24] neither average_drifter nor I can log in to build1 on labs
[18:19:30] my public key is denied
[18:19:39] anybody have suggestions?
[18:23:57] ottomata, some googling on the zip error seems to suggest that it's due to a lack of available file handles
[18:24:05] would that make sense?
[18:24:06] my pubkey is denied too
[18:24:17] hmmmm, i doubt it but will check
[18:26:58] so! dschoon, i'm ready to chat
[18:27:04] we're still talking
[18:27:11] gimmee a sec
[18:28:16] mk
[18:41:34] drdee: there is a way to increase the number of file handles in linux
[18:42:00] i know, just not sure if that is the actual problem
[18:42:12] yeah, i don't think it is
[18:42:13] /proc/sys/fs/file-max
[18:42:23] well you can do an lsof
[18:42:25] to see how many are used
[18:42:28] and compare with file-max
[18:43:19] yeah, 6K used, file max is 19565103
[18:44:08] is this libz related ?
[18:44:38] i think it is hue or oozie related
[18:44:38] something is stoopid
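The file-handle check drdee and ottomata walk through, as commands (assuming a shell on the affected box):

    # Compare open file handles against the kernel limit; here it came out to
    # roughly 6K in use versus a file-max of 19565103, so handles were not it
    lsof | wc -l
    cat /proc/sys/fs/file-max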
[19:03:14] ahhhh diederik!
[19:03:15] haha
[19:03:17] ottomata: ready when you are
[19:03:24] you gotta tell me when you make config changes on the servers
[19:03:26] drwxrwxr-x 9 diederik wikidev 4096 Aug 4 2008 ext-2.2
[19:03:33] that needs to be puppetized
[19:03:40] drdee ^
[19:03:42] cool dschoon
[19:03:42] so
[19:04:17] ottomata, those changes happened like 4 weeks ago
[19:04:21] haha, i know!
[19:04:30] but I didn't know (did I?)
[19:04:30] i always notify you immediately
[19:04:30] they happened like 2 months ago
[19:04:30] haha
[19:04:31] ok maybe you did
[19:04:32] hehe
[19:04:35] did I know?
[19:04:46] i think i told you
[19:04:46] but if not then i am sorry
[19:04:51] yeah who knows, maybe you did
[19:05:02] cool, adding an asana task for me to do that
[19:05:38] dschoon: last we talked (before I sent out my email about this), we had kiiiinda said that we didn't care if the format or order of the fields changed
[19:05:53] we did?
[19:07:34] ottomata: just between request log format and event log format
[19:07:35] ottomata: i'm trying to decide if we can share code if those are the same
[19:07:35] ottomata: if we have to do special stuff for event anyway
[19:07:35] dschoon: no, they will have to be separate
[19:07:35] ottomata: we might as well choose a format that suits us
[19:07:35] dschoon: correct.
[19:07:56] ah, yes.
[19:08:36] yeah. two ways of looking at reducing complexity
[19:08:40] right
[19:08:51] either aiming at a local minimum, or a global minimum
[19:08:56] if we need to change the format at all, i think we might as well go all the way and make it into what we want
[19:09:00] if we don't have to make any changes
[19:09:07] then we should keep it the same
[19:09:09] however, ori really wanted the url to come first
[19:09:15] which I think is a fair request
[19:09:39] yeah, totes fine with me.
[19:09:47] shall we convene on the etherpad?
[19:09:56] ok
[19:09:56] i'm there
[19:10:12] what's the URL again?
[19:10:19] http://etherpad.wikimedia.org/Analytics-Pixel-Service
[19:10:20] ah.
[19:10:21] there you are
[19:11:17] (I am chatting to you there)
[19:13:59] how's that look?
[19:17:32] if you want to try it out
[19:17:39] oops, wrong chat
[19:18:11] drdee: fixed the build problem on packages
[19:18:21] drdee: we have udp-filters_0.3.19.2-1_i386.deb
[19:18:21] cool
[19:18:28] :)
[19:19:33] drdee: would you like to sign this package so it gets to the repo ?
[19:22:19] yes
[19:22:29] hmm, oxygen packet loss!
[19:22:29] yes
[19:22:31] i was just gonna say it
[19:22:37] who is playing :)
[19:23:01] average_drifter: can you install gnupg on build1?
[19:23:10] ottomata, build1 should work for you again as well
[19:23:17] root 3789 11.1 0.0 9764 812 pts/0 D+ 18:57 2:51 tail -50000000 bannerImpressions-sampled1.log
[19:23:35] drdee: yes
[19:23:56] ottomata, on oxygen?
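One hedged possibility for the signing step drdee is being asked for, using stock Debian tooling; the .changes filename is inferred from the .deb above, and the actual upload flow into the wikimedia repo is not shown in the log:

    # Sign the freshly built package with the default gnupg key on build1
    # (filename assumed; debsign signs the .changes/.dsc pair)
    debsign udp-filters_0.3.19.2-1_i386.changes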
[19:27:42] drdee: https://gerrit.wikimedia.org/r/31877
[19:27:43] drdee: please review
[19:29:36] done
[19:29:46] ohh NOOOOOO rob la's evil twin has joined
[19:35:43] second git review
[19:35:44] https://gerrit.wikimedia.org/r/31880
[19:35:46] drdee: please review
[19:36:03] * average_drifter thinks there will be some 3rd and 4th tonight, I'm on a roll
[19:36:19] ok going to build1 to build the package
[19:36:28] done
[19:39:05] average_drifter: i think we should add debianize as a submodule to both the webstatscollector and udp-filter git repos
[19:39:50] root@i-000002b3:/home/spetrea# aptitude search gnupg
[19:39:51] i gnupg - GNU privacy guard - a free PGP replacement
[19:40:00] gnupg already installed on build1
[19:40:09] drdee: I agree
[19:40:17] I'll do that
[19:40:17] cool
[19:44:24] drdee: I think we can fix the problem on jenkins too
[19:44:28] awesome
[19:44:41] drdee: I have a solution in mind but it's not very orthodox
[19:44:47] let's first add the debianize submodule
[19:44:50] ok
[19:49:24] hey hashar
[19:49:38] re :-)
[19:49:58] have you sorted out your jenkins job to use the libanon / libcidr ?
[19:50:46] hashar: we postponed it for a moment. I have something to finish first and we'll come back to it
[20:15:55] growl, packet loss on oxygen again
[20:16:05] this time I can't blame Jeff!
[20:16:11] growl
[20:17:18] ah wait maybe I can
[20:19:34] is it true that the speed with which an ssh connection can print messages to STDOUT may slow the execution of a program just because.. erm the connection is slow and the STDOUT messages don't get to me fast enough ?
[20:19:47] this is an empirical conclusion :)
[20:21:25] yes, average_drifter
[20:21:35] it most definitely can and will if you dump a ton of text
[20:21:42] because it'll block on writing to the buffer
[20:26:56] drdee, dschoon was asking about the X-Device header
[20:27:13] k
[20:27:13] we have custom varnish stuff that does that, right?
[20:27:13] maybe?
[20:27:13] if we're already generating the information, sure
[20:27:44] no we are not yet generating that information, but this would be an obvious time to introduce it
[20:27:44] would make folks in the mobile team very happy
[20:28:09] iiiii dunno about that.
[20:28:18] that is to say: it's a fine proposal
[20:28:36] but let's keep it as a separate idea rather than bundle it as a requirement into getting this config change out
[20:29:35] k
[20:29:42] drdee, is x-device inferred from the user agent?
[20:29:58] no, it's a separate http header
[20:30:15] right. who sets it?
[20:30:31] the client
[20:30:41] but not all, that's why it's flaky
[20:33:34] if the client sets it, then we don't have to do anything, right?
[20:33:35] should we just log it?
[20:35:38] sorry ottomata for misunderstanding what you meant by user agent, i read user agent string
[20:37:47] no, that's what I meant, I think we thought this was like X-Carrier
[20:38:01] which is gathered and created by Varnish, right?
[20:38:04] X-Device is just sometimes set by some clients?
[20:38:05] the question is whether the browser is setting X-Device.
[20:38:10] so if the data exists, we might as well log it
[20:38:18] i think drdee says it is, the client sets it
[20:38:21] * dschoon agrees with ottomata
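A sketch of the submodule addition average_drifter agrees to above; the gerrit path for the debianize repo is assumed, and the same steps would be repeated in webstatscollector:

    # Hypothetical: add debianize as a submodule of udp-filters
    cd udp-filters
    git submodule add https://gerrit.wikimedia.org/r/p/analytics/debianize.git debianize
    git commit -m "Add debianize as a submodule"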
[20:43:20] drdee, hmm, i am googling about X-Device
[20:43:22] not finding much
[20:46:00] sorry, look for x-wap-profile
[20:46:47] see also http://en.wikipedia.org/wiki/UAProf
[20:48:14] examples of Samsung related device information: http://www.uaprof.com/Samsung/
[21:10:17] ah right, ok
[21:12:18] ok, so dschoon, i'm adding X-WAP-Profile to the header list
[21:12:44] aiight.
[21:21:26] ottomata, so i am googling like crazy to figure out what's wrong with oozie…..can't find anything
[21:21:42] yeah i did that this morning :)
[21:21:46] couldn't find anything either
[21:23:44] weird
[21:25:28] mmmm well actually maybe we should start with properly configuring oozie :)
[21:27:11] yeah so, that's where I left off before stand up
[21:27:15] we should try to get an oozie job running via the cli
[21:27:17] haven't yet done that
[21:27:25] I was getting a very cryptic null pointer exception when I left off
[21:27:49] did you install the Oozie ShareLib in Hadoop HDFS?
[21:28:02] yes, that bit is puppetized
[21:28:10] k
[21:28:39] drdee, dschoon, mind if I move the Event Data Stream thread to the public analytics list?
[21:28:47] go ahead!
[21:29:00] i think that's already been done.
[21:29:13] naw, i think ori forwarded it to some people, that's all
[21:29:14] unless you're specifically talking about log formatting
[21:29:17] yes
[21:29:18] that
[21:29:26] the thread that I started
[21:29:31] you should update https://www.mediawiki.org/wiki/Analytics/Kraken/Data_Formats
[21:29:38] was just to us + ori + asher
[21:29:39] ok
[21:29:46] to explain the difference between this and the other formats
[21:30:12] and explain when a stakeholder might care about this information (never, hopefully)
[21:30:13] that's avro, right? the avro stuff will be the same, right?
[21:30:26] or should I add a section for transport?
[21:30:27] that page subsumes the avro schemas, yes
[21:30:38] yeah, add a section
[21:30:38] or log or whatever
[21:30:38] ok cool
[21:30:38] will do!
[21:30:53] it'd be good to also include notes about the experiments we ran for encoding.
[21:33:21] ok cool
[21:34:58] ottomata, running this oozie command on the CLI throws an exception as well:
[21:34:59] oozie admin -oozie http://localhost:11000/oozie -status
[21:35:12] it shouldn't do that i presume :)
[21:35:23] ah good way to start!
[21:35:23] nullpointer?
[21:35:40] drdee, I am writing the logging stuff up now, and will send an email
[21:35:47] will continue to work on oozie tomorrow morning
[21:35:56] cool
[21:36:02] oh! I am moving my stuff out of storage tomorrow morning, so I'll be on a bit later than usual, but I will work later too :)
[21:48:30] k, updated
[21:48:30] https://www.mediawiki.org/wiki/Analytics/Kraken/Data_Formats
[21:48:37] is transport logging the proper term? hmm
[21:48:39] i dunno
[21:54:11] it's the transport format for loglines
[21:54:15] honestly, it's just the logging format :)
[21:55:25] yeah
[21:55:25] i think so too
[21:55:55] hey
[21:56:02] you guys said that optimist is for bash too /
[21:56:03] ?
[21:56:19] the optimist?
[21:56:40] ottomata: uhm, what was the name of that command line switch parser ?
[21:56:57] docopt
[21:56:59] that one that I liked?
[21:57:11] yeah
[21:57:42] ottomata, i resolved the CLI null pointer exception, that was a known issue with CDH4
[21:57:58] ah good
[21:58:02] drdee: can I use docopt to parse some parameters in debianize.sh ?
[21:58:02] what'd you do?
[21:58:28] the fix is one time only, append -auth SIMPLE to the command line
[21:58:39] that generates a file and then subsequent calls are fine without that
[21:58:47] average_drifter, sure, if you like it
[21:58:55] for bash:
[21:58:55] https://github.com/docopt/docopts
[21:59:03] ah nice
[21:59:03] ok cool
[21:59:05] imo, it's not very good.
[21:59:41] but it depends on what you're looking for.
[21:59:49] you no likey?
[21:59:54] docopt?
[21:59:54] so goooooood
[22:00:12] you never like the fun things dschoon
[22:00:42] i know :(
[22:00:45] i write my help like this:
[22:00:46] https://gist.github.com/2776666
[22:00:59] almost the same
[22:01:01] but not quite
[22:01:16] you parse it with https://gist.github.com/2776666#L61
[22:01:57] it doesn't do anything to the strings. you still have to deal with the getopts loop
[22:02:13] but getopts is everywhere, sooo.
[22:02:14] brb
[22:05:27] k, back
[22:07:12] k
[22:18:16] ottomata, i think i can reproduce the oozie 500 error on the CLI
[22:18:48] the zip file error thing?
[22:18:50] try this:
[22:19:21] oozie pig -file /home/diederik/pageviews.pig -oozie http://localhost:11000/oozie -config /home/diederik/oozie.properties
[22:19:31] well it's a 500 error
[22:19:36] i haven't found the log yet
[22:19:48] but the zip error is also a 500 error so i guess it's the same problem
[22:20:34] i get null pointer
[22:20:39] oh i bet I have to set that auth thing personally
[22:20:53] yup
[22:20:54] in logs
[22:20:54] an01: Caused by: java.util.zip.ZipException: error in opening zip file
[22:21:03] cool, so it is not a hue thing
[22:21:04] good to know
[22:21:32] which log file are you looking at?
[22:22:07] /var/log/oozie/localhost.2012-11-05.log
[22:22:47] ty
[22:23:33] ottomata,
[22:23:46] is oozie trying to download something:
[22:23:47] at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:72)
[22:23:48] at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:48)
[22:23:49] at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:70)
[22:23:50] at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:104)
[22:23:51] at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:132)
[22:24:18] i assume that is just for whatever jar it needs to run your job…hmm, but you are using pig…i guess just the pig jar?
[22:24:18] hmm
[22:24:40] hmm
[22:24:47] i dunno
[22:25:09] yarhg, i gotsta run, drdee, I will pick this up in the morn after I move out of my storage unit
[22:25:24] i'll play a bit more with it tonight
[22:25:26] it's bugging me
[22:25:33] cool
[22:25:40] lemme know how far you get in the morn
[22:25:42] laterrrs
[22:54:13] dschoon: Congrats on the title bump. :-)
[22:55:35] Thanks, Brooke :)
[23:25:42] brb
[23:39:42] back
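For the record, drdee's one-time CLI fix from above, written out against the commands pasted in the log:

    # The first call with -auth SIMPLE writes the CLI's auth cache file;
    # subsequent calls no longer hit the NullPointerException
    oozie admin -oozie http://localhost:11000/oozie -status -auth SIMPLE
    # later invocations then work without it, e.g. the pig submission
    # (the separate 500 / ZipException remained under investigation)
    oozie pig -file /home/diederik/pageviews.pig -oozie http://localhost:11000/oozie -config /home/diederik/oozie.properties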