[13:42:11] yoyo
[13:44:59] hello
[13:45:08] yooooo
[13:45:19] drdee, yargh, i still see the same error, even after using the new core .jar
[13:45:25] Caused by: java.lang.ClassCastException: org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl$CompressAwarePath cannot be cast to java.lang.Comparable
[13:45:34] welp
[13:45:36] that sucks
[13:45:43] i symlinked the jar, just like the other one was, and the instructions said to replace the jar
[13:45:44] so
[13:45:46] maybe that is the problem
[13:45:48] but i doubt it
[13:45:57] i'll try and move that jar out and see
[13:45:59] on all the nodes?
[13:46:01] the orig, but i doubt that's the problem
[13:47:58] yeah
[13:57:40] ok, drdee, this time it failed
[13:57:43] but with a different error
[13:57:47] Caused by: java.lang.OutOfMemoryError: Java heap space
[13:58:08] ok, i'm at the airport and picking up my friend, we're going to drive to a cafe in just a bit, be back on in a little while
[14:34:08] you have to restart everything, i think
[14:34:29] the jar wouldn't get picked up if the classes were provided in the vm already
[14:39:33] morning dschoon
[14:39:52] can you fire up your job to see if it has been fixed (and restart the nodes)?
[14:39:58] i feel like ass
[14:40:02] i need to get some food first
[14:40:04] but yeah
[14:40:07] when i get back
[14:40:20] i think this is the first time i've been out of bed for more than 10m in the last 2 days
[15:00:45] drdee, we have a meeting?
[15:01:02] yahhh buuuuut not sure if ez will show up
[15:01:07] oh, k
[15:01:11] i'm in standup hangout
[15:01:28] oh sorry it's this one: https://plus.google.com/hangouts/_/9b74f13bf4228b9b5c8e5dcc9ef179dd494c558b
[15:01:58] thanks
[15:07:46] New patchset: Yuvipanda; "Add graph showing % of images that were not categorized" [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60011
[15:10:08] New review: Yuvipanda; "No code changes as such - regular add-new-graph commit.
Besides, I have utterly failed at getting a ..." [analytics/limn-mobile-data] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/60011
[15:10:08] Change merged: Yuvipanda; [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60011
[15:27:52] ottomata, dschoon, or drdee: I'm trying to concat output from my pig script
[15:28:00] is that thing that dschoon made useful here?
[15:28:13] I didn't follow before when he explained
[15:29:00] so right now, I have five lines per day of output. But it writes each line to a separate part file in the output directory
[15:29:28] yeah, i think so
[15:29:42] there are two ways, dschoon's is better, especially for regular automated stuff
[15:29:49] ok, cool
[15:29:57] his is an oozie job that does hadoop streaming
[15:30:00] mine is a pig script
[15:30:12] oh ok, so your concat pig script could be used here as well
[15:33:07] i'm not sure how to use this coalesce workflow though. Is it just scheduled totally separately?
[15:40:57] you'd do it as part of a coordinator
[15:41:09] workflows chain actions, coordinators chain workflows
[15:41:22] but i think yeah, you could use concat_sort.pig
[15:41:31] if it is just a one-time job, that wouldn't hurt
[15:41:37] if you are trying to do something more regularly, you'd use a coordinator anyway
[15:43:54] so, drdee, maayyyyybe the classcastexception is fixed, i'm not sure
[15:44:00] i'm getting a java heap error now
[15:44:03] ideas?
[15:44:25] yes do not use the histogram.pig script but use another script
[15:44:35] i remember now that it had this OOM before
[15:44:36] sorry
[15:44:43] hmm
[15:44:43] ok
[15:44:47] well, that's good news
[15:44:50] what script should I use?
[15:44:54] dschoon, you there?
[15:45:04] what script were you hitting the ClassCastException with before?
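For reference: the "concat" step being discussed here just merges the many part-* files that Pig writes into one sorted output file. The contents of concat_sort.pig are not shown in this log, so here is a minimal local Python sketch of the same idea (file names are hypothetical):

```python
import glob

def concat_sort(part_glob, output_path):
    """Merge Hadoop-style part files (part-r-00000, part-r-00001, ...)
    into a single sorted output file. A local illustration of what a
    concat/sort step does, not the actual concat_sort.pig script."""
    lines = []
    for path in sorted(glob.glob(part_glob)):
        with open(path) as f:
            # skip blank lines, strip trailing newlines so sorting is clean
            lines.extend(line.rstrip("\n") for line in f if line.strip())
    lines.sort()
    with open(output_path, "w") as out:
        out.write("\n".join(lines) + "\n")
```

In the real pipeline this merge runs inside Hadoop (via a Pig script here, or an Oozie workflow action doing hadoop streaming, per dschoon's approach), but the effect on the data is the same.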
[15:45:06] best to use dschoon's script
[15:45:07] i want to see if it is fixed
[15:47:16] i'm pretty dead
[15:47:22] going back to sleep soon
[15:47:28] i can try to launch my job
[15:47:44] New patchset: Stefan.petrea; "Added links for New mobile pageviews report" [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/59864
[15:48:16] "HTTP500 internal server error"
[15:48:21] ottomata: ^^
[15:48:26] on what?
[15:48:47] dschoon: or can you give us the command / script name so we can tinker with it today?
[15:48:50] i can try to run your job if you tell me how, which pig script? i might try to run it just as a pig script
[15:49:01] dsc@analytics1002 sessions $ cd ~/jobs/sessions/ && sudo -u stats oozie job -oozie $OOZIE_URL -config sessions-wf.properties -run
[15:49:01] Error: HTTP error code: 500 : Internal Server Error
[15:49:07] yeah, sec
[15:49:12] making sure it's committed
[15:49:13] on oozie?
[15:50:41] yep, i got that too just now
[15:50:45] oozie
[15:50:58] i've pushed
[15:51:05] it's the mobile_sessions script
[15:51:58] i'll hopefully be back in a few hours
[15:51:59] ugh
[15:52:04] feel better man
[15:53:35] ok, thanks dschoon
[15:53:40] milimetric, how did you get that?
[15:53:51] i'll paste command
[15:54:02] oozie job -oozie http://analytics1027.eqiad.wmnet:11000/oozie -run -config ./workflow.properties
[15:54:48] i had updated the workflow.xml and fs -put it into my home directory
[15:55:04] uh, /user/dandreescu/oozie I mean
[15:55:44] but i read some of that coalesce oozie stuff and it's way over my head. Your concat script seems wicked simple, so I'm just using that
[16:02:01] that's fine if you are just using it for now
[16:02:07] i wouldn't use concat_sort if you are doing a regular job
[16:04:58] i'm doing a regular job, but i honestly can't understand how to use the other approach
[16:05:15] i mean, if i had a few more spare hours, that'd be great, but i'm already way behind on this
[16:05:28] any idea what's wrong with oozie?
[16:05:51] ottomata ^
[16:06:49] ergh, i'm seeing it too
[16:06:49] no, but i'm looking at it now
[16:07:19] cool, if i can help test anything, i'm here
[16:07:41] 2013-04-19 16:07:25,332 FATAL Configuration:2011 - error parsing conf mapred-default.xml
[16:07:41] java.util.zip.ZipException: error in opening zip file
[16:07:41] hm
[16:08:26] oh i know
[16:08:42] need to restart oozie
[16:08:50] going to restart hue too
[16:10:00] ohhhh, interesting
[16:11:03] interesting, there are more places that 4.2.0.jar file is symlinked than just what that JIRA bug says
[16:11:05] i'm going to comment there
[16:11:42] oozie loads that jar by its full name in a diff dir
[16:11:46] /usr/lib/hadoop/client/hadoop-mapreduce-client-core-2.0.0-cdh4.2.0.jar
[16:20:19] milimetric: should be fixed
[16:20:40] k
[16:20:51] yep
[16:21:09] ty!
[16:23:16] nice job ottomata!
[16:24:15] cool
[16:27:39] oh ottomata, when I get this error:
[16:27:39] RemoteTrace: java.io.IOException: Resource hdfs://analytics1010.eqiad.wmnet:8020/user/dandreescu/.staging changed on src filesystem (expected 1366388447528, was 1366388453322 at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:98)
[16:27:51] I just delete that .staging directory they're talking about
[16:27:55] and usually it's fine
[16:27:59] is that... wrong?
[16:28:03] like...
why is that happening
[16:32:59] uhhhh, dunno~
[16:33:02] not seen that before
[16:33:04] it seems like a failed job was not cleaned up
[16:33:05] sounds fine to me, but i dunno
[16:34:05] https://issues.apache.org/jira/browse/MAPREDUCE-3931
[16:34:15] not sure if that has made it to CDH 4.2
[16:34:21] it is a YARN specific bug
[16:34:33] apparently a race condition
[16:34:57] milimetric, ottomata ^^
[16:37:17] mmmm, it should have made it to CDH4, http://archive.cloudera.com/cdh4/cdh/4/hadoop-0.23.1+370.releasenotes.html
[16:39:51] milimetric, we should talk about the coalesce thing and get that to work then, i think go ahead and do what you are doing now
[16:40:11] k
[16:40:27] but ja, let's make coalesce work in the long run
[16:40:31] we both need to know how to use it properly
[16:40:33] OK!
[16:40:38] oops
[16:40:38] it is better and more reliable than concat_sort
[16:40:39] ok :)
[16:40:51] i'm just stressed out
[16:41:42] hah
[16:41:55] don't get stressed!
[16:42:16] ok, found the bug report; ignore my previous comments
[16:42:21] it is https://issues.apache.org/jira/browse/YARN-73
[16:42:39] it's not yet fixed
[16:44:00] thanks ottomata for updating the DISTRO jira report
[16:44:00] !
[16:44:30] yup!
[16:49:33] ok ottomata, I've got a working workflow with concat_sort
[16:49:48] wanna change it into a coordinator with coalesce?
[16:49:56] together I mean?
[16:51:11] sure
[16:53:50] k, maybe after standup
[16:53:59] i'll start on it and show you where i'm stuck
[16:54:43] k
[16:54:44] sounds good
[17:20:29] hey milimetric
[17:20:50] milimetric: fab mobile_dev deploy fails on the code steps
[17:20:52] (data succeeds)
[17:21:08] Fatal error: sudo() received nonzero return code 56 while executing!
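For reference: the ".staging changed on src filesystem" failure above is YARN comparing a resource's modification time, recorded at job submission, against what the filesystem reports at localization time (see FSDownload.copy in the stack trace). A toy Python reproduction of that check, using the local filesystem instead of HDFS, purely for illustration:

```python
import os

def check_resource_unchanged(path, expected_mtime_ms):
    """Toy version of the sanity check behind the error above: if a
    resource's modification time no longer matches what was recorded when
    the job was submitted, refuse to use it. Not the actual Java code."""
    actual_ms = int(os.stat(path).st_mtime * 1000)
    if actual_ms != expected_mtime_ms:
        raise IOError("Resource %s changed on src filesystem "
                      "(expected %d, was %d)"
                      % (path, expected_mtime_ms, actual_ms))
```

This is why deleting the stale .staging directory "usually works": the mismatched leftover files from the uncleaned job are gone, so the next run records fresh timestamps. The YARN-73 ticket linked below tracks the underlying race.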
[17:21:43] hmm, weird
[17:21:44] 136 error Error: EROFS, open '/home/yuvipanda/.npm/0bcf61ed-passport-google-0-3-0.lock'
[17:23:13] looks like things are readonly for some reason :|
[17:23:16] but only_data succeeds
[17:23:18] gah
[17:50:32] New patchset: Yuvipanda; "Add graph of deleted uploads" [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60027
[18:08:13] New patchset: Yuvipanda; "Add graph of deleted uploads" [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60027
[18:14:58] ottomata, are you around?
[18:15:10] talking with ram about udp-filter and performance improvements
[18:15:27] maybe you could join us at https://plus.google.com/hangouts/_/af49f718cc37861ec09230fc0a0a6faf208a67b1
[18:23:00] New patchset: Yuvipanda; "Add graph of deleted uploads" [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60027
[18:26:23] YuviPanda:
[18:26:25] sorry!
[18:26:37] I was deep in a debug session with andrew
[18:26:41] :D
[18:26:44] OK, so the reason that's happening
[18:26:57] is because glusterfs is screwed up on that box
[18:27:00] yup
[18:27:01] Coren fixed that
[18:27:04] ?
[18:27:05] and now it is http://mobile-reportcard-dev.wmflabs.org/
[18:27:06] :)
[18:27:14] (500)
[18:27:27] oh go
[18:27:29] *oh god
[18:27:33] he didn't reboot the box did he?
[18:27:51] ok, be very very careful with that box
[18:28:01] it's an older version of linux and if we reboot it we basically lose it
[18:28:20] we've gotta migrate it soon (we've made backups)
[18:28:40] ok, so, how did Coren fix it YuviPanda?
[18:28:55] milimetric: ow :|
[18:28:55] not sure
[18:28:55] hop on -labs?
[18:29:08] he asked me to log off, did some magic, and then tada! fab worked
[18:29:10] um, ok, I should tell them to be gentle with this box
[18:29:30] you break it; you own it :D
[18:31:09] greetings!
[18:31:34] hi xyzram!
[18:32:28] i just forwarded you an email from average
[18:33:49] xyzram: i created a mingle card to spec out this project; see https://mingle.corp.wikimedia.org/projects/analytics/cards/538 and add questions, comments
[18:35:14] ok YuviPanda, I know what the problem is
[18:35:17] Ok, thanks; is mingle a project management tool ?
[18:35:24] xyzram: yes
[18:35:26] I basically implemented graph creation and editing
[18:35:40] and it requires logging in with your wmf account
[18:35:50] oh?
[18:35:59] so the server didn't start because of a config error
[18:36:02] (a new config file is required)
[18:36:14] yeah, let me copy that over and add your email address to the list of allowed users
[18:36:19] you got anyone else that needs access?
[18:36:44] I'm not sure if these features will help you, they were mostly done for Jessie
[18:36:55] but if you wanna jump into a hangout I can show you quickly what i'm talking about
[18:37:36] milimetric: oh, ok :)
[18:37:38] milimetric: jgonera
[18:37:39] too
[18:37:46] milimetric: sure!
[18:37:50] ok, i'll paste link
[18:37:58] https://plus.google.com/hangouts/_/2da993a9acec7936399e9d78d13bf7ec0c0afdbc
[18:38:02] milimetric: i've to add a bar graph and a scatter graph too, so maybe it'll help :D
[18:38:10] moment
[18:38:11] no, sadly no
[18:38:12] :)
[18:38:15] it just creates line graphs
[18:38:16] sigh :(
[18:38:17] :P
[18:38:22] Change merged: JGonera; [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60027
[18:47:56] New patchset: JGonera; "Add *.pyc and datafiles/ to .gitignore" [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60032
[18:49:06] Change merged: Yuvipanda; [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60032
[18:51:20] milimetric: thanks :)
[18:52:44] k YuviPanda, http://mobile-reportcard-dev.wmflabs.org/graphs/create now works
[18:52:55] * YuviPanda tries to log in
[18:52:56] it was just a typo in the config (had the wrong domain name for one of the auth links)
[18:53:17] so the redirects from login are weird - we'll ajaxify it soon
[18:53:20] ah!
[18:53:21] :)
[18:53:30] hmm, it just showed me the page
[18:53:32] no signin needed :P
[18:53:35] and on the graph edit pages, clicking on the description or title will change it into a box
[18:53:39] yeah, it will
[18:53:47] but you won't be able to POST or PUT or DELETE from it
[18:54:00] unless you're signed in
[18:54:39] oh, i can't edit now? only create?
* YuviPanda tests
[18:55:06] ah
[18:55:21] hmm i got to http://mobile-reportcard-dev.wmflabs.org/graphs/no-cats/edit but no idea what to do next
[18:57:02] New patchset: Yuvipanda; "Fix off-by-one error in graph for no-categories" [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60033
[18:57:25] Change merged: Yuvipanda; [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60033
[19:08:33] hm, YuviPanda, that graph seems to have some problems with the data
[19:08:40] I can't talk right now but maybe catch me first thing next week
[19:08:43] we'll look it over
[19:08:49] milimetric: yes, I just fixed that and pushed
[19:08:56] i assume it's not high priority, you're just trying to check out the new features right?
[19:08:58] milimetric: can I do deploys now without worrying about taking down mobile-reportcard?
[19:09:01] yup yup
[19:09:04] not high priority
[19:09:07] but, can I do deploys now?
[19:09:13] so in general, the edit only has three things you can edit
[19:09:20] oh yea, deploys are fine now
[19:09:47] so on an edit page, if the graph has a "notes", "desc", or "name" field, they'll be displayed and editable
[19:09:51] (just click on them to edit)
[19:09:59] ah, right.
[19:10:00] your cursor will turn into a hand if something's editable
[19:10:04] k, brb
[19:10:05] no editing actual series yet
[19:16:27] E: Unable to locate package libcidr0-dev
[19:16:43] Where do I get that Ubuntu package ?
[19:17:32] Needed to build udp-filters
[19:27:11] hi xyzram
[19:27:14] 1 sec
[19:27:25] hi
[19:27:44] I'm building it from source at https://github.com/wikimedia/analytics-libcidr
[19:27:54] git clone ssh://xyzram@gerrit.wikimedia.org:29418/analytics/libcidr
[19:28:05] it should be the same on github, hi xyzram!
[19:28:08] ah, ok
[19:28:08] yes, that needs to be compiled first
[19:28:12] yes it should be the same
[19:28:50] src/udp-filter.h:26:21: fatal error: libanon.h: No such file or directory
[19:28:55] ok
[19:28:58] 1 sec
[19:28:58] where does that come from ?
[19:29:09] https://github.com/wikimedia/analytics-libanon
[19:29:10] ?
[19:29:11] :)
[19:29:12] git clone ssh://diederik@gerrit.wikimedia.org:29418/analytics/libanon
[19:29:22] there are debs for these
[19:29:32] ohh
[19:29:35] if you are trying to build udp-filter
[19:29:36] in apt.wikimedia.org
[19:29:37] Where do I get the debs from ?
[19:29:44] yeah, i don't think you need to build the deps from source
[19:29:57] ottomata: xyzram is helping us with improving udp-filter performance
[19:30:05] Ok, just add that to sources.list ?
[19:30:07] robla has made him available to us for a week or so
[19:30:08] https://github.com/wikimedia/analytics-libanon
[19:30:10] yeah
[19:30:14] cool!
[19:30:15] https://github.com/wikimedia/analytics-libanon
[19:30:18] http://apt.wikimedia.org/wikimedia/pool/main/liba/libanon/
[19:30:23] http://apt.wikimedia.org/wikimedia/pool/main/libc/libcidr/
[19:30:24] yeah
[19:30:27] but add to sources.list
[19:30:30] and apt-get install them
[19:32:34] What does the sources.list line look like ?
[19:32:40] deb http://apt.wikimedia.org/virtualbox/debian .... ?
[19:32:53] Sorry,
[19:33:07] deb http://apt.wikimedia.org/debian ....
[19:33:44] # cat /etc/apt/sources.list.d/wikimedia.list
[19:33:44] ## Wikimedia APT repository
[19:33:44] deb http://apt.wikimedia.org/wikimedia precise-wikimedia main universe
[19:33:44] deb-src http://apt.wikimedia.org/wikimedia precise-wikimedia main universe
[19:34:23] checking in
[19:34:38] anyone need me?
[19:34:53] dschoon, i *think* the ClassCastException is solved
[19:34:59] although I haven't been able to successfully run your job yet
[19:35:06] but i think due to other unknown reasons
[19:35:10] reminder: i was not able to create an instance to back up kripke
[19:35:12] someone should do that
[19:35:17] heh
[19:35:19] that's fine
[19:35:20] good to hear
[19:35:39] not sure, but i haven't seen that error since I added the new .jar
[19:35:54] awesome
[19:36:03] maybe when i feel better in june i'll check it out
[19:36:23] ottomata: I'm on quantal, the precise package may cause problems, no ?
[19:36:31] hm, perhaps
[19:36:35] it will probably work
[19:37:20] and locke is still on lucid
[19:37:42] ottomata, maybe just do a do-release-upgrade on locke ?
[19:37:54] i told xyzram that we would use locke for performance testing
[19:38:17] i think it's best to have local/stage/prod environments to be precise
[19:41:12] maybe? mutante had a hard time doing that for emery
[19:41:26] reinstall would probably be easier…if it works
[19:41:28] but who knows
[19:42:09] i think reinstall is more painful
[19:42:21] you will get the partman stuff
[19:44:41] if it works it is easier
[19:45:05] mutante had to resolve tons of install errors before he got it to work
[19:45:29] checking for OPENSSL... no
[19:45:29] configure: error: Package requirements (openssl) were not met:
[19:45:29] No package 'openssl' found
[19:45:37] When building anon
[19:45:53] ii openssl 1.0.1c-3ubuntu2.3 amd64 Secure Socket Layer (SSL) binary and related cryptographic tools
[19:46:01] But I have openssl installed
[19:46:58] getting food. brb
[19:47:16] drdee: we have a new request for a proper ee-dashboards instance
[19:47:21] I've forwarded you the chain
[19:47:32] hopefully there's no SQL / data work but only DarTar can answer that
[19:47:54] milimetric: how goes the ooziepig?
[19:48:00] no go sir
[19:48:02] no go
[19:48:04] rats
[19:48:22] xyzram, is that when building?
[19:48:26] oh when building anon
[19:48:31] mh
[19:48:32] mh
[19:48:33] hm
[19:48:45] ottomata: yes when building anon
[19:48:58] Maybe I need the openssl dev package ?
[19:49:01] so it's still giving me the same error about not knowing the OUTPUT variable, parameter, whatever
[19:49:11] oh, even with the later job?
[19:49:20] like if you change the first instance to be the next day's?
[19:49:21] with the later job it only spawned C@1
[19:49:23] not C@2
[19:49:26] hmmm
[19:49:31] and the C@1 had the output problem
[19:49:34] ok so that's a problem then
[19:49:34] hm
[19:49:37] yep
[19:49:44] i like looked over it again for typos
[19:49:46] but i'll go again :)
[19:50:21] yeah i looked at it too and it looked fine
[19:50:24] xyzram: yes i think you need to install that too
[19:50:26] vim is behaving insanely over ssh btw
[19:50:32] it's copy / pasting randomly into files
[19:50:39] like bits of the text you paste get thrown everywhere
[19:50:46] so it might have been that
[19:51:04] xyzram: hm, i also have libssl installed
[19:51:33] Ok, thanks, I was looking for openssl-dev but it is called libssl-dev; installing ....
[19:52:10] configure: error: cannot find pcap headers
[19:52:27] libpcap0.8-dev
[19:52:34] man i need to fix up the control file
[19:52:36] Thanks.
[19:52:42] (these packages were some of the first debs i ever built
[19:52:44] so very bad)
[19:52:49] milimetric: yes, ee-dashboard has no implication for data generation
[19:52:58] libpcap-dev
[19:53:08] so Fabrice was saying there are new extensions or something
[19:53:17] and that some changes needed to be made
[19:53:27] oh, milimetric, you might need to run that as the stats user….>>…….hmmm, no hm
[19:53:28] or something like that
[19:53:34] that shouldn't be related
[19:53:35] hm
[19:53:39] :)
[19:54:36] well the directory we've passed in as OUTPUT doesn't exist yet
[19:54:44] is that a problem or would it just create it?
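For reference: the "variable [OUTPUT] cannot be resolved" error being chased here comes from Oozie's property/EL substitution step, which fails when a `${NAME}` reference in the workflow has no value it can resolve. A toy Python sketch of that resolution step (an illustration of the mechanism, not Oozie's actual resolver):

```python
import re

def resolve(template, props):
    """Substitute ${NAME} references from a property dict, failing with a
    message like Oozie's when a name has no binding. Toy illustration only."""
    def sub(match):
        name = match.group(1)
        if name not in props:
            # propagate an error analogous to "variable [OUTPUT] cannot be resolved"
            raise KeyError("variable [%s] cannot be resolved" % name)
        return props[name]
    return re.sub(r"\$\{(\w+)\}", sub, template)
```

As the discussion that follows suggests, the error can surface for indirect reasons too, e.g. when a coordinator cannot materialize the dataset instance that was supposed to supply the value.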
[19:55:18] that doesn't sound like the error
[19:55:20] and i think it should create it
[19:55:27] maybe it won't create the full path?
[19:55:52] oh!
[19:56:00] but maybe since it doesn't exist at all, oozie won't set the var?
[19:56:01] could be
[19:56:06] lemme make it
[19:56:19] oh
[19:56:19] yeah
[19:56:37] and your user doesn't have perms to create it
[19:56:40] actually, milimetric
[19:56:47] before you try to output into wmf/public
[19:56:52] i would make it output into your homedir
[19:56:54] to make sure it works
[19:57:02] milimetric: yes, but that's something that we can handle on E2's side
[19:57:07] because, otherwise, you'll start the thing, something will probably be wrong, and it will start generating datasets to wmf/public
[19:57:11] which get synced to stats.wikimedia.org
[19:57:36] so yeah, i think you are right, the output path isn't right (you need to be stats user to write to wmf/public)
[19:57:51] which is who the job should be submitted as once you've got all the kinks worked out
[19:58:31] ok, cool DarTar, then all I need to know is what labs instance E2 prefers this to be on and I'll set it up.
[19:58:50] I'll give them deployment instructions for ongoing changes and I'll show you the new edit functionality for changing titles / descriptions / notes
[19:58:56] sounds good
[19:59:14] dschoon, do we need another instance? what if I just backup /srv to stat1 or something?
[20:00:00] ok, but the job is fine the way it's described now ottomata, right? So I'll just make some local changes without committing them, make sure it works with a different OUTPUT and then ping you when/if that happens?
[20:00:08] yeah, i think that's fine
[20:00:09] yeah
[20:00:14] it looks good as is to me
[20:00:31] re: where to push the sources, we should probably have a single, canonical repository of CSVs instead of having them hosted on various labs instances, do you guys have any plans for this?
[20:01:45] milimetric, drdee ^^
[20:02:27] no, that's not a plan
[20:02:54] drdee: can you expand?
[20:03:13] you asked if we had a plan for this and the answer is no
[20:03:36] drdee, to recreate the ClassCastException error
[20:03:36] should I group by something common?
[20:03:44] or something with a lot of unique values
[20:03:46] e.g.
[20:03:58] content-type vs uri
[20:03:58] ?
[20:04:09] off to grab lunch, bbl
[20:04:10] i guess the reducers need to be sent a lot of data
[20:04:12] so common is better
[20:04:20] that should work
[20:04:28] DarTar: yes, i think we've talked about that before, having a common place for generated data
[20:04:40] dschoon wanted there to be an API, something like metrics api is turning out to be…but for now
[20:04:44] there is stats.wikimedia.org
[20:04:47] we can put subdirs there
[20:04:50] I'm a little confused as to what we're talking about DarTar-afk
[20:05:00] there is also analytics.wikimedia.org, which we have too but there is nothing there yet
[20:05:07] sounds good to me, let's catch up on this later? me starving
[20:05:10] maybe ping me in a PM when you get back :)
[20:05:17] will do
[20:05:17] bon appétit
[20:05:22] merci
[20:10:07] drdee, I am running this right now:
[20:10:07] http://codeshare.io/NjftI
[20:10:09] hm, ottomata, no dice / rats / balls
[20:10:19] do you think that would have been enough to trigger the classcastexception?
[20:10:22] even with /user/dandreescu it didn't work
[20:10:31] well poop
[20:10:32] same error?
[20:10:34] yes
[20:10:38] k, i'm going to try
[20:10:41] variable [OUTPUT] cannot be resolved
[20:10:52] and that sounds so much like what you were saying
[20:10:59] can I use the properties you are using in your homedir?
[20:11:01] that it can't find the directory and not the actual variable
[20:11:11] no i don't think so
[20:11:22] i'd use the stuff in this folder from my home dir:
[20:11:27] ottomata: yes i am pretty confident that it would trigger it
[20:11:31] /kraken/oozie/mobile/platform/daily
[20:11:56] hey that's good news drdee / ottomata, means the bug is squashed right? good job
[20:12:19] yes, think so!
[20:12:24] it succeeded!
[20:12:29] ottomata: drdee: Finally got udp-filter to build and run: ./udp-filter -i 216.38.130.161 < example.log | wc
[20:12:32] 5 70 1164
[20:12:36] yay!
[20:12:44] \o/
[20:12:56] please feel free to submit patches )
[20:12:56] :D
[20:13:37] xyzram, if you want to add an easy feature: i'd like to be able to use udp-filter to filter by machine hostname
[20:13:39] The build fails at the end to build the static binary
[20:13:40] i think $1
[20:13:56] hmmm, what branch are you building in?
[20:14:10] Presumably because it doesn't have static versions of some of the dependent libs.
[20:14:40] Generally the consensus seems to be that building static libs is not a good idea.
[20:14:54] Sorry, meant to say static binaries.
[20:15:12] I get warnings like this:
[20:15:26] (.text+0x19): warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
[20:15:33] yeah, i think average tried to build static ones or something, but we gave up on that i thought
[20:17:16] I'm going to play around with it and read the code for a bit before adding any features, but I'll keep that hostname filter feature in mind.
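For reference: `./udp-filter -i 216.38.130.161` above keeps only log lines matching the given client IP, and the requested new feature would do the same for the source hostname ("i think $1", i.e. the first field). A rough Python sketch of both filters; the field positions (hostname first, client IP fifth) are an assumption for illustration, not taken from udp-filter's actual parsing code:

```python
def filter_lines(lines, ip=None, host=None):
    """Yield only lines whose client IP and/or source hostname match.
    ASSUMPTION: space-separated fields with the emitting host at index 0
    and the client IP at index 4; adjust to the real log layout."""
    for line in lines:
        fields = line.split(" ")
        if len(fields) < 5:
            continue  # malformed line, skip
        if ip is not None and fields[4] != ip:
            continue
        if host is not None and fields[0] != host:
            continue
        yield line
```

The real udp-filter is C and does this per-packet on a UDP log stream, which is why its performance is being worked on; the sketch only shows the selection logic.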
[20:17:25] ok
[20:17:25] cool
[20:18:02] ottomata: I'm building the master branch
[20:18:12] mramanath@xyz udp-filters: git branch
[20:18:12] * master
[20:19:00] yeah not sure why we ever tried creating static binaries
[20:22:52] New patchset: Diederik; "Email serious errors to dev's" [analytics/E3Analysis] (master) - https://gerrit.wikimedia.org/r/60044
[20:25:27] drdee: git clone ssh://xyzram@gerrit.wikimedia.org:29418/analytics/udp-filters
[20:25:34] gives me: Permission denied (publickey).
[20:25:55] I was using the https URL earlier.
[20:29:00] Change merged: Milimetric; [analytics/E3Analysis] (master) - https://gerrit.wikimedia.org/r/60044
[20:29:24] xyzram….. weird
[20:29:26] i added you to the analytics group
[20:29:54] you have a working gerrit install, right?
[20:30:32] ok, milimetric, i can reproduce with my own version
[20:30:34] hm.
[20:30:52] :(
[20:30:54] drdee: what do you mean by a "gerrit install" ?
[20:31:07] you have cloned from gerrit before, right?
[20:31:08] got my 1/1 with rob soon
[20:31:17] but maybe we can pick it up Monday
[20:31:31] Yes, core, extensions, puppet, sure.
[20:32:01] and xyzram is your gerrit username?
[20:32:43] Ah, you're right on: ram is the gerrit name (I just copy pasted what you had above, my mistake).
[20:32:56] and i just guessed your username :)
[20:33:12] Thanks, the clone works now.
[20:33:15] perfect!
[20:34:14] submitting patches is the same as any other repo, with git-review ?
[20:47:27] yup.
[21:02:17] milimetric: just found this error in user logs
[21:02:17] 2013-04-19 21:00:42,280 WARN CoordELFunctions:542 - USER[stats] GROUP[-] TOKEN[] APP[webrequest_mobile_platform_by_day_host] JOB[0000051-130419161609079-oozie-oozi-C] ACTION[-] If the initial instance of the dataset is later than the current-instance specified, such as coord:current(-1) in this case, an empty string is returned.
This means that no data is available at the current-instance specified by the user and the user could try mod
[21:02:18] i think it is our culprit
[21:02:20] still readying...
[21:02:22] reading*
[21:02:22] hm
[21:02:23] ok
[21:02:38] wow that's a very very nice error :)
[21:03:43] hmm, or maybe that is normal
[21:03:48] i see that with other jobs too i think
[21:03:57] interesting
[21:04:47] you're certainly using current(-1) in other coordinators
[21:06:24] yeah
[21:06:30] hmm, oo, one unrelated thing
[21:06:32] ${coord:current(-7)}
[21:06:35] i think we want -8
[21:06:43] so we grab the one from the previous 4 hour period
[21:06:57] if current is 15_00:00
[21:07:07] we want all the way back to 13_20:00
[21:07:37] hm, was -97 wrong then for 15 min. intervals?
[21:08:35] um, i think as a rule of thumb, you should have +2 extra imports in your dataset for each period
[21:08:35] so
[21:08:42] if you are doing hourly with 15 minute imports
[21:08:43] k, that works
[21:08:47] you should have 6 datasets
[21:08:50] 6 imports
[21:08:54] in your dataset
[21:09:01] because either way it's better to get more than less
[21:09:08] the regex would filter them out anyway
[21:09:12] yeah
[21:09:28] fixed/pushed
[21:12:48] Change merged: Rfaulk; [analytics/E3Analysis] (master) - https://gerrit.wikimedia.org/r/59882
[21:23:22] hey ottomata, can i bug you for a small merge? https://gerrit.wikimedia.org/r/#/c/59981/
[21:25:03] out for the weekend guys
[21:25:05] have a good one
[21:25:09] see you milimetric
[21:26:12] ori-l: done
[21:26:25] ottomata: thanks!! :D
[22:36:37] drdee, do you know which card I am working on for milimetric?
[22:36:49] mobile/platform/daily?
[22:36:57] i was going to update it
[22:37:01] i'm pretty stumped
[22:50:18] ,\
[22:50:20] o
[22:50:20] "
[23:06:37] laters all!
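For reference: the rule of thumb stated in the current(-7) vs current(-8) exchange above is that a coordinator's dataset should cover one full period's worth of import instances, plus two extra, since surplus instances are harmless (the regex filters them out) while missing ones lose data. That arithmetic as a small sketch:

```python
def imports_needed(period_minutes, import_minutes):
    """Number of dataset import instances to reference for one period:
    enough to cover the period, plus 2 extra as padding (the rule of
    thumb from the discussion above)."""
    return period_minutes // import_minutes + 2
```

For an hourly job fed by 15-minute imports this gives 60 // 15 + 2 = 6, matching the "6 imports in your dataset" figure quoted above; the corresponding coordinator would then reference `${coord:current(-5)}` through `${coord:current(0)}`.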