[13:36:16] good morning
[15:22:21] hey heyyyyy morning all
[15:22:33] milimetric, you up and working? whatcha think of my pull request ehhh?
[15:22:59] hey otto
[15:23:15] yeah, I had to catch up on some things
[15:23:22] but LIMN_VARDIR is definitely necessary
[15:23:41] cool, I puppetized everything with the assumption that it was good
[15:23:51] https://gerrit.wikimedia.org/r/#/c/49710/
[15:25:10] hey ottomata , milimetric :)
[15:25:21] Yep, it's good. I just have to change the Cokefile a bit
[15:25:22] I'm in a bar, going to go home soon
[15:25:40] milimetric: do we make a new package soon ?
[15:25:42] I implemented --update
[15:25:49] so it's going to be faster now to sync the new git commit
[15:25:51] *commits
[15:25:52] awesome average_1rifter
[15:25:54] cool!
[15:26:06] let me merge ottomata's pull request and test my Cokefile change
[15:26:21] ok
[15:27:48] erosen: ping.
[15:31:56] erosen: ping
[15:41:31] milimetric, cool!
[15:41:48] one thing I didn't like is how I had to hard-code the defaults in a few different places: e.g. ./var, ../var, etc.
[15:42:01] i wasn't sure how the initial LIMN_VARDIR in server.co got passed around
[15:43:04] yeah, it's ugly
[15:43:18] you even made a small mistake of doing ../var in one place and ./var in the other
[15:43:30] it's still not working for some reason, I'm fiddling with it
[15:43:56] the reason it's like that though - server.co and sources.co - is because server.co isn't technically "limn"
[15:44:14] server.co would be one example of someone using the Limn middleware
[15:44:42] so David's been planning on splitting this into Limn Server and Limn Client for a while and that's when this will change
[15:46:08] I did make LIMN_DATA not read from the environment
[15:46:39] does that mess with your puppet config at all ^ ottomata?
[15:47:30] well, that was how it was done before I got there
[15:47:35] when it was setting varDir
[15:48:05] as long as env variable LIMN_VARDIR works, then puppet will work :)
[15:48:21] wait, you made LIMN_DATA not read from the environment?
[15:48:26] oh as in it isn't settable?
[15:48:34] it just always defaults to LIMN_VARDIR/data?
[15:48:37] (that's fine if so)
[15:49:02] yep
[15:49:06] I'm pushing now
[15:49:09] all is well
[15:49:59] ok cool
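(For reference, the LIMN_VARDIR behavior agreed on above, as a minimal Node-style sketch: read LIMN_VARDIR from the environment once, with a single fallback instead of ./var and ../var hard-coded in several places, and always derive the data directory from it rather than reading a separate LIMN_DATA variable. Limn itself is written in Coco, so names and structure here are illustrative, not its actual server.co source.)

    // Illustrative only -- not Limn's actual code.
    var path = require('path');
    // One fallback default instead of hard-coding ./var and ../var:
    var varDir  = process.env.LIMN_VARDIR || path.join(__dirname, 'var');
    // LIMN_DATA is not separately settable; it is always LIMN_VARDIR/data:
    var dataDir = path.join(varDir, 'data');
    module.exports = { varDir: varDir, dataDir: dataDir };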
[15:51:18] average_drifter, average_1rifter: I merged Andrew's pull request and we can build Limn again. Is there anything new in the debianization? If not, I can do it
[15:51:49] I first want to fix Travis CI because it's been breaking for a long time
[15:52:03] ok
[15:52:15] milimetric: yes, you should commit the debian/changelog too
[15:52:22] cool, so we can make use of update
[15:52:26] there's also a small issue with the submodule
[15:52:27] will do
[15:52:36] for some reason I can't seem to get the submodule to pull
[15:52:51] even with that foreach option?
[15:52:52] milimetric: are you working on the master branch ?
[15:52:58] yeah, even with foreach
[15:52:59] no I'm on develop
[15:53:06] ok
[16:11:45] milimetric: oh it actually works now
[16:11:54] something related to keys
[16:11:59] oh cool
[16:30:35] milimetric: I did some more commits on the develop branch
[16:30:39] milimetric: can you merge those also please ?
[16:30:45] cool, I'll pull in a sec - Travis is not working still
[16:33:34] milimetric: can I help with travis ?
[16:33:45] milimetric: I've put a project on Travis also (debianize)
[16:33:48] sure, give it a shot
[16:33:51] it's not travis setup though
[16:34:00] it's some weird problem with running the tests themselves
[16:34:05] three tests fail on Travis but not locally
[16:34:08] milimetric: can you please link me to the travis build log that fails ?
[16:34:10] one sec, I'll push latest
[16:34:34] ok
[16:34:51] ok, so pull the latest and you should be able to run ./test/run_headless
[16:35:00] that will spin up the server and run the tests
[16:35:10] (you have to do npm install && npm update)
[16:35:31] basically, this is what happens on Travis: https://travis-ci.org/wikimedia/limn/builds/4905953
[16:45:42] ok average_1rifter, I think I figured it out
[16:46:00] that was a bad test - it was referencing the reportcard data instead of the new sample data project
[16:48:37] i'm gonna run home and grab lunch on the way so I can be home for standup, be back on in a bit
[17:26:44] yoyo
[17:27:10] hey drdee :)
[17:36:58] and I'm around looking for erosen.
[17:39:17] drdee!
[17:39:20] hey
[17:39:22] yoyo
[17:39:31] geohacker: i've got an e-mail draft for you
[17:39:38] erosen: yay!
[17:39:42] let's convert your existing oozie jobs to use the stats user, concat_sort.pig, and the webrequest_load macro
[17:39:43] eehhh?
[17:40:04] geohacker: i just need to check a few more details about the algorithm, so you know what the numbers mean
[17:40:19] erosen: right. okay!
[17:42:11] okidoki
[17:44:07] ok, so drdee, currently you've got your Mobile Pageviews coordinator running, right?
[17:44:09] from this script
[17:44:11] /user/diederik/pageviews.pig
[17:44:21] yaya
[17:44:41] we should put the oozie confs in the kraken repo
[17:44:45] is pageviews.pig in the kraken repo?
[17:45:08] also i'd love to clean up the kraken/pig directory
[17:45:13] right now there are lots of one-off scripts in there
[17:45:19] yes good point
[17:45:25] can we make a pig/scr directory maybe?
[17:45:28] sure
[17:45:31] and put everything that we aren't using regularly in there
[17:45:32] ok cool
[17:45:48] so, yeah, is pageviews.pig in kraken, or should I take and adapt the one from your /user/diederik?
[17:46:41] can I do anything to help with the mobile pageviews ?
[17:50:12] I'll try to have the differences caused by IP block re-allocation in a few hours
[17:51:04] hey average_1rifter: debianize in the latest on develop is pointing to /home/user/debianize
[17:51:06] and is broken
[17:51:29] average_1rifter: morning :)
[17:51:56] mornin
[17:52:10] morning dschoon
[17:52:12] how goes
[17:52:24] regarding priorities: debianize limn, document the mobile pageview flow diagram, run tests with the MaxMind DBs
[17:52:29] morning dschoon
[17:52:30] drdee: morning !
[17:52:44] milimetric: clone it separately, and move it there
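(The clone-and-move workaround suggested just above, spelled out as shell commands; the repository URL and paths below are placeholders, not taken from the conversation. The idea is simply: clone outside the tree, then move the checkout into limn instead of relying on a git submodule.)

    # Hypothetical URL and paths -- a sketch of the workaround, not a recipe.
    git clone https://example.org/debianize.git ~/debianize
    mv ~/debianize /path/to/limn/debianize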
[17:54:22] average_1rifter: are you not using submodules anymore?
[17:55:55] milimetric: we'll drop submodule use for now, until we fix them
[17:56:01] ok
[17:56:02] milimetric: so clone it separately and move it inside limn
[17:57:27] drdee: ok, I'm going to focus on those
[17:59:14] raaainy here
[17:59:18] and lonely in https://plus.google.com/hangouts/_/2da993a9acec7936399e9d78d13bf7ec0c0afdbc
[17:59:22] milimetric, ottomata
[17:59:28] erosen as well
[17:59:54] doh
[18:17:51] erosen - a possible reason for the "magical gp-dev update"
[18:18:14] gp-dev is running in "development" mode right now which doesn't cache-bust the JS source files
[18:18:26] (so you can set breakpoints and so they don't disappear)
[18:18:59] drdee, help me understand the Mobile Pageviews stuff
[18:19:00] so it might've been a caching issue. I tried to set it to production mode but for some reason, despite numerous supervisor reload/reread/reset/shutdown, you bastards
[18:19:02] so
[18:19:06] nothing worked
[18:19:17] ottomata … shoot
[18:19:20] so expect there to be cache weirdness until we fix that
[18:19:30] gotta go grab lunch
[18:19:31] what are the time spans you want it to work on?
[18:19:39] daily, right?
[18:20:06] hourly
[18:20:23] but you use TO_DAY and then count by day
[18:20:25] in pageviews.pig
[18:20:25] right?
[18:20:30] TO_DAY(timestamp) AS day,
[18:20:34] COUNT = FOREACH (GROUP LOG_FIELDS BY (day, country, device, vendor) ..
[18:20:44] oh then by day ;)
[18:20:46] ha ok
[18:20:48] ok
[18:20:49] but
[18:20:57] the coordinator is only working with 15 minute bits of data
[18:20:58] right?
[18:21:11] each workflow gets a single input directory
[18:21:15] hdfs://analytics1010.eqiad.wmnet:8020/wmf/raw/webrequest/webrequest-wikipedia-mobile/2013-02-19_17.45.00
[18:21:18] for example
[18:21:46] feel free to fix it :)
[18:22:04] ok, just trying to make sure I know what it should do
[18:22:18] so, if I made each workflow run work with a full day of data, that is what is intended, right?
[18:22:21] i can make it do hourly if you prefer
[18:25:15] drdee whatchuuuu want? ^ ok heating up lunch...
[18:25:51] i was thinking just run it hourly so the chart gets updated by hour
[18:31:23] perfect, mk, i'm going to set it up in the same way as the webrequest_loss script then
[18:39:35] Has anyone taken a look at PigUnit?
[18:39:51] http://pig.apache.org/docs/r0.8.1/pigunit.html
[18:39:55] looks nice
[18:40:19] Seems like it'd be a good way to make sure our input formats stay compatible with our scripts.
[18:40:46] oh awesome
[18:41:19] dschoon we are already using that
[18:41:27] hot.
[18:41:28] not for all scripts but for about half
[18:41:34] sounds awesome
[18:41:42] we have that running in jenkins?
[18:41:52] we have nothing in jenkins yet
[18:42:02] how do we run it, then?
[18:42:08] i was playing with travis on the kraken repo but did not finish it yet
[18:42:17] when running mvn compile
[18:42:17] shh
[18:42:17] er
[18:42:18] ahh
[18:42:18] heh
[18:42:26] yeah, so long as it's CI
[18:42:28] that makes sense
[18:42:40] we are!?
[18:42:49] oh cool
[18:43:51] /away away
[18:44:16] bad bad irssi
[18:44:29] laters.
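(Since PigUnit comes up above: a minimal sketch of such a test, following the linked 0.8.1 docs. The script name, aliases, and rows here are invented for illustration, not taken from the kraken repo, though the aliases echo the pageviews.pig fragment quoted earlier.)

    // Illustrative PigUnit test; data and alias names are made up.
    import org.apache.pig.pigunit.PigTest;
    import org.junit.Test;

    public class PageviewsPigTest {
        @Test
        public void countsByDay() throws Exception {
            PigTest test = new PigTest("pageviews.pig");
            // Rows injected into the alias the script loads ...
            String[] input  = { "2013-02-19T17:45:00\tUS\tAndroid\tSamsung" };
            // ... and the tuples the final alias is expected to emit.
            String[] output = { "(2013-02-19,US,Android,Samsung,1)" };
            test.assertOutput("LOG_FIELDS", input, "COUNT", output);
        }
    }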
[19:01:54] brb, i need a coffee
[19:12:59] me tooooo coffeeeeeee
[19:19:21] hello geohacker :)
[19:19:24] am in Bangalore :)
[19:19:30] oh, he's not around. oh well
[19:25:40] back
[19:26:20] I'm around, debianization looks good average_drifter
[19:26:31] yeah?
[19:26:54] yeah, I've got a new deb but building it yourself in the develop branch should be possible
[19:26:57] dschoon, did you ever get github -> gerrit mirroring working?
[19:27:10] it's git2deblogs --update now which takes a lot less time
[19:27:11] would be nice to ask ops about the .deb and be able to give them a gerrit url
[19:27:17] i got a proof of concept going
[19:27:26] i can make it go if we think that'd be valuable.
[19:27:33] yeah fo sho
[19:27:34] what would ops need? the debianization tools or the .deb itself?
[19:27:40] i don't know, i think they want to review it
[19:27:49] aiight. i'll add that to my list
[19:27:55] i'll probably get to it tomorrow.
[19:27:58] that ok, ottomata
[19:27:59] ?
[19:27:59] not sure, but i'm just trying to keep the objections they might give to a minimum
[19:28:02] yeah
[19:28:31] ok, makes sense, so should we try to set up a labs instance solely with our deb and puppet stuff?
[19:28:52] milimetric: ping? Maryana tells me you are the one to ping about dashboards.
[19:29:37] yeah, hey Yuvi
[19:29:51] how can I help?
[19:31:05] YuviPanda ^
[19:31:22] milimetric: heya. So we're going to be shipping betas of the Android app in a few days' time.
[19:31:31] tfinc was looking to see if we can make a dashboard of sorts with a few metrics
[19:31:42] sure
[19:31:56] I can help you make the graphs and organize the dashboard
[19:32:01] uploads from mobile, uploaders from mobile, people who log in but do not upload, etc.
[19:32:07] sweet.
[19:32:10] drdee and the rest of the Analytics team can help you with the gathering of the statistics
[19:32:21] so we EventLog everything
[19:32:27] ok, cool
[19:32:38] heya drdee
[19:32:43] and it is in a format that should make gathering the data pretty simple.
[19:32:44] i just tried to run mvn package on my mac
[19:32:45] [ERROR] Failed to execute goal on project kraken-dclass: Could not resolve dependencies for project org.wikimedia.analytics:kraken-dclass:jar:0.0.1-SNAPSHOT: Could not find artifact org.apache.devicemap:openddr-java:jar:0.9.9-SNAPSHOT in nexus (http://nexus.wmflabs.org/nexus/content/groups/public) -> [Help 1]
[19:32:51] so right now we have the following structure YuviPanda:
[19:33:04] dashboards have many tabs which have many graphs
[19:33:07] ottomata, hold on let me fix that
[19:33:41] danke
[19:34:05] example: http://reportcard.wmflabs.org/dashboards/reportcard
[19:34:31] * YuviPanda clicks
[19:34:50] aha
[19:34:51] right
[19:34:58] and the data comes from CSV files?
[19:35:00] so YuviPanda I think the first thing I need is a description of each graph, and how you'd like them organized. The idea is to make this easy enough for you to do, but for now we're walking people through the process
[19:35:30] yes, don't worry too much about the details, I can help you structure the CSV files and all that in the way that Limn understands
[19:35:48] ah, sweet. so I'll get on a more accurate description of what graphs / lists we want
[19:35:58] this is an example from DarTar's graphs YuviPanda: https://github.com/wikimedia/limn-editor-engagement
[19:36:00] (prelim one at https://www.mediawiki.org/wiki/Apps/Commons#Reports)
[19:36:41] yep, that looks great. Just make sure it's a good final list and we'll get working on it :)
[19:37:24] milimetric: sweet!
[19:37:27] drdee, if you missed it, YuviPanda is requesting a new dashboard for the mobile team. All the data for the graphs is EventLogged: https://www.mediawiki.org/wiki/Apps/Commons#Reports
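(On the CSV files mentioned above: the datafiles behind these graphs are plain date-keyed tables, one row per date. The column names and numbers below are invented to show the shape, not real Commons-app data.)

    date,uploads from mobile,uploaders from mobile
    2013/02/18,120,45
    2013/02/19,133,51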
[19:37:35] i am reading it
[19:37:38] cool beans
[19:37:56] so this means that we need to fix saving event logging data to kraken in the first place
[19:37:58] thanks YuviPanda, ping us when you're ready
[19:38:03] yes drdee
[19:38:17] milimetric: sweet. I'm on it right now, will ping in a few minutes
[19:38:38] ottomata, if you have some spare cycles later today, this would be a good time to fix saving event logging data to kraken
[19:38:43] drdee, I don't think that's going to happen anytime soon, i mean, i will bring it up when I'm hanging with ops next week
[19:38:51] ok
[19:39:00] let's hangout then
[19:39:18] event logs are going to an01 right now, they're being sent…i mean, i could manually spawn up a udp2log instance and send them to kafka
[19:39:25] ha, with event logs, i could probably even use Ori's kafka producer
[19:39:27] https://plus.google.com/hangouts/_/2da993a9acec7936399e9d78d13bf7ec0c0afdbc
[19:39:27] that'd be fun!
[19:39:59] YuviPanda: can you send me a more detailed email with your requests?
[19:40:19] drdee: I'm adding details to the wiki page now. What exactly are the ones you're looking for?
[19:40:39] (details, that is)
[19:40:58] ok ty
[19:41:22] drdee: what details are you looking for?
[19:43:45] ottomata: does that mean the event stream is making its way into hdfs again?
[19:43:59] yuvi: that's a good start!
[19:44:00] no
[19:44:01] it is not
[19:45:33] hokay.
[19:46:20] ottomata, the limn_vardir would have to be configurable per instance though
[19:46:27] if we wanted multiple instances
[19:46:28] yes
[19:46:31] k
[19:46:38] in puppet you mean?
[19:46:56] i just read your puppet stuff in gerrit
[19:46:59] ja
[19:47:05] so i saw it set to var/lib/limn there
[19:47:11] sorry /var/lib/limn
[19:47:26] » $var_directory = "/var/lib/limn/${name}",
[19:47:31] oh cool
[19:47:33] instance.pp
[19:47:33] i missed that
[19:47:54] oh nice, so this will be perfect
[19:48:02] yeah super easy,
[19:48:10] to spawn up a limn instance anywhere
[19:48:10] we'll only have to do
[19:48:17] limn::instance { "instance_name": }
[19:48:18] and that's probably it
[19:48:24] everything else will be automatic
[19:48:24] port
[19:48:35] oh yeah port :)
[19:49:01] ottomata: you can pull now
[19:49:16] and it's cool that it all runs in production mode, dev mode should only be for local installs really
[19:49:19] ottomata, as you're a VM and puppet aficionado, did you see https://github.com/blog/1345-introducing-boxen ?
[19:49:24] it's screwing with evan's install right now that he's in dev mode
[19:50:17] whoa crazy dschoon
[19:50:23] http://boxen.github.com/
[19:50:34] milimetric, right
[19:50:40] that is configurable via puppet too
[19:50:50] but the default is production
[19:51:30] woa boxen dschoon
[19:51:39] https://coderwall.com/p/d8iw2g
[19:51:48] i'm not a big fan of steps like "delete textmate"
[19:51:55] "remove everything homebrew has installed"
[19:52:01] "change your shell to sh"
[19:52:09] like, "UH."
[19:52:54] haha, yeah
[19:52:59] but for bare installs on a new compy, maybe it's cool
[19:53:18] yeah.
[19:53:31] it seems it'd be a lot of work to restore my customized settings
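(Putting the pieces of the puppet exchange above together, declaring an instance would look roughly like this. Only the $var_directory default is quoted from instance.pp; the port parameter name and value are assumed from the conversation.)

    # Rough sketch of limn::instance usage; parameter names other than
    # $var_directory are assumptions, not read from the module.
    limn::instance { 'reportcard':
        port          => 8081,
        # defaults to "/var/lib/limn/${name}" per instance.pp, which
        # ends up as the LIMN_VARDIR environment variable:
        var_directory => '/var/lib/limn/reportcard',
    }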
[19:53:45] hmm
[19:53:49] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.7.2:test (default-test) on project kraken-dclass: There are test failures.
[19:54:00] Test set: org.wikimedia.analytics.dclassjni.DclassLoaderTest
[19:54:00] -------------------------------------------------------------------------------
[19:54:00] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.054 sec <<< FAILURE!
[19:54:00] testDclassLoader(org.wikimedia.analytics.dclassjni.DclassLoaderTest) Time elapsed: 0.01 sec <<< ERROR!
[19:54:00] java.lang.UnsatisfiedLinkError: org.wikimedia.analytics.dclassjni.DclassWrapper.initUA()V
[19:54:11] at $Proxy0.invoke(Unknown Source)
[19:54:21] at org.wikimedia.analytics.dclassjni.DclassWrapper.initUA(Native Method)
[19:54:22] at org.wikimedia.analytics.dclassjni.DclassLoaderTest.testDclassLoader(DclassLoaderTest.java:29)
[19:54:25] drdee ^
[19:54:57] you need to install dclass-lib
[19:55:07] average_drifter created a debian package
[19:55:17] bwerrrrr, aw man, i'm running this on my mac :p
[19:55:27] it works on my mac as well
[19:55:31] the .deb?
[19:55:34] just copy the libs
[19:55:35] brb
[19:55:36] hmmm
[19:55:39] lunch!
[19:56:02] average_drifter, are you around?
[19:56:05] where?
[19:56:19] where from or where to?
[19:56:25] where is dclass-lib?
[19:57:03] i thought you put it in the wikimedia apt-repo
[19:59:41] bwerrrr
[20:12:06] ottomata: sorry :(
[20:12:27] drdee: milimetric https://www.mediawiki.org/wiki/Apps/Commons#Reports is cleaned up a bit now, and we finalized the schemas
[20:12:27] you can also do mvn compile -DskipTests=true
[20:12:36] ty!
[20:13:00] :) so what next?
[20:13:37] YuviPanda, it looks like we have some problems with EventLogging data
[20:13:57] ottomata is looking into making it work right now
[20:14:27] as in 'problems in the pipeline from eventlogging into pretty graphs'?
[20:14:53] yeah, drdee, I did that, but gah now for some reason geocode doesn't work. :/
[20:15:10] ottomata isn't really looking into making eventlogging into kraken work right now
[20:15:13] but
[20:15:15] milimetric
[20:15:26] mobile and Ori and people have their own EventLogging collector on vanadium
[20:15:29] completely separate from analytics stuff
[20:15:36] so they should have the data they need already
[20:16:16] I'm sorry, I misunderstood what you guys were saying earlier then ottomata
[20:16:43] so I can log in to s1-analytics-slave and check out mysql tables with the data we log from the database named 'log'
[20:17:10] yeah, give that a shot YuviPanda and let me know
[20:17:27] so I did that yesterday, and the data is all there :)
[20:17:31] If the data's all there we can use that for now and I'll help you make Limn-compatible files
[20:17:41] yup, data's all there.
[20:18:13] ok, so then basically let's figure out what graphs we need
[20:18:48] alright! want to etherpad?
[20:19:22] sure
[20:19:58] milimetric: http://etherpad.wikimedia.org/mobile-app-dashboard
[20:21:22] milimetric: what exactly do you mean by 'metrics'?
[20:21:31] things like 'from Android', 'from iOS'?
[20:21:32] or...?
[20:21:35] we can chat in the etherpad
[20:21:48] ah right
[20:38:09] back
[20:44:37] i don't quite understand the rationale for groundwork. http://groundwork.sidereel.com/
[20:44:48] like, it seems monotonically shittier than bootstrap
[20:45:19] i must be missing something, otherwise i'm dismissing this as a vanity project.
[20:52:01] milimetric: http://etherpad.wikimedia.org/mobile-app-dashboard done I think
[20:52:16] awesome Yuvi, thanks
[20:53:29] milimetric: I'll probably head to bed in a few minutes, but feel free to poke me if any of that is unclear.
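(A sketch of the kind of query a dashboard script might run against the 'log' database on s1-analytics-slave discussed above. EventLogging tables are named after their schema and revision, so the table name and columns below are hypothetical, not the real Commons-app schema; the LEFT(timestamp, 8) idiom assumes MediaWiki-style yyyymmddhhmmss timestamps.)

    -- Hypothetical table and columns, for shape only.
    SELECT LEFT(timestamp, 8) AS day, COUNT(*) AS uploads
    FROM log.MobileAppUploads_1234567
    GROUP BY day
    ORDER BY day;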
[20:54:02] I suppose we'll have to produce scripts that generate these CSVs on a cron.
[20:54:33] drdee: pong
[20:54:43] reading backlog
[20:55:22] ottomata: I'm around
[20:55:37] aye jaa, it's ok
[20:56:33] ottomata: error gone?
[20:56:44] i stopped working locally
[20:56:50] ok
[20:57:19] if you wish to run it locally, you may compile it locally
[20:57:29] debs don't work on macs AFAIK
[20:57:44] but, you can still compile & install the .so
[21:01:40] YuviPanda - I looked over the etherpad. It looks great to me. We're sending a quick email to get on the same page, and then we'll help out with anything that you need. SQL queries, moving files around, etc.
[21:01:51] milimetric: thank you :)
[21:02:14] erosen: http://www.textrazor.com/
[21:02:58] very nice
[21:03:17] milimetric: I'm off to sleep now. Will poke channel again tomorrow :)
[21:03:39] very very cool
[21:03:46] try the example in the bottom right
[21:03:48] cool, thanks Yuvi, talk to you tomorrow
[21:04:39] wow, impressive disambiguation with "Manchester City"
[21:05:10] actually, maybe it got that part wrong
[21:06:08] missed your last comment, erosen
[21:06:37] dschoon: was just trying to figure out what the entity "Manchester City" actually meant
[21:06:51] Heh
[21:06:56] textrazor says it is the football club, but seems unlikely having read more
[21:07:43] ottomata: if you believe running locally is something you'd like to have, I can make a .dmg for Macs. if you need that, discuss with drdee and file an issue here https://github.com/wikimedia/dClass
[21:29:04] milimetric: perhaps of interest https://www.mediawiki.org/wiki/Analytics/Limn/Ideas
[21:29:14] mili, be right there...
[21:30:30] dschoon - yep, I believe sumanah started that page a while back
[21:30:42] No, I started it three minutes ago.
[21:30:48] oh then it sounds very similar
[21:30:50] I think you're thinking of Dreams.
[21:31:00] That was meant to be Limn-specific
[21:31:01] yeah, probably shouldn't be two different pages
[21:31:06] but we could merge them.
[21:31:36] I actually find it useful to have things split out because they do tend to get big quickly.
[21:32:09] if search engines were amazing and convenient, then maybe
[21:32:32] as is, I'd rather find in page
[21:32:43] Nothing wrong with crosslinking
[21:33:02] yeah, links are anti-simple
[21:45:47] i guess i'll merge them, then
[22:06:54] drdee, q about org.wikimedia.analytics.kraken.pig.GeoIpLookup
[22:07:04] shoot
[22:07:08] is it good to hardcode private String dbPath = "/usr/share/GeoIP";
[22:07:08] as the default?
[22:07:32] there are two constructors for this class
[22:07:38] one without a dbpath and one with a dbpath
[22:07:47] so you can supply your own path if necessary
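(i.e., roughly this shape — a sketch of the two-constructor pattern being described, not the kraken source itself; only the /usr/share/GeoIP default and the three-argument form appear in the chat, the argument names are assumed.)

    // Illustrative overloaded constructors for the UDF.
    public GeoIpLookup(String lookupField, String dbName) {
        // falls back to the hardcoded default under discussion
        this(lookupField, dbName, "/usr/share/GeoIP");
    }

    public GeoIpLookup(String lookupField, String dbName, String dbPath) {
        this.dbPath = dbPath;
    }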
[22:07:50] reading this
[22:07:51] http://stackoverflow.com/questions/4959762/if-i-have-a-constructor-that-requires-a-path-to-a-file-how-can-i-fake-that-if/4966099#4966099
[22:08:07] right hm, but i'm trying to make these pig scripts generic
[22:08:19] i don't want to have to edit them in order to run them, so I want this runnable in local mode, in hadoop mode, and in oozie
[22:08:43] I wonder if the default should just be ./
[22:08:47] /usr/share/GeoIP is where MaxMind installs them
[22:08:48] cwd
[22:08:54] right, but that is not in hdfs
[22:08:57] and we are not using the distributed cache
[22:09:14] i think oozie doesn't look in the local filesystem, right?
[22:09:30] but that doesn't matter because this is internal java
[22:09:48] eh?
[22:09:52] and i believe the stack overflow answer is old
[22:10:13] oozie cannot change java behavior if we load a file locally
[22:10:18] you can specify local files specifically
[22:10:22] using file://
[22:10:24] rather than hdfs://
[22:10:34] and file://./relative/path should work
[22:11:06] bwerrrrrr
[22:11:06] hm
[22:11:11] have you tried running it?
[22:11:19] oh yeah, i'm dying on not being able to load GeoIPCity right now
[22:11:23] in oozie
[22:11:30] k
[22:11:45] I've also tried hardcoding /libs/
[22:11:46] org.wikimedia.analytics.kraken.pig.GeoIpLookup('countryCode', 'GeoIPCity', '/libs/');
[22:11:46] and
[22:11:57] org.wikimedia.analytics.kraken.pig.GeoIpLookup('countryCode', 'GeoIPCity', 'hdfs://analytics1010.eqiad.wmnet/libs/');
[22:11:58] just to see
[22:12:04] but neither of those works
[22:12:26] k
[22:12:32] I could leave it with the default path
[22:12:42] and then I think (according to that stackoverflow post) configure oozie to ship the file
[22:12:48] in the dist cache
[22:12:55] probably the distributed cache needs to be re-enabled
[22:13:01] it's disabled?
[22:14:47] yes
[22:14:54] i mean the udf does not use it right now
[22:15:08] is there really a way to force its non-use?
[22:15:25] yes, by not calling it
[22:15:51] i forgot that oozie was a pain with this stuff
[22:16:37] bwer?, i'm confused, why would the UDF know if it was running in hdfs vs local?
[22:16:48] oozie is a huuuge pain, super smart but so annoying to get right
[23:02:42] gone for the night, catch y'all tomorrow :)
[23:07:35] lataas
[23:10:40] lates
[23:32:45] brb
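(For reference, the GeoIpLookup calls quoted in the thread above would sit in a Pig script roughly like this. Only the UDF class name and its three constructor arguments are taken from the conversation; the jar name, relation, and field names are illustrative.)

    -- Illustrative wiring only, not a script from the kraken repo.
    REGISTER 'kraken-pig.jar';
    DEFINE GEO org.wikimedia.analytics.kraken.pig.GeoIpLookup('countryCode', 'GeoIPCity', '/usr/share/GeoIP');
    GEO_FIELDS = FOREACH LOG_FIELDS GENERATE GEO(ip) AS country;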