[00:00:17] quick labs question, maybe:
[00:00:32] anyone know how to adjust the "allocated storage" for a labs instance?
[00:01:53] delete it and recreate?
[00:02:10] yeah
[00:02:13] that is what I'm going to do
[00:02:23] just checking it wasn't something y'all had seen
[12:22:14] morning
[13:27:06] morning average_drifter
[13:29:49] hello drdee
[13:30:10] time for some reviews?
[13:30:35] I need to talk to Erik about some user agents
[13:30:42] was writing an email to him
[13:30:52] actually the question is https://gist.github.com/a6810c552fea8eb5e41c
[13:30:57] having that list of user agents
[13:31:11] where is the iPad version in it
[13:31:17] iPad; CPU OS 5_0_1 like Mac OS X <==== is it in this part ?
[13:31:24] like 5_0_1 ?
[13:32:57] yes it's 5_0_1 but probably best to only capture major version numbers, not the minor version numbers
[13:33:51] so the version number is prefixed by CPU OS
[13:34:10] ok, then I can fix the current code
[13:34:57] Wikihood iPad/1.3.3 is a specific app
[13:35:06] we need to talk about tracking apps in general
[13:35:13] let's talk :)
[13:35:26] my iPad is charging but we can do a hangout
[13:36:42] https://plus.google.com/hangouts/_/2a279b5df15b3cf714633bea648de15207531e64
[13:40:15] you recognize an iOS app by CFNetwork being present in the user agent string
[13:40:55] so for iOS apps just keep track of the names
[13:41:01] ignore the version numbers
[13:41:19] there is already a regex for CFNetwork in wikistats
[13:41:35] so you need to expand that regex
[13:42:16] just drop 'iPad', spaces, dots and numbers
[13:42:27] from the CFNetwork result variable
[13:47:18] morning ottomata
[13:47:52] did you watch the debate?
[13:48:36] and i couldn't find your byte count script :( can you tell me where it is
[13:56:35] | wc -c
[13:56:35] counts bytes :)
[13:56:35] but I'm sure it must be more complicated than that :)
[13:57:35] i know, but we have a simple script that would rotate every 60 minutes
[13:58:54] heyyyyy
[13:58:57] i think you are talking about me
[13:59:30] twas a pretty simple script:
[13:59:30] https://github.com/wmf-analytics/kraken/blob/master/bin/netcat-wc
[14:05:11] ty
[14:05:23] so i should now use udp2log instead of socat?
[14:12:47] ottomata, can you provide some emergency support?
[14:12:51] yes
[14:13:01] do you want to start counting bytes again?
[14:13:01] so trying to run netcat-wc
[14:13:02] yup
[14:13:02] i'd have to change the script
[14:13:15] yeahhhhhh, it's not going to work unless we shut down udp2log
[14:13:18] i made a copy of the script in an11:/home/diederik
[14:13:21] the script doesn't work with stdin
[14:13:38] i guess it makes more sense to make it work with udp2log
[14:13:48] yeah totally
[14:13:49] happy to do that
[14:14:03] i already added snzip to the commas line
[14:14:11] the thing is though
[14:14:11] commas = commandline
[14:14:25] we need to be able to sighup the wc command
[14:14:29] so I think I need to write a brand new script
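A minimal sketch of what that brand-new script could look like -- this is not the actual kraken code, just an illustration of a stdin counter that flushes its totals on SIGHUP instead of dying the way wc does (assumes Python 3.5+, where reads resume after a signal handler runs):

```python
#!/usr/bin/env python
# Sketch only, not the real kraken script. A udp2log pipe reads log lines
# on stdin; on SIGHUP it flushes byte/line totals instead of exiting
# without output the way `wc` would.
import signal
import sys
import time

byte_count = line_count = 0
start_time = time.time()

def flush_counts(signum=None, frame=None):
    global byte_count, line_count, start_time
    end_time = time.time()
    sys.stdout.write('%d\t%d\t%d\t%d\n'
                     % (start_time, end_time, byte_count, line_count))
    sys.stdout.flush()
    byte_count = line_count = 0
    start_time = end_time

signal.signal(signal.SIGHUP, flush_counts)

for line in sys.stdin:
    byte_count += len(line)
    line_count += 1

flush_counts()  # emit whatever is left at EOF
```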
[14:14:46] yikes
[14:14:54] right now i'm just backgrounding the netcat | wc process
[14:14:54] is that a lot of work?
[14:14:58] iunno, prob an hour or two
[14:15:08] but if we want udp2log to run it
[14:15:17] orrrr
[14:15:17] hmmm
[14:15:22] udp2log is smart
[14:15:24] i wonder if we just started it
[14:15:29] and used logrotate
[14:15:37] to kill the udp2log child process
[14:15:42] every hour when it rotated
[14:15:43] hmmm
[14:15:44] noooo
[14:15:45] that won't work
[14:15:51] because wc then will just die
[14:15:51] and not write its output
[14:15:59] :(
[14:19:58] oh, what's the emergency support?
[14:21:16] that's it :)
[14:23:12] oh ok
[14:23:25] ok the other thing i was going to do this morning is look into kafka -> hadoop stuff
[14:23:32] which should I do first?
[14:27:04] maybe the counting bytes stuff so that it can just run
[14:27:36] I'm fishing for user agents in squid logs on stat1
[14:28:18] hmm it has 8 CPUs and I maxed out only one of them
[14:28:26] I guess it's not a problem
[14:35:03] drdee, which python thread module do you recommend again?
[14:35:05] multiprocessing?
[14:35:15] multiprocessing
[14:35:36] k
[14:35:49] oh btw, the new sampled files in hdfs work perfect :)
[14:35:52] TY!
[14:40:38] mmmmmmorning milimetric
[14:40:44] hey drdee
[14:40:58] watched the debate?
[14:41:34] yep
[14:41:39] what'd you think?
[14:42:50] I tweeted something to the effect of "Romney's gonna start a war with China, Obama's gonna coddle us to death, neither talks about our addiction to growth and how we have to commit endless illegal acts abroad to support it"
[14:43:28] but I may have been just trying to get out my aggression at SVG and Limn not playing nicely :)
[14:43:34] i was happy to see that obama did not let romney bully him around like the first debate
[14:44:03] it almost makes you wonder, did obama purposefully underperform the first time
[14:44:09] to re-energize his base
[14:44:38] to make people think like, shit we need to step up our game because this race ain't over yet
[14:45:48] :) like in great white hype?
[14:46:04] where the only time he jogs is behind the ice cream truck?
[14:46:21] i haven't seen great white hype
[14:46:31] or for that matter don't know what it is
[14:46:33] it's a one-joke movie but the joke is quite worth it
[14:46:43] :)
[14:47:17] i've seen more video entertainment than anyone who's not a professional in that business and probably more than some who are
[14:47:18] http://en.wikipedia.org/wiki/The_Great_White_Hype
[14:50:21] does Python still have that GIL ?
[14:51:06] yes but it's usually not a big issue or you can work around it
[15:14:11] the Perl community usually says threads are bad, but that's just because there's really no proper threading module on CPAN. so usually in Perl people recommend POE or AnyEvent, which are event frameworks, so asynchronous stuff.. just like Python has twisted
[15:14:25] it's cool that Python got threading right
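A minimal multiprocessing sketch of the kind of log scan being discussed; the file names and pattern are invented, and the point is just that worker processes sidestep the GIL and can use all 8 of stat1's CPUs:

```python
# Illustrative only: fan a user-agent scan out over 8 CPUs with
# multiprocessing, which avoids the GIL by using separate processes.
# The file names and the pattern are made up for the example.
import re
from multiprocessing import Pool

CFNETWORK = re.compile(r'CFNetwork')

def count_matches(path):
    with open(path) as f:
        return sum(1 for line in f if CFNETWORK.search(line))

if __name__ == '__main__':
    logs = ['sampled-1000.log-%d' % i for i in range(1, 9)]  # hypothetical
    pool = Pool(processes=8)
    print(sum(pool.map(count_matches, logs)))
    pool.close()
    pool.join()
```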
[16:04:47] dschoon
[16:04:51] http://stackoverflow.com/questions/9651167/svg-not-rendering-properly-as-a-backbone-view
[16:04:53] aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
[16:04:54] :)
[16:05:47] * milimetric screams in pillow and goes to lunch
[16:06:18] :)
[16:11:11] milimetric: use canvas
[16:11:25] :)
[16:11:32] I just spent the last three weeks moving away from Canvas
[16:11:40] d3 is definitely the correct library
[16:11:54] but backbone is a silly crazy library that should've never been born
[16:11:57] or even better, a WebGL charting library
[16:12:21] nah, I have to disagree with lower level libraries for our purpose
[16:12:39] we need high level pretty objects to manipulate so we can keep improving the quality at a fast pace
[16:13:03] http://webgl-surface-plot.googlecode.com/svn/trunk/example.html
[16:13:18] the control we'd get with lower level stuff isn't necessary because d3 is more than capable of producing anything we dream of: http://d3js.org/
[16:14:13] milimetric: https://www.google.ro/search?q=sqrt(x*x%2By*y)%2B3*cos(sqrt(x*x%2By*y))%2B5
[16:14:29] heh, that IS awesome
[16:14:32] :)
[16:14:57] but yea, the problem really is that we have too many moving pieces to easily debug something like bizarre invisible svg
[16:15:11] backbone, jade, our own stuff, events popping everywhere
[16:15:58] you can develop separately a WebGL-based charting lib. just a thought :)
[16:18:24] i could, but limn is not a graphing library :)
[16:18:46] as dschoon likes to say
[16:49:10] coooooool, finally I got it drdee
[16:49:23] woot woot
[16:49:27] https://github.com/wmf-analytics/kraken/blob/master/bin/inputcount
[16:50:06] output looks like
[16:50:28] start_time end_time byte_count line_count total_byte_count total_line_count
[16:50:44] should I:
[16:50:56] a. set this up as a udp2log pipe on an11
[16:51:01] b. set this up as a kafka consumer anywhere?
[16:51:02] or maybe
[16:51:04] both!
[16:51:06] and compare?!
[16:51:10] ooo that sounds fun
[16:51:30] sure let's do both :)
[16:51:53] and so where does the snzip happen? in the udp2log config?
[16:52:07] oh yeah snzip, gotta pipe through that
[16:52:07] lemme try that
[16:52:21] command is snzip -c -t snappy
[16:52:30] that will output to stdout
[16:52:39] -t snappy?
[16:52:49] no, -c
[16:53:03] -t snappy specifies the compression algorithm
[16:53:26] milimetric, ottomata, so what http status codes constitute valid page views?
[16:53:35] 200, 304
[16:53:50] 20X right?
[16:54:18] right 20x
[16:54:43] actually, I'm not sure, some of the 200s are weird
[16:54:51] ummmmmmmmmmmmmmmmmmmmmmm drdee, i don't think snzip is going to compress a stream
[16:55:06] i only see 200, 206 and very rare 204
[16:55:18] ottomata, no?
[16:55:40] it can't compress a stream, it needs a closed file to do that
[16:55:54] unless it knows how to buffer and chunk the stream
[16:57:28] right?
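For illustration, the buffer-and-chunk idea in Python, assuming the python-snappy bindings are available; naively concatenated blocks are not a real .snappy container that hadoop would read, so this only shows why a stream compressor needs a read buffer:

```python
# Sketch of the buffer-and-chunk idea: a stream compressor can't wait for
# a closed file; it reads fixed-size blocks and compresses each as it goes.
# Uses python-snappy (assumed installed). The concatenated output below is
# NOT a valid .snappy container -- it only illustrates the buffering.
import sys
import snappy

BLOCK_SIZE = 64 * 1024  # the 64k read buffer floated later in the chat

while True:
    block = sys.stdin.buffer.read(BLOCK_SIZE)
    if not block:
        break
    sys.stdout.buffer.write(snappy.compress(block))
```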
[16:58:10] hm, maybe one way to count "valid" page views is to decide on a certain content-length threshold
[16:59:08] it really kind of depends what you mean by "valid"
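A toy version of that heuristic, combining the status codes seen in the logs above with a content-length floor; the threshold value is an arbitrary placeholder, and how (or whether) to combine the two rules is exactly the open question:

```python
# Toy "valid page view" heuristic from the discussion above: a status-code
# whitelist plus a content-length floor. The 500-byte threshold is an
# arbitrary placeholder, not a real decision. Note a 304 carries no body,
# which is the kind of edge case the chat leaves open.
VALID_STATUS = {'200', '206', '304'}
MIN_CONTENT_LENGTH = 500

def is_valid_pageview(status, content_length):
    return status in VALID_STATUS and content_length >= MIN_CONTENT_LENGTH

assert is_valid_pageview('200', 18000)
assert not is_valid_pageview('204', 0)  # the rare 204s don't count
```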
[16:59:57] https://plus.google.com/hangouts/_/2e8127ccf7baae1df74153f25553c443bd351e90
[17:02:57] ottomata: bzip2 can do streaming compression
[17:03:11] bzip2 is slllooooooooowwwwwwwwwww
[17:03:25] and we need snappy as that is the built-in hadoop compression algorithm
[17:03:46] bzip2 cannot do streaming -- it can merely pretend by doing block-by-block decompression
[17:04:19] snappy can do streams i think, i was wrong, it has read and write buffer cli flags
[17:04:28] nice
[17:04:31] oh, hangout.
[17:04:33] roight.
[17:05:46] ok cool, drdee, I can pipe through snzip and count,
[17:05:52] what value of read and write buffer should I use
[17:05:56] it needs to be given buffers to compress
[17:06:22] mmmm good question
[17:06:24] 1k lines?
[17:06:26] 10k lines?
[17:06:27] actually i think it just needs a read buffer
[17:06:28] and it is bytes
[17:06:29] not lines
[17:11:09] 64k?
[17:21:02] aiight. heading into the office.
[17:21:17] be there in ~40 (extrapolating from past times, 30 is optimistic.)
[17:21:33] drdee: has anyone ever done a study that would tell me how many articles are served to the top 5 browsers? (what I'm trying to get at is how many real people are served articles -- as opposed to special pages)
[17:45:01] ok running out, lunch, changing locs, be back in a bit
[17:59:31] the nastier the bug the sweeter the fix :)
[17:59:31] /**
[17:59:31]  * Overwriting Backbone's BackboneView.make method to use the SVG namespace
[17:59:31]  */
[17:59:31] make: (tagName, attributes, content) ->
[17:59:31]   el = document.createElementNS 'http://www.w3.org/2000/svg', tagName
[17:59:31]   if attributes
[17:59:31]     $(el).attr attributes
[17:59:31]   if content
[17:59:31]     $(el).html content
[17:59:31]   el
[17:59:51] oh dschoon isn't around.
[18:38:07] milimetric, you busy?
[18:38:33] um, I'm being productive..
[18:38:35] what's up
[18:39:32] like, I'm not busy as in I can't help but I am busy as in I'm doing stuff
[18:39:38] drdee^
[18:39:51] does anyone know offhand if bits/varnish sends udp logs we can collect with udp2log?
[18:40:12] Jeff_Green, yes we can
[18:40:19] milimetric, ohh okay
[18:40:28] drdee: oh ReHEaally. that is good. thx
[18:41:17] ottomata, just installed snzip on an17
[18:51:40] ottomata, if i wanna poke hadoop configs, i have to wait until you have migrated stuff to ops repo, right?
[18:56:01] uhhhh, not really, it just isn't as easy, you'd have to manually copy the configs to each machine
[18:58:54] back
[18:59:03] stopped for lunch, since I forgot to do that yesterday :)
[19:10:17] ottomata, okay, i will wait for the migration to finish
[19:10:30] hey dschoon
[19:10:49] fixed it and pushed
[19:10:54] sup
[19:10:56] woo
[19:10:57] overwrote make in ChartElement
[19:10:58] pulling
[19:11:09] yeah, makeElement was what i was gonna say
[19:11:10] and using createElementNS instead of createElement
[19:11:19] hmmm, dschoon, i have a q for you too
[19:11:25] i'm trying to maven build a hadoop consumer package
[19:11:26] you could also just call d3.createElement
[19:11:26] this one:
[19:11:37] https://github.com/miniway/kafka-hadoop-consumer
[19:11:37] ok, one thing at a time :)
[19:11:38] yeah, that makes sense, I might
[19:11:40] no!
[19:11:43] hehe, yeah i'll put q here and you can answer
[19:11:44] you said you could do multiple
[19:11:46] milimetric, did you need anything?
[19:11:51] or was that just fyi?
[19:11:56] nope
[19:11:58] just fyi
[19:12:00] ttyl :)
[19:12:05] i have a dependency in pom.xml on kafka.jar
[19:12:06] i have kafka.jar
[19:12:13] but I don't know how to tell maven to find it
[19:12:19] so
[19:12:27] you have a folder, ~/.m2/repository
[19:12:36] every maven artifact has "coordinates"
[19:12:48] groupId, id, version
[19:13:06] they show up as the reverse-domain path in the repo, then the id, then the version
[19:14:01] so it depends on what your jar's coordinates say
[19:14:18] hm
[19:14:53] i see some files for kafka in there
[19:15:03] here's a real path, for example:
[19:15:07] ~/.m2/repository/org/apache/commons/commons-io/1.3.2/commons-io-1.3.2.jar
[19:15:10] oh
[19:15:14] i don't have a .jar file in my kafka stuff
[19:15:15] the groupId was org.apache.commons
[19:15:16] hahaahahah i just received my raspberry pi :D
[19:15:17] just lastUpdated files
[19:15:26] the id was commons-io
[19:15:30] the version was 1.3.2
[19:15:43] ok
[19:16:22] hmm, ok, so I don't have the kafka jar in that dir
[19:16:28] i think maybe because it wasn't built with maven
[19:16:33] you should fix that :)
[19:16:35] but with sbt (cause it is scala?)
[19:16:42] sbt runs maven, iirc.
[19:16:47] ok
[19:16:51] the easiest thing is to just try building kafka.
[19:16:54] and see what happens.
[19:16:56] hmmm
[19:16:56] ok
[19:17:01] i will also try to build it
[19:17:06] yeah maybe I copied the kafka to this machine, rather than building it here
[19:17:06] ok
[19:18:30] hmm, nope no change
[19:19:53] hmm, interesting!
[19:20:06] i copied the kafka .jar in and it is not complaining about that one anymore
[19:21:12] christ, scala is slow.
[19:21:21] the compiler.
[19:21:31] yeah
[19:24:58] maven is the most fucking verbose project on earth, i swear. http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html
[19:25:32] is that a guy's butt behind a computer in the header image?
[19:25:59] i assume.
[19:26:54] good ol' xml
[19:27:24] i wonder if lifetime java devs have worse eyesight than script kiddies
[19:29:48] brb
[19:29:56] but you should try running ./sbt actions
[19:30:31] make-pom got me target/scala_2.8.0/kafka*.pom
[19:30:34] which is almost useful!
[19:30:38] brb
[19:31:50] hm
[19:32:49] ooo, trying publish-local
[19:33:01] ivy!
[19:50:24] user@garage:~/wikistats/wikistats/squids$ cat testdata/user_agents_ipad_2_uniq | perl -Iperl -MUA::iPad=ipad_extract_data -ne 'chmop; $r=ipad_extract_data($_); print (defined $r ? "GOOD\n" : "BAD\n"); ' | grep GOOD | wc -l
[19:50:33] 2149
[19:50:33] user@garage:~/wikistats/wikistats/squids$ cat testdata/user_agents_ipad_2_uniq | perl -Iperl -MUA::iPad=ipad_extract_data -ne 'chmop; $r=ipad_extract_data($_); print (defined $r ? "GOOD\n" : "BAD\n"); ' | grep BAD | wc -l
[19:50:35] 707
[19:50:52] that's my current status, still have to add some regexes to identify all those iPads
[19:51:43] gotta get that BAD count to zero
[19:52:29] drdee here?
[19:54:34] damn, that "chmop" didn't complain although it was supposed to be chomp.. guess the regex didn't care about the line-ending
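A rough Python transliteration of the two extraction rules from earlier today; the real code is Perl inside wikistats, and the names, patterns, and sample CFNetwork string here are illustrative only:

```python
# Rough Python transliteration of the rules discussed above (wikistats is
# Perl; these names and patterns are illustrative, not the real code).
import re

IPAD_OS = re.compile(r'iPad; CPU OS (\d+)(?:_\d+)*')     # major version only
CFNETWORK_APP = re.compile(r'^(.+?)/[\d.]+ CFNetwork/')  # iOS app UAs

def ipad_major_version(ua):
    m = IPAD_OS.search(ua)
    return m.group(1) if m else None

def ios_app_name(ua):
    m = CFNETWORK_APP.match(ua)
    if not m:
        return None
    # "just drop 'iPad', spaces, dots and numbers" from the app token
    return re.sub(r'iPad|[ .\d]', '', m.group(1))

print(ipad_major_version('Mozilla/5.0 (iPad; CPU OS 5_0_1 like Mac OS X)'))  # 5
print(ios_app_name('Wikihood iPad/1.3.3 CFNetwork/548.0.4 Darwin/11.0.0'))   # Wikihood
```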
[19:55:15] back
[19:57:36] Jeff_Green: here
[19:57:50] i'm trying to confirm that bits sends udp logs
[19:57:58] average_drifter, can you send me a couple of those non-recognized iPad user agent strings
[19:58:31] i tried a sample of 100 through udp-filter -d bits.wikimedia.org on oxygen and got nothing
[19:59:09] hm, i dunno about bits
[19:59:19] ori-l might know, i think he's had some bits dealings
[20:00:00] for one thing, bits is varnish
[20:00:10] but we do have varnish traffic 100%
[20:00:16] what is running on bits?
[20:00:38] what do you mean?
[20:00:55] what type of content is hosted on bits?
[20:01:22] bits :-)
[20:01:36] small chunks of text, images, which are heavily hit
[20:01:55] I'm looking at the possibility of using it for banner impression tracking
[20:03:16] k
[20:04:00] and setting up a regular udp-filter is not sufficient?
[20:04:40] correct
[20:04:57] the idea is to move it to bits
[20:05:35] the banners themselves?
[20:10:49] sec. on the phone
[20:21:14] drdee: alright--sorry. got a call from Zack there
[20:21:26] np
[20:21:29] so no, not banners themselves--the idea is this...
[20:22:03] we currently shove a bunch of gratuitous stuff onto the GET for banner names, so we can collect banner impression data
[20:22:39] so we end up caching a different copy of a banner for each distinct GET that ultimately resolves to the same image or whatever
[20:23:26] and those all expire on short TTLs, so the proxies are constantly churning all these copies and clients have to wait for that
[20:24:28] we're looking at the idea of moving all the data collection to a separate request on a system that can log the querystring, and strip it before heading to the cache
[20:25:04] bits looks attractive for that because it already runs varnish (which has url-rewriting features) and it's already involved in the overall request
[20:25:25] but all this depends on us being able to collect logs from bits
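An invented sketch of the collection side of that idea: a udp2log-style filter that keeps only hits on a hypothetical beacon path and emits their querystrings for counting. The path, the field position, and the overall setup are all assumptions; stripping the querystring before the cache would stay varnish's job:

```python
# Invented sketch, not the real setup: keep only hits on a made-up beacon
# path and emit the querystring for downstream counting.
import sys
from urllib.parse import urlparse

BEACON_PATH = '/banner-beacon'  # hypothetical path

for line in sys.stdin:
    fields = line.split(' ')
    # the URL column position depends on the log format; 8 is a guess here
    if len(fields) <= 8:
        continue
    parsed = urlparse(fields[8])
    if parsed.path == BEACON_PATH:
        sys.stdout.write(parsed.query + '\n')
```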
[20:27:33] dschoon ^^
[20:27:55] ottomata ^^
[20:28:05] reading
[20:28:48] yeah, that all seems reasonable.
[20:29:05] sure?
[20:29:11] and splitting out the data collection from the banner serving would simplify several things
[20:29:23] do we log bits?
[20:29:24] i imagine you're intimately familiar with all the industry-standard language for talking about this stuff?
[20:29:44] (ottomata should know the answer to that?)
[20:30:13] * Jeff_Green is probably not familiar. I'm gonna talk in terms of 'buckets' and 'sludge' and 'eggs'
[20:30:13] as part of what we're building, we want to have campaign-tracking machinery in place to make this sort of thing easy
[20:30:41] ya, I know you guys have already been talking about doing this and more
[20:31:34] I'm just trying to come up with something we can start using in a week without a huge list of new requirements, because for now we can live with the crappy old log collection and parsing schemes we already use
[20:32:49] Jeff_Green so i did a quick grep for the bits domain in the sampled archive files
[20:32:52] and i do find it
[20:32:54] so rather than doing all this server-side crap, you just load whatever banner you want, and also make a call in JS to log the campaign, source, medium, content, and related terms
[20:33:03] right
[20:33:08] so for that, you're on track i think
[20:33:17] so afaik, bits traffic is available on udp2log
[20:33:49] example: GET https://bits.wikimedia.org/skins-1.18/common/images/poweredby_mediawiki_88x31.png
[20:33:58] oh, that reminds me
[20:34:12] i should probably send around this doc i was working on ages ago
[20:34:19] about the industry-standard metrics and terms for doing web analytics
[20:34:27] but anyway, here's the bit about campaign tracking
[20:34:29] **Campaign Tracking**
[20:34:29] - Campaign: A name to identify the campaign. Ex: `fundraising-2013`, `jimbo-strip-poker`
[20:34:30] - Medium: Marketing medium through which the event was generated. Ex: `cpc`, `email`, `newsletter`, `banner`, `irc-bot`
[20:34:31] - Source: Identify the advertiser or channel which generated the event. Ex: `google`, `citysearch`, `newsletter4`, `wiki`
[20:34:33] - Content: Used to differentiate versions of an event. For example, if you have two call-to-action links within the same email message, you can use `utm_content` to differentiate them so that you can tell which version is most effective.
[20:34:35] - Term: Used to track paid keyword campaigns, indicating which keyword triggered the event.
[20:34:56] drdee: maybe is it that I can't collect it from oxygen? b/c that's where I tried
[20:35:14] Jeff_Green: no, that shouldn't matter
[20:35:15] if you notice `utm_foobar` in a URL, that's GA's campaign tracking stuff
[20:35:37] utm_campaign, utm_source, etc
[20:35:44] yup
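A toy example of pulling those GA-style fields out of a URL; only the utm_* parameter names come from the list above, and the URL itself is made up:

```python
# Toy illustration of extracting the GA-style campaign fields listed above.
# The URL is made up; only the utm_* parameter names come from the chat.
from urllib.parse import urlparse, parse_qs

UTM_KEYS = ('utm_campaign', 'utm_medium', 'utm_source',
            'utm_content', 'utm_term')

def campaign_fields(url):
    qs = parse_qs(urlparse(url).query)
    return {key: qs[key][0] for key in UTM_KEYS if key in qs}

url = ('https://bits.wikimedia.org/banner.png'
       '?utm_campaign=fundraising-2013&utm_medium=banner&utm_source=wiki')
print(campaign_fields(url))
# {'utm_campaign': 'fundraising-2013', 'utm_medium': 'banner',
#  'utm_source': 'wiki'}
```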
[20:36:00] alright, so what the heck am I doing wrong?? /me cries
[20:37:32] ottomata, can you verify what i said about bits traffic?
[20:40:22] about what?
[20:41:13] it's in the sampled-1000 log, it must be my use of udp-filter
[20:42:22] ottomata, that bits traffic is available on udp2log
[20:42:35] so, Jeff_Green i think you are good to go, we just have to set up the filter
[20:42:38] ok cool
[20:43:21] ah HA
[20:43:27] it's working on locke.
[20:43:35] i wonder what I did wrong when I tried it on oxygen
[20:43:45] thanks!
[21:11:53] btw, drdee, your sampled logs are all loaded in (uncompressed)
[21:12:01] /user/otto/logs/sampled/
[21:12:31] and poop, i don't think my udp2log byte count is working :(
[21:13:51] i know :D
[21:13:52] [14:35:49] oh btw, the new sampled files in hdfs work perfect :)
[21:14:05] great
[21:14:18] also cool, i got a working kafka hadoop consumer!
[21:14:19] took forever to compile it
[21:14:23] and I did so pretty hackily
[21:14:24] but it works!
[21:14:26] nice
[21:14:36] so it would be batched, run by a cron job or something
[21:14:38] so why do you say that udp2log byte count is not working
[21:14:46] no output in file
[21:14:47] and it has been hours
[21:15:00] so gonna try to figure out why
[21:15:04] but i gotta run
[21:15:45] k
[21:15:59] turned it off for now
[21:16:28] oh you are running something!
[21:16:29] good to know
[21:16:30] ha
[21:16:38] saw crazy logs and thought it was something I did
[21:17:13] ok laters!
[21:26:18] dschoon, about hadoop versions: http://www.cloudera.com/hadoop-details/
[21:26:34] so cdh4 runs on 0.20.2 and a whole bunch of back-ported patches ;)
[21:46:54] drdee, is there anything I can work on this week?
[21:47:10] yeah totally!
[21:47:38] shoot
[21:48:35] so take a look at reportcard.wmflabs.org
[21:49:02] basically, we need a pig script for each chart on that page
[21:49:22] ok
[21:49:40] and maybe you can spend some time looking into oozie as well, set it up on labs
[21:49:47] and see how it interacts with pig
[21:49:52] can I get the log formats? or maybe some anonymized samples?
[21:49:58] and how we can start building a library of pig scripts
[21:50:08] which focus on re-use
[21:50:14] ok
[21:50:29] yes, i will anonymize a log file
[21:50:36] is using the Apache combined logs ok?
[21:50:48] we have a slightly different format
[21:51:10] kafka?
[21:51:12] so for this you will need one of our log files
[21:51:23] ok
[21:51:23] no this is irrespective of kafka
[21:51:58] so if you can start looking into oozie, have some thoughts about a pig library
[21:52:10] then i will look into anonymizing the logfile
[21:52:20] ok
[21:53:22] I just need a sample so you don't have to do all of it
[21:53:39] do you want me to look up how to hash in pig?
[21:59:48] drdee, is there anything I can help with for the anonymization? I can make a SHA1 and salting UDF
[22:00:24] no you don't need to worry about that part
[22:00:26] ok
[22:01:52] start with looking into oozie and how pig interacts with that
[22:02:10] ok I'm reading up on oozie
[22:09:45] so louisdang, ideally, you abstract the pig scripts into pig macros
[22:09:57] so we can reuse those macros in larger scripts
[22:10:00] see http://pig.apache.org/docs/r0.9.1/cont.html#macros for documentation
[22:10:17] louisdang: are you on the analytics mailing list?
[22:10:24] do you have all our email addresses?
[22:10:27] yes I'm on the list
[22:10:38] email, I can look up
[22:10:40] it just occurs to me we're all in different timezones, and not everybody can keep up to date on happenings in IRC
[22:10:57] so if you do start doing research into things like oozie, it's best to email your thoughts
[22:11:29] i'm dsc@wikimedia.org -- ottomata is otto@wikimedia.org -- drdee is dvanliere@wikimedia.org
[22:11:49] alright, I can make a wiki page too
[22:11:52] sure, that'd work too!
[22:11:53] that's also cool
[22:12:06] just let us know when you update things, so we can go check it out
[22:12:20] ok
[22:18:00] drdee, what do you think about using hive to gather the metrics?
[22:20:33] we will use hive but most likely more for ad-hoc querying; the pig jobs we need for recurring tasks
[22:24:19] ok, I see. Pig scripts are probably easier to modify and re-use.
[22:24:49] easier to extend
[23:33:18] dschoon, oozie is installed,
[23:33:24] sweet
[23:33:27] URL to play?
[23:35:16] pm'ed you
[23:39:31] ty
[23:40:15] ori-l: do you have a couple moments to talk to me about how udp2log works internally to varnish?
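As a footnote to the anonymization thread above, a rough sketch of the salted-hash idea louisdang mentions; the real scheme and UDF were left to drdee, so everything here is an assumption:

```python
# Rough sketch of salted hashing for log anonymization (illustrative only;
# the real anonymization scheme was not specified in the discussion).
import hashlib
import os

SALT = os.urandom(16)  # per-run salt, kept secret and then discarded

def anonymize_ip(ip):
    # same input maps to the same token within one run, so counts still
    # aggregate, but the raw IP is not recoverable without the salt
    return hashlib.sha1(SALT + ip.encode()).hexdigest()

print(anonymize_ip('203.0.113.42'))
```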