[12:48:12] drdee: morning :) [12:48:21] hey good morning! [12:48:24] how was your weekend? [12:49:03] drdee: guests gulped my time [12:49:13] drdee: and yours ? [12:49:39] still buying baby stuff :D [12:50:04] family guests or friends? [12:50:12] drdee: family [12:50:22] drdee: you have a newborn ? congratulations [12:50:23] ! [12:50:26] not yet, son [12:50:31] i mean soon [12:50:33] early november [12:50:42] :) [12:51:28] sooooooooo....... [12:51:34] webstatscollector / udp-filter? [12:53:27] drdee: working on them [12:53:33] still trying to get the format to UTF-8 [12:53:53] not easy [12:54:45] I have read some articles on UTF-8, been writing some code on them since 2h ago [12:55:38] I wish I found some guide that explained wchar_t usage and this article got close http://www.joelonsoftware.com/printerFriendly/articles/Unicode.html [12:56:10] but it seems I'm still getting weird characters on the filter for _bot lines [12:56:45] I closed my mobile phone and blocked some people on skype and closed my intercom just so I could focus on this [12:57:56] drdee: I think the task itself with solving the UTF-8 might take more time than I expect uhm. What should we do ? [13:04:23] can you explain "but it seems I'm still getting weird characters on the filter for _bot lines" [13:05:14] drdee: so filter gets most entries in UTF-8 [13:05:27] *most* :D [13:05:31] drdee: I am verifying this with cat sample-*.log | ./filter | file - [13:05:42] if file tells me UTF-8 then I decide it worked ok [13:05:47] but.. 
[13:05:57] I'm using it on the sample log in wikistats [13:07:21] and if I get Non-ASCII from file (I mean /usr/bin/file) [13:07:39] then I do something like a binary-search to see what line of the logs produced that and then I start debugging filter to see what went wrong [13:08:39] can you give me an example of a URL that throws an error [13:11:04] yes, I'm looking in the log to track it down [13:11:17] I'll paste it here [13:14:22] amssq41.esams.wikimedia.org 266289577 2012-07-01T01:45:09.300 158 0.0.0.0 TCP_MISS/200 9564 GET http://ru.wikipedia.org/wiki/%D1%F2%EE%EB%EE%ED%E0%F7%E0%EB%FC%ED%E8%EA CARP/91.198.174.53 text/html - - Mozilla/5.0%20(compatible;%20YandexDirect/3.0;%20+http://yandex.com/bots) [13:14:33] this one for example [13:16:31] why is that giving a problem? because it is a valid url [13:17:19] it is a valid url but filter decodes it to [13:17:25] 1 ru_bot 1 9564 ������������� [13:17:38] mornin team [13:17:41] but why does that need to be decoded [13:17:42] ? [13:17:47] hey morning milimetric [13:18:17] milimetric: how is your unicode knowledge? [13:18:22] drdee: because otherwise you would see %E6%F8 stuff like that in logs [13:18:34] drdee: I mean, in the collector output [13:18:38] uh, ok, what you guys working on [13:18:58] we are trying to decode utf8 byte strings [13:19:07] perl? [13:19:09] not really working? [13:19:10] no [13:19:11] C [13:19:34] k, so what's the string / what's your code / what's the problem? [13:19:47] this URL http://ru.wikipedia.org/wiki/%D1%F2%EE%EB%EE%ED%E0%F7%E0%EB%FC%ED%E8%EA [13:19:54] turns into this after decoding [13:19:58] ������������� [13:20:02] that's wrong :D [13:20:11] OOOHHHHH WAIT [13:20:14] I go to javascript console [13:20:14] decodeURI("http://ru.wikipedia.org/wiki/%D1%F2%EE%EB%EE%ED%E0%F7%E0%EB%FC%ED%E8%EA") [13:20:17] URIError: URI malformed [13:20:22] so it's a malformed URI this one [13:20:24] :( [13:20:28] really? 
[13:20:33] well that doesn't look URI encoded though [13:20:34] drdee: please try it out in your console [13:20:54] decodeURI should decode that stuff [13:21:02] but if you paste that URL in your browser it does open the page [13:21:11] so your browser does understand it [13:21:31] let's compare it with [13:21:32] decodeURI("http://ru.wikipedia.org/wiki/%D0%A4%D1%80%D0%B5%D0%B9%D0%B4%D0%B8%D0%B7%D0%BC") [13:21:35] which actually works [13:21:37] "http://ru.wikipedia.org/wiki/Фрейдизм" [13:21:39] right, it's a unicode URI [13:21:40] hm... [13:21:52] http://stackoverflow.com/questions/1802066/problems-with-decodeuri-with-characters [13:22:12] so you would need to decode the % as %25 as well [13:23:29] reading [13:29:27] drdee: hmm, but there's no %25 in %D1%F2%EE%EB%EE%ED%E0%F7%E0%EB%FC%ED%E8%EA [13:29:42] hm, it's entirely possible that http://ru.wikipedia.org/wiki/%D1%F2%EE%EB%EE%ED%E0%F7%E0%EB%FC%ED%E8%EA just isn't a valid URL and there's a redirect to the right page. Because notice that this is the same page: [13:29:42] right [13:29:42] http://ru.wikipedia.org/wiki/%D0%A1%D1%82%D0%BE%D0%BB%D0%BE%D0%BD%D0%B0%D1%87%D0%B0%D0%BB%D1%8C%D0%BD%D0%B8%D0%BA [13:29:46] and it decodes it properly [13:29:55] compared to this which works but doesn't decode: http://ru.wikipedia.org/wiki/%D1%F2%EE%EB%EE%ED%E0%F7%E0%EB%FC%ED%E8%EA [13:30:37] the stackoverflow post about the %25 is kinda not useful - that's just how you'd encode % signs if you wanted to preserve them in the URI [13:30:43] if it was a redirect then it should say on the wiki page (assuming it is a wiki redirect, not an http redirect because those you wouldn't see) [13:31:07] http redirect I mean - can a wiki do that? [13:35:02] yea, if I look at those two URIs they have almost no characters in common except %21 [13:35:09] uh, %D1 [13:35:16] hey morning ottomata [13:35:22] morning!
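An aside on the failing URL above: its escapes percent-decode to the raw bytes D1 F2 EE EB ..., which are not structurally valid UTF-8 (they appear to be a legacy single-byte encoding such as Windows-1251 of the same Russian title), which would explain why both decodeURI and the filter produce mojibake while the %D0%A1... form decodes cleanly. A minimal sketch in C of the check being discussed — decode the %XX escapes, then validate the result as UTF-8 before emitting it. The function names are hypothetical, not udp-filter's actual API, and the validator only checks lead/continuation byte structure (no overlong or surrogate rejection):

```c
#include <ctype.h>
#include <stddef.h>
#include <stdlib.h>

/* Decode %XX escapes from `in` into `out` (out must hold at least
 * strlen(in)+1 bytes); returns the number of bytes written. */
static size_t percent_decode(const char *in, unsigned char *out) {
    size_t n = 0;
    while (*in) {
        if (in[0] == '%' && isxdigit((unsigned char)in[1])
                         && isxdigit((unsigned char)in[2])) {
            char hex[3] = { in[1], in[2], '\0' };
            out[n++] = (unsigned char)strtol(hex, NULL, 16);
            in += 3;
        } else {
            out[n++] = (unsigned char)*in++;
        }
    }
    out[n] = '\0';
    return n;
}

/* Minimal structural UTF-8 check. */
static int is_valid_utf8(const unsigned char *s, size_t len) {
    for (size_t i = 0; i < len; ) {
        unsigned char c = s[i];
        size_t extra;
        if (c < 0x80)                extra = 0;  /* ASCII */
        else if ((c & 0xE0) == 0xC0) extra = 1;  /* 2-byte sequence */
        else if ((c & 0xF0) == 0xE0) extra = 2;  /* 3-byte sequence */
        else if ((c & 0xF8) == 0xF0) extra = 3;  /* 4-byte sequence */
        else return 0;          /* stray continuation byte or invalid lead */
        if (extra > 0 && i + extra >= len)
            return 0;           /* sequence truncated at end of buffer */
        for (size_t j = 1; j <= extra; j++)
            if ((s[i + j] & 0xC0) != 0x80)
                return 0;       /* not a 10xxxxxx continuation byte */
        i += extra + 1;
    }
    return 1;
}
```

Under this check, `%D1%F2%EE%EB...` fails immediately (0xD1 announces a 2-byte sequence, but 0xF2 is not a continuation byte), while `%D0%A1%D1%82...` passes — matching what the browser console showed.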
[13:36:09] did you / jeff send the pig script results to Zack on friday (after running the script with only filtering for campaign = B12)? [13:36:46] no, ran out of time, B12 is wrong for 2011 [13:36:52] it is B11 [13:36:53] didn't know [13:36:57] looking for results now [13:37:11] average_drifter: maybe we shouldn't trust this example entry, and look for some new examples from the sampled log file [13:37:15] ottomata, cool [13:37:58] nah, there's no redirect, checked out the HTTP activity [13:38:02] and poop, hadoop streaming didn't work anyway [13:38:04] well [13:38:05] either that [13:38:09] or there are no B11's [13:42:41] ottomata, about hadoop streaming, not sure if you know this, but it allows you to write your map/reduce programs in any scripting [13:42:41] program and it just sends the data to stdout [13:43:24] yes [13:43:31] i am using udp-filter with hadoop streaming [13:43:35] i got 0 results with the B11 filter though [13:43:43] i need to make sure i'm doing it right [13:45:04] so how are you using udp-filter with hadoop streaming? [13:45:08] (just curious :)) [13:45:15] dse hadoop jar /usr/share/dse/hadoop/lib/hadoop-streaming-1.0.2-dse-20120707.200359-5.jar -input /user/otto/logs/banner0/bannerImpressions-20110917-27.log -output /user/otto/logs/banner0/bannerImpressions-filtered-20110917-27.log -mapper "/usr/bin/udp-filter -p BannerLoader\&banner=B11" -reducer /bin/cat [13:45:42] and I am def doing it wrong, there are B11s in my file [13:46:22] that's sweet because that means that we get a C-performance mapper :D :D :D [13:46:29] SUPER COOL [13:46:42] yeah but so far it is not working…!
[13:46:45] but I think it should [13:46:49] that's a detail [13:46:51] not sure yet what i'm doing wrong [13:46:51] ;) [13:47:09] uhmmmmm, is udp-filter installed on all boxes [13:47:09] so hard to know why though, it works when run manually [13:47:10] hm [13:48:00] i think that was the downside of streaming, it wouldn't copy the map/reduce programs to your boxes itself, or something like that [13:48:09] but i could be wrong [13:48:14] you can make it [13:48:17] ok [13:48:21] but in this case I don't need to [13:48:26] since it is installed on all machines [13:48:28] ok [13:48:50] what messages do you find in the logs? [13:48:56] oh it runs just fine [13:49:06] just my output file is empty [13:49:14] k [13:49:15] drdee: should we just give up decoding the URIs in the logs? [13:49:39] drdee: it seems much more complicated than I thought [13:49:45] yeah park it for now [13:49:49] ok [13:50:47] but you also need a reducer, right? [13:51:43] ottomata, what is your expected output using udp-filter? [13:52:16] drdee: ok then I'll start extracting the code of filter and integrating it to udp-filters [13:52:58] the output should be the same, i'm trying to filter with B11 [13:53:19] just trying to prune the file [13:53:34] ok [13:54:20] ahhh, i think it is the escaped \& [13:55:09] ok, trying again... [13:56:33] :) [13:58:10] milimetric: i think you should take the lead this week with generating the report card :D i'll help you as soon as Erik Z emails us the files [13:58:33] ok, cool [13:58:43] (drdee ^) [14:16:03] hmm, drdee [14:16:05] https://gist.github.com/3812038 [14:16:07] doesn't look right [14:16:58] no it doesn't :( [14:19:20] will try a couple of different ways [14:31:45] drdee, have you guys seen this library before: https://polychart.com/ [14:32:14] I spent a few hours this weekend studying it, it's open source and seems a lot like what we're going for [14:38:36] no i haven't seen it [14:48:07] whoa yeah [14:48:49] right on!
[14:48:58] this could be useful: http://jonas.nitro.dk/tig/ [14:49:04] a command line git browser [15:15:34] drdee [15:15:39] yo [15:15:41] I filtered in pig rather than with udp-filter [15:15:46] got the same results I pasted earlier [15:15:52] I think B11 did not work properly in 2011 [15:16:17] wow [15:16:28] are you sure it is B11? [15:16:43] asking peter g now, but I did what he told me [15:16:43] maybe loop in Jeff_Green? [15:16:45] AND we did get results [15:16:52] so it's not like there weren't any B11s [15:16:58] right [15:17:19] i would like to see the script they used last year for their counting [15:17:28] yeah [15:17:28] that way we know how they got their numbers [15:17:36] right now it is fishing in a deep valley [15:18:06] maybe pgehres has that script [15:47:33] grabbing fooooood, running back to my place, be back in a min [16:30:21] milimetric: https://bugzilla.wikimedia.org/show_bug.cgi?id=40662 all yours :) [16:30:45] i think the confusion is that the wheel keeps spinning but that you should click on 'Add Metric' [16:31:24] lol, what in the world is the "aurora browser" [16:31:34] yea, I'll try to explain how it works [16:31:43] ty [16:31:51] i think aurora is the beta channel of firefox [16:31:58] or maybe alpha channel [16:32:09] morning dschoon [16:32:12] or maybe a beta channel [16:32:15] you know, for the fish? [16:32:21] mornin! [16:33:02] beta blockers are what i need [16:33:14] instead i have betas, and blockers. [16:33:47] bay to breakers? [16:33:47] hahaha [16:33:59] i used to think that was called beta breakers [16:34:08] breaker blockers would also help. [16:34:15] er, blocker breakers [16:37:01] for a fun time, go to wikia.com and check your JS console [16:38:02] also count the number of cookies you end up with. i got 28. [16:39:21] drdee done, I didn't have an account on that. Is there anywhere else I should monitor for bugs? [16:39:36] nope [16:39:49] ori-l: the lesson is that users actually don't care about cookies.
[16:40:16] drdee / milimetric -- there's a new bugzilla "karma" nonsense that prevents new users from actually doing anything useful with tickets, btw. you can ask sumanah for an override. [16:41:04] dschoon: ikr! that was how i discovered it. spage was complaining that we were adding another cookie. i did a quick "industry survey". 28 is a bit much, though. [16:42:05] shrug. quantity isn't really that big a deal. i bet it's not even 1k. [16:42:08] a lot of people definitely care about cookies. Enough to push browsers to features like opt-out and to warrant write-ups in NYT and WSJ about cookies. [16:44:00] maybe it is more likely: a few people care really hard about cookies [16:44:17] i think ottomata is right. [16:48:54] this is pretty enjoyable. http://geekblog.oneandoneis2.org/index.php/2012/09/30/to-understand-the-command-line [16:54:05] https://plus.google.com/hangouts/_/2e8127ccf7baae1df74153f25553c443bd351e90 [17:14:46] it's asana day so please update / close / add your asana tasks today [17:14:53] ok! [17:25:11] eek, iunno, my compy might still be sick [17:25:11] it just shut down [17:25:15] all of the sudden [17:25:19] and then was weird as it rebooted [17:25:21] :( [17:25:38] seems ok now... [17:34:04] drdee:i'm seeing some oddly formatted lines in some of the recent squid logs [17:34:14] has anything changed that you know of? [17:34:27] show me (without ip addresses) [17:34:43] for example, no timestamp in this one [17:34:46] (one sec, cleaning ip) [17:35:05] nothing has changed AFAIK [17:35:07] ARP/IPXXXXXXX image/png http://fr.wikipedia.org/wiki/Am%C3%A9rique_du_Nord - Mozilla/5.0%20(compatible;%20MSIE%209.0;%20Windows%20NT%206.1;%20WOW64;%20Trident/5.0) [17:35:20] is that the entire line? [17:35:31] yeah [17:35:36] HUHHHHH????? [17:35:39] ARP? [17:35:40] i know [17:35:41] where did you get it from? [17:35:43] is that usually there? [17:35:46] it is from a zero file [17:35:48] not usually [17:35:58] which file exactly? 
[17:36:01] basically I am running some scripts that used to work, but now are getting messed up [17:36:08] let me find the name [17:36:10] one sec [17:38:48] going slower than expected [17:38:51] np [17:41:09] brb a moment [17:42:48] drdee, dschoon, quick q [17:42:54] yo [17:43:02] should I bother looking into installing and setting up YARN? [17:43:04] or just MRv1? [17:43:07] i think I can do both... [17:43:39] okay, drdee [17:43:48] looks like it is confined to the digi-malaysia filter [17:43:58] ottomata, i think you should try YARN first [17:44:04] ok [17:44:10] if that is too hard then go back to MRv1 [17:44:36] and it looks like a few fields are getting mashed together [17:45:20] what file are you looking at?? [17:45:45] here is a line: cp1003.eqiad.wmnet 1763715602 2012-09-29T09:13:19.like%20Gecko)%20Version/4.0%20Mobile%20Safari/534.30 [17:46:44] which file :) ? [17:48:19] erosen..which file? [17:48:43] I am locked out of this channel? [17:48:47] not anymore I guess [17:48:48] weird [17:49:38] can't send the filename [17:49:52] a/squid/archive/zero/zero-digi-malaysia.log-20120930.gz [17:50:10] aha it wants to parse it like a command because it is an absolute path [17:52:18] so that 0930 file has a truncated line [17:52:24] but it does not have the ARP/IP stuff [17:52:30] which file has the ARP/IP stuff? [17:53:24] okay it has multiple truncated lines [17:54:12] ottomata: i don't think its fair at all [17:54:12] drdee, i'm talking with pgehres about fr stuff [17:54:19] but its not my call [17:54:25] he says the https://gist.github.com/3812038 results are right for sept 2011 [17:54:32] need context :) [17:54:43] re filtering for banner=B11 [17:54:46] k [17:55:07] so, pgehres, I should compare nov 2011 to sept 2012? [17:55:13] yeah [17:55:15] can I just pick any 10 day period? [17:55:19] in nov 2011? [17:55:20] pgehres: how do you guys know that the number of banner impressions this year is lower compared to last year?
[17:55:41] ottomata: start on 11/18 and you should be good [17:55:48] drdee: i don't [17:55:55] this is all coming from zack [17:56:11] i agree -- go with yarn [17:56:59] the arp thing seemed to be coming from lines with CARP in them [17:57:04] haha, maybe we should be talking to zach [17:57:08] drdee: you might be best off talking to zack and hearing his side [17:57:20] also, who is zach? [17:57:22] which I also don't understand [17:57:26] zack* [17:57:33] ottomata: zack exley, chief revenue officer [17:57:53] zexley@wikimedia.org [17:58:09] erosen: got it [17:58:12] drdee, can you follow up with him? [17:59:27] erosen: it just truncates some lines and ARP/IP is truncated from CARP as you mentioned [17:59:37] okay, some lines are truncated, that's the issue [18:00:21] interesting [18:02:59] drdee: we'll have a review tonight on a udp-filter integration [18:03:07] drdee: not exactly now but in a few hours [18:03:08] cool [18:03:12] sure [18:03:58] erosen: how about creating a histogram for all digi malaysia zero filters and counting the number of fields (using space as a delimiter) [18:04:16] that would give a good idea on how big the problem is [18:04:28] ya [18:04:30] can do [18:04:55] and it's pretty straightforward, right? [18:05:14] yeah [18:05:21] i'm in a meeting right now [18:05:22] so one sec [18:05:31] although my suspicion is that this is not confined to digi-malaysia [18:16:43] erosen.... [18:17:22] ya [18:18:39] do you trust the zero filters at all?
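The field-count histogram drdee suggests above can be sketched as a tiny C filter. This is not part of udp-filter; the function names are made up for illustration. It reads log lines from a stream and tallies how many space-delimited fields each line has:

```c
#include <stdio.h>

#define MAX_FIELDS 64

/* Count space-delimited fields in one log line. */
static int count_fields(const char *line) {
    int n = 0, in_field = 0;
    for (; *line && *line != '\n'; line++) {
        if (*line == ' ')
            in_field = 0;           /* delimiter: close current field */
        else if (!in_field) {
            in_field = 1;           /* first char of a new field */
            n++;
        }
    }
    return n;
}

/* Read lines from `fp` and print a "fields: line-count" histogram. */
static void field_histogram(FILE *fp) {
    long hist[MAX_FIELDS + 1] = {0};
    char line[65536];
    while (fgets(line, sizeof line, fp)) {
        int n = count_fields(line);
        hist[n > MAX_FIELDS ? MAX_FIELDS : n]++;
    }
    for (int i = 0; i <= MAX_FIELDS; i++)
        if (hist[i] > 0)
            printf("%d: %ld\n", i, hist[i]);
}
```

Piping a couple of days of zero-filter logs through something like this is exactly how a tally such as the `{14: 2140633, 15: 139431}` reported later in the log would be produced.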
[18:19:13] don't have a good answer [18:20:52] sorry, still meeting [18:22:17] brb food [18:23:06] so there are 665 hits on ms.m.wikipedia.org domain on september 30 [18:23:20] and there are 60485 total hits [18:23:59] so about 1% of visits go to the malaysian wiki [18:24:07] that's odd [18:26:29] hmm [18:31:40] according to udp-filter/maxmind, 20% of those requests do not originate from Malaysia, 80% do [18:50:23] back [19:09:33] grumble, pyasana seems to no longer fully support the Asana API, i see a fork coming! [19:09:40] ha [19:09:46] i haven't updated my stuff yet. [19:09:48] i'll do that soon [19:09:51] thx [19:10:17] dschoon: i can't fork a github repo to our gerrit repo, right? [19:10:23] no. [19:10:37] is that a 'no way' [19:10:45] i don't even know what that would mean. [19:10:48] or is that a 'there is actually a trick' [19:10:51] unless you mean forking on github [19:10:55] and then pushing to gerrit [19:11:04] you just add both remotes. [19:11:08] that's all i do with limn [19:11:11] k [19:36:28] drdee: here is a set of token lengths for squid files generated by zero filters over the last two days: [19:36:28] {14: 2140633, 15: 139431} [19:36:43] ok [19:36:46] that's a known bug [19:37:16] yeah [19:37:30] so the messed up lines aren't actually missing columns [19:37:38] they just seem to have merged them in some way [19:37:46] mmm [19:37:47] k [19:39:00] also, the file zero-digi-malaysia.log-20120929.gz, seems to have been erased [19:39:01] thoughts? [19:39:59] you mean 0930? [19:40:11] yeah [19:40:12] whoops [19:41:19] ughh my mistake [19:41:21] fyi, i'm about to wipe DSE [19:41:25] yo [19:41:27] from cluster [19:41:35] wipe away [19:44:09] off to lunch, bbf [19:51:03] drdee: question [19:51:14] shoot [19:51:16] drdee: do filter rules inside udp-filter take precedence over our transformations ?
[19:51:32] which transformations [19:51:38] drdee: so what we have in filter.c does two things, it filters but it also transforms the output [19:52:20] it should filter first and then transform, that's more efficient as otherwise you could be transforming stuff which you will throw out later when filtering [19:53:22] drdee: alright, can we make a convention that when we talk about filter.c rules we call them internal traffic rules ? [19:56:29] drdee: I think I should use the existing model to transform data, that is to make my own replace_* functions [19:57:10] drdee: because for example now there are two functions that do transformations [19:57:25] drdee: replace_ip_addr and replace_spaces_with_underscore [19:58:01] ok, wait, [19:58:03] drdee: but I think the kind of transformations that filter.c does are more than that [19:58:16] i thought you meant a different kind of transformation [19:58:17] drdee: I mean filter.c does not preserve fields and such, not even order [19:58:25] let's skype [19:58:28] alright
[20:13:09] no, that's also a command line switch [20:13:21] so default behavior is to output lines as-is to stdout [20:13:49] new behavior is an optional command line switch to output lines in collector-compatible format [20:13:50] drdee: alright, so we have 2 new commandline switches, one for bots and one for transforming output tailored for collector [20:13:56] yes [20:13:57] ok [20:23:47] grumble grumble grumble [20:24:01] archived tasks in asana can't be pulled through the API [20:24:19] which makes us look like slackers :[ [20:26:43] dschoon, ottomata, milimetric: i suggest that asana-stats auto-archives tasks older than 2 months, that way we can still generate progress reports, but the asana UI doesn't get too cluttered, sounds right? [20:26:54] 2 months! [20:26:55] oh right [20:27:01] k [20:27:06] i should do that [20:27:07] you can always show archived tasks [20:27:07] and no. [20:27:14] it does not delete tasks [20:27:14] i don't think auto-archiving open tasks is a good idea. [20:27:22] only closed tasks [20:27:29] hm [20:27:30] ok [20:27:49] it's not a good idea but the question is how does goodness('leaving tasks open for 2 months') compare to goodness('auto closing #{that}') [20:28:03] sorry h [20:28:05] oh. ok [20:28:06] that is not what i meant [20:28:15] i am only talking about completed tasks [20:28:29] yep, oh ok-ed above ^ [20:28:35] if they are archived then they are not accessible through the API [20:28:44] archive != delete [20:28:58] sounds good to me [20:29:02] k [20:29:29] dschoon, btw, I pushed some more code, cleaned it up a bit and organized everything into a layout with g aplenty. [20:29:35] sweet [20:29:39] i'll check it out in a sec!
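The filter-then-transform ordering drdee insists on above (reject cheaply first, transform only the survivors) can be sketched like this. The predicate and transform bodies are placeholders, not udp-filter's real rules:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Placeholder filter rule -- stands in for udp-filter's real matchers. */
static bool matches_filters(const char *line) {
    return strstr(line, "wikipedia.org") != NULL;
}

/* Placeholder transform -- the real one would reorder/anonymize fields
 * into the collector-compatible format discussed above. */
static void transform_for_collector(const char *line, char *out, size_t outsz) {
    snprintf(out, outsz, "%s", line);
}

/* Filter first, then transform: rejected lines never pay the
 * transformation cost. Returns true if `out` holds an emitted line. */
static bool process_line(const char *line, char *out, size_t outsz) {
    if (!matches_filters(line))
        return false;
    transform_for_collector(line, out, outsz);
    return true;
}
```

The point of the ordering is purely efficiency: with the transform first, every line (including the vast majority that will be dropped) would pay the transformation cost.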
[20:29:48] maybe we can get some slick zooming and panning working [20:29:50] the only thing not working is my little tracking line disappeared [20:30:16] yep, I gotta bail out soon because it's my wife's 30th and I have to pick up her present [20:30:57] btw, if any of you are dyslexic (even slightly like me) then you should check out this new font: Open Dyslexic - http://dyslexicfonts.com/ [20:31:00] it helps A LOT [20:34:15] oh! [20:34:17] awesome! [20:34:25] congrats! [20:34:34] or happy birthday [20:34:41] or whatever appellation is appropriate :) [20:38:19] :) thank you [20:40:00] dschoon, cool thing of the day for me: d3.mouse(container)[0] is much cooler than d3.event.x + weirdFudgeFactor(container) [20:42:32] nite everyone, see you 'morrow [20:47:42] later [20:48:02] ottomata: is this a normal df row on stat1? [20:48:10] /dev/mapper/stat1-root 14597932 13865864 0 100% / [20:48:23] i.e. 100% Use% [20:49:33] nopers [20:49:41] someone filled up their homedir! [20:49:44] lemme check [20:50:05] drdee [20:50:05] 4.7G ./diederik [20:50:11] 2.5G ./erosen [20:50:15] me? [20:50:21] 1.3G ./olivneh [20:50:32] i can make some space [20:50:52] you know what, maybe I will just make a large /home and mount it [20:51:03] eek, lotsa people logged into stat1 right now! [20:51:49] ottomata, are you referring to my home folder? [20:51:59] yes [20:52:01] milimetric: yeah, sweet [20:53:34] fixed [20:54:10] same [20:55:37] hmm, ok I'm ready to mount this new /home, but there are too many people logged in right now [20:55:40] I will do it tomorrow morning [21:04:40] ...why are you mounting a huge /home, ottomata? [21:05:14] so that people don't fill up / [21:05:19] this is on stat1 [21:05:31] i'm just trying to understand. [21:05:40] is /home not a nfs mount there? [21:05:47] no, it is only nfs in labs [21:05:51] ohh [21:05:53] wait. [21:06:01] how does you having a huge /home help things?
[21:06:21] doesn't have to be huge, but having /home as a separate mount keeps people from filling up the root partition [21:06:28] ohh [21:06:28] i see. [21:06:32] you mean, all of /home [21:06:37] not /home/ottomata :) [21:06:38] / is only 14G on stat1 [21:06:38] yes [21:06:57] we have the space, so I am making a 1TB LVM /home [21:07:39] cool [23:09:48] what lightweight hash implementation can we use in C ? [23:09:54] I mean.. in the udp-filters [23:09:55] for what? [23:10:03] murmur hash possibly [23:10:13] for example like just having stuff like [23:10:16] switch(string) { [23:10:20] case "str1": .. [23:10:23] case "str2" :... [23:10:26] } [23:10:36] this can usually be done in C++ quite easily with map [23:10:43] but in C... [23:10:54] murmur hash then [23:10:59] wait [23:11:02] drdee: would you consider that overcomplication ? [23:11:03] why not switch(integer) [23:11:05] yes [23:11:26] but if we have 11 cases in the switch ? [23:11:38] benefit versus complexity ? [23:11:48] well you do what we already do with GEO_FILTER etc [23:11:53] those are just integers [23:12:06] it's too much overhead for the use case [23:13:13] alright [23:41:16] average_drifter: ready for a review? [23:52:04] personally, i like questions like, "but if we have 11 cases in the switch??"
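For the switch(string) question that closes the log, drdee's suggestion — map each keyword to a small integer constant the way the existing GEO_FILTER-style constants work, then switch on the integer — needs no hash library at all for ~11 cases; a linear strcmp scan is cheap at that size. A sketch (the names are illustrative, not actual udp-filters code):

```c
#include <stddef.h>
#include <string.h>

enum filter_kind { F_UNKNOWN = 0, F_GEO, F_DOMAIN, F_PATH };

/* Map a keyword to a small integer once; a linear strcmp scan is
 * perfectly adequate for roughly a dozen cases. */
static enum filter_kind kind_of(const char *s) {
    static const struct { const char *name; enum filter_kind kind; } table[] = {
        { "geo",    F_GEO },
        { "domain", F_DOMAIN },
        { "path",   F_PATH },
    };
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(s, table[i].name) == 0)
            return table[i].kind;
    return F_UNKNOWN;
}

/* An ordinary integer switch now replaces the wished-for switch(string). */
static const char *describe(const char *s) {
    switch (kind_of(s)) {
        case F_GEO:    return "geo filter";
        case F_DOMAIN: return "domain filter";
        case F_PATH:   return "path filter";
        default:       return "unknown";
    }
}
```

Something like murmur (or gperf for a perfect hash) only starts paying off with many more cases or very hot lookups, which matches drdee's "too much overhead for the use case" verdict.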