[03:38:28] yay I kinda got the zooming effect I wanted
[12:38:55] hello drdee
[12:39:19] yo
[12:39:23] drdee: do you guys need to parse UAs in Kafka as well ?
[12:39:24] drdee: https://github.com/tobie/ua-parser
[12:39:31] ready for merge?
[12:39:40] yes we do need to parse ua's
[12:39:47] i am thinking about the best approach
[12:39:55] drdee: please have a look at the lib above
[12:40:01] it has a java implementation
[12:40:05] it has a huge collection of regexes
[12:40:16] and creates a big array of parser objects, each with its own regex
[12:40:37] then it passes a UA from the log through all those parser objects and whichever matches, you get the OS version, device etc etc
[12:41:01] the regex collection is here https://raw.github.com/tobie/ua-parser/master/regexes.yaml
[12:41:09] it is up-to-date with current devices on the market
[12:42:18] it basically overlaps with part of Erik's code for parsing UAs
[12:44:18] I'm still doing the integration of the iPad code, trying to replace pieces of the existing code with the UA::iPad
[12:46:18] yes i am familiar with the work of steve saunders and the browserscope project
[12:58:58] average_drifter, i really want to merge the wikistats changes today
[12:59:01] are we on track?
[12:59:43] drdee: yes
[13:05:58] morning milimetric
[13:06:05] have you seen http://semver.org/ ?
[13:06:13] morning :)
[13:06:21] would like to adopt that for limn and actually for all our projects
[13:06:32] nope, I'll check it out
[13:06:59] it's basically common sense
[13:06:59] but very important
[13:20:11] drdee: can I run profiling on wikistats at some point, or is it outside of scope? when talking to Erik he mentioned that it takes 1 day to process 1 day of data, but maybe that is the purpose of Kafka, considering it uses Hadoop and distributes work with jobs and workers etc
[13:20:20] drdee: maybe it's not the case to profile
[13:20:39] there is an easy fix to improve performance
[13:21:17] wikistats does geocoding, and for every ip it shells out, starts a mini C program to do the geocoding and returns to wikistats
[13:21:22] this needs to be deprecated
[13:21:26] what we need to do is:
[13:21:48] 1) make geocoding work in udp-filter as a separate field (you wrote the code, we need to test that)
[13:22:07] yes
[13:22:10] 2) pipe all existing sampled log files through the new udp-filter and append the geocoding field
[13:22:31] 3) remove the shell stuff from wikistats
[13:22:40] 4) have wikistats recognize the final field as the geo field
[13:22:49] this will be a massive performance improvement
[13:23:01] you should totally work on this
[13:23:07] but first let's merge the older stuff
[13:29:10] milimetric what do you think about semver?
[13:29:34] sorry, I'm obsessed with edit ui again
[13:29:46] i could use a break though, i'll read now
[13:41:00] geez, my internet just went down for like 5 minutes
[13:41:09] semver seems like ... common sense
[13:41:12] drdee
[13:41:24] word
[13:41:30] you would think it's common sense
[13:41:30] i thought that's how everyone did versioning
[13:41:39] not us
[13:41:46] well, limn doesn't have a public API yet :)
[13:41:46] have you looked at mediawiki version numbering?
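The matching scheme described at 12:40 is simple enough to sketch in a few lines. This is only an illustration, not the Java implementation discussed above; it assumes the regexes.yaml layout used by tobie/ua-parser (a user_agent_parsers list whose entries carry a regex and an optional family_replacement), which should be double-checked against the current file:

    import re
    import yaml  # PyYAML

    # regex collection from https://raw.github.com/tobie/ua-parser/master/regexes.yaml
    with open("regexes.yaml") as f:
        rules = yaml.safe_load(f)["user_agent_parsers"]

    # one "parser object" per rule, each with its own compiled regex
    parsers = [(re.compile(rule["regex"]), rule) for rule in rules]

    def parse_family(ua):
        # pass the UA through every parser; the first match wins
        for regex, rule in parsers:
            m = regex.search(ua)
            if not m:
                continue
            family = rule.get("family_replacement", "$1" if m.groups() else m.group(0))
            if m.groups() and m.group(1):
                family = family.replace("$1", m.group(1))
            return family
        return "Other"

    print(parse_family("Mozilla/5.0 (iPad; CPU OS 5_1 like Mac OS X) AppleWebKit/534.46 "
                       "(KHTML, like Gecko) Version/5.1 Mobile/9B176 Safari/7534.48.3"))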
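And a sketch of step 2 of the geocoding plan at 13:21–13:22: run each sampled log line through a single geocoding pass and append the country as a final field, instead of shelling out to a C program per IP. The column index and the lookup_country() stub are placeholders — the real lookup happens inside udp-filter with the MaxMind GeoIP library, whose exact call isn't shown in this log:

    import sys

    IP_COLUMN = 4  # placeholder: wherever the client IP sits in the sampled squid line

    def lookup_country(ip):
        # stand-in for the real GeoIP lookup done inside udp-filter
        return "XX"

    for line in sys.stdin:
        line = line.rstrip("\n")
        fields = line.split(" ")
        country = lookup_country(fields[IP_COLUMN]) if len(fields) > IP_COLUMN else "--"
        # the geocoded country becomes the final field, which is what
        # steps 3 and 4 above let wikistats read directly
        print(line + " " + country)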
[13:42:08] every release is a minor bump in version number
[13:42:28] major new features? nah, minor version number
[13:42:38] :)
[13:42:53] yes it is common sense
[13:42:53] my friends in the .net world think there should be a major release for everything
[13:42:59] :D
[13:43:08] yeah they err on the other side
[13:43:15] but wait, is mediawiki breaking api compatibility with minor releases?
[13:43:39] i am sure stuff has been deprecated
[13:43:52] well that's still a minor release according to semver
[13:44:08] major == break in API
[13:44:29] the only objection I have with that is starting at 0
[13:44:41] I think 0.blah releases are for before you have a public API
[13:46:25] you have to dig through the CHANGES.txt in mediawiki
[13:46:31] i am sure something changed
[13:47:34] is there any html specialized diff tool out there ?
[13:48:18] milimetric: http://lists.wikimedia.org/pipermail/mediawiki-api-announce/2009-February/000005.html
[13:48:31] drdee, are you working with hadoop today
[13:48:40] would like to, but you have preference of wawy
[13:48:46] yep, that would break it :)
[13:48:47] wawy = way
[13:49:02] i want to poke at vpn for a second again
[13:49:02] wawy is a nice word IMHO
[13:49:10] i think I know why it was being dumb
[13:49:10] go ahead
[13:49:17] average_drifter not that I know of, what are you fighting with?
[13:49:17] mmk, it might break hadoop networking for a sec
[13:49:18] but not long
[13:49:25] go for it
[13:49:38] milimetric: tryin to compare reports
[13:49:41] i also have some hadoop permission issues for later today
[13:49:46] html reports with a lot of data in them
[13:49:55] average_drifter, probably not worth spending your time on
[13:50:07] don't debug the html output of wikistats
[13:50:30] drdee: yes but it directly relates to the work I'm doing
[13:50:33] what is the problem?
[13:50:44] so I wrote some code to detect multiple versions of ipad
[13:50:54] now I'm trying to figure out if it picks up multiple versions
[13:50:59] I'm looking at the reports
[13:51:08] because currently that's the only visible concrete output of the scripts
[13:51:14] milimetric another one: http://lists.wikimedia.org/pipermail/mediawiki-api-announce/2009-February/000007.html
[13:51:33] can't you check the csv output?
[13:51:44] I'll try
[13:52:52] some good suggestions about html diffing along with a guy who wrote a decent-looking one in C#: http://stackoverflow.com/questions/31722/anyone-have-a-diff-algorithm-for-rendered-html
[13:53:15] you can fire up Mono and run that if you can't figure it out some other way
[13:53:31] but I agree with Diederik, html diffing could be a time sink
[13:53:47] someone suggested using lynx to get a text-only render and then diffing that
[13:54:37] milimetric, still not exactly sure what the problem is
[13:54:44] i meant average_drifter
[13:54:46] heh, drdee, yeah, maybe we can start a movement to convince them to use this versioning scheme. That seems like it would appease a lot of the third-party users because it would make upgrades less mysterious
[13:55:00] talk to robla :)
[13:55:16] i'm onboard and I'll use it for Limn as well as spread the word.
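For reference, the rule being argued about at 13:43–13:44 fits in a few lines; this bump() helper is only an illustration of semver's contract (API break → major, backwards-compatible feature or deprecation → minor, bug fix → patch), not code from any of the projects mentioned here:

    def bump(version, change):
        """Next version under semver.org rules.

        change: "breaking" -> incompatible API change, bump MAJOR
                "feature"  -> backwards-compatible addition (or deprecation), bump MINOR
                "fix"      -> backwards-compatible bug fix, bump PATCH
        """
        major, minor, patch = (int(part) for part in version.split("."))
        if change == "breaking":
            return f"{major + 1}.0.0"
        if change == "feature":
            return f"{major}.{minor + 1}.0"
        return f"{major}.{minor}.{patch + 1}"

    # deprecating part of an API is still a minor release...
    assert bump("1.20.0", "feature") == "1.21.0"
    # ...but removing or changing an existing response format is a major one
    assert bump("1.20.0", "breaking") == "2.0.0"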
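milimetric's lynx idea at 13:53 — render both HTML reports to plain text and diff the renders — is cheap to try before reaching for a dedicated HTML diff tool. A sketch, assuming lynx is installed (its -dump and -nolist options are standard) and the two report files are given on the command line:

    import subprocess
    import sys
    from difflib import unified_diff

    def text_render(path):
        # lynx -dump writes the rendered page as plain text; -nolist drops the link index
        result = subprocess.run(["lynx", "-dump", "-nolist", path],
                                capture_output=True, text=True, check=True)
        return result.stdout.splitlines()

    old_report, new_report = sys.argv[1], sys.argv[2]
    for line in unified_diff(text_render(old_report), text_render(new_report),
                             fromfile=old_report, tofile=new_report, lineterm=""):
        print(line)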
[13:55:17] thx
[13:55:19] drdee: so I basically expect more ipad versions to appear in the reports, since now I have code that recognizes multiple versions of ipads
[13:55:27] check
[13:55:29] drdee: and that's what I was trying to achieve through the html diff tool
[13:55:36] not check
[13:55:42] drdee: to see that percentages change and that more versions are present
[13:55:58] the intermediate output are csv files, right?
[13:56:04] drdee: yes
[13:56:14] why not check those?
[13:56:21] drdee: yeah, I'm giving those a try right now
[13:56:26] ko
[13:56:30] ok
[13:56:34] I'm setting up two runs with "old_code" and "new_code" and diffing those up
[13:56:55] k
[14:01:44] drdee: the method you suggested has proven useful
[14:01:46] drdee: http://i.imgur.com/WRlFa.png
[14:01:55] left is "before", right is "after"
[14:02:40] drdee: thanks
[14:02:54] excellent!
[14:17:25] drdee: so now there are some clear improvements in SquidReportOperatingSystems.htm
[14:17:33] drdee: multiple versions of ipads are showing up
[14:46:45] milimetric: can you update the Limn section at http://www.mediawiki.org/wiki/Wikimedia_engineering_report/2012/October#Wikimedia_analytics
[14:47:43] drdee: milimetric: what's the question for me?
[14:48:37] robla, not sure if i understand your question
[14:49:55] (06:54:45 AM) milimetric: heh, drdee, yeah, maybe we can start a movement to convince them to use this versioning scheme. That seems like it would appease a lot of the third-party users because it would make upgrades less mysterious
[14:49:56] (06:54:59 AM) drdee: talk to robla :)
[14:50:21] ohhhhhhh
[14:50:26] mediawiki version numbering schemes
[14:50:36] heh!
[14:50:44] and that we should adopt semver
[14:53:23] maybe when we hire the Release Manager, we might have the bandwidth to futz with that, although, somewhat ironically, the release manager is going to be focused on deployments, not releases
[14:55:21] :)
[14:55:28] ottomata, can you add the page view logging stuff at http://www.mediawiki.org/wiki/Wikimedia_engineering_report/2012/October#Wikimedia_analytics
[14:55:37] i already took care of the rest
[14:55:50] but feel free to add if i missed something
[14:57:12] any news?
[14:57:42] all the kafka, pixel service thinking / working
[14:58:05] that is part of kraken, i think
[14:58:20] page view logging is what? udp2log + udp-filter stuff, i think
[14:58:21] right?
[14:58:32] btw, i tried running the hive query in hue as myself but that caused the same error
[14:58:32] and also log format changes
[14:59:02] IMHO, page view logging should become kafka / pixel service
[14:59:17] udp2log + udp-filter stuff ==> analytics infrastructure
[14:59:31] (i mean it's old infrastructure) but feel free to move stuff around
[15:01:20] pixel is not page view though
[15:01:30] and we aren't using kafka for page view yet
[15:09:43] but it's a status update right? we are making progress and we should report on the progress we are making towards that goal
[15:17:05] drdee: I tried updating the Limn status a couple of times and the popup dialog just sits there spinning.
[15:17:14] I'll try again in a bit. I had "The migration to d3 rendering was completed. The design for the new Edit UI is complete. We are porting over the monthly metrics meeting dashboard to the new Limn and are aiming to show it off in December."
[15:17:20] it takes a while
[15:43:31] it's still going. 30 minutes later. How "a while" is a while? :)
[15:43:46] :)(
[15:43:46] what browser ?
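The old_code/new_code comparison set up at 13:56 can also be done directly on the intermediate CSVs instead of eyeballing a rendered diff: treat each run's version labels as a set and print what only the new run produces. The column index is a guess at where the wikistats CSV keeps the OS/version label, so adjust it to the actual files:

    import csv
    import sys

    def labels(path, column=0):
        # column 0 is an assumption about where the OS/version label lives
        with open(path, newline="") as f:
            return {row[column] for row in csv.reader(f) if row}

    old_csv, new_csv = sys.argv[1], sys.argv[2]
    for label in sorted(labels(new_csv) - labels(old_csv)):
        if "iPad" in label or "iOS" in label:
            # versions the new detection code picks up that the old run never reported
            print(label)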
[15:43:49] chrome
[15:44:02] i also used chrome to update the status
[15:44:04] weird
[15:55:15] good morning
[15:55:26] hey good morning louisdang!
[15:55:47] would you have a second to help debug the pig script?
[15:56:00] ok
[15:56:10] I wrote some more stuff on the etherpad yesterday
[15:56:25] i pasted my own new version as well
[15:57:47] i tried your version but it doesn't run
[15:58:10] ok
[15:58:10] and it doesn't even give an error message
[15:58:13] hm
[15:58:25] going to etherpad doc, ook
[16:11:55] hey drdee I asked a question on the etherpad
[16:21:14] test
[16:29:38] ok
[16:58:22] https://plus.google.com/hangouts/_/2e8127ccf7baae1df74153f25553c443bd351e90
[17:01:50] brb
[17:20:37] drdee, fixed the error in 2nd version
[17:21:06] feedback3.pig I mean
[17:21:50] i'll be back in an hour. my head still hurts and i want to make sure i avoid the wikiplague
[17:21:57] so i need to sleep a little more.
[17:22:11] (and, robla, i am working from home today in wikiplague avoidance)
[17:33:18] ottomata, wanna help fix some hadoop permission issues?
[17:34:26] jaja, will do in a bit, need to get lunch and run an errand in just a sec
[17:34:28] what's the prob?
[17:34:59] ok, same thing as this morning
[17:35:05] hive?
[17:35:11] but i also get the problem running it as myself
[17:35:11] hue / hive
[17:35:11] hm
[17:35:26] do you know what it is trying to do?
[17:35:28] create a file/directory?
[17:35:42] it says it cannot find a file
[17:35:50] and indeed that file does not exist :)
[17:36:02] it probably can't create that file
[17:37:15] try now
[17:40:15] running
[17:41:02] ottomata, nope still error:
[17:41:03] java.io.FileNotFoundException: /tmp/hue/hive_2012-10-26_10-40-01_056_6295486335620919908/-mr-10001/e3da9083-1b6d-455d-931b-fe0245949726 (No such file or directory)
[17:42:27] ok, one more try. ok, try it one more time
[17:50:50] drdee, be back in a little while, gotta run a couple of errands
[17:50:57] try again
[18:05:44] I think I just figured out what it's like to have a crush on a piece of technology. I literally have an elevated pulse and dilated pupils after reading their principles and design philosophy: http://docs.meteor.com/
[18:43:52] i am gonna miss demo friday :( i have to get some stuff before 5pm
[18:53:05] drdee, any luck?
[18:54:14] nope, same error
[18:55:22] hmk
[18:55:49] how do I reproduce?
[18:55:52] what are you doing?
[18:56:06] man, i feel so much better now
[18:56:33] heh. milimetric, you're going to like meteor a lot less if you read their code
[18:56:36] it's like some horrible frankenstein.
[18:56:58] cobbled together from random parts. it doesn't even run on node
[18:57:17] drdee: https://gerrit.wikimedia.org/r/30197
[18:57:17] hm, that's not true
[18:57:23] I completely separate what something does from how it does it :)
[18:57:32] whole point of a framework
[18:57:41] i can't trust they're smart if their code sucks
[18:57:50] which makes it hard for me to buy into an idea
[18:58:06] :) i can trust they're smart if their code works.
[18:58:37] but I don't need that. I already know they're smart because of the goals they set and the way they let you structure projects.
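The FileNotFoundException at 17:41 is about a path under /tmp/hue that Hive apparently never managed to create. One plausible first check — only a guess from the shape of the path — is whether that local scratch directory (hive.exec.local.scratchdir in Hive's configuration) exists and is writable by whatever user Hue submits the query as:

    import os
    import pwd
    import stat

    SCRATCH = "/tmp/hue"  # parent of the path in the stack trace above

    if not os.path.isdir(SCRATCH):
        print(SCRATCH, "does not exist")
    else:
        info = os.stat(SCRATCH)
        print("owner:", pwd.getpwuid(info.st_uid).pw_name,
              "mode:", oct(stat.S_IMODE(info.st_mode)))
        # the submitting user needs write access here so the
        # hive_<timestamp>_.../-mr-10001/... plan files can be created
        print("writable by current user:", os.access(SCRATCH, os.W_OK))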
[18:58:41] average_drifter, can i give feedback in 2 hours from now, i have to run some errant
[18:58:45] s
[18:58:47] i'll read more then
[18:58:54] also, I don't think anyone would have a good impression of Limn on first look, but there's some good stuff in there
[18:59:08] dschoon i am skipping demo friday because of errants, my apologies ;)
[18:59:15] at least it follows project best practices
[18:59:16] their seven principles of Meteor I really like
[18:59:17] http://docs.meteor.com/
[18:59:20] it's like, we're aware of the rest of the world
[18:59:23] ok, drdee
[18:59:33] and the way they handle the client and server folders I really like
[18:59:46] the fact that they do SRP authentication which is pretty unique
[18:59:48] i'll talk about the schemas, i guess
[19:00:01] i don't know anything about those parts.
[19:00:06] i guess i'll do a bit more reading
[19:00:34] well, i don't mean to influence. I think it's crucial that we actually write our little test apps. I'm writing one in meteor for sure and maybe one in sammy
[19:00:45] absolutely
[19:00:45] both with knockout I think
[19:01:00] oh, the one other "let's bias dave" thing I'll say
[19:01:20] i really like how meteor plays nice with any other framework
[19:01:45] you can add coffee, backbone, mongo, etc. even easier than with npm
[19:01:49] cool
[19:02:02] but i do think it's important to integrate into the npm ecosystem
[19:02:03] and you can package your own library into a "smart meteor" thing
[19:02:04] drdee: ok
[19:02:15] yeah, like, get this
[19:02:15] it's the engine that drives adoption
[19:02:23] you can bundle your meteor app and then installing it is just
[19:02:44] 1. unzip 2. npm install fibers
[19:03:22] yeah, i don't have all the parts straight yet but I like the jive talkin they do on their pages. We'll see if it stands up to a nice torture test
[19:03:30] jive is good!
[19:08:29] awesome: http://epetitions.direct.gov.uk/petitions/31659
[19:16:58] brb lunch
[19:27:32] dschoon: I signed with this as my address
[19:27:32] 221B Baker Street, London, Greater London NW1 6XE, United Kingdom
[19:33:32] well clearly we're roommates.
[19:34:16] between turing and oscar wilde, the crown has a fuck of a lot to atone for
[19:37:56] I think what happened to Turing is probably one of the greatest setbacks to philosophy and science ever. Imagine if he'd lived 40 more years to see the internet. We'd probably be on Mars by now. I was livid when I saw the simple minded reason the UK chose to give for not pardoning him posthumously.
[19:38:09] Yes.
[19:38:20] At least they apologized for it.
[19:38:20] But that's a very weak step.
[19:39:56] (They basically did the same thing to Oscar Wilde -- he was imprisoned for sodomy and some sort of immorality charge, where he got 1) so depressed he never wrote again; 2) so depressed he became a christian; 3) a respiratory disease that killed him two years later.)
[19:40:45] tl;dr: the people who change the world are never normal, and then we kill them for it and keep their stuff.
[19:49:21] i'm confused about how you guys use tldr
[19:49:23] :)
[19:49:42] but yea, Wilde is one of the only people whose physical books I still own :)
[19:49:42] "in summary"
[19:50:03] You get to feel SO SMART when acting in an Oscar Wilde play.
[19:50:08] you did?
[19:50:19] Because you're the wittiest fucker in the universe for about an hour.
[19:50:20] milimetric: recognize any of those places ? http://www.youtube.com/watch?v=WTLsAnxUwzQ
[19:50:31] lol. I should try that. Maybe it'd make me feel smarter
[19:50:46] i usually just rap with my friends from detroit to feel like that :)
[19:51:01] haha
[19:51:01] and yeah
[19:51:15] i was cast as Algernon in the Importance of Being Earnest
[19:51:22] oh awesome
[19:51:32] I have this weird feeling that a lot of people don't get that play
[19:51:47] hehe
[19:52:00] It's such a great play.
[19:52:05] average_drifter yeah, that's downtown :) I live a bit northwest
[19:52:15] :)
[19:52:22] but I never saw cat guy
[19:54:29] there's a hackathon tomorrow in Bucharest
[19:54:44] dschoon: favorite fiction book of all time?
[19:54:45] it's 24h
[19:55:05] The Alexandria Quartet, by Lawrence Durrell
[19:55:34] it's technically four books, but combined they're shorter than anything Dostoevsky ever wrote :)
[19:55:44] :P
[19:55:47] wait how'd you know ...
[19:55:59] did I tell you mine was Brothers Karamazov?
[19:56:01] because everyone picks Brothers Karamazov
[19:56:05] you did not.
[19:56:06] bull
[19:56:14] but i can always tell the type.
[19:56:22] i actually think you can sort the world pretty evenly on this question.
[19:56:28] lol, you're funny. Stephanie does that to me
[19:56:33] I'm like guess how many...
[19:56:36] and she's like 43
[19:56:44] :D
[19:56:46] it's never 42, of course.
[19:56:52] but yeah.
[19:56:58] haha, i tried to avoid that
[19:57:04] and the best i could think of was to add 1
[19:57:17] when the first book was published, T.S. Eliot reviewed it
[19:57:20] 3 min. to demo time
[19:57:28] "Durrell is quite possibly the savior of english literature."
[19:57:43] wtf... something positive from Eliot
[19:57:49] maybe a dozen of my favourite quotes are from the books.
[19:57:49] now I gotta read this
[19:59:12] from the first page:
[19:59:12] I have been looking through my papers again tonight. Some have been converted to kitchen uses, some the child has destroyed. This form of censorship pleases me for it has the indifference of the natural world to the constructions of art.
[19:59:48] I'll warn you. It's the sort of book that half the people who read it hate it.
[19:59:56] it's like an impressionistic painting
[20:00:32] but he's a master of language. the books are beautiful. it's where i got my obsession with words.
[20:00:42] Under the palms, in a deep alcove, sit a couple of old men playing chess. Justine has stopped to watch them. She knows nothing of the game, but the aura of stillness and concentration which brims the alcove fascinates her. She stands there between the deaf players and the world of music for a long time, as if uncertain into which to plunge. Finally, Nessim comes softly to take her arm and they stand together for a while, she watching the players, he watching her. At last she goes softly, reluctantly, circumspectly into the lighted world with a little sigh.
[20:00:48] demo time? https://plus.google.com/hangouts/_/4d17c1bee0c30e050921f0c1d83773c393267237
[20:01:26] dschoon drdee ottomata average_drifter erosen
[20:14:33] so .. speaking of chess..
[20:14:41] dschoon: are you a player ?
[20:14:51] not well. :)
[20:15:34] i did write a Go app, though http://tweiqi.com/
[20:15:42] i wanted to make it so people could play over twitter
[20:15:50] but i never finished :)
[20:41:39] I wanna ask something about wurfl
[20:41:54] so as I understand it's currently the most complete UA parser ?
[20:42:06] sort of seems like it
[20:42:15] but i'm not really an authority
[20:42:34] if it has some less permissive license.. erm.. can someone come and get their regexes and just write something in Perl that does the same thing that they do ?
[20:42:35] I mean getting their regexes is considered "stealing" ?
[20:43:19] not clear.
[20:43:33] but earlier versions had a better license
[20:43:39] which means we can fork it
[20:43:51] which is what diederik is talking to ppl about
[20:43:56] so i'd chat him up
[20:44:18] i did just get a business card from the guy at mozilla who is in charge of some variety of "partnerships"
[20:46:03] "Organizations that intend to use this file or the data contained in it commercially or under different licensing terms should contact ScientiaMobile (http://www.scientiamobile.com) to learn about licensing options for WURFL
[20:46:06] "
[20:57:27] dschoon, robla mentioned opa as another framework we could glance at: http://opalang.org/ On first look it seems like a less opinionated meteor
[20:57:53] then again, it's yet another language to learn, though it is pretty similar to js
[20:58:35] big caveat: that was my 30 second investigation (alternativeto.net search)
[20:58:44] it has a purty website, though
[20:59:00] and headshots of many people that have some involvement in it!
[20:59:22] look, there's a dog in the background
[21:04:09] huh
[21:04:24] i'll def check it out
[21:38:22] average_drifter: no we are not going the wurfl way
[21:38:32] drdee: hey
[21:38:34] drdee: hm, ok
[21:39:27] i am in touch with the openddr / apache device map project
[21:39:38] drdee: how about https://github.com/shon/httpagentparser ? or https://github.com/tobie/ua-parser ?
[21:39:38] that promises to develop a truly open device db
[21:39:47] drdee: oh cool
[21:39:48] tobie/ua-parser is the way to go
[21:39:54] for ua parsing
[21:40:09] that's from leading folks from google
[21:40:17] and has libs in many languages
[21:40:17] drdee: as described in the screencast, I started adding support for Perl in tobie/ua-parser
[21:40:33] i like it!
[21:40:40] I'll release it on CPAN this weekend
[21:41:03] <3 hackers are so great
[21:41:09] "fun? that's releasing code on CPAN."
[21:41:48] opa looks pretty awful, btw.
[21:42:04] drdee: do you think we could use that in wikistats ? dschoon I think, or someone else (not sure about the IRC nickname), said in the meeting we had on G+ that they tried tobie/ua-parser and it has a limited set of regexes and doesn't parse some UAs
[21:42:04] 1. it's a language, not a library. so you have to write their pointlessly different dialect of JS
[21:42:27] average_drifter: i think that was erosen
[21:42:35] erosen: hey man :)
[21:42:56] 2. opa doesn't actually add anything interesting that nowjs doesn't already do with pure js. (no need for a stupid language)
[21:43:07] yeah
[21:43:08] yo
[21:43:20] i did try it about two months ago
[21:43:28] 3. it doesn't solve the actually hard problem, which is pure client-side. interacting with a server is easy and solved.
[21:43:36] erosen: btw thanks for details on tobie/ua-parser. do you still have the tests you did two months ago ? maybe in a folder somewhere on your machine ?
[21:43:50] i was just parsing squid logs and trying to recreate the very charts which you probably originally made with hand-crafted regexes
[21:43:58] hmm
[21:43:58] i hate frameworks like this. it's reinventing the wheel, and it's a wheel that's totally from 5 years ago.
[21:44:07] let me check my git history
[21:44:57] oh, and it wants to use mongodb, one of the shittiest and overhyped databases in existence.
[21:45:06] none of the ua-parsing libs is 100% complete,
[21:45:14] ua-parser looks very extensible
[21:45:27] wurfl is a no-go because they stopped being opensource
[21:45:59] ya extensible was my sense as well
[21:46:24] brb
[21:46:52] and development is active and i have spoken with these guys so i think it's your safest bet for now
[21:47:09] i was thinking of collecting ua's from our logs that are not parsed by ua-parser and submitting them as test cases
[21:47:38] drdee: yeah, definitely. that would help them extend ua-parser
[21:47:54] drdee: I looked at the headers of their source files, one of them says Google, another Facebook, and another Twitter
[21:48:02] drdee: is it developed by people from those companies ?
[21:48:08] average_drifter: I don't think I have any usable code
[21:48:08] but it was pretty easy to set up
[21:48:40] erosen: I'll fork their lib and do a test run on github
[21:48:53] cool
[21:48:56] drdee: am I allowed to publish UAs from stat1 logs ? like in a .zip ?
[21:49:13] drdee: AFAICS they don't contain any sensitive data
[21:49:42] yes, folks from Facebook and Google work on it
[21:49:53] send me the file first
[21:51:10] I will
[21:58:45] i side with david on opa not being either complete or compelling enough to consider
[22:02:16] louisdang: just merged your github pull request
[22:02:17] thanks so much!
[22:02:47] you're welcome. Thanks for giving me the opportunity to learn more about big data
[22:03:11] no problem!
[22:03:24] so…..about hue: we are still having a permission issue
[22:03:33] ok
[22:03:35] i tried it with my own account and i have the same problem
[22:03:39] and i don't know the cause
[22:03:42] about pig
[22:03:50] the test run did work
[22:03:56] but the full run didn't
[22:04:03] also no clue :(
[22:04:12] did you try Penny?
[22:04:13] mk
[22:05:18] I can try fixing my first version which should have fewer tasks
[22:06:50] I actually seem to have lost my nova credentials so I'll ask the labs channel about that
[22:06:55] but penny seems like a lot of work for monitoring a pig job
[22:07:07] you can just ask for a reminder
[22:07:11] labsconsole.wikimedia.org
[22:08:19] a reminder?
[22:08:40] password reminder
[22:09:40] i can log into labs console, but it says: No Nova credentials found for your account. when I try to check my projects
[22:10:14] oh that sounds like a bug
[22:10:17] just log out
[22:10:24] and log in again
[22:10:31] ok, but I can't ssh into bastion either
[22:11:09] hm works now
[22:12:40] downloading the dumps. can I get the feedback file?
[22:15:26] drdee, can I get feedback_page_traffic.tsv?
[22:15:42] yes 1 sec
[22:16:12] can you read this:
[22:16:13] https://trello-attachments.s3.amazonaws.com/5069e3859c41bc8f712411a3/50884a14bba877566a007734/fe41dafd0e3439082632fb489a491b9e/fpt_pageids_posts.tsv
[22:16:27] thanks
[22:16:30] yes
[22:16:44] really?
[22:16:49] surprising
[22:17:04] why is that?
[22:20:29] well i assumed you needed to be logged into trello
[22:20:38] but all good, you've got the data
[22:21:15] yeah. I'll try another script sometime this weekend
[22:21:37] i am trying to find the log file :)
[22:21:46] but that's actually not easy
[22:37:19] dschoon: i added one new field to the avro schema, and i've added a question for the timestamp field
[22:37:31] okay, i'll check it out
[22:44:09] see you dudes, have a good weekend!
[22:46:10] later man!
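drdee's suggestion at 21:47 — collect the user agents in the sampled logs that ua-parser cannot classify and hand them upstream as test cases — could look roughly like this. It assumes the Python flavor of tobie/ua-parser (the user_agent_parser.Parse call and the "Other" fallback family come from that port; verify against the version actually installed) and a plain text file with one UA string per line already extracted from the stat1 logs:

    from collections import Counter

    from ua_parser import user_agent_parser  # python port shipped with tobie/ua-parser

    unparsed = Counter()
    with open("user_agents.txt") as f:  # one user-agent string per line
        for ua in (line.rstrip("\n") for line in f):
            parsed = user_agent_parser.Parse(ua)
            if parsed["user_agent"]["family"] == "Other":
                # no regex matched: a candidate test case to submit upstream
                unparsed[ua] += 1

    # most frequent unrecognized agents first
    for ua, count in unparsed.most_common(50):
        print(count, ua)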
[23:23:25] brb food
[23:23:25] k, I'll be here reading meteor :)
[23:36:20] back
[23:37:09] tremendously useful when working on design: https://github.com/ooyala/livecss