[00:19:05] average_drifter: try again [00:19:33] average_drifter: you have access to: https://github.com/wikimedia/fast-field-parser-xs now [00:25:33] preilly: https://github.com/wikimedia/fast-field-parser-xs/commits/master [00:25:44] preilly: just commited [00:25:44] works [00:25:44] !! [00:25:44] thanks :) [00:27:44] average_drifter: np [00:31:38] drdee: we have code now to count /w/index.php https://github.com/wikimedia/fast-field-parser-xs/commit/0c785c6a881a55bebcfeb9661b06c56fde3c4542#L0R403 [00:31:42] drdee: new run started [00:31:47] drdee: ETA ~120m [00:31:54] oh sorry [00:31:56] around ~30m [00:32:00] because I'm doing just 3months [01:00:38] drdee: http://stat1.wikimedia.org/spetrea/new_pageview_mobile_reports/r22-parallel-30x-20x-check-and-bots-and-non-api-search-disabled/pageviews.html [01:02:58] drdee: so in r22 we're 20M too high on "en" in november. in r21 we were 20M too low [01:03:58] maybe if I put the bot checks back it will decrease to exactly the amounts in wikistats [01:08:59] drdee: we also have logic to separete the GET parameters so we're one step further to implement the logic for deduplicating [02:22:17] ok [14:26:37] morning guys [14:26:42] morning ottomata [14:28:58] morning! [14:29:48] can't start grunt on an01 [14:30:12] nm i need coffee [14:30:31] pig [14:30:33] grunt [14:30:33] pig [14:30:34] grung [14:30:38] oh, I symlinked it to oink [14:30:40] for fun [14:30:41] :p [14:30:45] :D [14:31:13] can i still tweak the hadoop memory stuff? is that now in wikimedia/operations-puppet? [14:31:22] we are definitely OOM issues [14:32:40] yeah [14:32:43] its still the same setup for now [14:32:46] it'll be a while [14:32:49] but what i'll do real quick [14:32:55] is disable the apache proxy via puppet [14:32:57] there [14:39:43] k [14:58:44] heyaaa drdee, when you edit stuff [14:58:45] what do you use? [14:58:56] edit what stuff? [14:58:57] specifically, when you edit puppet manifests [14:59:15] nano [14:59:20] haha, even locally? [14:59:25] no textmate for you? [14:59:35] or sublime2 [14:59:41] hmm [14:59:45] depends on my mood :) [14:59:47] aye [14:59:59] i would use sublime for puppet, except textmate has a built in syntax checker [15:00:11] but, alternatively [15:00:16] if you install puppet [15:00:19] you can run [15:00:19] puppet --parseonly unixdaemon/manifests/init.pp [15:00:25] or whatever .pp file you want to check [15:00:29] and it will do a syntax check [15:00:35] might want to put that in a git pre commit hook [15:00:46] (fixing a syntax error in jmxtrans right now :p) [15:00:52] no worries though, I make them ALL the time [15:01:13] did you want to include this kraken::monitoring::hadoop::datanode on hadoop datanodes? [15:08:04] yes please [15:08:13] i am a total noob when it comes to puppet [15:08:18] cool [15:08:33] i have a question about the mobile data.... 
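(An aside on the "put that in a git pre commit hook" idea above: one possible shape for it, as a rough sketch only. It assumes puppet is on the PATH and still accepts the --parseonly flag used in the example; newer Puppet releases spell the same check "puppet parser validate". Saved as .git/hooks/pre-commit and made executable, it would block commits that contain manifests with syntax errors.)

    #!/usr/bin/env python
    # Sketch of the suggested pre-commit hook: syntax-check every staged
    # Puppet manifest with "puppet --parseonly" and abort the commit if any
    # of them fail. Assumes puppet is installed and on PATH.
    import subprocess
    import sys

    # Files staged for this commit (added/copied/modified).
    staged = subprocess.check_output(
        ['git', 'diff', '--cached', '--name-only', '--diff-filter=ACM']
    ).decode().splitlines()

    failed = []
    for path in staged:
        if path.endswith('.pp'):
            # Note: this checks the working-tree copy of the manifest, which
            # is usually close enough for a quick syntax gate.
            if subprocess.call(['puppet', '--parseonly', path]) != 0:
                failed.append(path)

    if failed:
        sys.stderr.write('puppet syntax errors in: %s\n' % ', '.join(failed))
        sys.exit(1)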
[15:08:36] so [15:08:37] https://github.com/wikimedia-incubator/operations-puppet/commit/eacea6a235fbcdbbe80845637714e9ac7bc436e4 [15:08:40] real quick [15:08:43] before your question [15:08:51] so, the monitoring.pp thing is just a syntax fix [15:08:56] you had a dangling { [15:08:59] missing its partner } [15:09:01] fixed that [15:09:03] but [15:09:06] so [15:09:14] since you want this class to be included on all datanodes [15:09:26] there is a class in modules/kraken/manifests/hadoop.pp [15:09:36] this file is responsible for including and configuring all of the cdh4 classes [15:09:42] with wmf/kraken specific stuff [15:09:49] aigh [15:09:52] class kraken::hadoop::worker [15:09:58] is the one that includes the cdh4 stuff [15:10:14] so that is a good place to include other worker node kraken specific classes [15:10:20] so I just included your monitroing datanode class there [15:10:29] since all of the datanodes include this [15:10:33] that will get it applied to all of them [15:10:36] (when puppet runs) [15:10:38] ok. [15:10:38] done! [15:10:40] mobile data whaazzuuup? [15:10:45] cool, ty! [15:10:46] i just can't seem to split the kafka id from the hostname in pig [15:10:52] do you need to? [15:10:55] yes [15:11:01] i am writing a packet loss pig script [15:11:02] you need the hostname in pig? [15:11:04] ah nice! [15:11:06] but i need the hostname [15:11:07] so that should be a tab [15:11:09] i know [15:11:12] ok, [15:11:13] but it doesn't split [15:11:25] might be helpful in a bit when we switch to tab as sep in all logs :) [15:11:30] LOG_FIELDS = FOREACH LOG_FIELDS GENERATE STRSPLIT(hostname,'\\\\t') as hostname; [15:11:32] totally [15:11:40] so many backslashes! [15:11:40] i also tried single and double backspaces [15:11:44] yeah [15:11:45] slahes i mean [15:12:22] i tried space, everything [15:12:27] it just won't split [15:12:34] i always get a tuple back of size 1 [15:12:43] try extract? [15:12:43] containing still the kafka id and the hostname [15:12:52] that didn't work at all for :( [15:13:20] let me try again [15:13:43] FLATTEN (EXTRACT(byteoffset_hostname, '\d+\s(.+)) as hostname:chararray [15:13:44] maybe? [15:14:05] i think double backslashes there though [15:14:15] FLATTEN (EXTRACT(byteoffset_hostname, '\\d+\\s(.+)) as hostname:chararray [15:15:44] Could not infer the matching function for org.apache.pig.piggybank.evaluation.string.RegexExtract as multiple or none of them fit. Please use an explicit cast. [15:15:59] DEFINE EXTRACT org.apache.pig.builtin.REGEX_EXTRACT_ALL(); [15:18:13] i have [15:18:14] DEFINE EXTRACT org.apache.pig.piggybank.evaluation.string.RegexExtract(); [15:18:38] extract works but it doesn't match [15:18:45] my extract or yours? [15:18:50] i think mine has a different syntax [15:19:05] i meant the function works [15:19:05] (i just grabbed something I had worked on a while ago) [15:19:23] use my define extract for my example [15:19:27] i'm sure you can use piggybank's [15:19:34] but not in the way I typed [15:22:15] changing locations, be back in a few [15:34:13] nback [15:34:13] nback [15:34:15] ah [15:34:15] back [15:44:41] welcome [16:01:06] milimetric [16:01:12] hi [16:01:16] are you logging into hue/hadoop as milimetric or dandreescu? [16:01:20] dandreescu [16:01:23] hm [16:01:28] analytics1001.wikimedia.org [16:01:35] but your labs console shell name is milimetric [16:01:35] hm [16:01:37] right? 
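(On the byte-offset/hostname split being debated above: the quickest way to take Pig's string escaping out of the equation is to check the regex on its own first. A rough Python check follows; the sample field value is invented, and inside a Pig string literal the backslashes would need to be doubled, i.e. '\\d+\\s(.+)'.)

    import re

    # Invented sample of the "<byteoffset><tab><hostname>" field from the discussion.
    field = '40123\tcp1044.wikimedia.org'

    # Same idea as the EXTRACT pattern above: skip the leading digits and the
    # whitespace, capture the rest as the hostname.
    match = re.match(r'\d+\s(.+)', field)
    if match:
        print(match.group(1))   # cp1044.wikimedia.org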
[16:01:41] yes [16:01:43] interesting [16:01:49] don't know why it's like that [16:01:49] i didn't know they could differ [16:02:02] i don't htink I can give dandreescu the proper permissions [16:02:04] use milimetric [16:02:05] so I go ssh dandreescu@analytics1001.wikimedia.org [16:02:16] ok, but milimetric can't ssh into an01 [16:02:17] yeah [16:02:18] ungh [16:02:19] that is a problem [16:02:20] ungh [16:02:20] so maybe add him [16:02:23] ungh [16:02:35] hmmmmmmMMmmm [16:03:14] I thikn we need to ask Ryan_Lane abourt this [16:03:19] so [16:03:20] wait [16:03:22] when you log into a labs machine [16:03:24] what is your shell name? [16:03:29] milimetric? [16:03:55] yes [16:03:59] milimetric [16:04:48] das weeeiiirrd [16:04:57] we should probably change your shell user to milimetric [16:04:59] everywhere [16:05:02] ok [16:05:12] i actually have a big ldap vs. shell todo to figure out [16:05:21] can you sudo -u milimetric anything? [16:05:27] as dandreescu on an01 [16:05:29] on an01? [16:05:30] (I doubt it somehow) [16:05:31] yeah [16:05:32] sec [16:06:01] yes [16:06:19] i can do sudo -u milimetric whateverIWantAsFarAsICanTell [16:07:56] doesn't give me any more privileges than dandreescu but it seems to let me edit dandreescu's files [16:09:50] ok [16:09:52] interesting [16:09:54] how about [16:10:01] sudo -u milimetric hadoop fs -ls /wmf/raw [16:10:45] yep, found 2 items [16:10:50] cool [16:10:58] so milimetric is allowed because he is in the analytics project in labs [16:11:04] (which means ldap) [16:11:09] dandreescu doesn't exist in ldap [16:12:02] cool, gotcha [16:12:34] so ssh into an01 via dandreescu, then sudo -u milimetric everything [16:13:12] ah, ok, sudo -u milimetric pig doesn't work because I don't have a /home/milimetric [16:13:20] yeah [16:13:22] problem [16:13:25] on my todo to figure out [16:13:30] cool, thanks man! [16:13:37] not urgent [16:14:09] maybe try setting one of the ENVIRONMENT vars listed in man pig [16:14:17] maybe not though [16:14:18] hm [16:14:30] what's the error? [16:14:32] something about history? [16:14:56] maybe you could do [16:16:02] hm naw [16:16:04] hm [16:16:05] oh [16:16:06] maybe [16:17:09] naw hm, i dunno [16:21:46] don't worry ottomata - I don't need pig for anything anytime soon [16:21:49] it's just a nice to have [16:21:54] so I can help debug scripts and stuff [16:22:11] don't worry, I'll bug you when I need it [16:22:17] fry dem bigger fish :) [16:26:40] milimetric: any progress on gp.wmflabs.org? [16:26:59] hey erosen [16:27:07] yes, sort of [16:27:08] i'm hoping to have something working at gp.wmflabs by tomorrow afternoon [16:27:33] but if new limn is going slowly i might switch the urls around a bit [16:27:35] i fixed the error that I was having on my side which was the exact same as yours [16:27:46] isee [16:28:03] so is it a problem with the formats now? [16:28:12] erosen, once I get my first module in for ops review and right an email [16:28:18] I will look into fixing X-CS in kraken [16:28:21] i've got your data locally and trying to see what the problem is [16:28:39] ottomata: actually i was just looking and it looks like there is a fair amount of x-cs stuff in there [16:28:51] so I was just writing a script to do a more thorough summary [16:29:25] oh! [16:29:26] really? [16:29:40] why you surprised? [16:30:45] well, because I thought it was broken and hadn't fixed it [16:31:05] where are you seeing the data erosen? [16:31:08] for X-CS [16:31:18] oh or you are looking in the zero files on stat1? 
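(The "more thorough summary" script mentioned above might look something like the sketch below. This is a guess at the shape only: the field separator and the position of the X-CS value are assumptions and have to match whatever the copied log file actually contains.)

    from collections import Counter
    import sys

    # Count how often each X-CS value appears in a log file copied out of HDFS.
    # FIELD_INDEX is a placeholder -- set it to wherever X-CS sits in the line.
    FIELD_INDEX = -1
    counts = Counter()

    with open(sys.argv[1]) as logfile:
        for line in logfile:
            fields = line.rstrip('\n').split(' ')
            if len(fields) <= 1:
                continue
            counts[fields[FIELD_INDEX]] += 1

    for value, count in counts.most_common():
        print('%s\t%s' % (value, count))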
[16:31:33] i copied a file from hdfs [16:31:45] hdfs://analytics1010.eqiad.wmnet/wmf/raw/webrequest/webrequest-wikipedia-mobile/2013-01-30_16.00.00 [16:33:48] oh in mobile [16:33:50] ja ok cool [16:33:55] yeah they should all be in there too [16:33:57] hmmmm [16:34:05] we don't need a separate zero filter anymore [16:34:18] maybe that is good enough for you? i was going to create a special filter for X-CS, but all those logs are coming from the only machines that would set X-CS [16:34:23] so all of the X-CS logs should be in those [16:34:56] i guess having just the lines with x-cs could make things faster [16:35:07] drdee is there an up to date log line parser udf? [16:36:07] no, but this is something that we should join forces on with average_drifter, we started a 'decision tree' at https://docs.google.com/a/wikimedia.org/drawings/d/1V4TTdTjcPwn3wfueQLfNxyfwg1tf-PBoL25nYSiOMGk/edit [16:36:28] he is getting super familiar with the ins-outs of mobile pageviews [16:38:42] nice [16:39:43] erosen, it would probably be more stable if we extracted the lines from the mobile logs manually [16:39:55] that's fine with me [16:39:59] maybe regulaly as an oozie job [16:40:01] that would be cool [16:40:06] yeah [16:40:31] it might be fine to just filter on x-cs != "-" in a udf every time [16:40:40] or it might be painfully slow so we'll do it once and save it [16:40:47] for now I'm updating my python scripts [17:00:18] ottomata: someone create an account for me on Wikitech called milimetric? [17:00:31] i requested that [17:00:45] yes [17:00:49] I did that [17:00:52] ct asked me to [17:00:53] but I already have an account on wikitech... [17:00:56] Bwah! [17:00:57] as? [17:01:03] dandreescu? [17:01:04] dandreescu - my email account' [17:01:06] ?? [17:01:09] ha, we need to get that sorted out [17:01:14] wikitech as in the email list? [17:01:15] you shoudl have one username [17:01:16] if you can [17:01:17] or is there something else [17:01:20] yeah, seriously [17:01:23] nono [17:01:36] wikitech is the ops wiki [17:01:39] are you referring to http://wikitech.wikimedia.org/view/Main_Page [17:01:41] http://wikitech.wikimedia.org/view/Main_Page [17:01:42] yeah [17:01:46] OOOh [17:01:52] i chose milimetric there [17:01:55] ok [17:01:56] because that's what you are on labs [17:02:06] and it will probably be easier to use that one than dandreescu if we had to choose one [17:02:15] this is the ticket https://rt.wikimedia.org/Ticket/Display.html?id=4452 [17:02:15] they told me to use milimetric on labs because my wikipedia username was milimetric [17:02:18] and they said it should match [17:02:22] yeah, probably a good idea to stick with that [17:02:28] i've got 'aotto' some places too [17:02:31] but I avoid using that if possible [17:02:39] and use Ottomata / otto everywhere [17:02:42] ok, cool [17:02:51] labs will actually have two names associated with you [17:03:00] your upper case uhhh, name you use to login into wikis and other things [17:03:04] and a lowercase shell name [17:03:06] they can be different [17:03:10] but would be good to keep the the same [17:34:17] holy god! [17:34:22] i just ran sed for the first time [17:34:29] woooooo! [17:34:38] I feel like I just went on a rollercoaster [17:42:30] ottomata: I don't recall where we left some analytics system issues yesterday. Specifically trying to get average_drifter setup on analytics1001, while the ldap integration on the system breaks puppet user modifications/adds [17:43:06] yeah [17:43:16] for analytics1001, i need to look into ldap vs. 
shell problems [17:43:28] i'm a bit bogged down in some dooky i got myself in [17:43:44] also we have a tab delimiter deploy coming up this week or monday [17:43:46] sooooooooo [17:43:54] No worries (from me), I am just doing the RT triage duty followup [17:43:56] i won't get to this until next week [17:43:56] yeah [17:44:03] and I'm on RT duty next week [17:44:06] I'll ensure the ticket stays open, and I'll update with this info [17:44:07] so i'll take extra time to work on things like this [17:44:09] ok thanks [17:44:14] average_drifter: ^ [17:44:16] there's the 000 default thing on stat1001 too [17:44:21] yep [17:44:21] i'll get to that next week [17:44:22] ok, I got it [17:44:24] :) [17:44:36] I mean, I'll wait 'till next week on this one [17:56:03] Morning. [18:00:42] morning [18:01:45] afternoon! [18:31:07] https://github.com/wikimedia/riemann-jmx [18:44:46] ^^ drdee preilly [18:44:59] ty [18:45:03] (forked to our wmf because i originally forked it) [18:58:50] erosen_: did you want to do apache stuff now? [18:59:15] not right now, as I'm still in weekly analyst meeting [18:59:20] ah, shit [18:59:21] forgot about that. [18:59:22] ok [18:59:25] and we should probably loop dan in [18:59:37] as he messed with the apache config [18:59:46] maybe in an hour? [19:00:21] that is, we should loop milimetric in ^^ [19:00:31] howdy [19:00:34] sure [19:00:49] hour's good erosen_ and dschoon - gotta go grab lunc [19:02:22] sounds good [19:02:36] k [19:14:37] preilly, can I have write access to these? [19:14:50] or this: [19:14:52] https://github.com/wikimedia/puppet-cdh4 [19:14:58] (not sure about others yet) [19:15:09] (I just tried to edit the readme on github, and it made me fork and submit a pull request) [19:15:23] ottomata: done [19:15:32] danke [19:15:37] ottomata: Can you try again to confirm [19:16:00] can you check these for me: [19:16:00] https://github.com/wikimedia/puppet-kafka [19:16:00] https://github.com/wikimedia/puppet-jmxtrans [19:16:00] https://github.com/wikimedia/puppet-storm [19:16:10] yeah one sec [19:16:24] yeah looks good [19:16:31] button says 'commit changes' rather than 'propose changes' now [19:17:00] ottomata: kafka, jmxtrans and storm should work now too [19:17:24] average_drifter: ping [19:17:44] check out https://www.mediawiki.org/w/index.php?title=Roadmap&curid=9685&action=history - I think some of your changes to the roadmap got overwritten [19:17:45] cool, thank you! [19:18:22] also you should probably update https://www.mediawiki.org/wiki/User:Spetrea so it's easier to find you [19:18:59] average_drifter, where were the new udp-filter .debs? [19:19:06] (sorry, I know you already told me once) [19:22:06] ottomata: http://stat1.wikimedia.org/spetrea/releases/ [19:22:11] ok thanks [19:32:25] ottomata: there are some varnish conf changes that need to happen for mobile beta site tracking [19:32:32] how much do you know about our varnish setup? [19:32:50] 'cause if you're an expert, it probably makes more sense for me to hand that off to you. [19:32:55] otherwise, i'm cool to do it [19:33:01] (and go through gerrit, etc) [19:33:22] drdee: ok, im on another request, your wikitech one [19:33:35] kool [19:33:41] drdee: you wanted Drdee, Milimetric, Ottomata, etc. [19:33:45] i think i just got access [19:33:46] mediawiki usernames and all. [19:33:53] ottomata might have done it [19:33:58] ottomata, can you confirm? [19:34:40] hrmm, yea [19:34:44] all these users are on wikitech [19:35:13] oh, maybe needs editor flags [19:35:52] i know nothing! 
[19:36:03] RobH [19:36:09] yes, i did it, but didn't have perms to make them editors [19:36:12] CT asked me to do this this morning [19:36:28] dschoon, re varnish: I KNOW NOTHING! [19:36:32] except where the configs are [19:36:55] ok, well, it looks liek the ticket was there [19:36:59] but half the shit on it was already done [19:37:08] but now they are all editors as well, 4452 done [19:37:46] ottomata: ok, i'll handle it then. [19:37:51] i'ma get some food. [19:37:55] ty RobH [19:37:57] didn't eat breakfast :) [19:37:59] welcome [19:38:00] danke! [19:46:57] back [20:07:03] changing locations, back in a bit [20:21:59] back [20:22:59] dschoon, milimetric: got a sec to set up gp.wmflabs.org correctly? [20:23:15] yeah erosen, gimme a few for lunch [20:23:17] yep, ready [20:24:02] want to give it a try milimetric ? [20:24:35] i'm not sure what david was alluding to earlier with the domain name setup [20:25:10] i was probably the one alluding to it [20:25:12] so let's see, we've got the path change from /data/datafiles/ to /data/datafiles/gp [20:25:22] and the negative values problem [20:25:23] it seems like it is probably just a vhost [20:25:36] and whatever's wrong with that last graph in Education [20:25:40] anything else? [20:25:59] vhost - i have 0 skills on that [20:26:03] k [20:26:07] negative values we'll have to do in limn [20:26:11] why don't we just wait for david [20:26:13] i think i have a reasonable fix for that [20:26:20] the vhost thing i'll walk y'all through [20:26:20] i realized I need to eat lunch as well [20:26:24] we'll get in a hangout after lunch :) [20:26:33] yeah [20:27:03] 1:15 ish? [20:28:06] sure, sounds great [20:30:54] erosen: that last graph on the education tab doesn't work on global-dev either: http://global-dev.wmflabs.org/graphs/ar_wp_active_editors [20:31:02] hmm [20:33:43] milimetric: it does seem to work, but takes literally a minute [20:33:54] oh! [20:33:54] ok [20:33:56] it is using a datasource that combines all lange [20:33:59] language [20:34:00] s [20:34:07] we're working on optimizations for large datasets [20:37:36] packetloss.pig is finished and it works, i can confirm that there is no packet loss on a 24 hour period for mobile page view data, i pm'ed the gist [20:38:35] * drdee_ wonders whether that is the sound of champagne bottles popping? [20:39:19] there are a lot of -1s [20:39:32] read the file [20:39:36] isn't that the cycling? [20:39:41] when the seq resets? [20:39:55] no [20:39:59] oh, *folder* [20:40:01] hm [20:40:10] look at the raw data [20:40:11] drdee, just add one [20:40:18] since the count is inclusive [20:40:23] max seq - min seq + 1 [20:40:28] yes i should have done that [20:40:29] SORRY [20:40:37] but also read the file [20:40:42] what about [20:40:43] cp1044.wikimedia.org: 2013-01-29 15 PACKETLOSS= 98 [20:40:46] -1 is off by 1 error, the high hundreds happend at the end/beginning of day and so that data is stored in a different folder. [20:41:12] the one I pasted is not at the beginning or end of the day [20:41:13] well that's totally negligible [20:41:43] we have more than 1M log lines per host per hour [20:41:47] hm, but don't say no packet loss then! [20:41:48] hehe [20:42:02] that is statistically not larger than0 [20:42:15] cp1043.wikimedia.org: 2013-01-29 13 PACKETLOSS= -1 [20:42:25] 13 means 1pm UCT, right? [20:42:32] 1pm is the *middle* of the day [20:43:39] so? 
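(To spell out the -1 confusion above with made-up numbers: the sequence range is inclusive, so an hour with zero loss contains exactly seq_max - seq_min + 1 lines.)

    # Made-up numbers: every sequence number from 5000 through 5999 arrived.
    seq_min, seq_max = 5000, 5999
    seq_count = 1000

    print(seq_max - seq_min - seq_count)        # -1, the puzzling value in the report
    print((seq_max - seq_min + 1) - seq_count)  # 0, the actual number of lost lines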
[20:44:36] a little bit more excitement guys, seriously [20:44:54] okay, help me understand [20:45:00] because i don't see why 1p should have a -1 [20:45:09] what precisely do you mean by "off by one" [20:45:25] i'm not excited yet because i can't explain this result to anyone else :) [20:45:32] read the chat [20:45:32] ottomata: max seq - min seq + 1 [20:45:39] -1 == 0 [20:45:42] (also, where's the script? maybe reading it would help) [20:46:21] "the high hundreds happend at the end/beginning of day and so that data is stored in a different folder. " [20:46:33] why should that be -1? it means the answer is unknown... [20:46:38] not 0. [20:46:44] i want to believe! [20:47:04] seriously you are just trolling now [20:47:13] no no [20:47:27] just explain it to me slowly. i'm stupid sometimes. [20:47:31] -1 does not mean answer is unknown [20:47:37] >>> for datum in data: [20:47:37] ... datum = datum.split(',') [20:47:38] ... seq_max= long(datum[2]) [20:47:40] ... seq_min= long(datum[3]) [20:47:42] ... seq_count = long(datum[4]) [20:47:43] ... host = datum[0] [20:47:44] ... ts = datum[1] [20:47:44] ... print '%s: %s PACKETLOSS= %s' % (host, ts, (seq_max-seq_min-seq_count)) [20:47:57] ohhhhhh [20:48:00] just take the raw data [20:48:05] and do it yourslf [20:48:13] i thought you were *actually* finding the intersection of the sets of sequence numbers [20:48:17] and calculating the cardinality [20:48:27] hence me going, "-1 is not a legitimate answer!" [20:50:21] okay, hm. [20:50:24] yes. [20:50:28] now officially EXCITED [20:50:31] this is fantastic. [20:50:37] packet loss is dead! [20:50:51] sorta! [20:50:52] mostly! [20:50:54] while things work [20:50:56] hopefully! [20:50:59] don't go trumpeting yet! [20:51:09] udp2log->kafka shell producer not reliable! [20:51:09] i am being excited as per drdee_'s demands [20:51:12] hahah [20:51:27] brb a moment! [20:51:28] KafkaHadoopConsumer is hacky schmacky [20:51:32] (excited!) [20:51:46] I am using 4 cisco nodes to consume this traffic [20:53:43] ottomata, can you give me push rights to wikimedia/kraken [20:54:32] hmm [20:54:46] i don't htink i can [20:54:54] i thought preilly made you admin [20:55:13] i can create repos [20:55:31] i don't think i can manage membership [20:56:24] back [21:02:05] drdee_, ottomata nor can i. [21:08:50] drdee, this is just an fyi tip [21:09:02] tabs work well for aligning things from the left margin [21:09:11] not so great for aligning assignment or usage strings [21:09:32] so for example, if you are writing a help message, you should use spaces to align your strings [21:09:46] (just saw tabs when you added referrer usage info in udp-filter) [21:14:19] drdee_: you can do kraken now [21:14:41] ty preilly [21:15:09] drdee_: np [21:28:22] dschoon, erosen, we're past due to meet no? [21:28:34] i'm good [21:28:39] waiting for erosen to come back [21:41:35] hey sorry [21:41:36] milimetric, dschoon [21:41:40] word. [21:41:46] howdy [21:41:48] hangout? [21:41:51] ya [21:41:59] https://plus.google.com/hangouts/_/2da993a9acec7936399e9d78d13bf7ec0c0afdbc [21:45:03] milimetric: commme [21:45:59] dschoon: domas has root everywhere by-the-way just an FYI [21:46:14] coolio [21:46:22] but he still needs a hdfs account :) [21:46:34] dschoon: you could make that too [21:46:38] heh heh [21:46:42] that was the plan [22:01:50] even installing pandas on osx is painful [22:07:07] heh [22:33:06] woo hoo! 
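(Putting the pasted loop together with the "+1" point, a corrected version of that quick check could look like the sketch below. The column order host, timestamp, seq_max, seq_min, seq_count is taken from the snippet above; the loss percentage is an addition the original did not print, and the exact input format is otherwise assumed.)

    import csv
    import sys

    # Corrected per-host, per-hour loss: the expected line count for an
    # inclusive sequence range is (seq_max - seq_min + 1), so a lossless
    # hour prints 0 instead of -1.
    with open(sys.argv[1]) as f:
        for row in csv.reader(f):
            host, ts = row[0], row[1]
            seq_max, seq_min, seq_count = int(row[2]), int(row[3]), int(row[4])
            expected = seq_max - seq_min + 1
            lost = expected - seq_count
            print('%s: %s PACKETLOSS= %d (%.4f%% of %d expected)'
                  % (host, ts, lost, 100.0 * lost / expected, expected))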
globa-dev dashboard works on new limn: http://gp-dev.wmflabs.org/ [22:33:12] with only one sed command :) [22:33:46] !log The new version of Limn is now rendering Evan Rosen's awesome dashboard: http://gp-dev.wmflabs.org/ [22:33:48] Logged the message, Master [22:34:19] for all those that were afraid of sed like me, this tutorial is fantastic: http://www.ibm.com/developerworks/linux/library/l-sed2/index.html [22:38:25] oh doh! an obvious problem is the footer which is still the reportcard footer [22:38:32] erosen ^^ [22:38:55] great work guys! milimetric, erosen, dschoon [22:42:44] drdee_, preilly, our new repo at wikimedia/reportcard doesn't like me: [22:42:44] ERROR: Permission to wikimedia/reportcard.git denied to milimetric. [22:43:05] when you have time, please add :) [22:44:06] milimetric: try now [22:44:21] thx preilly, all better :) [22:45:00] milimetric: cool [22:45:20] signing out for the day, later everyone [22:54:30] average_drifter [22:54:36] drdee is going to give you details [22:54:44] but can you create a new deb from this commit? [22:54:44] https://gerrit.wikimedia.org/r/gitweb?p=analytics/udp-filters.git;a=commitdiff;h=8f13679be3181ab6d836994cb4d73a57fd887074;hp=939beabb71964dee5b71ed6b47f64070ad8e1811 [22:54:50] I wanted to tell you [22:55:04] I modified changelog [22:55:04] https://gerrit.wikimedia.org/r/gitweb?p=analytics/udp-filters.git;a=blob;f=ChangeLog;h=86c0f0eb521a21cd230959533a3f141b6ae7200f;hb=8f13679be3181ab6d836994cb4d73a57fd887074 [22:55:17] so that it kinda matches how the distribution is supposed to look [22:55:23] for WMF apt [22:55:25] when you build the deb [22:55:29] so, build it for lucid [22:55:35] and then, when building for precise [22:55:52] edit ChangeLog and change the two places at the top where it says 'lucid' to 'precise' [22:55:57] ottomata: please only modify the changelog through the git commits [22:56:07] oh my goodness! fancy [22:56:12] I can ammend my commit [22:56:28] but we want the distribution to be as is there [22:56:33] udp-filter (0.2.6-1~lucid) lucid-wikimedia; urgency=low [22:56:43] ottomata: if you need customization to the git2deblogs.pl please make an issue on github and I will fix it [22:57:09] hmm, ok, link to github repo? [22:57:09] ottomata: you can make the issue in the debianize repo [22:57:15] github.com/wikimedia/debianize [22:57:16] ottomata, the change log is automatically generated using git2deblogs.pl [22:57:27] or maybe wikimedia-incubator [22:57:40] yes, maybe wikimedia-incubator [22:58:25] ottomata: you probably want another parameter to git2deblogs so we can append "~lucid" to the version [22:58:45] drdee_: you discovered a bug with the time of the commits being the same, please add that as an issue to the debianize repo as well [22:58:53] yes sir! [22:59:08] I will solve both of these, but now I need to focus on getting the mobile reports out the door [23:00:35] preilly: can you explain please the difference between github.com/wikimedia and github.com/wikimedia-incubator ? 
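(For the build-for-precise step described above -- "change the two places at the top where it says lucid to precise" -- a throwaway sketch; the proper fix is the extra parameter to git2deblogs.pl discussed here, so this only exists to show which line gets touched. It assumes the top stanza header looks like the udp-filter example quoted above.)

    import re

    # Rewrite only the first changelog stanza header, e.g.
    #   udp-filter (0.2.6-1~lucid) lucid-wikimedia; urgency=low
    # becomes
    #   udp-filter (0.2.6-1~precise) precise-wikimedia; urgency=low
    with open('ChangeLog') as f:
        lines = f.readlines()

    for i, line in enumerate(lines):
        if re.match(r'\S+ \(.+\) \S+; urgency=', line):
            lines[i] = line.replace('lucid', 'precise')
            break   # leave older stanzas alone

    with open('ChangeLog', 'w') as f:
        f.writelines(lines)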
[23:00:54] preilly: it's unclear to me how/why a project should sit in one or the other [23:01:02] average_drifter: github.com/wikimedia-incubator is for forks [23:01:26] average_drifter: and tests nothing that we're supporting [23:01:32] ok average drifter [23:01:33] https://github.com/wikimedia-incubator/debianize/issues/3 [23:01:37] ottomata: thanks [23:01:54] then debianize should probably move to the regular wikimedia account [23:02:00] preilly: thanks, more clear now [23:02:11] average_drifter, can I let you fix my changelog then, when you build the .deb? [23:04:07] ottomata: yes [23:06:38] cool, thank youuu! [23:10:29] yw [23:27:13] preilly, can you make sure otto, diederik, dan, and myself have full admin on all the analytics repos we added to both github.com/wikimedia and github.com/wikimedia-incubator? [23:30:29] full_admin_rights += average_drifter :D [23:31:36] yes. [23:31:41] truth. [23:36:09] laters all! [23:50:52] dschoon: can you guys talk to ^demon about Github access issues moving forward [23:50:58] sure. [23:51:13] just hopping on the bandwagon :) [23:51:14] dschoon: I'll be away for FOSDEM and I don't want anyone blocked is all [23:51:21] yep