[00:33:16] what card is that x_cs one?
[00:53:01] (to answer my own question, it's #645)
[01:16:02] drdee: my work for 645 is done. just fyi.
[01:16:22] what's the take away?
[01:18:10] check the card.
[01:18:18] i did
[01:18:24] i dunno. that's not my part.
[01:18:28] i just ran the analysis.
[01:19:12] are you now a code monkey :D
[01:19:22] and outsourcing the brain part?
[01:19:38] i cannot claim to care very much about the consistency of X-CS, no.
[01:20:11] you know my opinion, in any case. i expect we'll either find out it's right or that the zero team broke it.
[01:22:18] bbl reading sartre
[01:50:48] erosen
[01:55:02] erosen
[01:55:44] do you count requests or 'article' views for wikipedia zero reports?
[01:55:48] erosen ^^
[02:08:18] hey erosen
[02:08:22] sup?
[02:08:31] do you count requests or 'article' views for wikipedia zero reports?
[02:08:50] in the past I have just used article views
[02:10:41] pm'ed you
[02:10:43] erosen ^^
[02:31:31] drdee: back; that is how long it takes to walk from the train to my house
[07:25:42] New review: Yuvipanda; "(1 comment)" [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/60608
[13:44:05] morning guys
[13:44:09] average: around?
[13:44:18] ottomata: quick chat?
[13:44:22] yoyoo
[13:44:47] how much more work is left to do for 570 (vagrant & umapi)?
[13:44:56] ah, i can work on that today, i mean
[13:45:03] i have not looked at it since last week
[13:45:10] ah you wanna showcase
[13:45:10] ok
[13:45:26] all that's left is to run your stuff with puppet
[13:45:32] should be easy
[13:46:29] awesome!
[13:47:13] then… do you think it's feasible to finish 622 by the end of friday (move multrelay from oxygen to gadolinium)?
[13:47:40] if not, then we should reboot oxygen this week as it will crash in 6 days' time
[13:51:23] average, are you there?
[13:51:25] drdee: I'm here
[13:51:28] COOL!
[13:51:31] I'm right here, I haven't finished them yet
[13:52:27] no worries, which perl script generates the data for http://stats.wikimedia.org/wikimedia/squids/SquidReportClients.htm ?
[13:54:11] SquidCountArchive.pl generates the data
[13:54:19] but the report itself is built by SquidReportArchive.pl
[13:54:58] no
[13:55:02] i don't want to do that on friday
[13:55:05] and i'll be out next week
[13:55:07] before friday :)
[13:55:17] not sure
[13:55:22] that's why i was asking if you could finish it before friday afternoon
[13:55:27] if not, then we have to reboot oxygen
[13:55:32] aye
[13:56:45] thank you average!
[13:59:43] average, do you know how the mobile apps on http://stats.wikimedia.org/wikimedia/squids/SquidReportClients.htm are identified?
[14:06:56] average ^^
[14:09:41] drdee: yessir
[14:09:42] 534: if($agent2 =~ s/^(.*) CFNetwork.*$/iOS: $1/io) {
[14:09:43] 537: $agent2 = "iOS: ".$ipad_data->{browser};
[14:09:43] 609: { ($version = $agent2) =~ s/^.*?(Dalvik\/\d+\.?\d*).*$/Android: $1/o ; }
[14:09:45] 615: { ($version = $agent2) =~ s/^.*((Wiktionary|Wikipedia) ?Mobile(\/| )(\d|\.)+).*$/Android: $1 (WMF)/o ; }
[14:09:48] 617: { ($version = $agent2) =~ s/^.*((Wiktionary|Wikipedia) ?Mobile(\/| )(\d|\.)+).*$/iOS: $1 (WMF)/o ; }
[14:09:52] 856: elsif ($agent2 =~ /^iOS: /io)
[14:09:55] drdee: Wikistats is only detecting iOS and Android apps at the moment
[14:11:34] drdee: those are the lines that process the apps, they extract the name of the app and the version
[14:11:59] this is inside SquidCountArchiveProcessLogRecord.pm
[14:12:25] cool, could you add those lines to the Un-Official Apps section of http://www.mediawiki.org/wiki/Mobile/User_agents ?
[14:12:55] yes
[14:16:13] drdee: added
[14:17:05] ty
[14:24:10] drdee,
[14:24:12] how do I make this work?
[14:24:12] from user_metrics.api.session import APIUser
[14:24:17] from user_metrics.api.session import APIUser
[14:24:22] ImportError: No module named user_metrics.api.session
[14:25:00] milimetric: ^^
[14:25:09] hah
[14:25:11] but you wrote this, right?
[14:25:11] yeah the problem is that the file is in the scripts folder
[14:25:13] yes
[14:25:20] how did you run it?
[14:25:32] should it be in user_metrics/scripts or something?
[14:25:39] by moving it up a folder, so i think it's best to add the following lines to the scripts
[14:25:43] import os
[14:25:52] sorry
[14:25:56] import sys
[14:26:04] sys.path.append('..')
[14:26:09] let me try this right away
[14:26:13] hold on
[14:27:29] [travis-ci] develop/4641049 (#132 by milimetric): The build has errored. http://travis-ci.org/wikimedia/limn/builds/6788561
[14:31:41] ottomata, i pushed a fix to the script, try again
[14:31:54] k
[14:32:21] same
[14:32:23] no dice
[14:32:42] oh wait
[14:32:43] one sec
[14:32:49] sorry i think i pulled in the wrong place :p
[14:33:06] naw, still same
[14:33:07] ImportError: No module named user_metrics.api
[14:33:19] ah drdee
[14:33:21] relative paths!
[14:33:22] aahhhhhhhh
[14:33:32] we can't assume that the working dir is $0
[14:33:32] yes it SUCKS!
[14:33:37] fine fine fine
[14:33:38] i'll deal with it
[14:33:43] i don't know how else to fix this
[14:33:45] but you can't expect people to cd into your dirs
[14:33:55] i dunno much about python, but i'm sure there's a way to execute code :p
[14:34:06] no, this is the weakest part of python
[14:34:12] the whole module import stuff
[14:34:27] ha
[14:34:28] so wrong.
[14:34:28] milimetric: can you help us out?
[14:34:29] sys.path.append('../')
[14:34:29] ha
[14:34:32] whatevs!
[14:34:36] not my project :p
[14:34:49] I KNOW :(
[14:35:03] i am just trying to get 570 out of the door
[14:35:27] not my chair, not my problem
[14:38:18] sorry guys, i'm having a limn issue
[14:38:25] i can help out in a bit once this is sorted
[14:38:28] aight
[14:38:30] np
[14:38:48] ottomata, as long as it works, that's all we care about
[14:38:49] :D
[14:42:20] cool, works
[14:42:26] drdee, i'm looking at seed.sql now
[14:42:31] i'm looking for something
[14:42:33] but maybe you know
[14:42:40] what's a good way to check if seed.sql has already been run?
[14:42:52] also, are you sure you want this dump to have drop/create table in it?
[14:42:56] DROP TABLE IF EXISTS `revision`;
[14:42:57] ...
[14:43:04] this will wipe someone's mediawiki db
[14:43:06] maybe that's ok?
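A note on the seed.sql question above: a dump is safe to re-run (and safe on someone else's mediawiki db) if it only creates what is missing and never drops. A minimal sketch, assuming a simplified `revision` table (the real MediaWiki schema has many more columns):

    -- Create the table only if it is absent, so re-running the seed
    -- never destroys existing data.
    CREATE TABLE IF NOT EXISTS `revision` (
      rev_id INT UNSIGNED NOT NULL PRIMARY KEY,
      rev_page INT UNSIGNED NOT NULL,
      rev_timestamp BINARY(14) NOT NULL
    );

    -- INSERT IGNORE skips rows that already exist instead of failing,
    -- which also sidesteps the "has seed.sql already been run?" question.
    INSERT IGNORE INTO `revision` (rev_id, rev_page, rev_timestamp)
    VALUES (1, 1, '20130501000000');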
[14:44:08] yeah good point; remove the DROP TABLE stmt
[14:44:16] or comment it out
[14:46:29] you can truncate the table if you want to erase it
[14:46:31] TRUNCATE TABLE `revision`;
[14:47:21] well we don't want to erase it, right?
[14:47:30] no we don't :)
[14:48:10] ok
[14:58:03] hmmm
[14:58:03] drdee
[14:58:08] apparently i do have to drop create the table
[14:58:24] the version of the page table you have is not the same as the one that ori-l ships with vagrant
[14:58:29] ??????
[14:58:31] ERROR 1136 (21S01) at line 94: Column count doesn't match value count at row 1
[14:59:00] ok i have to fix the seed.sql file then
[14:59:05] yeah, ori's only has 12 fields
[14:59:19] but that's very weird because I created the seed.sql on the vagrant machine
[14:59:19] there are no rev_* fields
[14:59:20] in ori's
[14:59:47] is there a name mismatch between the two?
[15:00:30] http://codeshare.io/6oTHJ
[15:02:07] drdee, if you like, we can just drop and create the tables as you have them
[15:02:18] we can assume that user metrics devs != mediawiki devs
[15:02:27] so they won't mind if we trash their dev tables :)
[15:02:34] no, then we should fork ori's vagrant vm
[15:02:44] that's just bad behavior on our side
[15:02:58] well, none of this runs unless they manually include the user_metrics class in their site.pp file
[15:03:38] ok
[15:03:43] let's puppetize it for now,
[15:03:55] but we/I need to fix the seed.sql
[15:04:08] the page table should not include those rev_ fields
[15:04:13] something weird happened
[15:04:27] ok
[15:04:33] i can commit the puppet stuff though, it should be the same
[15:04:38] it just won't work til you fix seed.sql :)
[15:04:53] ok sounds good
[15:08:02] [travis-ci] develop/8f04164 (#133 by milimetric): The build failed. http://travis-ci.org/wikimedia/limn/builds/6789690
[15:08:15] drdee, i just pushed a change to seed.sql too: removing the drop table statements
[15:08:26] k
[15:10:04] ok drdee and ottomata, all yours
[15:10:13] limn bug squashed
[15:11:34] it's not really a big deal, but, if you have user_metrics cloned
[15:11:39] try to run drdee's scripts/create_account.py
[15:11:42] with an absolute path
[15:11:51] don't cd into the scripts/ dir
[15:11:55] the fix is this
[15:12:03] change sys.path.append('..')
[15:12:04] to
[15:12:13] sys.path.append(os.getcwd())
[15:12:17] nope
[15:12:22] you can't rely on cwd
[15:12:24] that could be anything
[15:12:35] okay so how to fix this then?
[15:12:40] you can make assumptions about the directory the file is in
[15:12:44] but that's not great either
[15:12:46] so you could do
[15:12:59] dunno in python, but in bash you could do
[15:13:11] dirname(dirname($0))
[15:13:21] so, relying on the current script's path
[15:13:27] rather than the user's cwd
[15:13:32] that is slightly more static
[15:16:01] anyyyyway
[15:16:01] https://gerrit.wikimedia.org/r/#/c/61781/
[15:16:11] OOPs
[15:16:14] but yeah it's good
[15:18:02] i'm just catching up... so I don't get what's urgent right now
[15:18:24] i actually can't get vagrant up to work
[15:18:28] so i can't test
[15:19:02] what's the problem?
[15:19:38] i followed ori's instructions and I get "The VM failed to remain in the "running" state while attempting to boot.
[15:19:39] This is normally caused by a misconfiguration or host system incompatibilities.
[15:19:39] Please open the VirtualBox GUI and attempt to boot the virtual machine
[15:19:39] manually to get a more informative error message."
[15:20:18] hm, weird, never seen that
[15:20:40] so then it starts via the gui...
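The dirname(dirname($0)) suggestion in the sys.path discussion above translates directly to Python: anchor sys.path to the script's own file location instead of the caller's working directory. A minimal sketch of what the top of scripts/create_account.py could look like (the package layout is inferred from the discussion, not from the actual repo):

    import os
    import sys

    # Resolve the repo root from this file's location, not from the
    # caller's cwd, so the script runs correctly from any directory.
    REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    sys.path.insert(0, REPO_ROOT)

    from user_metrics.api.session import APIUser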
[15:20:48] weird
[15:21:52] but doesn't look like mediawiki is set up at localhost:8080, just apache
[15:21:53] hm...
[15:24:25] maybe puppet didn't run since you couldn't boot via vagrant properly
[15:24:27] try running
[15:24:28] vagrant provision
[15:25:23] this is what i get running it in the gui:
[15:25:24] VT-x features locked or unavailable in MSR. (VERR_VMX_MSR_LOCKED_OR_DISABLED).
[15:25:54] vagrant provision just tells me the vm's not running
[15:26:20] try
[15:26:22] vagrant halt
[15:26:24] vagrant up
[15:26:34] uhhh, that sounds like a problem with your host compy
[15:26:37] if that doesn't work, delete the vagrant image using the gui
[15:26:43] and try again
[15:26:46] if that doesn't work
[15:26:48] then welp
[15:26:49] you're running linux, right?
[15:27:15] https://groups.google.com/forum/?fromgroups=#!topic/vagrant-up/4rOCCWfILYc
[15:27:48] there is a solution in that thread, milimetric
[15:27:50] yep, linux, i'm looking at that google group thing
[15:33:53] k, so that doesn't work, i think i have to reinstall the x86 virtualbox instead of the 64 bit one
[15:43:50] brb, will try messing with bios to see if that fixes it
[15:43:57] (still can't boot it)
[15:52:58] ottomata, about oxygen
[15:53:05] when do you want to do the pre-emptive reboot?
[15:54:18] i would really love to get the multicast stream moved first
[15:54:22] i will do my best to do that asap
[15:54:33] ok, that worked (a few of the virtualization settings hosed my machine, but I got it right now I think)
[15:57:34] milimetric: what's MSR?
[15:58:05] milimetric: so, localhost:8080 works now?
[15:58:21] not yet, the puppet configuration is still running i think
[15:58:25] but the box at least comes up
[15:58:32] no idea what MSR is but the issue was basically this:
[15:58:41] my cpu had hardware virtualization disabled
[15:58:42] the most recent thing you ran was vagrant up and it's still going?
[15:58:48] yea
[15:58:50] right
[15:58:52] good
[15:58:59] for my thinkpad, there were like two virtualization options
[15:59:05] both of them hosed my computer
[15:59:10] VT-d hosed my computer
[15:59:19] but just the first one (Intel Virtualization I think) worked
[15:59:20] * jeremyb_ only tried vagrant for the first time like ~10 days ago. idk why i waited so long. besides aversion to ruby
[15:59:22] (this is in BIOS)
[16:00:10] hm, now it seems to not be able to bind to the network
[16:00:24] can't resolve the ubuntu archives to do apt-get stuff
[16:00:50] `ip addr` on both guest and host?
[16:01:02] and `ip route`, cat /etc/resolv.conf
[16:02:10] need to work some on speeding things up. maybe with local git repos. i added a local apt repo caching proxy but that only shaved off ~2 mins. of course it was a relatively fast connection. (to vagrant up after vagrant halt -f && vagrant destroy)
[16:02:44] * jeremyb_ usually has slower internets. i wonder if they're paying extra for that speed
[16:02:52] US internets suck
[16:03:14] except i guess CDA 230 is generally good :)
[16:26:16] jeremyb_: if you can believe it, vagrant up is still running, not sure what in the world is going on
[16:26:42] but it definitely doesn't have internet access
[16:26:49] i'll work on it later after all the meetings today
[16:27:06] erosen: shall we merge the work we've done for 388?
[16:27:40] yeah, let's do it
[16:27:43] you mean into master?
[16:27:53] k, so you didn't find any more problems right?
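Back on the guest networking problem parked above: the checks jeremyb_ suggested, spelled out as a quick sketch to run inside the VM (and again on the host for comparison):

    ip addr                          # does the guest interface have an address?
    ip route                         # is there a default route out of the VM?
    cat /etc/resolv.conf             # which nameservers is the guest using?
    ping -c 1 8.8.8.8                # raw connectivity, bypassing DNS
    getent hosts archive.ubuntu.com  # can the apt mirrors be resolved?

If the ping succeeds but the last line fails, the problem is DNS rather than connectivity, which matches the "can't resolve the ubuntu archives" symptom above.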
[16:28:00] yeah, i was gonna merge it into master and do git review
[16:29:34] milimetric: I can help troubleshoot, ping me when you have time
[16:29:56] i think it's just my host setup ori-l, so I won't bother you with my virtualbox noobyness
[16:30:12] i didn't find any other problems
[16:30:17] i'm a virtualbox noob too, and if you ran into problems other people might too
[16:30:19] but i'll definitely ping you if anything can be done to the vagrant config
[16:30:30] cool, or the docs!
[16:30:39] well, so far it was just enabling virtualization in the bios
[16:30:42] milimetric: mostly I discovered that the standard MySQLdb connection accepts utf-8 encoded byte strings by default
[16:30:50] i didn't need that before
[16:31:01] but it might be worth mentioning in the readme
[16:31:09] yay erosen :)
[16:31:12] ok, merging into master
[16:31:19] cool
[16:31:37] oh - so any gerrit people around here know what the best practice is for integrating a feature branch with a bunch of commits and submitting it for review?
[16:31:42] ori-l: you might know this
[16:31:49] i'm actually going to be dealing with a car issue for the next 30 min, so I'll be back a little before scrum
[16:31:55] k, cool
[16:31:57] no scrum today
[16:31:59] yeah
[16:32:02] no scrums on wednesdays
[16:32:06] meant to say showcase
[16:32:08] good point
[16:32:16] k
[16:32:27] car is started, got to move it while it works ;)
[16:32:30] milimetric: how many changes, and how big, roughly?
[16:33:16] the patch file is 1MB
[16:33:19] that seems really big...
[16:33:25] milimetric: actually, one of the best people to ask is matt, if he's nearby
[16:33:38] yeah, true
[16:33:40] no he's not around
[16:33:45] he's really good in that he doesn't just know how to do it but how to do it properly
[16:33:46] it's 989 lines of patch
[16:34:46] who is going to be reviewing it?
[16:35:19] actually, sorry -- i'm not going to give advice :P i don't even want to pretend like i have a good branching strategy
[16:35:27] ask superm401 when he's back?
[16:36:58] yeah, we've talked about this before, it's harder than it seems
[16:37:26] basically i think maybe we're already doing it wrong
[16:38:39] milimetric: re gerrit
[16:38:41] you want to squash!
[16:38:44] never done this myself
[16:38:52] heh, i don't think so
[16:39:01] because evan and i were working on it together
[16:39:08] so squashing shared history is a nono
[16:39:52] gerrit seems to me like a way of saying: oh you thought git was a distributed source control system? Wrong! it's actually just SVN!
[16:39:54] :P
[16:44:36] haha
[16:44:37] yeah
[16:44:39] oh hm
[16:44:47] well i think you should be able to squash shared history, right?
[16:44:50] it's just not cool?
[16:44:55] does git not let you do that?
[16:45:14] no, git lets you
[16:45:20] but it says that is the capital sin
[16:45:26] to squash once other people pulled
[16:45:27] i started using git pull --rebase recently
[16:45:28] it's nice
[16:45:29] i see
[16:45:36] hmmm
[16:45:36] but, hm
[16:45:41] hmmm, oh hmm
[16:45:41] yeah, that makes a mess, if you've already pushed.
[16:45:47] oh i see, hmm
[16:45:54] but you haven't merged those changes onto the other branch
[16:45:55] ah well
[16:46:01] if you don't care about your dev branch
[16:46:06] if you pushed, it doesn't matter.
[16:46:07] you could just diff patch master and make a new commit
[16:46:09] rewriting history is bad.
[16:46:19] hm, yeah, and then just delete the old branch?
[16:46:22] that might be the only way
[16:46:23] yeah
[16:46:26] clever.
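One way to carry out the diff-and-patch idea that closes this exchange, assuming the shared work lives on a branch named dev (the branch and file names here are illustrative):

    git checkout master
    git diff master dev > feature.patch   # the whole branch as one change set
    git apply --index feature.patch       # apply and stage, including new files
    git commit                            # a single commit gerrit can review
    git review

git merge --squash dev reaches the same end state in fewer steps; neither approach rewrites the already-pushed history on dev.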
[16:46:39] you could even do this:
[16:46:41] on your branch
[16:46:55] i tried to do it the logical way and I fubar-ed my repo
[16:46:56] git reset SHA_before_the_changes_you_made
[16:47:02] then
[16:47:04] git stash
[16:47:06] git checkout master
[16:47:08] git stash pop
[16:47:10] :)
[16:47:12] git commit -a
[16:47:15] no, it's ok, a patch will work fine
[16:50:44] so.
[16:51:00] anyone have suggestions for example activities for my data tools brownbag today?
[16:51:07] i'm going to do some referer stuff
[16:51:12] since everyone enjoys looking at searches
[16:51:37] other ideas?
[16:54:19] grrr
[16:54:25] git review is giving me some trouble
[16:54:46] dschoon, maybe one of those histograms you made, like maybe by status code?
[16:55:05] sure
[16:55:16] something simple like that could work
[16:55:21] so git status is clean
[16:55:25] git pull - nothing to update
[16:55:45] git review - automatically goes into a crazy rebase
[16:56:10] i made just one commit
[16:56:15] so tempted to just f-ing push it
[16:56:21] :D
[16:56:28] i would.
[16:56:33] it might be the only way.
[16:56:41] i mean... can we not all admit that gerrit is just plain broken?
[16:56:52] i think everyone in this channel thinks so.
[16:57:06] my point is that everyone in the world thinks so. And we're all playing make-believe
[16:57:21] i gotta get lunch before i rage against the machine
[17:03:59] dschoon, you might already be doing this, but I think you should highlight the hive querying against the raw data files
[17:04:00] that is super cool
[17:04:13] yeah
[17:04:18] that's going to be most of the brownbag
[17:04:26] how you can write sql against the raw logs
[17:05:04] +1
[17:05:22] the queries you sent out are very clear and expressive
[17:06:07] ottomata: wrt drdee's email, which an0* box would you suggest?
[17:06:44] either analytics1009.eqiad.wmnet or analytics1026.eqiad.wmnet
[17:06:51] 1009 is the beefy cisco
[17:11:21] ty, ori-l
[17:11:23] 1009 won't let me login (publickey)
[17:12:28] i've been having random problems with that as well, but on an01
[17:15:55] ottomata, does xyzram have access to an09?
[17:16:19] probably not
[17:16:23] ottomata: 1026 is the same: Permission denied
[17:16:28] can you give him that :)
[17:16:36] we'd need ops approval
[17:19:33] let's file an RT ticket!
[17:23:00] ottomata: when you get a chance, could you possibly merge https://gerrit.wikimedia.org/r/#/c/61627/ ? it touches documentation only, no code.
[17:24:16] err, or maybe andrewbogott, since he's just merging another puppet change ^
[17:24:50] ottomata: thanks
[17:24:59] done!
[17:25:13] :)
[17:26:02] heya milimetric, are you using the 10-11am Pacific window for the Zero partner testing stuff?
[17:26:07] milimetric: Hey Dan, are you using your deployment window this morning?
[17:26:11] lol
[17:26:14] or, anyone else in here know?
[17:26:15] kaldari: ;)
[17:26:28] hm
[17:26:34] i'm not sure what you guys are talking about
[17:26:59] see the recurring deploy window here: https://wikitech.wikimedia.org/wiki/Deployments
[17:27:30] milimetric: if you don't intend to use that then I'll remove it from the recurring calendar and you can then schedule a window on an as-needed basis going forward
[17:27:32] huh, odd
[17:27:37] i never added that
[17:27:40] must be a different dan
[17:27:42] hrmm
[17:27:44] heh
[17:27:45] erosen might know
[17:27:54] another dan on WP Zero stuff?
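On the brownbag thread above (writing SQL against the raw request logs), a status-code histogram in Hive could look like the sketch below. The table and column names are hypothetical stand-ins, since the actual Kraken schema is not shown in this log:

    -- Requests per HTTP status code over one day of raw logs.
    -- raw_webrequest, http_status and log_date are illustrative names only.
    SELECT http_status, COUNT(*) AS requests
    FROM raw_webrequest
    WHERE log_date = '2013-05-01'
    GROUP BY http_status
    ORDER BY requests DESC;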
[17:28:03] that's dan foy, i think
[17:28:04] i don't work on WP Zero stuff
[17:28:21] i'm mostly limn/kraken/usermetrics
[17:28:22] Daniel Zahn??
[17:28:23] milimetric, greg-g what's up?
[17:28:33] no, dan foy
[17:28:35] http://wikimediafoundation.org/wiki/User:Dfoy
[17:28:35] or maybe Dan Foy
[17:28:38] ah
[17:28:44] that dude
[17:28:46] kaldari: you lied to me, I thought milimetric was dfoy! ;)
[17:28:54] haha :)
[17:28:56] milimetric: my apologies ;)
[17:28:58] <- dan andreescu
[17:29:00] well he is Dan :)
[17:29:04] nice to meet you, milimetric :)
[17:29:06] and the schedule just says Dan!
[17:29:09] likewise sir
[17:29:11] oops
[17:29:13] yeah, bad schedule, my bad
[17:29:26] I should have reviewed it more and confirmed with everyone their recurring slots
[17:29:43] so, does anyone know dfoy's irc handle?
[17:29:58] greg-g i'm not sure he's a regular irc user
[17:29:58] I didn't know we had so many Dans :P
[17:29:59] if not, kaldari, go for it, I don't see any activity in -ops
[17:30:07] sweet
[17:30:11] greg-g: he's more of a gchatter
[17:30:15] kaldari: sorry for the weirdness with this one
[17:30:20] no problem
[17:30:21] dfoy@wikimedia.org
[17:30:29] it's my bad for having a bug :)
[17:30:34] erosen: yeah, emailed him already :/
[17:30:44] i don't trust people with monosyllabic first & last names
[17:30:46] kaldari: exactly! Those should never ever happen
[17:30:51] "get an extra syllable, damn it"
[17:30:53] ori-l: good policy
[17:31:05] actually, in all likelihood he's daniel
[17:31:16] this says dan: https://wikimediafoundation.org/wiki/User:Dfoy
[17:31:19] milimetric: hm. that would excuse it, certainly.
[17:31:20] First rule of programming: No bugs!
[17:31:20] I am the only true "just Dan" that I know
[17:31:31] but my last name certainly makes up for the lack of syllables in my first
[17:31:46] milimetric: oh, like birth certificate says "dan" not "daniel",
[17:31:48] nice
[17:31:49] right
[17:32:30] also, does this mean I have to add YAWIC to my list? (Yet Another Wikimedia IRC Channel)
[17:32:46] (probably)
[17:32:55] syncing file now
[17:33:06] oops wrong channel :)
[17:33:32] greg-g: you're not on ours!
[17:33:52] ori-l: uhhh, what is it again?
[17:33:57] #wikimedia-e3
[17:34:05] not #wikimedia-eee ?
[17:34:07] but, really, we should consolidate that and the ee channel
[17:34:11] e2 is #wikimedia-ee ;)
[17:34:28] #wikimedia-home-of-the-es
[17:34:43] greg-g: no one has suffered more from that idiotic name than us, believe me :P
[17:34:49] :)
[17:40:04] ori-l
[17:40:14] ottomata:
[17:40:42] *suspense*
[17:40:57] ah, if the vanadium relay went down for a few seconds or minutes, would that be real bad?
[17:41:17] inexcusable
[17:41:27] no, it's fine. just let me know the outage period if possible
[17:41:47] milimetric: ready to demo #388?
[17:41:57] ottomata: ready to demo #570?
[17:42:09] yea drdee
[17:42:22] cooL!
[17:42:32] ok cool, i'm refactoring some of the puppetization of it
[17:42:38] so we can more easily move it to gadolinium
[17:42:48] ottomata: btw, can you maybe just ssh into one of the esams bits and try to ping vanadium once? i'm wondering if the eqiad migration + concomitant changes obviate the need for it
[17:43:44] like cp3019 for example
[17:44:56] nevermind; i can't ping it, so i don't think it'd work the other way around either
[17:45:09] ha, i can't find it!
[17:45:19] oh i think i don't have ssh config set up for that
[17:45:20] ok
[18:01:53] ottomata, average: sprint demo
[18:40:51] ori-l, nm, doing this relay thing with no downtime
[18:40:53] :)
[18:41:04] wooo
[18:41:06] nICE!
[18:41:14] thanks!
[18:41:17] that's bad ass
[18:46:38] ori-l: ok, i got time to debug my vagrant setup now
[18:46:48] so the first problem is that as-is, I'm not getting any network connection
[18:47:03] what versions of vagrant/virtualbox are you using?
[18:47:13] i am using 4.2 i believe
[18:47:51] i tried installing 4.2 but ran into a lot of dependencies I couldn't find
[18:48:18] ori-l is running ubuntu too so that'll be useful to know what version he's running
[19:22:41] no, os x
[19:22:47] in the middle of something, sec
[19:22:59] reminder: brownbag in 10m
[19:24:35] dschoon: will it be streamed?
[19:24:42] unclear
[19:24:44] no chip today
[19:28:14] :( hangout
[19:31:12] ottomata: can you help debug the metrics instance on stat1001?
[19:31:26] i don't think i have access to read logs and stuff
[19:31:48] looks to me like all our code is there but the form that uploads the csv doesn't work
[19:31:59] so i figure the error log would tell me right away what's wrong
[19:32:58] milimetric, you can't access stat1001?
[19:33:03] i got in
[19:33:10] but i don't think i can read logs...
[19:33:25] you can
[19:33:31] /var/log/apache2/
[19:33:37] they are owned by wikidev group
[19:34:39] well, maybe there's just nothing in the logs?
[19:34:50] nahh
[19:34:58] it seems that the log files haven't been written to in awhile
[19:35:00] because vim /var/log/apache2/error.metrics.log shows me a blank file
[19:35:07] (at least the ones owned by wikidev)
[19:35:17] so yes you need ottomata :D
[19:35:56] huh, no, the error.metrics.log.1 has data in it
[19:35:58] but it's old
[19:36:17] yes that's my point, it doesn't seem that data is written anymore
[19:36:23] so probably it's being written to another file
[19:36:33] (which we can't read because it's owned by root)
[19:37:01] yeah, error.log is not accessible
[19:37:22] ottomata: when you get back, I'd love to see what that error log's hiding from me ^
[19:39:50] milimetric: back in just a few...
[19:43:33] stat1001.wikimedia.org:/var/log/apache2/error.log I believe has the answers we seek
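The permissions situation described above can be checked directly; a quick sketch using the paths mentioned in the exchange:

    ls -l /var/log/apache2/                  # owner, group, and last-write time of each log
    tail -n 50 /var/log/apache2/error.metrics.log
    sudo tail -f /var/log/apache2/error.log  # the root-owned log needs elevated access

The first command alone distinguishes the two symptoms above: a wikidev-readable log that simply went stale versus a current log owned by root.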
[20:13:56] use a protocol relative link [20:14:00]