[11:11:00] drdee: hi
[11:11:46] hi
[11:13:03] I'm polishing git2deblogs, some tests broke, they will be fixed in the next quarter hour, then I'll push the repo to gerrit, and I'll create instructions for Ops to create the package themselves
[11:13:29] Andrew suggested I read git-buildpackage . It's unclear to me if this is an Ops standard for building packages or if I'm allowed to use git2deblogs
[11:17:51] it's better to use git-buildpackage, that's the Ops standard
[11:52:58] ok
[12:39:35] average: things going smooth?
[12:42:34] milimetric: are you around?
[12:53:42] drdee: yes, reading git-buildpackage docs and fiddling with it
[12:53:57] ok, let us know if you need help
[12:54:07] yep
[13:14:54] hi drdee yep i'm here
[13:57:09] average: can you also document the steps on how to use git-buildpackage?
[13:59:20] drdee
[13:59:22] its not really that simple
[13:59:30] the page I linked to average is pretty good documentation
[13:59:42] you kinda have to understand a lot about the package you are building and how git-buildpackage works
[14:00:03] i've documented how to do a python package with it here:
[14:00:03] https://wikitech.wikimedia.org/wiki/Git-buildpackage#How_to_build_a_Python_deb_package_using_git-buildpackage
[14:05:34] i didn't say it was easy :) but it would be very helpful
[14:05:38] average: ^^
[14:07:02] well sure, I just imported the dsc with git-import-dsc , but I'm reading the docs
[14:07:08] to find out the whole process
[14:07:20] ok
[14:07:25] where'd you get a dsc? from your previous debianization?
[14:07:32] ottomata: yes
[14:07:36] aye hm.
[14:07:50] ottomata: should I do that or go for a clean from-scratch approach ?
[14:07:54] i'm not sure
[14:08:11] i'd try it from scratch, at least at first, to see how things go, git-buildpackage really abstracts a lot of the hard stuff
[14:08:16] i guess from scratch
[14:08:22] ok
[14:08:22] it almost makes keeping debianization separate from upstream really nice
[14:08:34] your upstream tarball or git repo should not have a debian/ directory
[14:08:45] oh
[14:09:02] alright, re-doing
[14:09:04] gbp workflow keeps the debian/ stuff in separate branches
[14:09:21] sorry, let me rephrase that
[14:09:26] your repo can have a debian/ directory
[14:09:30] just not in the main working branch
[14:09:49] (master, if you are using gbp with your repo, instead of creating a new one just for debianization)
[14:09:52] so probably
[14:09:55] you will have your dclass repo
[14:10:00] and master will not have a debian/ dir
[14:10:23] debian/ dir will be on the debian-branch
[14:10:47] yeah, exactly, which gbp by default calls master
[14:10:51] but you can change that in gbp.conf
[14:10:56] probably call it 'debian' or whatever
[14:11:05] also, its nice if you use tags in your git repo
[14:11:37] that way you can use --pristine-tar option and have the source packages automatically built and named based on tags in the git repo
[14:12:32] (or something like that, i still don't have a full grasp on all the gbp stuff)
[14:20:41] ottomata: are you around?
[14:21:25] can you grep through the webstatscollector page count files on dumps for may 29-may 30 and search for 'Canwebelieveinstats' and let me know in which files that string is found?
[14:22:35] which ones?
[14:23:08] page counts
[14:23:08] ok
[14:23:13] raw?
[14:23:15] or ez
[14:23:15] ?
[14:23:19] raw
[14:24:08] whoa lots of data
[14:24:19] 48 files
[14:24:23] but quite some data yes
[14:24:27] 4.8 G
[14:24:33] ~
[14:24:49] you just need which files have that?
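
A minimal sketch of the branch layout ottomata describes above, expressed as a debian/gbp.conf: upstream code stays on master and the debian/ packaging lives on its own branch. The branch names and the pristine-tar setting here are illustrations of that layout, not the repo's actual configuration.

    # sketch only: write a debian/gbp.conf on the packaging branch so gbp knows the layout
    cat > debian/gbp.conf <<'EOF'
    [DEFAULT]
    # plain upstream code lives on master, without a debian/ dir
    upstream-branch = master
    # the debian/ packaging lives on a separate branch
    debian-branch = debian
    # keep pristine upstream tarballs so source packages can be rebuilt from tags
    pristine-tar = True
    EOF
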
[14:24:53] yup
[14:25:03] and the counts for each observation
[14:25:07] counts!
[14:25:12] like number of times in the file?
[14:25:30] no, that string should only appear once per file
[14:25:36] but just grab the entire line
[14:25:39] oh ok
[14:25:51] and it won't appear in all 48 files either
[14:25:54] ok
[14:25:57] i think you will find it 3 or 4 times
[14:45:26] Canwebelieveinstats capitalized like that?
[14:45:36] this didn't print anything
[14:45:37] zgrep -m 1 Canwebelieveinstats pagecounts-201305{29,30}*.gz
[15:32:40] hey erosen, I'm gonna go grab lunch but you got time to talk in 20-30?
[15:32:48] (minutes)
[15:40:29] yo
[15:40:30] yeah
[15:40:35] milimetric: ^^
[15:41:54] rounce123: are you in the office today?
[15:42:16] I am
[15:42:28] at my desk if you can chat now I could come to the 6th floor
[15:42:47] sure, I'm also happy to come down to 3
[15:42:53] you pick :-)
[15:42:53] i have to pick up a commuter check anyway
[15:42:58] i'll see you in 3
[15:43:02] cool
[15:43:04] on 3 :)
[15:43:09] sure
[16:16:27] where's Andrew ?
[16:16:48] just wanted to asking somethin 'bout gbp
[16:16:53] *ask
[16:17:25] scrum is in 45m right ?
[16:18:45] average: andrew is on vacation for a few days
[16:18:55] average: also I think scrum is in 45, yes
[16:22:42] erosen: thanks
[16:22:45] np
[17:39:18] ottomata: hiya :)
[17:51:30] ottomata: yay, built the package using git-buildpackage
[17:51:44] not sure if I did it right but it worked
[18:15:56] milimetric: do you know if it is still straightforward to support a webhook for updating the git repo backing an limn instance
[18:18:58] yeah, erosen, it's probably not too crazy
[18:19:18] two hours maybe...
[18:19:23] I have a bit of coco that david and I had written that sat in server.co
[18:19:32] milimetric: yeah, definitely not a high priority
[18:20:37] yeah, should be similar to that old webhook, we'd just have to figure out the whole permissions thing
[18:21:36] hm... the problem is - it would get complicated if you consider that people are creating graphs
[18:21:42] (or datasources)
[18:21:50] because then what do you do? commit everything?
[18:23:41] yeah
[18:23:44] i have everything in a repo
[18:23:49] why would it be complicated?
[18:23:52] milimetric: ^^
[18:24:26] well, so let's say three people create a bunch of junk datasources and graphs
[18:24:33] when you hit the webhook, would it commit those too?
[18:24:48] it would pull, is what I'm imagining
[18:24:55] oh you need a pull hook!
[18:24:57] yeah
[18:25:00] sorry
[18:25:19] ok, well then the only problem is what to do with the uncommitted stuff - right now there isn't much
[18:25:23] but it would error out if there was
[18:25:35] i suppose we could git stash && git pull && git stash pop...
[18:25:47] does anyone know why https://mingle.corp.wikimedia.org/projects/mobile/cards/503 doesn't exist anymore? its a completed card that i wanted to reference
[18:25:48] basically david and I configured jenkins to do some json validation and then tell the server to pull anytime gerrit gits a commit pushed to it
[18:26:08] milimetric: or the build could error, which is fine too
[18:26:32] s/gits/gets/
[18:27:55] so how was jenkins authenticating to be allowed to trigger that webhook?
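
For reference, a rough shell sketch of the pull-style hook milimetric and erosen converge on above; the repository path and branch are placeholders, and the stash/pull/pop sequence is simply the one-liner from 18:25:35 spelled out, not an existing limn script.

    #!/bin/bash
    # hypothetical handler the limn host could run when the webhook fires
    cd /srv/limn-data || exit 1
    # park uncommitted graphs/datasources created via the UI, take the new commits,
    # then put the local work back (pop may conflict, or find nothing if the stash was empty)
    git stash && git pull origin master && git stash pop
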
[18:28:04] nm, wrong product
[18:28:05] seems like anyone could do it then
[18:28:10] milimetric: anyone could
[18:28:17] glad you found it tfinc
[18:28:18] but that should be okay
[18:28:31] the model would be: don't push the data unless you want it on the server
[18:28:32] right hm, I guess yea
[18:28:40] though I agree it is a little sketchy
[18:28:50] yep, k, then it's simple
[18:28:58] tfinc: maybe this ? https://mingle.corp.wikimedia.org/projects/analytics/cards/503
[18:30:32] milimetric: I created a card so we don't have to think about this right now: https://mingle.corp.wikimedia.org/projects/analytics/cards/729
[18:30:46] k
[19:14:50] drdee: hi
[19:14:55] drdee: built one package with git-buildpackage
[19:15:02] awesome!
[19:15:18] the hard part still lies ahead. figuring out how git-dch --release works
[19:15:22] haven't figure that out yet
[19:15:51] looking for more docs on git-buildpackage
[19:18:58] I'll write here what I did (I will also document this)
[19:19:11] basically the steps are the same as before, having an configure.ac and Makefile.am
[19:20:02] git-buildpackage --git-ignore-new
[19:20:09] this will try to build the package
[19:20:25] some trial & error until the package is being produced
[19:20:36] gpg key will be necessary in ~/.gnupg
[19:21:05] names and e-mail in debian/changelog will need to match the ~/.gnupg key
[19:21:14] (presumably for the current release)
[19:21:31] k
[19:21:45] when all's in place and the build works git-buildpackage --git-tag --git-ignore-new
[19:22:16] this will build the package and will create a Git tag called debian/
[19:23:09] there is a file .git/gpb.conf which I'd like to find out more about
[19:23:24] documentation is a bit scarce from my point of view at the moment
[20:04:02] average: real quick
[20:04:11] you want to add debian/gbp.conf and set things there
[20:04:21] ottomata: oh, thanks
[20:04:28] ottomata: and man gbp.conf for details
[20:25:43] hey erosen, just pushed tests for all our tables
[20:25:49] mapping mediawiki ones now
[20:29:20] erosen: can you email me the github profile / linked in page of the python guy?
[20:32:03] erosen's not around drdee
[20:32:18] hey do you know how to get to the mysql install inside vagrant?
[20:32:21] like from outside?
[20:33:10] google?
[20:34:17] :) googling
[20:34:35] milimetric: whatcha want to do?
[20:35:07] map the vagrant mediawiki tables with SQLAlchemy
[20:35:26] so i'd have to open up that mysql install for my laptop to see
[20:35:48] but eh, it's optional, i'll just use SQLite in memory for now
[20:37:02] go have fun, it's your day off :)
[20:40:15] yes ottomata go away!
[20:40:55] ?
[20:41:07] haha, naw its eve, my friend's gone to bed
[20:41:17] what do you mean map the tables?
[20:41:22] did you install mysql on the vagrant vm or your local host?
[20:41:25] (I would do it on the VM)
[21:10:20] erosen, ping
[21:10:26] yurik: hey
[21:10:28] sup?
[21:10:49] i have a question for you too at some point
[21:11:02] hi! was wondering if you could show me around, so i know how to get stuff to dan when he needs a new batch of his favorite drug
[21:11:15] hehe
[21:11:37] if i understand you correctly, are you curious about the logging / aggregation system?
[21:11:47] yep
[21:11:54] sure
[21:11:57] hangout?
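
Pulling together the git-buildpackage steps average lists above into one shell sketch: it assumes an autotools package with configure.ac and Makefile.am already in place and a GPG key in ~/.gnupg whose name and e-mail match debian/changelog. It is only a summary of the trial-and-error described in the log, not a verified recipe.

    # run from the packaging branch of the repo
    git-buildpackage --git-ignore-new              # trial build; repeat until the package is produced
    git-buildpackage --git-tag --git-ignore-new    # final build; also creates the debian/<version> release tag
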
[21:12:00] yeah
[21:12:14] https://plus.google.com/hangouts/_/2da993a9acec7936399e9d78d13bf7ec0c0afdbc
[21:26:00] oops, missed your msg ottomata, yeah, the vagrant mysql
[21:26:11] i just need to access it from my local box
[21:26:18] so i figure there's some firewall or something
[21:26:33] i just got done mapping the tables, they're running against SQLite right now
[21:27:03] if I can get the vagrant mysql to be accessible from outside the VM, I'll test against that tomorrow
[21:27:32] why do you need it outside the VM?
[21:27:38] to use a gui?
[21:27:43] also
[21:27:48] well, for the new user-metrics
[21:27:50] i dunno if sqlalchemy has this built in
[21:28:02] but, can I ask for migrations or at least version controlled schema up front?
[21:28:14] yeah, i love those
[21:28:25] i'll check if SQLAlch has it
[21:28:27] that way I don't have to go and 'show create table …' all over the place to productionize this
[21:28:34] yeah, no, totally
[21:28:36] like I had to do when I build the old user-metrics vagrant vm
[21:28:40] i used to use this thing called FluentMigrator
[21:28:44] best goddamn tool ever made :)
[21:28:49] yeah anything is cool
[21:29:07] cool, but no, not for a gui
[21:29:09] also, if you are keeping your db connection info in the code
[21:29:17] be aware of other environments
[21:29:21] i need the mysql to be accessible so i can connect to it like any other db
[21:29:37] isn't your code running in the vagrant vm?
[21:30:21] nono
[21:30:25] this is the new user-metrics
[21:30:28] not the old one
[21:30:42] different repo and all that
[21:31:05] jajaja
[21:31:06] i know
[21:31:09] but
[21:31:15] your app is running in the VM, right?
[21:31:42] no
[21:31:47] it's a different repo
[21:32:00] and it shouldn't assume it has local db access to the mysql anyway
[21:32:02] do you have a vm to develop the new thing with?
[21:32:25] (I was assuming you were using a new vagrant vm for development)
[21:35:25] no, we didn't think of that yet
[21:35:31] well, so we talked about it briefly
[21:35:40] and decided that we don't need the vagrant for most of our purpose
[21:36:01] well, no - we decided requiring vagrant to dev. is bad actually
[21:36:12] because it's kinda heavy and it has annoying problems on linux apparently
[21:36:42] so instead we'll just use SQLite in-memory for dev and change configuration when we go to test/prod
[21:36:59] but for testing, I'd like to make sure my mappings are correct against an actual mediawiki install
[21:41:06] wellLLllLlll, deving on vm is actually good
[21:41:12] because then your dev env is more like production
[21:41:22] will probably want to run in it apache wsgi, or whatever we want to run it with in prod
[21:41:24] whatever
[21:41:34] and, vm is nice, because then your changes can be productionized
[21:41:44] hopefully using the same puppet module for dev and production
[21:45:34] milimetric: do you want to access the mysql inside vagrant from outside ?
[21:45:51] that's what I understood, is that accurate ?
[21:46:08] yeah average
[21:46:11] ok
[21:46:14] check this out
[21:46:14] and that's a good point ottomata
[21:46:22] go inside the vm /etc/mysql/my.cnf
[21:46:26] bind-address changed to 0.0.0.0
[21:46:29] but being able to run tests and other things *not* inside the VM is good too
[21:46:30] restart mysql inside the vagrant
[21:49:00] oh cool ok
[21:49:00] thanks average!
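
Roughly what that first step looks like inside the VM, as a sketch; the config path and service name are the stock Debian/Ubuntu ones and may differ on the vagrant image. This covers only the bind-address change, and the GRANT that average adds just below is still needed before a connection from the host will work.

    # inside the vagrant VM: let mysqld listen on all interfaces, then restart it
    sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf
    sudo service mysql restart
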
:)
[21:49:21] milimetric: that's not all of it
[21:49:29] there's one more thing
[21:50:34] milimetric: inside the vagrant, fire up the mysql client
[21:50:41] GRANT ALL ON *.* to root@'10.11.12.1' IDENTIFIED BY '';
[21:50:58] mysql needs to be restarted on the vagrant
[21:50:59] and then
[21:51:09] you can connect to the mysql inside the vagrant with
[21:51:40] mysql -uroot -h10.11.12.13
[21:52:02] I'm assuming your host OS has IP 10.11.12.1
[21:52:11] and your vagrant vm has ip 10.11.12.13
[21:52:32] and yeah, that's it
[21:52:43] ok cool
[21:52:53] i'll mess with it first thing tomorrow
[21:52:54] thanks again
[22:03:32] ottomata: were you able to find the "Canwebelieveinstats" string in the raw page count dump files
[22:03:33] ?
[22:03:48] naw, nothing came out (i only looked once)
[22:03:56] mmmmm
[22:03:59] that's odd
[22:05:52] probably did something wrong
[22:08:09] k
[23:14:05] ottomata: have you successfully loaded the webrequests into hive before?
[23:16:39] ?
[23:16:59] basically I'm trying to use hive to look in to the x-cs oddness
[23:17:19] and I am curious if you have the boilerplate code to take the log files and turn them into hive tables
[23:17:38] i guess the first question is whether you've used hive
[23:17:44] ottomata: ^^
[23:17:51] something like LOAD FROM …. INTO TABLE ...
[23:18:04] ah, no
[23:18:05] i haven't
[23:18:14] k
[23:18:15] i actually haven't done much with hive yet
[23:18:31] so are david and diederik the only folks who've used hive
[23:19:54] probably
[23:20:07] k
[23:21:06] ok totally bedtime for me
[23:21:08] laters guuysss
[23:21:13] laterrrz
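
On the Hive question that closes the log: a minimal sketch of the "LOAD ... INTO TABLE" idea erosen mentions, shown via the hive CLI. The table name, the single-string-column schema, and the HDFS path are placeholders, not the real webrequest log schema or location; parsing the fields would come after something like this gets a raw file into a table.

    # sketch: pull a raw log file (already on HDFS) into a throwaway Hive table and count its lines
    hive -e "
      CREATE TABLE IF NOT EXISTS webrequest_sample (line STRING);
      LOAD DATA INPATH '/tmp/webrequest-sample.log' INTO TABLE webrequest_sample;
      SELECT COUNT(*) FROM webrequest_sample;
    "
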