[07:40:52] Analytics / Wikimetrics: tox runs all tests (including manual ones) - https://bugzilla.wikimedia.org/69183#c1 (Antoine "hashar" Musso) tox is just a wrapper around virtualenv; it then executes a given command, in this case 'nosetests'. Good news, nosetests lets one flag tests with an attribute which can... [09:19:52] Analytics / Wikimetrics: tox runs all tests (including manual ones) - https://bugzilla.wikimedia.org/69183#c2 (nuria) >You could go with a ManualTestCase that will have an attribute manual_test Right, we already have such an attribute. I just opened the bug to make sure we do not forget to update our... [09:42:44] qchris: question if you have 1 min [09:42:49] Sure. [09:44:34] remember the vagrant::setting puppet 'function'? [09:45:02] you added the ability to automatically map port 5000 [09:45:06] for the wikimetrics role [09:45:16] Yes. [09:46:01] did you test that in staging/dev? [09:46:35] you mean labs-vagrant? [09:46:55] no, I mean our regular staging [09:47:08] nothing should happen there [09:47:09] No. It is vagrant. Our regular staging does not use vagrant. [09:47:32] Our regular staging uses plain puppet. [09:47:47] i know, but the function will be executed [09:47:56] The function isn't even there :-) [09:48:08] Let me get the relevant code ... [09:48:37] https://gerrit.wikimedia.org/r/#/c/150487/ [09:48:46] nuria: ^ is the relevant change. [09:48:55] Look at the Project in the top left. [09:49:01] It is "mediawiki/vagrant" [09:49:16] That's the repository the commit is in. [09:49:25] But on staging, we do not clone that repository. [09:49:57] So the code is not on staging. [09:50:31] retardation again [09:50:40] i thought it was in the wikimetrics module [09:50:44] very well [09:51:10] No worries. The whole vagrant, puppet thing is twisted and gets me confused again and again too. [09:51:51] ok, will merge the corresponding wikimetrics change and i think that is the last one you had pending [09:52:56] i think i found solutions for our celery issues regarding hanging sessions and issues with celery chains, will work on that today [09:53:04] Don't worry about the wikimetrics changes. They're mostly cleanup for a change that we rushed in for milimetric. So I guess it's ok if milimetric reviews them. [09:53:14] After all, he might disagree with me on a few things. [09:53:23] That's why I only added him as reviewer. [09:53:45] "Solution for celery issues" ... that sounds great! :-) [09:53:54] (CR) Nuria: [C: 2] "Thanks for doing this." [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/152057 (owner: QChris) [09:54:04] (Merged) jenkins-bot: Drop instructions for manual port forwarding in Vagrant [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/152057 (owner: QChris) [09:54:37] I think the wikimetrics things were ready to go, i tested all the testing changes. [09:54:51] Thanks for merging them. [09:56:17] for celery, i think we can do our session management with worker signals so we tie session cycles to the worker lifecycle [09:56:18] http://celery.readthedocs.org/en/latest/userguide/signals.html#worker-signals [09:56:30] I saw that comment on the bug. [09:56:56] Great if celery has an easy solution. [09:57:07] and I think we have to get rid of the chainable requests as we are not using them for the purpose they were designed for. celery groups will work i think [09:58:19] I'll leave this discussion for milimetric. I am sure he has opinions there too.
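The attribute flagging hashar describes in bug 69183 above is supported out of the box by nose's attrib plugin. A minimal sketch (the test body here is hypothetical, not wikimetrics' actual test):

    # mark expensive manual tests so tox/CI can skip them
    from nose.plugins.attrib import attr

    @attr('manual')
    def test_parallel_reports():
        # needs a running celery worker and a real database;
        # meant to be run by hand, not by the tox suite
        pass

With that attribute in place, the automated run can exclude the flagged tests via nosetests -a '!manual'.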
[10:12:43] (CR) Nuria: Fix report chain stopping (1 comment) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/150475 (https://bugzilla.wikimedia.org/68840) (owner: Milimetric) [10:39:22] (CR) Nuria: "Also, once we fix the exception handling we need to add throttling. There should be a maximum number of jobs we schedule." [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/150475 (https://bugzilla.wikimedia.org/68840) (owner: Milimetric) [10:54:40] Analytics / Quarry: Unicode in query results in strange behavior - https://bugzilla.wikimedia.org/69224 (Aaron Halfaker) NEW p:Unprio s:normal a:None Example query: use kkwiki_p; SELECT COUNT(*) FROM categorylinks WHERE cl_to = "Мәдениет"; Expected: +----------+ | count(*) | +----------+ |... [11:09:54] Analytics / Quarry: Broken mwoauth dependency - https://bugzilla.wikimedia.org/69227 (Sam Smith) NEW p:Unprio s:normal a:None Following a pip install -r requirements.txt, python -m quarry.web.app fails with the following: Traceback (most recent call last): File "/usr/local/Cellar/python/... [11:10:24] Analytics / Quarry: Broken mwoauth dependency - https://bugzilla.wikimedia.org/69227 (Sam Smith) a:Aaron Halfaker [11:12:18] (PS2) Nuria: Removing usage of celery chains from report scheduling [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/150475 (https://bugzilla.wikimedia.org/68840) (owner: Milimetric) [11:12:34] (CR) jenkins-bot: [V: -1] Removing usage of celery chains from report scheduling [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/150475 (https://bugzilla.wikimedia.org/68840) (owner: Milimetric) [11:24:34] qchris: have a sec to run a test? [11:24:47] Let me finish one other thing. 1 sec. [11:25:06] np, we can do it later [11:25:20] as in this EU evening [11:25:49] Now I am there. [11:25:53] What should I test, nuria? [11:26:19] can you get this changeset https://gerrit.wikimedia.org/r/#/c/150475/ and run the manual test [11:26:28] tests/manual/parallel_reports.py [11:26:53] Sure. Booting the wikimetrics machine. [11:26:58] Remember that you ran it before? [11:28:28] Oh. Those. Yes, I ran them before, but it never worked for me :-/ [11:30:03] let's see if we have fixed that [11:34:23] (PS1) Yuvipanda: Bump mwoauth version [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152262 [11:35:18] (CR) Yuvipanda: [C: 2] Bump mwoauth version [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152262 (owner: Yuvipanda) [11:35:25] (Merged) jenkins-bot: Bump mwoauth version [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152262 (owner: Yuvipanda) [11:36:26] nuria: With PS2 of change 150475 the tests pass. [11:36:37] ok, good, test completed [11:36:46] Without it, they fail. [11:37:00] I will talk about changes with dan [11:37:08] Sounds good. [11:37:30] and we need more tests i think, but overall i think we should not use chains; they were the source of many of our troubles [11:37:49] also i removed 'sleeps' which are always a red flag [11:40:33] (PS1) Yuvipanda: Switch to using port 5000 for dev server [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152264 [11:42:29] (CR) QChris: "PS2 makes tests/manual/parallel_reports.py pass for me." [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/150475 (https://bugzilla.wikimedia.org/68840) (owner: Milimetric) [12:08:20] qchris: is our tasking cancelled? i just realized i have an interview scheduled right in the middle of it [12:08:44] nuria: I thought so.
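A rough sketch of the chain-to-group change under review in 150475: celery's group primitive runs subtasks in parallel without the sequential coupling a chain imposes, so one failing report no longer stops the rest. The run_report task and report_ids list are hypothetical names, not wikimetrics' actual ones:

    from celery import group

    # schedule every report as an independent parallel subtask
    job = group(run_report.s(report_id) for report_id in report_ids)
    result = job.apply_async()

    # collect results without letting a single failure raise and
    # abort the whole collection
    outputs = result.get(propagate=False)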
[12:08:52] ok, good [12:09:01] Since milimetric is not around, I think all those meetings moved to friday. [12:09:36] k [12:18:22] Analytics / Quarry: Broken mwoauth dependency - https://bugzilla.wikimedia.org/69227#c1 (Aaron Halfaker) NEW>RESO/FIX Fixed in 0.2.2 [13:45:20] analytics1021: [13:45:22] 3/3 kafka.server.BrokerTopicMetrics.AllTopicsMessagesInPerSec.FifteenMinuteRate CRITICAL: 7.42708492353e-59 [13:54:36] gage? [14:01:03] mutante: andrew is out today -- is that alert repeating? [14:01:50] tnegrin: yes, it started a little over 1 day ago [14:02:05] hmm -- the graphs I look at all look normal [14:02:06] at wikimania but not sure how critical it is [14:03:06] SF comes online in a few hours -- can you sleep it for 2 hours? [14:03:12] I will have gage look at it [14:03:24] (I don't think it's critical) [14:04:14] yes, i can [14:04:18] ok, thanks [14:04:37] thanks [14:28:06] qchris: could i use dev to do some testing? [14:28:31] nuria: I do not use dev. So from my point of view: Yes. [14:28:46] k [14:28:47] But milimetric said that terrrydactyl should use it to test her things. [14:28:59] So I guess you should also check with her. [14:29:23] her code still needs work in vagrant [14:29:39] it is still not production ready [14:29:59] so i do not think she will need dev for probably a couple more days. [14:30:07] Analytics / Wikimetrics: tox runs all tests (including manual ones) - https://bugzilla.wikimedia.org/69183#c3 (Antoine "hashar" Musso) Great! Let me know if you encounter any issues with the tox configuration. Will be happy to brainstorm with you. Also adding relevant Jenkins jobs is quite easy to... [14:30:23] but ... just thinking ... i will wait until after wikimania cause i do not want to interfere with staging [14:30:28] and i need to load test [14:33:10] dev and staging are decoupled. But still. Sure. Doing things after wikimania sounds reasonable. [14:38:20] qchris: but i would need to test precisely the report creation so i will just hold, no worries [14:38:29] plenty of things for me to do [14:39:08] on vagrant things work pretty ok, i'd say [14:39:17] for that changeset [14:41:10] (PS1) Nuria: Testing repo is setup correctly [analytics/dashiki] - https://gerrit.wikimedia.org/r/152725 [15:02:54] Analytics / General/Unknown: Kafka broker analytics1021 having issues on 2014-08-06 ~1:44 - https://bugzilla.wikimedia.org/69244 (christian) NEW p:Unprio s:normal a:None Created attachment 16152 --> https://bugzilla.wikimedia.org/attachment.cgi?id=16152&action=edit analytics1021-AllTopics... [15:03:38] Analytics / General/Unknown: Kafka broker analytics1021 having issues on 2014-08-06 ~1:44 - https://bugzilla.wikimedia.org/69244#c1 (christian) Created attachment 16153 --> https://bugzilla.wikimedia.org/attachment.cgi?id=16153&action=edit Cluster-MessagesInPerSec-OneMinuteRate [15:04:08] Analytics / General/Unknown: Kafka broker analytics1021 having issues on 2014-08-06 ~1:44 - https://bugzilla.wikimedia.org/69244#c2 (christian) Created attachment 16154 --> https://bugzilla.wikimedia.org/attachment.cgi?id=16154&action=edit Cluster-RequestsPerSec-OneMinuteRate [15:08:22] Analytics / General/Unknown: Kafka broker analytics1021 not receiving messages on 2014-08-06 ~1:44 - https://bugzilla.wikimedia.org/69244#c3 (christian) (In reply to christian from comment #0) > It seems to have happened around 2014-08-07 01:44 Wrong day. That should be [...]
around 2014-08-06 01:44 [15:08:52] Analytics / General/Unknown: Kafka broker analytics1021 not receiving messages since 2014-08-06 ~1:44 - https://bugzilla.wikimedia.org/69244 (christian) [15:37:23] (PS1) Phuedx: WIP Migrate User model to SQLAlchemy [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152751 [15:37:28] (CR) jenkins-bot: [V: -1] WIP Migrate User model to SQLAlchemy [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152751 (owner: Phuedx) [16:29:38] hi gage [16:29:42] jgage: [16:32:08] Analytics / General/Unknown: Kafka broker analytics1021 not receiving messages since 2014-08-06 ~1:44 - https://bugzilla.wikimedia.org/69244#c4 (Toby Negrin) Gage -- can you please take a look at this? It looks like a broker has died. At the very least we should disable the alarms. thanks, -Toby [16:46:36] (PS1) Yuvipanda: Serve and consume JSON output data with proper content-type [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152784 [16:46:58] (CR) Yuvipanda: [C: 2] Serve and consume JSON output data with proper content-type [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152784 (owner: Yuvipanda) [16:47:23] (CR) Yuvipanda: [V: 2] Serve and consume JSON output data with proper content-type [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152784 (owner: Yuvipanda) [16:47:37] (CR) Yuvipanda: [C: 2] Switch to using port 5000 for dev server [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152264 (owner: Yuvipanda) [16:47:42] (Merged) jenkins-bot: Switch to using port 5000 for dev server [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152264 (owner: Yuvipanda) [17:10:53] qchris, have a sec? How hard is it to host the old limn graph on a wiki? Do we have setup instructions anywhere? [17:11:11] (PS2) Phuedx: WIP Migrate models to SQLAlchemy [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152751 [17:11:13] (CR) jenkins-bot: [V: -1] WIP Migrate models to SQLAlchemy [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152751 (owner: Phuedx) [17:11:40] yurikR1: I do not know of setup instructions. [17:11:46] I never set up limn myself. [17:11:52] Milimetric did that in the past. [17:12:20] milimetric, around? :) [17:12:23] Limn1 has a setup that you could steal. [17:13:11] yurikR1: are you implementing the W0 graphs? [17:13:12] milimetric, qchris, i think you two will be happy with this patch https://gerrit.wikimedia.org/r/#/c/152095/ [17:13:25] tnegrin, i am exploring that area :) [17:13:31] why? [17:14:12] yurikR1: Heya. Nice patch :) [17:14:26] (PS3) Phuedx: WIP Migrate models to SQLAlchemy [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152751 [17:14:31] (CR) jenkins-bot: [V: -1] WIP Migrate models to SQLAlchemy [analytics/quarry/web] - https://gerrit.wikimedia.org/r/152751 (owner: Phuedx) [17:14:32] mostly because we would like to be a bit more flexible with implementation before hadoop comes online, and make deployment more rapid [17:14:55] will you calculate the numbers? [17:15:19] i will scan zero logs on stat1002 with a script, and upload data directly into zerowiki [17:15:30] yes [17:15:37] and manage the configs? [17:15:44] ? [17:15:54] which configs? 
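For reference, a minimal sketch of the direction phuedx's "Migrate User model to SQLAlchemy" change above is heading: a declarative model in place of hand-rolled SQL. The column names here are assumptions for illustration, not quarry's actual schema:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'user'

        id = Column(Integer, primary_key=True)
        username = Column(String(255), nullable=False)
        wiki_uid = Column(Integer)  # hypothetical link to the wiki account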
[17:16:02] christian just wrote code to use your new API [17:16:39] tnegrin, yes, but unfortunately that code does not do proper filtering, as that requires substantial additional investment on qchris' part [17:16:53] and without filtering, we are forced to maintain complex logic in varnish [17:17:07] which makes everything very unscalable [17:17:15] I don't understand -- why did Dan ask us to do this then? [17:17:33] he did, but apparently there was a misunderstanding somewhere [17:18:23] for example, i only discovered from qchris a few days ago that there is already some filtering being done. And then I was told not to rely on it [17:18:25] I still don't understand -- that was a few days ago [17:18:32] correct [17:19:25] again -- I don't understand why you asked us to write software last week and now you do not want to use it [17:19:34] tnegrin, we are using it [17:20:10] it's just that it is not enough for our goals - I was under the impression that once analytics starts using the api, it will do the proper filtering of the data [17:20:36] and it turned out that partial filtering was already done, but it should not be relied on, and fixing it would be too big of an undertaking [17:21:03] so the feature was not adequately specified? [17:21:59] apparently so - both sides had a different understanding of the "use api" feature [17:22:27] hence i'm exploring alternative options until hadoop is in place [17:23:37] by hosting it ourselves, we will temporarily remove this burden from analytics, while providing data security and flexibility [17:24:40] all those issues were raised a long time ago, but your team seems to be under too much load, so I would rather try to lighten it and find a workaround [17:25:29] I'm sure you agree that it would have been better to spec the feature correctly before we started the work [17:26:56] tnegrin, i wrote the specs at https://www.mediawiki.org/wiki/Requests_for_comment/Unfragmented_ZERO_design#Analytics [17:27:14] the first portion specifies that without any change to analytics, we can easily move forward [17:27:35] the second, about the api, specifies all the conditions that determine if the traffic is zero or not [17:27:51] there is really no point to use the api unless analytics wants to filter [17:28:10] (nor is there a reason to pull data from meta config pages) [17:28:48] at some point many months ago, analytics implemented a partial solution - pulled config data from meta to filter [17:28:52] what part details the filtering -- this? [17:28:53] Lastly, please note that it is possible for the X-Analytics header to contain X-CS, while the API does not have a valid configuration for that time frame. If that's the case, analytics should ignore that X-CS and treat it as regular, non-zero traffic. [17:29:23] that too [17:30:49] ok -- so we need to get the config from the API and then implement additional checks based on that config? [17:31:04] i realized that analytics was doing partial filtering 3 days ago when talking to qchris - and suddenly it made sense why you were dependent on the meta config [17:31:33] but then qchris said i should not rely on that filtering [17:32:05] which makes using the API or pulling data from metawiki or zerowiki pointless - you can do stats simply by analyzing X-Analytics's zero= value [17:34:18] why didn't we just do that then? [17:34:26] you asking me? [17:34:27] :) [17:35:03] i was under the impression that that was exactly what you were doing [17:35:04] tnegrin: sorry. "do what"?
[17:35:21] analyze the X-Analytics zero= value [17:35:32] tnegrin: We're analyzing it. [17:35:35] you do. but you also try to do filtering :) [17:35:42] but you don't do it properly [17:35:47] hence it's useless [17:35:57] yurikR1: Right, we do partial filtering, because the legacy code does. [17:36:04] We did not implement/touch it. [17:36:05] exactly :) [17:36:29] We can rip it out, but this partial filtering helped quite a lot of times. [17:36:51] ? [17:37:00] We had to rerun for days where the zero= tags in the X-Analytics header were wrong. [17:37:02] you mean when we misconfigured varnish? [17:37:06] Yes. [17:37:22] I filed bugs against the vcls several times. [17:37:28] yep, which is exactly why we are trying to get rid of varnish in the first place and let analytics do the proper filtering :) [17:37:48] And it also helped a lot when wikipedia zero's different pages around configuration were not in sync. [17:38:21] yep, because we migrated to the secure zerowiki, and had to maintain two copies :) [17:38:37] sorry, back in 20 min, need to get to a meeting. I will read the log when back [17:39:03] Enjoy your meeting :-) [17:39:18] thanks yuri [18:08:23] hi everyone [18:08:35] i'm back - at the hackathon Philly venue [18:08:39] and nobody is here yet :) [18:08:57] so I'm available to talk / etc. [18:09:03] milimetric, !!! [18:09:04] :) [18:09:10] howdy yurikR [18:09:12] thanks for the patch [18:09:14] milimetric, https://gerrit.wikimedia.org/r/#/c/152095/ :) [18:09:19] +2 is welcome [18:09:25] ;) [18:09:25] ok, did you test it? [18:09:29] naturally ) [18:09:42] i couldn't find the geo lib [18:09:45] i'll just look at it very quickly because I don't want to hold you up - I'm not currently using it for anything [18:09:52] topojson? [18:09:52] cool [18:09:55] no [18:10:02] see comment [18:10:05] k [18:10:10] thx :) [18:10:23] i might push it to prod (zerowiki) soonish [18:12:42] yurikR: https://github.com/d3/d3-geo-projection [18:14:52] yurikR: it looks like that's part of the new d3 now though, lemme dig for a sec [18:15:07] (line 4200 of d3.js) [18:16:24] oh I see, this is just extra super fancy projections, no need for them right now yurikR, I was just waging a war against mercator at the hackathon :) [18:16:26] https://github.com/mbostock/d3/wiki/Geo-Projections [18:17:48] yurikR: merged [18:21:05] milimetric, awesome! thx :) [18:21:20] sec, i'm in a meeting, are you around in an hr? [18:21:26] yep [18:31:39] milimetric, back [18:31:57] cool, I envy your short meetings :) [18:32:01] hehe [18:32:05] i wish they were that short ;) [18:32:19] ok, so here's the ultimate question: [18:32:29] how do we keep the niceties of the current limn? [18:32:43] i.e. mouse tracking/point highlighting, [18:32:47] zoom in [18:32:48] etc [18:33:09] do you know where the old limn code is? I would love to integrate it into this extension too ) [18:33:28] i can do ... [18:34:16] yurikR: this extension is just using vega, it's only similar in name [18:34:21] i understand that [18:34:30] but i don't see any reason not to have multiple engines there [18:34:31] the mouse tracking / zooming is quite hard to write [18:34:59] well, david had made a patch way back to let old limn run inside mediawiki [18:35:16] do you know where it is?
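The "analyze the X-Analytics zero= value" approach discussed earlier boils down to header parsing: X-Analytics is a semicolon-separated list of key=value pairs, and a request counts as Wikipedia Zero traffic when a zero= tag is present (subject to the validity caveats yurikR raises). A sketch, with the sample header values invented:

    def parse_x_analytics(header):
        # X-Analytics looks like 'zero=410-01;https=1'
        fields = {}
        for part in header.split(';'):
            key, sep, value = part.partition('=')
            if sep:
                fields[key.strip()] = value.strip()
        return fields

    tags = parse_x_analytics('zero=410-01;https=1')
    if 'zero' in tags:
        carrier = tags['zero']  # e.g. '410-01'; count as zero traffic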
[18:35:27] looking now, I think on his github [18:35:35] this is the style of graphs that i want (PMed) [18:36:41] yurikR: https://github.com/dsc/limn-mediawiki-ext [18:38:26] and yurikR, I think this is the branch he had his commits in: https://github.com/dsc/limn/commits/develop [18:39:26] right, so to get that functionality, we have two options: add it to vega (which I intend to have happen, one way or another) or finish porting old limn [18:40:01] either way's fine by me, but I've got to focus on a few other things before I'll get to help [18:40:11] milimetric, i was kinda hoping to have a stop-gap measure of hosting old style graphs on zerowiki until vega is ready [18:40:52] right, yurikR, easier said than done I think [18:41:20] yurikR: maybe it's easier to just use rickshaw or something like that... [18:41:25] any way to just include all the .js files and generate some bootstrapping html to point to a dashboard page? [18:42:49] yurikR: something would have to host the JSON / YAML metadata [18:43:11] milimetric, we could host it as separate pages in the same wiki [18:43:17] and/or something would have to convince the limn Resource base class to fetch from somewhere else [18:43:30] especially with the jsonconfig ext which will ensure that the json is valid [18:44:03] yeah, I think I'll get shot if I try to help with this but you're welcome to use david's old attempt to guide you through that [18:44:30] e.g. dedicate a namespace in zerowiki: config:dashboards:blah and pull it via action=raw or the api [18:44:49] it really might be easier to just throw up rickshaw and the features it has out-of-the-box: http://code.shutterstock.com/rickshaw/examples/ [18:45:21] right yurikR, that would work, but you'd have to convince a few fairly rigid pieces of limn to fetch from there [18:45:43] AndyRussG|away: btw, did you see my review? [18:46:00] Hi milimetric :) [18:46:04] Yes I did, thanks a lot [18:46:14] bleh. will see what i can do. would be nice to migrate away from gp labs to a real wiki ) [18:46:35] Sorry I haven't had a chance to re-submit stuff yet 8p [18:46:44] Sooooooooooooon.... :) [18:46:47] no prob AndyRussG|away, just wanted to make sure you saw it [18:47:00] and to make sure you knew you were high up in my miles long priority list :) [18:47:01] Yep indeed, thanks 4 checking [18:47:13] Ah cool nice to hear :) [18:47:34] yurikR: I fully completely agree and hate that I can't help too much :( [18:47:53] milimetric: Fixing and continuing that is also high up on my list! Though a wee bit higher is popping off to the pharmacy before it rains :) [18:47:56] np, even links are a good start ) [18:48:09] yurikR: feel free to ping me, and maybe we should set up a visualization hackathon soon [18:48:19] ttyl, thanks again :) [18:48:23] it seems like a ton of us are bumping up against problems that are fairly easy to solve [18:48:36] we should :) [18:49:24] Analytics / Wikimetrics: Epic: AnalyticsEng has robust code to backfill metrics data - https://bugzilla.wikimedia.org/69252 (Kevin Leduc) NEW p:Unprio s:enhanc a:None Part tech debt, part new features: make backfilling faster and more reliable.
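yurikR's "dedicate a namespace in zerowiki ... and pull it via action=raw or the api" idea is straightforward with the standard MediaWiki API. A sketch, with the wiki URL and page title hypothetical and the requests library assumed available:

    import json
    import requests

    def fetch_dashboard_config(api_url, title):
        # fetch the latest revision text of the wiki page holding the config
        resp = requests.get(api_url, params={
            'action': 'query',
            'prop': 'revisions',
            'rvprop': 'content',
            'titles': title,
            'format': 'json',
        })
        pages = resp.json()['query']['pages']
        text = next(iter(pages.values()))['revisions'][0]['*']
        return json.loads(text)

    config = fetch_dashboard_config('https://zero.wikimedia.org/w/api.php',
                                    'Config:Dashboards:Example')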
[18:49:27] brb, restarting [18:50:23] Analytics / Wikimetrics: Epic: AnalyticsEng has robust code to backfill metrics data - https://bugzilla.wikimedia.org/69252 (Kevin Leduc) p:Unprio>Highes [18:52:02] hi, looking at kafka broker now [19:02:25] we recently upgraded it to kafka 0.8.1.1 and discovered a stale init.d script which we replaced; we'd hoped that would eliminate the problems we've had with this host. however this looks like the same timeout problem. the other 3 brokers are ok. [19:12:37] Analytics / Wikimetrics: Story: AnalyticsEng uses TimeSeries support to backfill data - https://bugzilla.wikimedia.org/69253 (Kevin Leduc) p:Unprio>Highes [19:14:10] Analytics / Wikimetrics: Story: AnalyticsEng has editor_day table in labsdb - https://bugzilla.wikimedia.org/69254 (Kevin Leduc) NEW p:Unprio s:enhanc a:None create a permanent and vetted version the "editor_day" table from the analytics-store box in labsdb [19:14:37] Analytics / Wikimetrics: Story: AnalyticsEng has editor_day table in labsdb - https://bugzilla.wikimedia.org/69254 (Kevin Leduc) [19:14:53] Analytics / Wikimetrics: Story: AnalyticsEng has editor_day table in labsdb - https://bugzilla.wikimedia.org/69254 (Kevin Leduc) p:Unprio>Highes [19:18:14] Analytics / Wikimetrics: Wikimetrics can't run a lot of recurrent reports at the same time - https://bugzilla.wikimedia.org/68840 (Kevin Leduc) [19:18:14] Analytics / Wikimetrics: Epic: AnalyticsEng has robust code to backfill metrics data - https://bugzilla.wikimedia.org/69252 (Kevin Leduc) [19:18:14] Analytics / Wikimetrics: Need to create a permanent and vetted version the "editor_day" table - https://bugzilla.wikimedia.org/69145 (Kevin Leduc) [19:18:14] Analytics / Wikimetrics: session management - https://bugzilla.wikimedia.org/68833 (Kevin Leduc) [19:18:15] Analytics / Wikimetrics: Story: AnalyticsEng has editor_day table in labsdb - https://bugzilla.wikimedia.org/69254 (Kevin Leduc) [19:18:16] Analytics / Wikimetrics: replication lag may affect recurrent reports - https://bugzilla.wikimedia.org/68507 (Kevin Leduc) [19:18:17] Analytics / Wikimetrics: Story: AnalyticsEng uses TimeSeries support to backfill data - https://bugzilla.wikimedia.org/69253 (Kevin Leduc) [19:20:52] Analytics / Wikimetrics: replication lag may affect recurrent reports - https://bugzilla.wikimedia.org/68507 (Kevin Leduc) p:Unprio>Highes [19:26:22] Analytics / Wikimetrics: Story: AnalyticsEng has editor_day table in labsdb - https://bugzilla.wikimedia.org/69254#c1 (Kevin Leduc) collaborative tasking done on etherpad: http://etherpad.wikimedia.org/p/analytics-69254 [19:30:22] Analytics / Wikimetrics: Need to create a permanent and vetted version the "editor_day" table - https://bugzilla.wikimedia.org/69145#c1 (Kevin Leduc) Collaborative tasking done in etherpad: http://etherpad.wikimedia.org/p/analytics-69145 [19:31:07] Analytics / Wikimetrics: Story: AnalyticsEng uses TimeSeries support to backfill data - https://bugzilla.wikimedia.org/69253#c1 (Kevin Leduc) Collaborative tasking done in etherpad: http://etherpad.wikimedia.org/p/analytics-69253 [19:34:23] Analytics / Wikimetrics: Wikimetrics can't run a lot of recurrent reports at the same time - https://bugzilla.wikimedia.org/68840#c4 (Kevin Leduc) Collaborative tasking on etherpad: http://etherpad.wikimedia.org/p/analytics-68840 [19:37:07] Analytics / Wikimetrics: session management - https://bugzilla.wikimedia.org/68833#c5 (Kevin Leduc) p:Unprio>Highes Collaborative tasking on etherpad: http://etherpad.wikimedia.org/p/analytics-68833 [19:40:22] 
Analytics / Wikimetrics: replication lag may affect recurrent reports - https://bugzilla.wikimedia.org/68507#c1 (Kevin Leduc) Collaborative tasking on etherpad: http://etherpad.wikimedia.org/p/analytics-68507 [19:45:07] Analytics / Wikimetrics: Need to create a permanent and vetted version the "editor_day" table - https://bugzilla.wikimedia.org/69145 (Kevin Leduc) [19:45:22] Analytics / Wikimetrics: Wikimetrics can't run a lot of recurrent reports at the same time - https://bugzilla.wikimedia.org/68840 (Kevin Leduc) [19:49:37] Analytics / General/Unknown: Kafka broker analytics1021 not receiving messages since 2014-08-06 ~1:44 - https://bugzilla.wikimedia.org/69244#c5 (Jeff Gage) NEW>RESO/WOR This is the same broker we've had timeout issues with in the past. We were hopeful that the upgrade to Kafka 0.8.1.1 might resol... [19:49:53] Analytics / Wikimetrics: Need to create a permanent and vetted version the "editor_day" table - https://bugzilla.wikimedia.org/69145 (Kevin Leduc) p:Unprio>Highes [19:49:53] Analytics / Wikimetrics: Wikimetrics can't run a lot of recurrent reports at the same time - https://bugzilla.wikimedia.org/68840 (Kevin Leduc) p:Unprio>Highes [19:50:06] analytics1021 has been upgraded, rebooted, and is back in service. [19:51:08] Analytics / Wikimetrics: session management - https://bugzilla.wikimedia.org/68833#c6 (nuria) Actually, not worker signals but rather task-signals: http://celery.readthedocs.org/en/latest/userguide/signals.html#task-signals I will have some tests ready before tasking tomorrow [20:06:37] Analytics / Quarry: List all queries - https://bugzilla.wikimedia.org/69189#c1 (Southparkfan) UNCO>NEW Marking as confirmed. [20:43:22] Analytics / Quarry: Provide the ability to view the queries submitted by a particular user - https://bugzilla.wikimedia.org/69174#c1 (Southparkfan) I like that idea. When that is implemented, is it possible to make a "My Queries" link/button to view your queries (the place to put that button can be at... [21:13:10] Analytics / Quarry: Show the execution time in the table of queries - https://bugzilla.wikimedia.org/69264 (Helder) NEW p:Unprio s:normal a:None Currently the table has the columns Title, Author, Status and Timestamp, but I think an extra column showing the number of seconds/minutes each qu... [21:13:54] Analytics / Quarry: Make the table sortable - https://bugzilla.wikimedia.org/69265 (Helder) NEW p:Unprio s:normal a:None Allow sorting the rows by Title, Author, Status and Timestamp (aka class="sortable"). [21:20:12] Analytics / Quarry: Add a list/table of popular queries - https://bugzilla.wikimedia.org/69266 (Helder) NEW p:Unprio s:normal a:None ...e.g. the top N queries by number of times they were executed (wasn't there something like this on toolserver's DBQ?). 
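A sketch of the task-signal approach from nuria's comment on bug 68833 above: tie the SQLAlchemy session lifecycle to individual tasks so connections are not left hanging between them. The scoped session registry here is an assumption about wikimetrics' setup, not its actual code:

    from celery.signals import task_postrun
    from sqlalchemy.orm import scoped_session, sessionmaker

    Session = scoped_session(sessionmaker())

    @task_postrun.connect
    def close_session(*args, **kwargs):
        # remove the thread-local session after every task so each
        # task starts clean and no connection is held open idle
        Session.remove()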
[21:51:39] (CR) Milimetric: [C: -1] "As mentioned inline, chains fulfill the important requirement we have of being able to configure the number of parallel connections we ope" (1 comment) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/150475 (https://bugzilla.wikimedia.org/68840) (owner: Milimetric) [22:58:23] Analytics / Wikimetrics: Story: AnalyticsEng has static file with list of projects and metrics - https://bugzilla.wikimedia.org/68822#c1 (Kevin Leduc) Collaborative tasking on etherpad: http://etherpad.wikimedia.org/p/analytics-68822 [23:13:22] Analytics / Visualization: Story: AnalyticsEng has website for EEVS - https://bugzilla.wikimedia.org/68351#c2 (Kevin Leduc) collaborative tasking on etherpad: http://etherpad.wikimedia.org/p/analytics-68351 [23:15:08] Analytics / Visualization: Story: EEVSUser loads static site in accordance to Pau's design - https://bugzilla.wikimedia.org/67806#c2 (Kevin Leduc) collaborative tasking on etherpad: http://etherpad.wikimedia.org/p/analytics-67806
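Milimetric's -1 above alludes to the one thing chains do buy: bounding how many reports run (and open database connections) at once. A sketch of that throttling pattern, with run_report again a hypothetical task name: split the reports into a fixed number of buckets, run each bucket sequentially as a chain, and run the buckets in parallel as a group.

    from celery import chain, group

    def schedule_throttled(report_ids, max_parallel=4):
        buckets = [report_ids[i::max_parallel] for i in range(max_parallel)]
        # each bucket runs sequentially; buckets run in parallel, so at
        # most max_parallel reports (and connections) are active at once
        return group(
            chain(*[run_report.si(rid) for rid in bucket])
            for bucket in buckets if bucket
        ).apply_async()

The .si() immutable signatures keep one report's result from being piped into the next, since the reports are independent; the chain is used purely for sequencing.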