[06:32:26] (PS6) Nemo bis: Add some more author aliases [analytics/wikistats] - https://gerrit.wikimedia.org/r/92069 [07:00:42] Curious http://www.webpagetest.org/result/141021_6X_B1Q/1/details/ [09:20:09] Analytics / Refinery: No new Pagecounts-all-sites files since 2014-10-10 17:00 - https://bugzilla.wikimedia.org/71994 (christian) PATC>RESO/FIX [09:27:24] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [09:27:24] Analytics / Refinery: Raw webrequest partitions for 2014-10-20T02/1H not marked successful - https://bugzilla.wikimedia.org/72295 (christian) NEW p:Unprio s:normal a:None The bits partition [1] for 2014-10-20T02/1H has not been marked successful. What happened? [1] ______________________... [09:38:54] Analytics / Refinery: Raw webrequest partitions for 2014-10-20T02/1H not marked successful - https://bugzilla.wikimedia.org/72295#c1 (christian) NEW>RESO/FIX The affected period is 10:37:16--10:37:20, which nicely matches the manual kafka leader re-election from 10:38. Mismatching data is minima... [09:45:24] Analytics / Refinery: Raw webrequest partitions for 2014-10-20T10/1H not marked successful - https://bugzilla.wikimedia.org/72295 (christian) [09:48:55] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [09:48:56] Analytics / Refinery: Raw webrequest partitions for 2014-10-20T13/1H not marked successful - https://bugzilla.wikimedia.org/72296 (christian) NEW p:Unprio s:normal a:None None of the webrequest partitions [1] for 2014-10-20T13/1H have been marked successful. What happened? [1] _____... [10:09:08] Analytics / Refinery: Raw webrequest partitions for 2014-10-20T13/1H not marked successful - https://bugzilla.wikimedia.org/72296#c1 (christian) NEW>RESO/WON The affected period is 13:07:11--2014-10-20T13:25:38. It affected only ulsfo caches, but all ulsfo caches. 
The affected period shows round... [10:42:10] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to network issues - https://bugzilla.wikimedia.org/72298 (christian) NEW p:Unprio s:normal a:None In this bug, we track issues around raw webrequest partitions (not) being marked successful due to network is... [10:42:24] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:42:24] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to network issues - https://bugzilla.wikimedia.org/72298 (christian) [10:43:40] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:43:40] Analytics / Refinery: Raw webrequest partitions for 2014-10-08T23:xx:xx not marked successful - https://bugzilla.wikimedia.org/71876 (christian) [10:43:41] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to network issues - https://bugzilla.wikimedia.org/72298 (christian) [10:44:25] Analytics / Refinery: Raw webrequest partitions for 2014-10-20T13/1H not marked successful - https://bugzilla.wikimedia.org/72296 (christian) [10:44:25] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to network issues - https://bugzilla.wikimedia.org/72298 (christian) [10:44:25] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:47:39] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to deployments gone wrong - https://bugzilla.wikimedia.org/72299 (christian) NEW p:Unprio s:normal a:None In this bug, we track issues around raw webrequest partitions (not) being marked successful due to ne... 
[10:47:55] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:49:23] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:49:24] Analytics / Refinery: Raw webrequest partitions from 2014-09-23T18:xx:xx onwards not marked successful - https://bugzilla.wikimedia.org/71213 (christian) [10:49:38] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to deployments gone wrong - https://bugzilla.wikimedia.org/72299 (christian) [10:51:55] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to configuration updates - https://bugzilla.wikimedia.org/72300 (christian) NEW p:Unprio s:normal a:None In this bug, we track issues around raw webrequest partitions (not) being marked successful due to con... [10:52:24] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to configuration updates - https://bugzilla.wikimedia.org/72300 (christian) [10:52:24] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:52:25] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to deployments gone wrong - https://bugzilla.wikimedia.org/72299 (christian) [10:52:53] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:52:54] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to configuration updates - https://bugzilla.wikimedia.org/72300 (christian) [10:54:24] Analytics / Refinery: Raw webrequest partitions for 2014-08-29T20:xx:xx not marked successful - https://bugzilla.wikimedia.org/71463 (christian) [10:54:38] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to configuration 
updates - https://bugzilla.wikimedia.org/72300 (christian) [10:54:38] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:54:53] Analytics / Refinery: Raw webrequest partition for 'upload' for 2014-10-10T15:xx:xx not marked successful - https://bugzilla.wikimedia.org/71948 (christian) [10:54:54] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:55:08] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to deployments gone wrong - https://bugzilla.wikimedia.org/72299 (christian) [10:57:09] Analytics / Refinery: Raw webrequest partitions for 2014-08-25T1[67]:xx:xx not marked successful - https://bugzilla.wikimedia.org/70089 (christian) NEW>RESO/FIX [10:57:24] Analytics / Refinery: Raw webrequest partitions that were not marked successful due to configuration updates - https://bugzilla.wikimedia.org/72300 (christian) [10:57:24] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:58:40] Analytics / Refinery: Kafka partition leader elections causing a drop of a few log lines - https://bugzilla.wikimedia.org/70087 (christian) [10:58:40] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [10:58:40] Analytics / Refinery: Raw webrequest partitions for 2014-10-20T10/1H not marked successful - https://bugzilla.wikimedia.org/72295 (christian) [11:02:40] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [11:02:41] Analytics / Refinery: Raw webrequest partitions that were not marked successful but are too old to debug - https://bugzilla.wikimedia.org/72301 (christian) NEW p:Unprio s:normal a:None In this bug, we track issues around raw 
webrequest partitions (not) being marked successful, but the issue... [11:03:24] Analytics / Refinery: Single raw webrequest partition for 2014-09-30T06:xx:xx not marked successful - https://bugzilla.wikimedia.org/70331 (christian) [11:03:24] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [11:03:25] Analytics / Refinery: Raw webrequest partitions that were not marked successful but are too old to debug - https://bugzilla.wikimedia.org/72301 (christian) [11:03:54] Analytics / Refinery: Single raw webrequest partitions for 2014-08-26T16:xx:xx not marked successful - https://bugzilla.wikimedia.org/70090 (christian) [11:03:55] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [11:03:55] Analytics / Refinery: Raw webrequest partitions that were not marked successful but are too old to debug - https://bugzilla.wikimedia.org/72301 (christian) [11:05:09] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [11:05:09] Analytics / Refinery: Raw webrequest partitions for 2014-08-30T03:xx:xx not marked successful - https://bugzilla.wikimedia.org/70330#c1 (christian) NEW>RESO/WON Closing, as the issue is too far back to properly debug it. [11:05:23] Analytics / Refinery: Raw webrequest partitions that were not marked successful but are too old to debug - https://bugzilla.wikimedia.org/72301 (christian) [11:05:55] Analytics / Refinery: Raw webrequest partitions for 2014-08-25T11:xx:xx not marked successful - https://bugzilla.wikimedia.org/70088#c1 (christian) NEW>RESO/WON Closing, as the issue is too far back to properly debug it. 
[11:05:55] Analytics / Refinery: Raw webrequest partitions that were not marked successful but are too old to debug - https://bugzilla.wikimedia.org/72301 (christian) [11:05:56] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [11:06:57] Analytics / Refinery: Raw webrequest partitions that were not marked successful - https://bugzilla.wikimedia.org/70085 (christian) [11:06:57] Analytics / Refinery: Raw webrequest partitions that were not marked successful but are too old to debug - https://bugzilla.wikimedia.org/72301 (christian) [11:06:57] Analytics / General/Unknown: Raw webrequest partitions for 2014-08-23T20:xx:xx not marked successful - https://bugzilla.wikimedia.org/69971#c2 (christian) NEW>RESO/WON Closing, as the issue is too far back to properly debug it. [11:07:09] Analytics / Refinery: Single raw webrequest partition for 2014-09-30T06:xx:xx not marked successful - https://bugzilla.wikimedia.org/70331#c1 (christian) NEW>RESO/WON Closing, as the issue is too far back to properly debug it. [11:07:39] Analytics / Refinery: Single raw webrequest partitions for 2014-08-26T16:xx:xx not marked successful - https://bugzilla.wikimedia.org/70090#c1 (christian) NEW>RESO/WON Closing, as the issue is too far back to properly debug it. [11:28:56] !log Marked webrequest partitions for 2014-10-20T13/1H good ({{bug|72296}}) [11:32:54] Analytics / Refinery: Raw webrequest partitions for 2014-10-20T10/1H not marked successful - https://bugzilla.wikimedia.org/72295 (christian) [11:33:09] Analytics / Refinery: Raw webrequest partitions for 2014-10-20T02:xx:xx not marked successful - https://bugzilla.wikimedia.org/72252 (christian) [13:14:10] Analytics / General/Unknown: "ulsfo <-> eqiad" network issue on 2014-10-20 affecting udp2log streams - https://bugzilla.wikimedia.org/72306 (christian) NEW p:Unprio s:normal a:None Ops reported [1] a network issue between ulsfo and eqiad (2014-10-20 ~13:07). 
We did not see alerts on the ud... [13:15:09] Analytics / Refinery: Raw webrequest partitions for 2014-10-20T13/1H not marked successful - https://bugzilla.wikimedia.org/72296 (christian) [13:15:10] Analytics / General/Unknown: "ulsfo <-> eqiad" network issue on 2014-10-20 affecting udp2log streams - https://bugzilla.wikimedia.org/72306#c1 (christian) (In reply to christian from comment #0) > However, we saw alerts on the tighter monitoring the kafka pipeline. For the kafka pipeline, the bug is 7... [14:26:04] hi wm-bot2 [14:26:16] wm-bot2: where do your logs go? [14:28:44] !log set vm.dirty_writeback_centisecs = 200 (was 500) on analytics1021 (see also: https://bugzilla.wikimedia.org/show_bug.cgi?id=69667) [14:28:55] Analytics / General/Unknown: Kafka broker analytics1021 not receiving messages every now and then - https://bugzilla.wikimedia.org/69667#c22 (Andrew Otto) Set vm.dirty_writeback_centisecs = 200 (was 500) [15:09:46] ottomata: Swatch time sounds awesome [15:09:52] it'd be easier to say too - 823 [15:24:27] kevinator: do we have a release planning meeting today? [15:24:57] milimetric: mforns is all set up now, puppet working and all [15:25:06] puppet repo that is [15:25:47] :] [15:26:09] cool [15:27:03] milimetric , what is next on the list then? you got the bots (that i think just need some testing on staging) almost done and the others are either done or wip [15:29:14] nuria_: what's wip? 
I'll review anything outstanding now [15:31:09] (CR) Milimetric: Adding index on wiki_user (1 comment) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/167134 (https://bugzilla.wikimedia.org/71255) (owner: Nuria) [15:31:35] milimetric: the csv is done, CR waiting [15:31:55] yep, looking now [15:32:04] milimetric: also i think bots changeset needs to be deployed and tried out in staging to see differences with the data we have [15:32:06] nuria_: for some reason I never saw that in my dashboard, still don't [15:32:19] agreed nuria_ but we have to wait for labs to get fixed first [15:34:22] milimetric: right, right, csv can be done w labs being [15:34:25] down [15:36:10] milimetric: i am going to document the purging setup springle has started on EL database on wikitech [15:38:40] k [15:38:55] milimetric: unless there is something i need to get done on the sprint but it seems that no, there isn't [15:39:22] kevinator: what's next to grab if we are blocked on current sprint items? [15:39:34] I already grabbed the "next" thing which was the discussion with Sean about DW [15:42:16] nuria_: until kevinator gets back to us, i guess we're free to do anything [15:42:37] i would suggest starting to fix the batch insert bug though [15:42:55] that's a blocker that might need calendar time for review / deploy [15:47:06] milimetric: i talked to springle about that today a bit. it needs research before anything and a testing plan cause remember EL doesn't have a testing env whatsoever. [15:47:39] I think while we get started on that one we should do the scraping of logs that are older than 90 days. [15:48:02] sounds like a good reason to get started right away [15:48:19] What are we scraping the logs for? [15:51:27] milimetric: the item we have on the backlog is to "remove logs that are older than 90 days" [15:52:16] oh! 
:) scraping means to look at [15:52:20] you mean delete [15:52:21] I will start doing research on the batching too, but i thought we were doing the purging 1st [15:52:23] or scrapping [15:52:39] milimetric: right, let's say purging [16:03:21] ottomata: question if you may [16:05:16] milimetric, yt? [16:22:11] hi nuria_, just having lunch [16:23:56] milimetric, will start on the batching research, given that there is no testing env for EL need to figure out how to test [16:48:54] nuria__: we can isolate just that logic that we're changing in a stand-alone test [16:49:28] milimetric: no, not really as EL does not have any tests against a db [16:49:30] normally that would be involved but in this case we're talking about a tiny amount of logic [16:49:42] i'm not talking about doing anything automated [16:49:51] just literally copy paste the try/except insert/create logic [16:49:54] and change it to buffer [16:50:05] it wouldn't be a complete test but it would give us an idea of the impact [16:50:36] milimetric: but sql alchemy doesn't work that way right? 
there is no such thing as a buffer, [16:50:46] milimetric: there are transactions [16:51:23] right, we'd write our own buffer [16:51:28] so basically we'd make a function like: [16:51:36] def isolated_test(data): [16:51:40] try: [16:51:47] insert data [16:51:50] except: [16:52:02] create table and insert data [16:52:25] then we would call isolated_test from something that can hit it with like 20 - 30 requests per second [16:52:51] then we would implement some sort of buffer based on the characteristics of data and what table each data item belongs to [16:52:59] and we'd call that the same way, see the differences [16:53:15] like, super manual, but it wouldn't require us to stand up a whole cluster of services [16:53:50] * milimetric feels a bit of pride at the way wikimetrics was written with tests from the start [16:53:51] :) [16:53:55] jaja [16:54:18] milimetric: the testing i was planning to do against my local box yes [16:54:26] milimetric: as i see no other alternative [16:54:45] yeah, local box would be fine [16:55:15] milimetric: but i do not see the buffering as an 'easy' fit on the code. 
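The isolated_test idea and the per-schema buffer sketched above could look roughly like the following stand-alone sketch. This is hypothetical illustration only: EventLogging talks to MySQL through SQLAlchemy, but stdlib sqlite3 is used here so the sketch is self-contained, and the table layout, the BufferedInserter name, and the flush threshold are all invented.

```python
import sqlite3
from collections import defaultdict


class BufferedInserter:
    """Hypothetical sketch of the per-schema insert buffer discussed above."""

    def __init__(self, conn, flush_at=3):
        self.conn = conn
        self.flush_at = flush_at
        # one pending-row buffer per schema table
        self.buffers = defaultdict(list)

    def insert(self, table, row):
        self.buffers[table].append(row)
        if len(self.buffers[table]) >= self.flush_at:
            self.flush(table)

    def flush(self, table):
        rows = self.buffers.pop(table, [])
        if not rows:
            return
        sql = "INSERT INTO %s (id, payload) VALUES (?, ?)" % table
        try:
            # batch the whole buffer in one executemany round trip
            self.conn.executemany(sql, rows)
        except sqlite3.OperationalError:
            # the try/except insert/create pattern from the discussion:
            # the table does not exist yet, so create it and retry
            self.conn.execute(
                "CREATE TABLE %s (id INTEGER, payload TEXT)" % table)
            self.conn.executemany(sql, rows)
        self.conn.commit()

    def flush_all(self):
        # low-frequency schemas may never reach flush_at, so a caller
        # (e.g. a periodic timer) has to force a flush now and then
        for table in list(self.buffers):
            self.flush(table)
```

A load generator could then call insert() 20-30 times per second against both this and a plain one-row-at-a-time version and compare timings, which is the super-manual comparison described above; flush_all() is where the low-frequency-schema problem superm401 pointed out would have to be handled.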
[16:55:16] the main thing is, what kind of memory will the buffer consume [16:55:24] and how does it change the performance [16:55:31] definitely not [16:55:38] the buffer change will be somewhat tricky [16:55:51] we'll have to keep track of the schemas we're trying to insert to [16:55:57] milimetric: will do some research but other than transactions sqlalchemy only knows how to do discrete inserts [16:56:00] and if a batch has 5 schemas, handle the exception properly [16:56:11] oh - one sec, i looked this up [16:56:53] nuria__: sqlalchemy could just run the raw sql, so that's not a blocker [16:56:59] but there might be something elegant [16:57:11] (PS7) Nuria: Add index on wiki_user [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/167134 (https://bugzilla.wikimedia.org/71255) [16:59:01] milimetric: right, i think that is what is going to end up happening unless we are able to use transactions for grouping (which seems hard cause it will be 1 transaction per schema) [16:59:15] nuria__: I wonder if the __table__.insert operation from sqlalchemy core achieves what sean's looking for [16:59:21] we use it in wikimetrics and it's crazy fast [16:59:47] milimetric: you know, in the docs seems like it *might* [17:00:06] nuria__: the problem that superm401 pointed out with grouping is that some schemas may be really low frequency and so they'll very rarely get flushed [17:00:07] milimetric: look at "unit of work" here [17:00:08] http://www.sqlalchemy.org/features.html [17:00:31] milimetric: that is why sean called it "opportunistic" [17:00:41] the batching needs to "look ahead" [17:01:00] as in "did i just insert a record for this schema, if so batch -up to a point-" [17:01:09] yeah, unit of work is just marketing speak though :) Ultimately the db might still be inserting each row one by one [17:01:26] in sqlserver there's a way to do an actual bulk insert that outperforms a batch of individual inserts by orders of magnitude [17:01:41] in mysql you can do the same 
thing with LOAD from a file [17:01:53] milimetric: no, but look at " organizes pending insert/update/delete operations into queues and flushes them all in one batch" [17:02:12] but i'm not sure if __table__.insert does that, because the fact that it wraps it in one transaction and flushes a bunch of things at the same time doesn't tell me much [17:02:36] right - that's one level of optimization, the next is actual physical bulk insert at the dbms level [17:02:52] milimetric: I am just going to have to set up a bunch of tests and see what is happening with orm/plain sql [17:02:58] definitely, yep [17:03:28] but i'd suggest talking to sean and getting his idea of an optimal bulk insert before getting too far [17:03:36] ok, will try to do that today with a dummy example [17:03:51] actually i will just dump a schema table and use that [17:06:02] DarTar: hi, you wanted to share some graph and notes with me yesterday. ;) [17:06:11] milimetric nuria__ mforns: when would you guys be available for an impromptu estimation meeting on the goals? [17:06:16] hey bmansurov [17:06:29] bmansurov: yes, on it now :) [17:06:33] milimetric: tasking you mean? [17:06:35] DarTar: cool [17:06:39] kevinator: any time [17:06:48] kevinator: any time that's free on my calendar [17:08:38] kevinator: same here, anytime is free, this is the tasking we talked about before right? [17:08:56] hmm…. you guys are free in 50 minutes. It’s really short notice, so I’m checking here first [17:09:16] yes, it’s what we talked about earlier today [17:10:10] 50 min. 
works [17:11:45] ok [17:14:16] (CR) Bmansurov: Add a timerange validator (1 comment) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/166157 (https://bugzilla.wikimedia.org/70714) (owner: Bmansurov) [17:21:53] Analytics / General/Unknown: "ulsfo <-> eqiad" network issue on 2014-10-20 affecting udp2log streams - https://bugzilla.wikimedia.org/72306#c2 (christian) NEW>RESO/WON The udp2log pipeline seems affected between 2014-10-20T13:06--2014-10-20T13:27. Per hour per host packetloss ranges between 6-47... [17:40:16] (CR) Milimetric: [C: -1] Improving retrieval of user names on cvs report (6 comments) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/167356 (https://bugzilla.wikimedia.org/71255) (owner: Nuria) [17:40:57] (CR) Milimetric: Improving retrieval of user names on cvs report (1 comment) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/167356 (https://bugzilla.wikimedia.org/71255) (owner: Nuria) [17:41:28] (CR) Milimetric: [C: 2 V: 2] Add index on wiki_user [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/167134 (https://bugzilla.wikimedia.org/71255) (owner: Nuria) [17:43:52] kevinator: did you mean to schedule that meeting for today? [17:43:59] it's set to next Monday... [17:44:25] WTF google calendar! [17:45:43] yes, I meant for today… in 14 minutes [17:53:26] (CR) Nuria: Improving retrieval of user names on cvs report (3 comments) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/167356 (https://bugzilla.wikimedia.org/71255) (owner: Nuria) [17:55:50] (PS7) Nuria: Improving retrieval of user names on cvs report [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/167356 (https://bugzilla.wikimedia.org/71255) [18:01:19] (PS1) QChris: [WIP] Add schema for edit fact table [analytics/data-warehouse] - https://gerrit.wikimedia.org/r/167839 [18:03:38] milimetric: give me a min, room isn't free 
[18:13:30] I saw https://rt.wikimedia.org/Ticket/Display.html?id=7105 get resolved, but I'm still not in the right group and I can't read the .my.cnf file. [18:13:52] Should I push back or is there some ticket consolidation going on? [18:17:16] (hey sorry, talking to yuri, will be with ya) [18:17:28] no worries :) [18:25:08] ottomata: do you know ..... [18:26:08] or qchris: [18:26:42] ottomata, qchris: do you know where is the logpuller that pulls logs from vanadium to 1002 for event logging? [18:27:27] Let me look it up in puppet ... it should be a rsync [18:28:48] misc::statistics::rsync::jobs::eventlogging in manifests/misc/statistics.pp [18:30:04] https://git.wikimedia.org/blob/operations%2Fpuppet.git/a23f815a14530f4ed8ae354084e52cc6db262d37/manifests%2Fmisc%2Fstatistics.pp#L598 [18:30:06] nuria__: ^ [18:31:41] milimetric, can you send me the username/pwd for the prototype PV cube? [18:31:49] and, would said cube be disrupted if I dropped the respective staging table? [18:31:53] qchris: to set up purging of logs older than 90 days in 1002 [18:32:51] qchris: is it a matter of modifying the rsync command? [18:33:43] I don't think that vanadium holds 90 days of data. [18:34:06] So if you set the rsync to delete files that are not on vanadium, you'd end up with less data than 90 days. [18:34:30] If you can live with that, modifying the rsync should be good enough to drop those log files. [18:34:49] If you cannot live with that you'd probably want a cronjob that gets rid of old files. [18:37:39] qchris: but to test all that you need ops-permits right? [18:38:02] qchris: how do you test such a cron? [18:41:04] Not sure if there is much testing around crons. [18:41:31] You can mock up things ... but for such changes ... just triple check them, and fix up if things break. [19:09:55] Ironholds: nope, i copied all the data, the cube is running in labs [19:10:24] cool! [19:48:14] Ironholds: multiply by 1000? 
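The cronjob variant qchris suggests (modifying the rsync would drop data early, since vanadium does not hold 90 days of logs) could be sketched as below. Only the 90-day retention comes from the backlog item; the function name and the flat-directory layout are assumptions for illustration.

```python
import os
import time


def purge_old_logs(log_dir, retention_days=90, now=None):
    """Delete plain files in log_dir whose mtime is older than the cutoff.

    Returns the deleted paths so a cron wrapper could !log them.
    """
    now = time.time() if now is None else now
    cutoff = now - retention_days * 24 * 60 * 60
    deleted = []
    for name in sorted(os.listdir(log_dir)):
        path = os.path.join(log_dir, name)
        # only touch regular files; leave any subdirectories alone
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            deleted.append(path)
    return deleted
```

A find one-liner over a hypothetical archive path would do the same job: find /srv/log/eventlogging -type f -mtime +90 -delete. As noted above, there is little real testing available for such crons beyond triple-checking before the first run.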
[19:53:00] milimetric: just figured out how EL talked to graphite [19:53:23] or rather ori told me -- graphite subscribes to the message bus [19:53:29] tnegrin, the numbers in the emails? [19:53:48] Ironholds: sorry -- momentarily numerically challenged [19:53:49] nm [19:53:50] gotcha [19:54:50] tnegrin: was that related to something we were discussing, or just cool? [19:55:00] um [19:55:01] ok [19:55:07] but it is cool right? [19:55:34] just thinking that this could replace the limn dashboards for many use cases? [19:56:56] tnegrin: i don't think so because that's just operational data, like basic counts and stuff [19:57:26] the limn dashboards are serving up arbitrary aggregations of the data [19:58:35] for which we'd have to use SQL or engage in some serious above our pay grade hackery [20:01:36] yeah -- you're right [20:02:03] bummer [20:02:56] looks like graphite can do very basic sums and averages of different streams [20:38:40] halfak: have you seen this? [20:38:41] http://www.mediawiki.org/wiki/Extension:Graph/Demo#Contributions_by_diff_-_Barack_Obama [21:08:46] (PS8) Nuria: Improves retrieval of user names on cvs report [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/167356 (https://bugzilla.wikimedia.org/71255) [21:19:59] (CR) Milimetric: "I like, just a couple small things" (2 comments) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/167356 (https://bugzilla.wikimedia.org/71255) (owner: Nuria) [21:39:23] milimetric: so there are 105 unique schemas in EL and 216 tables in EL? [23:04:28] kevinator: 217 now :) [23:04:48] k, nite [23:05:01] nite! [23:27:19] (PS9) Nuria: Improves retrieval of user names on csv report [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/167356 (https://bugzilla.wikimedia.org/71255)