[05:05:52] Analytics / Tech community metrics: Wrong data at "Update time for pending reviews waiting for reviewer in days" - https://bugzilla.wikimedia.org/68436#c3 (Alvaro) (In reply to Alvaro from comment #2) > Quim, the problem is related with the update of messages in this page. I > think you worked in an HT... [09:17:56] Analytics / General/Unknown: Packetloss was critical on 2014-07-29 ~2:00 for oxygen, analytics1003, erbium - https://bugzilla.wikimedia.org/68796 (christian) NEW p:Unprio s:normal a:None On 2014-07-29 ~02:00, there were packet loss alarms for oxygen, analytics1003, erbium in the #wikimedia-... [09:18:41] Analytics / General/Unknown: Packetloss was critical on 2014-07-29 ~2:00 for oxygen, analytics1003, erbium - https://bugzilla.wikimedia.org/68796 (christian) a:christian [11:01:24] Analytics / General/Unknown: Packetloss issues on oxygen (and analytics1003) - https://bugzilla.wikimedia.org/67694#c8 (christian) TL;DR: * analytics1003 alarms were harmless. * oxygen alarms point to real packet-loss, which affected all udp2log multicast consumers during two ~4 hour periods.... [13:26:51] Analytics / Tech community metrics: Wrong data at "Update time for pending reviews waiting for reviewer in days" - https://bugzilla.wikimedia.org/68436#c4 (Quim Gil) Ok, I'm sorry. I shouldn't send patches going beyond edits in text strings without testing them in an environment. Do you know why Parso... [13:36:34] Hey ottomata. [13:36:41] Could you take a look at https://gerrit.wikimedia.org/r/#/c/150045/. [13:36:49] It's blocking my work on stat1003. [13:38:07] Analytics / General/Unknown: ULSFO post-move verification - https://bugzilla.wikimedia.org/68199#c14 (christian) NEW>RESO/FIX (In reply to Mark Bergsma from comment #13) > [ Situation around amssq47 ] Thanks for confirming. ------------------ Per host per hour packet loss numbers look good. (T... [13:42:27] halfak: merged :) [13:42:35] Thanks dude! 
:) [13:58:42] hi, can someone share the standup hangout url? [13:59:17] jgage: http://goo.gl/1pm5JI [13:59:21] thanks [13:59:39] yw [14:27:49] hey qchris_meeting / kevinator: good news is that after the backfilling, RAE now runs for 150 days in 1 second as opposed to 30 seconds per 1 day :) [14:28:39] backfilling / materializing of the data we need, that is [14:29:08] Whoaa! Great! [14:37:38] hey qchris_meeting, milimetric are you in the batcave? [14:37:51] Yes. [14:38:21] exciting -- we're in another hangout [14:38:22] My invite has no other hangout url ... [14:38:25] we'll come to you [14:38:36] there are a couple of invites which is confusing [14:38:40] ok. [14:39:14] tnegrin: i [14:39:20] 'm in a room with everyone [14:39:28] ok! duplicates explained! [14:39:29] qchris_meeting: https://plus.google.com/hangouts/_/wikimedia.org/analytics [14:39:31] OH [14:39:35] wow that was fast ottomata [14:39:39] yes -- please go there [14:39:39] we have staff now tho [14:39:41] https://plus.google.com/hangouts/_/wikimedia.org/analytics [14:39:42] right right [14:40:11] Ok. Coming. [14:48:36] (PS1) Yuvipanda: Add proper caching for User model [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150220 [14:48:38] (PS1) Yuvipanda: Add caching for query, query_rev and query_run [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150221 [15:03:49] (PS2) Yuvipanda: Add caching for query, query_rev and query_run [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150221 [15:09:25] (CR) Legoktm: Add proper caching for User model (1 comment) [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150220 (owner: Yuvipanda) [15:12:59] (CR) Yuvipanda: Add proper caching for User model (1 comment) [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150220 (owner: Yuvipanda) [15:14:27] (CR) Legoktm: [C: 2] "As long as this isn't using shared redis, this should be fine." 
[analytics/quarry/web] - https://gerrit.wikimedia.org/r/150221 (owner: Yuvipanda) [15:15:40] (CR) Legoktm: [C: 2] Add proper caching for User model (1 comment) [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150220 (owner: Yuvipanda) [15:15:45] (Merged) jenkins-bot: Add proper caching for User model [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150220 (owner: Yuvipanda) [15:15:52] (Merged) jenkins-bot: Add caching for query, query_rev and query_run [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150221 (owner: Yuvipanda) [15:41:23] (PS1) Milimetric: Update for August meeting [analytics/reportcard/data] - https://gerrit.wikimedia.org/r/150231 [15:43:09] (CR) Milimetric: [C: 2 V: 2] Update for August meeting [analytics/reportcard/data] - https://gerrit.wikimedia.org/r/150231 (owner: Milimetric) [17:09:38] Analytics / Wikimetrics: Story: User has website for dashboards - https://bugzilla.wikimedia.org/67124#c1 (Kevin Leduc) NEW>RESO/DUP Closing bug as it is a duplicate of 68351 *** This bug has been marked as a duplicate of bug 68351 *** [17:09:38] Analytics / Visualization: Story: AnalyticsEng has website for EEVS - https://bugzilla.wikimedia.org/68351#c1 (Kevin Leduc) *** Bug 67124 has been marked as a duplicate of this bug. 
*** [17:09:51] Analytics / Wikimetrics: Story: User has website for dashboards - https://bugzilla.wikimedia.org/67124 (Kevin Leduc) [17:18:07] Analytics / Wikimetrics: EEVSUser downloads report with correct Http Cache Headers - https://bugzilla.wikimedia.org/68445 (Kevin Leduc) p:Unprio>Highes s:normal>enhanc [17:18:07] Analytics / Visualization: EEVS Release Candidate - https://bugzilla.wikimedia.org/68350 (Kevin Leduc) [17:18:21] Analytics / Wikimetrics: Story: EEVSUser downloads report with correct Http Cache Headers - https://bugzilla.wikimedia.org/68445 (Kevin Leduc) [17:24:21] Analytics / Visualization: Story: EEVSUser adds/removes a metric/project - https://bugzilla.wikimedia.org/68142 (Kevin Leduc) [17:25:40] HMMM qchris_away [17:25:41] yt? [17:25:46] apparently not! [17:25:47] :) [17:58:45] ottomata: There I am. Stuffed with french fries :-D [17:58:49] hello! [17:58:50] What's up? [17:58:55] trap? [17:59:03] Sure. Booting google machine. [17:59:29] * YuviPanda pokes milimetric. Do poke back when you've a few minutes :) [17:59:37] qchris: french fries sound delicious right now.. [17:59:41] been a little nuts YuviPanda, sorry [17:59:48] milimetric: yeah, I could see :) [17:59:53] milimetric: hence me poking and awaiting a callback :) no hurry [17:59:59] in a few minutes my meeting's over, we can talk then? [18:00:12] milimetric: sure [18:03:50] ok YuviPanda, free [18:03:58] milimetric: cool. hangout or IRC? [18:04:02] IRC's fine [18:04:10] unless your hands need rest [18:04:17] milimetric: nah, they're fine :) [18:04:23] k [18:04:30] so you were asking more detail about the arch? [18:04:31] milimetric: so, wikimetrics' architecture. do you have a web frontend machine and multiple celery runners? [18:04:39] milimetric: yeah, pretty much [18:04:44] there are a few diagrams that haven't been updated for a while [18:04:49] but should still be relevant [18:04:50] one sec [18:04:58] milimetric: ah, cool. 
sure [18:05:07] https://github.com/wikimedia/analytics-wikimetrics/tree/master/design/diagrams [18:05:16] https://github.com/wikimedia/analytics-wikimetrics/blob/master/design/diagrams/mysql-celery-flask.png [18:05:23] that one's something like a system diagram [18:05:36] but basically, no [18:05:41] celery and web both run on the same machine [18:05:49] and they both talk to the same databse [18:05:51] *database [18:05:52] is there only one machine? [18:05:56] yep [18:06:15] there's no reason it has to be so, but we never hit scaling limits (until maybe yesterday) [18:06:25] right [18:06:44] like, in other words, through just config we should be able to move any of the main roles to a separate box (db, web, or queue) [18:06:48] milimetric: and puppetized? uses deb packages only? [18:06:58] it's all puppetized, but not debianized [18:07:07] it uses pip install where needed [18:07:08] right. so pip [18:07:27] and just redis backing for celery [18:07:47] yes, though that's fairly plug-in too, we don't interact with Redis at all [18:07:48] milimetric: does it use sqlalchemy? are you caching things for the frontend as well? [18:08:03] hmm, right [18:08:14] it uses sqlalchemy, yes [18:08:18] * YuviPanda didn't use sqlalchemy, so just finished writing his redis cache layer. [18:08:23] no caching yet, but we're set up *to* cache [18:08:34] milimetric: ah, right. but just hasn't been necessary? [18:08:57] yes, so the way a "report" runs is interesting [18:09:37] there's a tree of nodes, with a leaf for each metric/project combination that the user added to a report [18:09:53] (a cohort is a combination of users from potentially multiple projects) [18:10:11] so, these "ReportNode"s have data at different levels of granularity [18:10:15] aha! right.
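The tree-of-nodes report structure milimetric describes here can be sketched minimally as follows; the class and method names are illustrative stand-ins, not the actual wikimetrics API, and the leaf fakes a per-user value instead of running SQL:

```python
class MetricLeaf:
    """Leaf node: stands in for one metric run against one project."""
    def __init__(self, project, user_ids):
        self.project = project
        self.user_ids = user_ids

    def run(self):
        # The real leaf would open a db session and run SQL; here we fake
        # a per-user value so the tree mechanics stay visible.
        return {uid: len(self.project) for uid in self.user_ids}


class MultiProjectReport:
    """Parent node: one child per project; merges child results upward."""
    def __init__(self, cohort):
        # cohort: mapping of project -> list of user ids in that project
        self.children = [MetricLeaf(p, uids) for p, uids in cohort.items()]

    def run(self):
        # Results bubble up: run every child, then merge in finish().
        return self.finish([child.run() for child in self.children])

    def finish(self, child_results):
        merged = {}
        for result in child_results:
            merged.update(result)
        return merged


report = MultiProjectReport({'enwiki': [1, 2], 'itwiki': [3]})
print(report.run())  # merged per-user results across both projects
```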
[18:10:17] basically, we can cache at any point up the tree [18:10:33] and right now we're having problems come up like - hey this one metric can use this other metric's output [18:10:42] and so we're going to start looking into actually doing that caching [18:10:55] (it used to be implemented, that's why the tree is there, but we removed it for simplicity) [18:11:15] one of the nodes in the tree, for example, is "AggregateNode" [18:11:38] this just aggregates the results from multiple projects, for example if the only output needed is Sum [18:12:18] are these 'nodes' in mysql? [18:12:46] no, they're nodes like in Python, here's the one that splits up the cohort into multiple projects (one sec): [18:13:11] https://github.com/wikimedia/analytics-wikimetrics/blob/master/wikimetrics/models/report_nodes/multi_project_metric_report.py [18:13:32] notice line 35 creates that node's children [18:13:41] and line 39 merges their results back [18:14:14] oh, right. so these are eventually at some point 'evaluated' in some form to actually run SQL [18:14:26] initially, we had each of these nodes run in a separate celery task organized using celery's "chord" and "group" concepts [18:15:02] but that was overly complicated and ori helped me realize we didn't need the extra async-ness [18:15:23] the only thing that runs SQL is the metric itself [18:15:34] these nodes set up the metrics and pass them a session open to the database they need to query [18:16:10] right [18:16:16] so a report request comes in, is parsed, creates a tree of report nodes, the leaves of that tree are metrics, those run, results bubble up the tree [18:17:24] milimetric: right. [18:17:29] milimetric: that's quite elegant! [18:18:03] meh, it was a bit overkill at first but now it's starting to look like a good idea [18:18:39] right [18:19:16] milimetric: quarry's is much simpler, since it deals with raw SQL / results. 
SQL queries go into a db, celery gets a task with an id, gets the SQL, executes it, puts output in a well-known-location [18:19:29] milimetric: and then the js just polls the results by making a $.get every 5s [18:19:42] since it will just return a 404 until there are some results (failed, killed, error, success) [18:20:09] I'll probably have two boxes from the start, to make sure things scale when I want them to [18:21:04] wikimetrics polls celery too [18:21:13] oh? [18:21:14] for? [18:21:16] i always thought that was kinda blah - maybe look into socket.io, it's pretty easy [18:21:20] for the results, same as you [18:21:23] ah, right [18:21:27] I just poll the filesystem [18:21:31] oh i see [18:21:36] you can poll celery [18:21:43] milimetric: so nginx will just have an alias setup, and it'll just hit the filesystem. [18:21:56] if you save the task id, you can instantiate an AsyncResult from it and use it to get status info [18:22:02] milimetric: simpler for now :D I also don't really use celery states for anything, since query status is stored in the database [18:22:13] ok, makes sense [18:22:21] milimetric: but yeah, definitely socket.io in the future. [18:22:31] that's what i said last year too :) [18:22:35] milimetric: :D hehe [18:22:45] it's so easy too, it's just a million other easy things are at the door [18:22:51] milimetric: yeah, true. [18:23:04] milimetric: I've to implement 'fork' now, and then in the future a 'star' system as well maybe [18:23:39] milimetric: anything else that sounds terrible in my plan? [18:24:00] well, are any of those sites out there like jsfiddle and sqlfiddle etc. open sourcing their work? 
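The Quarry flow YuviPanda outlines above (the worker drops results at a well-known location, the client polls and treats a missing file like the JS's 404) can be sketched as follows; the paths and function names are illustrative, and the real app serves the result file through nginx rather than reading it directly:

```python
import json, os, tempfile

def result_path(run_dir, run_id):
    # Well-known location keyed by the query-run id.
    return os.path.join(run_dir, '%d.json' % run_id)

def worker_finish(run_dir, run_id, rows):
    # The worker writes output once the query completes.
    with open(result_path(run_dir, run_id), 'w') as f:
        json.dump({'status': 'success', 'rows': rows}, f)

def poll_once(run_dir, run_id):
    # Client-side poll: None plays the role of the HTTP 404
    # the JS sees until results exist.
    path = result_path(run_dir, run_id)
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)

run_dir = tempfile.mkdtemp()
assert poll_once(run_dir, 7) is None            # still running -> "404"
worker_finish(run_dir, 7, [[1, 'Main_Page']])
print(poll_once(run_dir, 7)['status'])          # prints: success
```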
[18:24:19] because they solved some of these problems already too [18:24:30] http://sqlfiddle.com/ [18:24:57] milimetric: true, but sqlfiddle is unusable, since it is for its own database structure, etc [18:24:58] https://github.com/jakefeasel/sqlfiddle [18:25:11] milimetric: there's data.stackexchange.com, which I found out much later, which seems to almost have the exact same UI [18:25:19] right, maybe the simplest thing to do would be to write a plugin that can let it hit external dbs? [18:25:45] milimetric: hmm, doubt it, at this point at least. http://quarry.wmflabs.org/ is fairly fully functional now. [18:26:20] milimetric: I know I seem mildly NIHish, but I *did* check out sqlfiddle. I'd have used data.stackexchange.com, but it wasn't open source, and it also was very .NET/SQL Servery [18:26:44] right! [18:26:45] :) [18:26:46] my world [18:27:17] milimetric: :D 'my world' as in .NET / SQL Server? [18:27:36] * YuviPanda still maintains C# is such a nice, nice language [18:28:36] YuviPanda: i'm hanging out with matt flaschen talking about how to advertise quarry at our local Philly research hackathon [18:28:44] yeah, .net is my world [18:28:48] milimetric: w00t! [18:28:56] i self-aborted from it when i came here [18:29:24] milimetric: I used to be a 'Microsoft Student Partner' when I was young :) [18:29:31] milimetric: and I started doing Python from IronPython [18:29:33] the DLR is nice [18:35:05] (PS1) QChris: Give camus more time to write datasets before oozie picks them up [analytics/refinery] - https://gerrit.wikimedia.org/r/150287 [18:37:12] (CR) Ottomata: [C: 2 V: 2] Give camus more time to write datasets before oozie picks them up [analytics/refinery] - https://gerrit.wikimedia.org/r/150287 (owner: QChris) [18:41:56] ok qchris, that fixed all the missing data from that hour! 
[18:42:28] but there is still reported duplicate data in this hour [18:42:31] Wohoo \(^_^)/ [18:42:36] i don't have an explanation for that yet [18:42:41] so, lemme look into it... [18:42:44] duplicate mmmm. [18:43:00] I'll reboot the hangout machine ... just in case. [18:43:16] duplicate is only from esams [18:53:06] Thinking about it again ... [18:53:17] I said that I heard that some caches were restarted. [18:53:27] Maybe they were all esams? [18:53:39] IIRC there was some talk about datacenter boundaries. [18:53:48] * qchris checks IRC logs again [19:40:15] qchris, that seems to be unlikely with what i've found so far, if the cache was restarted, we wouldn't see duplicates [19:40:24] we'd see weirdness in sequence numbers changing [19:40:32] pretty sure they reset back to 0 on restart [19:40:37] Agreed. [19:40:47] I meanwhile checked the irc logs. [19:40:56] And there was an additional thing: [19:40:57] i've been looking at some of these duplicate seqs, and they are all increasing [19:41:07] http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-operations/20140729.txt [19:41:08] in a sane range [19:41:19] Starting at [01:36:07] [19:41:32] esams machines were not pingable [19:41:45] I am currently checking whether that time aligns with the dupes [19:42:17] Btw ... Some of the sequence numbers occur three times, not only twice. [19:42:45] yeh, noticed that too [19:43:45] ah interesting, there are some delivery errors for the time period i'm looking at [19:43:46] Jul 29 01:42:40 cp3018 varnishkafka[27703]: KAFKADR: Kafka message delivery error: Local: Message timed out [19:43:47] Jul 29 01:42:44 varnishkafka[27703]: last message repeated 99 times [19:43:47] Jul 29 01:42:57 cp3018 varnishkafka[27703]: KAFKADR: Suppressed 22469 (out of 22569) Kafka message delivery failures [19:44:04] (going slow, am half paying attention to HHVM talk :p ) [19:44:29] that corresponds with link failure [19:44:30] hmm [19:44:34] Link flapping happened 6 minutes before that.
[19:44:44] And it also aligns with the timestamp on the dupes. [19:45:03] ok qchris, i think that is it. [19:45:07] So I think those three things fit together [19:45:09] Yes. [19:45:09] yeah [19:45:12] i don't see any other duplicates [19:45:19] i just happened to pick an hour where there were duplicates again :) [19:45:28] Lucky ottomata [19:45:30] :-D [19:46:05] ok, i'm going to re-run all of the killed oozie jobs now, then I will kill this bundle, and launch a new one with the newly deployed configs and a new (current) start_date [19:46:26] Sounds good. [19:53:25] (PS1) Yuvipanda: Switch to CC0 for the SQL [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150306 [19:53:41] (CR) Yuvipanda: [C: 2] Switch to CC0 for the SQL [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150306 (owner: Yuvipanda) [19:53:43] (Merged) jenkins-bot: Switch to CC0 for the SQL [analytics/quarry/web] - https://gerrit.wikimedia.org/r/150306 (owner: Yuvipanda) [20:03:25] Analytics / General/Unknown: Duplicates in Hadoop cluster's webrequest data around 2014-07-29 ~01:40 - https://bugzilla.wikimedia.org/68819 (christian) NEW p:Unprio s:normal a:None Duplicate monitoring of camus imported data flagged a few datasets as duplicates. The datasets were all esams... [20:04:23] Analytics / General/Unknown: Duplicates in Hadoop cluster's webrequest data around 2014-07-29 ~01:40 - https://bugzilla.wikimedia.org/68819#c1 (christian) NEW>RESO/WON It turned out, that there was some link flapping around esams [1], which matches the timestamp for the dupes. This timestamp also...
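The duplicate-sequence check discussed above can be approximated like this; the (hostname, sequence) pairs are illustrative, not the actual webrequest schema or the real camus duplicate-monitoring code:

```python
from collections import Counter

def find_duplicate_seqs(records):
    """records: iterable of (hostname, sequence) pairs.

    Returns, per host, the sequence numbers seen more than once and their
    counts -- some sequence numbers during the esams link flap showed up
    three times, not only twice.
    """
    per_host = {}
    for host, seq in records:
        per_host.setdefault(host, Counter())[seq] += 1
    return {
        host: {seq: n for seq, n in counts.items() if n > 1}
        for host, counts in per_host.items()
        if any(n > 1 for n in counts.values())
    }

records = [
    ('cp3018', 100), ('cp3018', 101), ('cp3018', 101),  # duplicated
    ('cp3018', 102), ('cp3018', 102), ('cp3018', 102),  # triplicated
    ('cp1001', 500), ('cp1001', 501),                   # clean host
]
print(find_duplicate_seqs(records))  # prints: {'cp3018': {101: 2, 102: 3}}
```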
[20:16:48] milimetric: haha, I was looking at wikimetrics source to see how the pip install is handled, apparently not handled :D [20:18:27] not handled is the best handled :D [20:25:52] Analytics / Visualization: Story: EEVSUser loads static site in accordance to Pau's design - https://bugzilla.wikimedia.org/67806 (Kevin Leduc) [20:26:30] terrrydactyl: heh :) [20:27:09] Analytics / Visualization: Story: EEVSUser loads static site in accordance to Pau's design - https://bugzilla.wikimedia.org/67806#c1 (Kevin Leduc) Overview: It's the “Chrome” (i.e. not working, just component scaffolding) * browse/autocomplete * side bar * time browsing * metric bar Important assumpti... [20:27:09] Analytics / Visualization: EEVS Release Candidate - https://bugzilla.wikimedia.org/68350 (Kevin Leduc) [20:30:38] Analytics / Visualization: Story: EEVSUser loads dashboard with a default view - https://bugzilla.wikimedia.org/68140#c1 (Kevin Leduc) Team Discussion: Assumptions: storage in “Story: AnalyticsEng has website for EEVS (34 points)” is done, get metadata in JSON Decisions: Do not put the file on the mai... [20:35:37] Analytics / Visualization: Story: EEVSUser adds/removes a metric/project - https://bugzilla.wikimedia.org/68142#c1 (Kevin Leduc) Team discussion during points: Only UI. No new metric definitions or such. Autocomplete is expected to work after this. Filling the browsing component with live data is part of that.
[20:40:23] Analytics / Wikimetrics: Story: EEVSUser adds 'Edits' metric - https://bugzilla.wikimedia.org/68352 (Kevin Leduc) p:Unprio>High [20:44:36] Analytics / Wikimetrics: Story: WikimetricsUser runs "Number of pages created" report - https://bugzilla.wikimedia.org/68353 (Kevin Leduc) p:Unprio>High [20:45:23] Analytics / Wikimetrics: Story: EEVSUser adds 'Pages created' metric - https://bugzilla.wikimedia.org/68353 (Kevin Leduc) [20:47:11] grrr [20:47:17] qchris: [20:47:23] response_size":2206332088 [20:47:26] Caused by: org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Numeric value (2206332088) out of range of int [20:47:35] :-D [20:47:51] It's >32bit. [20:47:53] yup [20:47:56] It's >31bit. [20:48:01] (signed) [20:48:07] Sooooo ... bigint? [20:48:11] what kind of response size is that though!? [20:48:12] yeah [20:48:21] /wikipedia/commons/6/6c/Wikimania_2011_-_Workshops_I.ogv [20:48:26] big ol' video i guess [20:48:32] Oh. Ja. [20:48:40] So the response size seems valid. [20:48:42] welp, bigint it is I guess [20:48:46] ok, um [20:48:49] hm [20:48:50] ok [20:48:51] Analytics / Visualization: Story: EEVSUser selects time range - https://bugzilla.wikimedia.org/68470#c1 (Kevin Leduc) p:Unprio>Normal Team discussion on points: Filtering may happen at client Can be a naive filtering. Performance is secondary. If the performance is bad, we can add another card to... [20:48:52] how about I [20:48:55] stop this bundle [20:48:57] drop the table [20:49:00] create with new schema [20:49:05] and restart a new bundle [20:49:08] Yup. Sounds good.
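The arithmetic behind the bigint fix, using the exact response_size from the failing record, is what qchris means by "It's >31bit (signed)":

```python
# The failing record's response_size against Hive's int range.
INT32_MAX = 2**31 - 1        # 2147483647: ceiling of Hive's signed 32-bit int
response_size = 2206332088   # bytes, the ~2.2 GB Wikimania ogv download

assert response_size > INT32_MAX       # overflows int, hence the SerDe error
assert response_size <= 2**63 - 1      # fits easily in a bigint
print(response_size - INT32_MAX)       # prints: 58848441 (bytes past the limit)
```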
[20:49:10] ok [20:50:42] (PS1) Ottomata: Make response_size a bigint in the webrequest table create schema [analytics/refinery] - https://gerrit.wikimedia.org/r/150388 [20:50:45] qchris: ^ [20:50:54] You beat me :-D [20:50:57] haha [20:50:57] :) [20:51:16] (CR) QChris: [C: 2 V: 2] Make response_size a bigint in the webrequest table create schema [analytics/refinery] - https://gerrit.wikimedia.org/r/150388 (owner: Ottomata) [20:58:55] ok cool, done, new oozie bundle with updated configs running on newly created tables [20:59:04] will check back up on that tomorrow [20:59:12] Yay. [21:12:53] Analytics / Visualization: EEVS Release Candidate - https://bugzilla.wikimedia.org/68350 (Kevin Leduc) [21:12:53] Analytics / Wikimetrics: Story: AnalyticsEng has static file with list of projects and metrics - https://bugzilla.wikimedia.org/68822 (Kevin Leduc) p:Unprio>Normal [21:23:23] qchris, are you still around? [21:23:27] yup. [21:23:33] What's up? [21:23:43] milimetric: awesome about the philly local hackathon :) I'll be helping with any labs / toollabs related stuff at the one in London, so do feel free to poke me for any such help before / after the hackathon [21:24:01] i noticed that you added pool_size in this patch https://gerrit.wikimedia.org/r/#/c/149383/2/database_migrations/env.py [21:24:23] terrrydactyl: Right. [21:24:26] so when i run my local instance, it says it can't find the key for it [21:24:32] is there somewhere i have to set it? [21:24:38] i've already git pulled on master [21:24:44] It's not yet merged ... [21:24:47] Ah ok. [21:25:16] Mhmm ... pool_size is used in other places too. [21:25:24] Which version of sqlalchemy do you use? [21:26:38] In [2]: sqlalchemy.__version__ [21:26:38] Out[2]: '0.8.1' [21:26:53] although this is my local instance, would it be different in vagrant? [21:27:11] It works in Vagrant for me. 
But that does not mean anything :-D [21:27:20] >>> sqlalchemy.__version__ [21:27:20] '0.9.7' [21:27:24] in vagrant [21:27:27] and i run the code in vagrant [21:27:32] Could you please copy/paste the full error message somewhere? [21:27:34] so i guess that's the one that matters. :) [21:27:51] * qchris boots his vagrant machine [21:27:58] it's an error in browser: http://paste.ofcode.org/gVT7sEcGdhEiD6bya6urDC [21:28:27] i also git pulled on vagrant as well [21:29:58] Oh. [21:29:59] Ok. [21:30:26] It's not a pool_size issue, but a config issue. [21:30:40] You need to set WIKIMETRICS_POOL_SIZE in the config [21:30:56] okay [21:31:09] what's a good pool size to put? not sure what the units even look like [21:31:12] https://gerrit.wikimedia.org/r/#/c/149123/1/wikimetrics/config/db_config.yaml [21:31:23] neat! [21:31:24] thanks qchris [21:31:35] yw. Hope this helps. [21:32:37] looks like it was already there which probably means i have to run vagrant provision. [21:32:44] * terrrydactyl had issues like this before [21:32:48] always be provisioning :) [21:32:51] look, i'm learning. :) [21:33:02] i do that all the time [21:33:19] i mean forget to provision :) [21:33:29] hence, I just use a local setup instead of vagrant [21:33:43] haha, you did fight using vagrant :) [21:58:31] kevinator: https://metrics-staging.wmflabs.org/static/public/dash/ [21:58:35] check out itwiki [21:58:45] it's manually populated back to 2012 [21:59:28] so basically this works fine, the method for manually moving the data I've been computing on stat1003 is simple [21:59:38] now I just have to do it for all the wikis [21:59:44] i'll be doing that tomorrow for newly registered [22:02:03] (CR) Legoktm: "zOMG self-merge!" 
[analytics/quarry/web] - https://gerrit.wikimedia.org/r/150306 (owner: Yuvipanda) [22:09:40] (PS1) Yurik: Updated settings storage [analytics/zero-sms] - https://gerrit.wikimedia.org/r/150409 [22:10:00] (CR) Yurik: [C: 2] Updated settings storage [analytics/zero-sms] - https://gerrit.wikimedia.org/r/150409 (owner: Yurik) [22:25:55] Analytics / General/Unknown: Packetloss was critical on 2014-07-29 ~2:00 for oxygen, analytics1003, erbium - https://bugzilla.wikimedia.org/68796#c1 (christian) NEW>RESO/WON The issue was a flapping esams link [1], which (depending on the stream) killed half up to all esams traffic (eqiad and uls... [22:59:24] Analytics / Wikimetrics: Bring WIKIMETRICS_POOL_SIZE to vagrant's wikimetrics setup - https://bugzilla.wikimedia.org/68825 (christian) NEW p:Unprio s:normal a:None While the needed setup [1] for configurable connection pool settings [2] has been puppetized, and been brought into operations/... [22:59:37] Analytics / Wikimetrics: Bring WIKIMETRICS_POOL_SIZE to vagrant's wikimetrics setup - https://bugzilla.wikimedia.org/68825 (christian)
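The KeyError terrrydactyl hit comes from reading WIKIMETRICS_POOL_SIZE out of an unprovisioned config; a minimal defensive sketch (the function name and fallback value are illustrative, not wikimetrics' actual code or defaults):

```python
# Sketch of the config lookup that fails: the engine setup reads
# WIKIMETRICS_POOL_SIZE from the db config, and a config that predates the
# patch (i.e. before re-running `vagrant provision`) lacks the key, hence
# the KeyError in the browser. A .get() fallback makes the setting optional.
def engine_kwargs(db_config):
    return {
        # Illustrative fallback of 20 connections when the key is absent.
        'pool_size': db_config.get('WIKIMETRICS_POOL_SIZE', 20),
    }

print(engine_kwargs({}))                            # fallback applies
print(engine_kwargs({'WIKIMETRICS_POOL_SIZE': 5}))  # explicit value wins
```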