[00:12:39] bearloga: sorry, again something is up with my irc. In short the answer to your question is yes. do take a look at oozie docs if you have not worked with oozie before: https://wikitech.wikimedia.org/wiki/Analytics/Cluster/Oozie [00:13:44] bearloga: if you are adding new data it is likely that you also need to change the date ranges the workflow is executed for. See 'stop time' here: https://gerrit.wikimedia.org/r/#/c/327845/16/oozie/maps/druid/coordinator.properties [00:15:14] bearloga: the gist of it is that changes to files have to be moved to the cluster to be executed as a workflow and the README outlines the steps with one way to do that (i normally rsync code to 1002 while I do changes on my locally checked out repo on my machine) [00:16:57] nuria: thanks! [00:18:08] bearloga: makes sense? you can also work from a checkout on 1002 but regardless of the rsync step (you would not need it in that case) you need to move the files oozie needs to the cluster via hdfs dfs [01:19:26] 06Analytics-Kanban, 10Pageviews-API: Monthly aggregate endpoint returns unexpected results and invalid timestamp - https://phabricator.wikimedia.org/T156312#2975061 (10Nuria) a:03Nuria [06:56:54] o/ [06:57:14] aqs1007-a is still bootstrapping, needs another 24hrs before completing [06:57:18] but everything seems fine [07:41:01] 06Analytics-Kanban, 13Patch-For-Review: Clean up datasets.wikimedia.org - https://phabricator.wikimedia.org/T125854#1998898 (10elukey) Cronjob that fails periodically: ``` ---------- Forwarded message ---------- From: Cron Daemon Date: Wed, Jan 25, 2017 at 8:15 PM Subject: Cron nuria: actually, most of these two-month queries completed successfully AFAIR, except the one that hit the FileNotFoundException bug that had zareen start the conversation here [08:14:15] (and i think she already posted one example query as part of the error log then, which was part of the discussion) [08:15:02] in any case, querying one month's worth of partitions won't be enough, because the average is taken over the entire expiration time of the cookie - 1 month [08:15:13] so that would give only the value for a single day [08:17:04] nuria: CTE - thanks for the link, looks interesting, will read. for now we have started trying out TABLESAMPLE, but i imagine that won't work equally well for all wikis (once we restrict calculations to individual wikis) [09:44:11] 10Analytics, 10ChangeProp, 10EventBus, 10MediaWiki-JobQueue, and 5 others: Asynchronous processing in production: one queue to rule them all - https://phabricator.wikimedia.org/T149408#2975640 (10Legoktm) >>! In T149408#2928202, @Joe wrote: > Slides for the starting the discussion available here https://do... [10:16:29] 10Analytics, 10ChangeProp, 10Citoid, 10ContentTranslation, and 11 others: Node 6 upgrade planning - https://phabricator.wikimedia.org/T149331#2975775 (10akosiaris) 05Open>03Resolved And etherpad is now upgraded to 1.6.0-2 running on nodejs 6.9.1~dfsg-1+wmf1. Resolving the task once more
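(For anyone following along, here is a minimal sketch of the deployment steps nuria describes above: rsync a locally edited refinery checkout to stat1002, push the Oozie files to HDFS, and relaunch the coordinator with an adjusted stop_time. The paths, property names and HDFS target directory below are assumptions for illustration only; the refinery README and the coordinator.properties linked above are the authoritative reference.)
```
# Hypothetical sketch — paths and property names are illustrative, not the exact ones from the README.

# 1. Sync the locally edited refinery checkout to stat1002.
rsync -av --delete ~/src/analytics/refinery/ stat1002.eqiad.wmnet:~/refinery/

# 2. From stat1002, copy the Oozie job files into a personal HDFS directory
#    so the workflow definition can be referenced by the Oozie server.
hdfs dfs -mkdir -p /user/$USER/oozie/maps/druid
hdfs dfs -put -f ~/refinery/oozie/maps/druid/* /user/$USER/oozie/maps/druid/

# 3. Launch the coordinator, overriding the date range it runs for (the 'stop time'
#    mentioned above) and pointing it at the uploaded files. Assumes OOZIE_URL is set.
oozie job -run \
  -config ~/refinery/oozie/maps/druid/coordinator.properties \
  -Doozie_directory=hdfs://analytics-hadoop/user/$USER/oozie \
  -Dstart_time=2017-01-01T00:00Z \
  -Dstop_time=2017-02-01T00:00Z
```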
[10:16:52] 10Analytics, 10ChangeProp, 10Citoid, 10ContentTranslation, and 11 others: Node 6 upgrade planning - https://phabricator.wikimedia.org/T149331#2975777 (10akosiaris) [10:23:20] joal: o/ - I thought this morning about the aqs user in cassandra [10:24:02] if we don't set it manually and aqs1007-a completes the bootstrap, then calls from node will probably fail [10:24:11] (well some of them) [10:37:34] ah I also verified on the lvs hosts that the last aqs deployment actually depooled / repooled hosts [10:37:37] all good [10:37:38] \o/ [10:37:57] going to update the deploy docs [10:42:37] 06Analytics-Kanban, 13Patch-For-Review: Improve AQS deployment - https://phabricator.wikimedia.org/T156049#2975885 (10elukey) Deployed yesterday the new config, as expected aqs1004 went first and then scap asked for a confirmation before proceeding. One interesting thing is that after aqs1004, scap asks if the... [10:51:21] 06Analytics-Kanban, 13Patch-For-Review: Improve AQS deployment - https://phabricator.wikimedia.org/T156049#2975931 (10elukey) Updated https://wikitech.wikimedia.org/wiki/Analytics/AQS#Prod [10:52:44] done [11:10:03] * elukey afk for a bit! [11:13:01] 10Analytics-Tech-community-metrics, 06Developer-Relations (Jan-Mar-2017): Merge detached Phab and mw.org identities in korma DB if Phab API shows that accounts are linked - https://phabricator.wikimedia.org/T156216#2976017 (10Aklapper) ``` #!/bin/bash # requires having jq installed; requires running "sortingha... [11:32:03] 10Analytics, 10ChangeProp, 10EventBus, 10MediaWiki-JobQueue, and 5 others: Asynchronous processing in production: one queue to rule them all - https://phabricator.wikimedia.org/T149408#2976055 (10Joe) >>! In T149408#2975640, @Legoktm wrote: >>>! In T149408#2928202, @Joe wrote: >> Slides for the starting th... [12:06:27] ah no I am auto-answering my stupid cassandra question [12:06:53] we'll only need to increase the system_auth's replication factor with the new instances [12:07:27] elukey: o/ [12:07:32] Was thinking of that as well [12:07:53] The system_auth table should be shared across nodes, shouldn't it? [12:08:45] joal: you know that I ask silly questions in the morning (also in the afternoon but the frequency decreases :P) [12:09:10] :D [12:09:24] elukey: I'd rather test nonetheless ;) [12:10:14] yeah I'll keep an eye on aqs1007-a over the weekend [12:11:21] joal: the other thing that Eric suggested and I forgot is cleanups [12:11:28] after each node (two instances) is booted [12:11:40] it could be done in the end too probably [12:12:07] but we'll need to remove the keys from aqs1004-a/b that will be managed by aqs1007-a/b [12:12:29] and probably also other nodes [12:13:21] but for the moment, we'd need to concentrate on aqs1007-a/b, that will take a bit :P [12:18:34] elukey: just tried to cqlsh into cassandra on aqs1007-a with the aqsloader user --> failed :( [12:18:46] maybe it's normal because of the bootstrapping stage [12:18:49] yeah did the same and it makes sense, it is bootstrapping [12:19:12] the cassandra user doesn't work either [12:19:20] okey :) [12:20:49] joal: how is the labsdb work going ? [12:21:43] good, but much slower than on analytics-store due to concurrent connection limitations [12:21:58] I commented on the ticket and pinged you [12:22:10] (or at least I think I pinged you, checking) [12:22:20] I did ! [12:22:39] ah yes sorry I forgot to answer!
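(A rough sketch of the two Cassandra follow-ups discussed above — raising the replication factor of the system_auth keyspace once the new instances have joined, and running cleanup on the pre-existing instances so they drop the ranges now owned by aqs1007-a/b. The IP, credentials, datacenter name and replication factor below are assumptions for illustration; only the uyaml/JMX-port pattern is borrowed from elsewhere in this log.)
```
# Hypothetical post-bootstrap follow-up — host/IP, credentials, DC name and the
# replication factor are placeholders, not the real production values.
INSTANCE_IP=10.64.0.220          # placeholder: listen address of one instance

# 1. Raise system_auth replication so the new instances also hold the
#    aqsloader/cassandra credentials.
cqlsh -u cassandra -p "$CASSANDRA_PASS" "$INSTANCE_IP" -e \
  "ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'eqiad': 6};"

# 2. On each instance, repair system_auth so the credentials are streamed out,
#    then clean up the pre-existing instances so they drop the token ranges now
#    owned by aqs1007-a/b (reusing the uyaml/JMX-port pattern seen later in this log).
JMX_PORT="$(uyaml /etc/cassandra-instances.d/aqs1004-a.yaml /jmx_port)"
nodetool -p "$JMX_PORT" repair system_auth
nodetool -p "$JMX_PORT" cleanup
```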
[12:22:48] the network rule can be permanent [12:22:54] I don't see any issue in keeping it [12:23:05] this is a first bit of great news :) [12:23:16] moritzm: can you confirm that ? --^ [12:23:18] dproxys are in prod and we need to use them [12:23:32] ok awesome [12:25:01] seems fine indeed [12:31:25] 10Analytics, 10Analytics-Cluster, 06Operations, 10ops-eqiad, 15User-Elukey: Analytics hosts showed high temperature alarms - https://phabricator.wikimedia.org/T132256#2976141 (10elukey) @Cmjohnson would you have time next week to apply the thermal paste to a couple of analytics hosts to see if they impro... [12:31:40] this issue kinda worries me --^ [12:32:02] I asked Chris if he is available to apply new thermal paste to a couple of an hosts [12:32:13] (that we need to shut down beforehand) [12:32:27] now I can see errors in an100[12], that is scary [12:33:00] ^ which hosts? let's upgrade kernel packages to the most recent versions so that rebooted hosts boot into the latest kernel [12:33:43] 10Analytics, 10Analytics-Cluster, 15User-Elukey: Audit fstabs on Kafka and Hadoop nodes to use UUIDs instead of /dev paths - https://phabricator.wikimedia.org/T147879#2976145 (10elukey) [12:34:28] moritzm: I'd say 10 of them, but I'll let you know the list beforehand so we'll install the new kernel [12:34:31] would it be ok? [12:34:53] if the thermal paste trick works I'll ask Chris to apply it to all of them [12:35:30] just tell me the cluster, it's easiest if I upgrade the whole thing, are we talking hadoop, kafka, aqs or something else? [12:36:19] elukey: The cluster has been under high pressure the past few days [12:36:30] elukey: it might be a part of the explanation [12:36:41] elukey: we are currently working to reduce the load a bit [12:37:14] moritzm: hadoop :) [12:37:38] joal: it has been ongoing for a while, and it shouldn't raise these alarms even during high pressure :) [12:38:16] k elukey, you know best [12:39:45] 10Analytics, 10Analytics-Cluster, 15User-Elukey: Audit fstabs on Kafka and Hadoop nodes to use UUIDs instead of /dev paths - https://phabricator.wikimedia.org/T147879#2976187 (10elukey) Script adapted to an Hadoop node: ``` elukey@analytics1039:~$ cat /proc/mounts | grep "/var/lib/hadoop/data" | cut -d " "... [12:47:04] elukey: k, upgrading it to latest kernels [12:52:24] 10Analytics, 10Analytics-Cluster, 15User-Elukey: Audit fstabs on Kafka and Hadoop nodes to use UUIDs instead of /dev paths - https://phabricator.wikimedia.org/T147879#2976210 (10elukey) a:03elukey [12:53:35] 10Analytics, 10Analytics-Cluster: kafka alarms audit - https://phabricator.wikimedia.org/T151211#2976214 (10elukey) p:05Triage>03High [13:04:05] hey team :] [13:04:08] joal, yt? [13:04:37] Yes mforns :) [13:04:41] hello :) [13:04:54] hey! [13:05:02] do you want to talk about shards? [13:06:10] sure, give me 5 minutes to finish an email :) [13:06:16] 'course [13:11:32] PROBLEM - Hadoop HistoryServer on analytics1001 is CRITICAL: PROCS CRITICAL: 0 processes with command name java, args org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer [13:11:50] checking.. [13:12:32] RECOVERY - Hadoop HistoryServer on analytics1001 is OK: PROCS OK: 1 process with command name java, args org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer [13:12:49] (03CR) 10Mforns: ">" (031 comment) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/331794 (https://phabricator.wikimedia.org/T155141) (owner: 10Mforns) [13:14:48] sigh java.lang.OutOfMemoryError: Java heap space [13:16:05] mmm does the history server fail over?
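(Related to the fstab audit in T147879 quoted above — a minimal sketch, assuming the /var/lib/hadoop/data mount layout shown in the truncated snippet, of how each /dev path could be mapped to its filesystem UUID to build UUID-based fstab entries. Illustrative only, not the actual script from the task.)
```
# Illustrative only — print a UUID-based fstab line for every Hadoop data mount.
# Assumes the /var/lib/hadoop/data layout shown in the task snippet above.
while read -r dev mountpoint fstype opts dumpfreq passno; do
    uuid="$(blkid -s UUID -o value "$dev")"
    echo "UUID=${uuid} ${mountpoint} ${fstype} ${opts} 0 2"
done < <(grep '/var/lib/hadoop/data' /proc/mounts)
```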
[13:16:59] Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error [13:18:51] elukey: :( [13:19:01] mforns: before we start, do you give me a minute for email scanning? [13:19:08] batcave? [13:19:10] sure joal [13:19:13] yes [13:37:56] HaeB: Just sent an email about huge queries - Please kill the one currently running [13:58:22] HaeB: I see you used TABLESAMPLE, but from the job execution, it doesn't reduce the load --> ~5h run, and still half of the mappers to go [14:05:00] HaeB: from what I see, you ran queries to extract data project by project (the only thing that changed from zareen's last request is project being 'en' or 'ja') - Having extracted the data for all projects would make way more sense here [14:12:36] joal: HaeB mentioned that to me yesterday as well. since our convo, i haven't run another large query as we're exploring the options we've discussed (^^ although perhaps TABLESAMPLE won't work as expected) [14:13:11] Hi zareen, as said just above, tablesample doesn't seem to work as expected [14:32:14] mforns: metrics loaded in druid ;) [14:32:22] mforns: you can vet :) [14:49:00] joal, yea, thanks [15:01:52] (03CR) 10Joal: [C: 04-1] "Some comments inside. Seems functionally correct, but some changes are needed for convention and comfort :)" (0323 comments) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/331794 (https://phabricator.wikimedia.org/T155141) (owner: 10Mforns) [15:07:17] 10Analytics, 10Analytics-Cluster: Add jmxtrans metrics from Hadoop yarn-mapreduce-historyserver - https://phabricator.wikimedia.org/T156272#2976526 (10elukey) I think that the history server does not have any jmx port configured, and I didn't find any environment variable to put in hadoop-env.sh to fix it. I a... [15:07:25] 10Analytics, 10Analytics-Cluster: Add jmxtrans metrics from Hadoop yarn-mapreduce-historyserver - https://phabricator.wikimedia.org/T156272#2976527 (10elukey) p:05Triage>03Normal [15:08:11] the CDH configuration is a mystery to me [15:09:02] elukey: We need ottomata to be there when looking at that - otherwise there is risk of brain self-combustion [15:11:21] ahahhaah [15:11:33] we need some hmmmmmm super powers [15:11:40] (03PS7) 10Mforns: [WIP] Add banner activity jobs [analytics/refinery] - 10https://gerrit.wikimedia.org/r/331794 (https://phabricator.wikimedia.org/T155141) [15:11:42] I think when he h [15:11:53] I am pretty sure that he'll tell me the solution in 2 minutes [15:12:02] when he hmmms, it's actually a way to ventilate the brain [15:12:12] ahhahahaha [15:20:44] joal: well look, the current query was simply implementing the advice we got from you folks here on tuesday about how to reduce the load (adding TABLESAMPLE to a query that zareen had already run successfully, to compare with that previous result) [15:20:59] seems that advice doesn't work as intended or we misunderstood it [15:21:17] Hi HaeB [15:21:20] so feel free to kill the query, or i will do so myself later [15:21:40] ...once i get up ;) [15:22:06] and many thanks for your email, will look more into it [15:22:21] HaeB: emails as well, please take time for coffee, we'll catch up later - As said, I won't kill it - But I'd like some time from you later :) [15:23:24] no really, feel free to kill it now - it was just to test that method (TABLESAMPLE) with previously obtained data [15:24:48] ...and it seems we have already gained most of the information we wanted from this test ;) [15:26:09] k HaeB, killing then :) [15:26:58] killed !
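(On the TABLESAMPLE thread above: Hive's bucket sampling only prunes input when the table is actually bucketed/CLUSTERED BY the sampled expression, so on a table like wmf.webrequest a TABLESAMPLE(... ON rand()) still scans every row of the selected partitions and only discards most of them afterwards — consistent with the ~5h run joal describes. Below is a hedged, hypothetical illustration of the pattern being tried; the columns, partition values and aggregation are placeholders, not the actual research query.)
```
# Hypothetical illustration of the sampling pattern discussed above.
# Since webrequest is not bucketed on rand(), the full partitions are still read;
# the sampling only reduces the rows that reach the aggregation.
hive -e "
  SELECT uri_host, COUNT(*) AS sampled_requests
  FROM wmf.webrequest TABLESAMPLE(BUCKET 1 OUT OF 100 ON rand()) w
  WHERE webrequest_source = 'text'
    AND year = 2017 AND month = 1 AND day = 25
  GROUP BY uri_host;
"
```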
[15:29:37] 10Analytics, 10Analytics-Cluster: Add jmxtrans metrics from Hadoop yarn-mapreduce-historyserver - https://phabricator.wikimedia.org/T156272#2976654 (10elukey) Ah! I was looking in the wrong place! From https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/ClusterSetup.html#Configuring_the_Ha... [15:45:13] (03PS8) 10Mforns: Add banner activity oozie jobs [analytics/refinery] - 10https://gerrit.wikimedia.org/r/331794 (https://phabricator.wikimedia.org/T155141) [15:45:20] joal, ^ [15:45:28] :] [15:49:40] (03PS9) 10Mforns: Add banner activity oozie jobs [analytics/refinery] - 10https://gerrit.wikimedia.org/r/331794 (https://phabricator.wikimedia.org/T155141) [15:52:46] a-team: send you an invite for QR happening at standup time, no obligation, just in case anyone is interested [15:53:03] thx [15:55:58] yall wanna do a quick standup then and use the rest of the time for QRs today? [15:56:30] oui please [15:57:20] (03CR) 10Mforns: Add banner activity oozie jobs (031 comment) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/331794 (https://phabricator.wikimedia.org/T155141) (owner: 10Mforns) [16:01:40] mforns: standuuuup [16:26:49] milimetric: very nice presentation! [16:26:58] thanks man [16:27:15] I watched it too last night, could've been smoother, will try and practice more next time [16:27:38] elukey: https://github.com/wikimedia/operations-puppet/search?utf8=%E2%9C%93&q=rsync_to [16:27:44] I'm looking at that rsync error you mentioned [16:28:08] shouldn't that be thorium.eqiad.wmnet::/srv/limn-public-data... [16:28:15] with a leading slash on /srv [16:29:43] elukey@thorium:/srv/to-delete-otto-2017-01$ ls [16:29:43] aggregate-datasets limn-public-data public-datasets [16:29:51] I think that the problem is --^ [16:30:09] so limn-public-data is now under to-delete-otto-2017-01.. [16:30:33] but I didn't get the time to review the task to understand exactly what to do :D [16:30:44] (removing the cron or fix it somehow) [16:36:12] elukey: looks like ~14h ETA on 1007-a [16:36:25] surprised the per stream rate isn't a little higher [16:36:43] we usually see about 2x that [16:37:51] elukey: could you install jvm-tools on these? [16:38:02] elukey: actually, this should probably be a require on all cassandra hosts too [16:38:11] maybe i'll submit a gerrit for this [16:39:11] elukey: but there is a subcommand of that tool called ttop (sjk ttop) that will show you thread utilization. i'd be interested to see if that thread is maxed [16:41:34] urandom: jvm-tools installed on aqs1007 [16:41:52] elukey: thanks! [16:42:03] writing down to require it with the other cassandra toolset [16:42:11] thank you for following up :) [16:44:43] urandom: what should I run to see sjk ttop in action? [16:45:03] elukey: try: sjk ttop -s localhost:`uyaml /etc/cassandra-instances.d/aqs1007-a.yaml /jmx_port` -f STREAM* [16:45:23] 06Analytics-Kanban: Document the difference in aggregate data on wikistats and wikistats 2.0 - https://phabricator.wikimedia.org/T150963#2976867 (10Milimetric) >>! In T150963#2969847, @Quiddity wrote: > * **Where/who** - IIUC, that page is intended to be an announcement, and needs to summarize [various changes],... [16:45:24] TL;DR 100% cpu on the incoming stream :/ [16:46:16] nice tool! [16:46:24] yeah, it's handy [16:46:51] urandom: is the 100% consumption likely due to a misconfiguration or something that can happen? 
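(Stepping back to the JobHistoryServer JMX question from T156272 above — a minimal sketch of what the ClusterSetup approach could look like: exposing a JMX port for the MapReduce JobHistoryServer via HADOOP_JOB_HISTORYSERVER_OPTS in mapred-env.sh so jmxtrans can poll it. The port number and the relaxed auth/SSL settings are assumptions for illustration, not whatever ended up in puppet.)
```
# Hypothetical addition to /etc/hadoop/conf/mapred-env.sh (CDH layout assumed).
# Exposes a JMX endpoint for the JobHistoryServer; port and security flags are
# illustrative only and would need hardening in production.
export HADOOP_JOB_HISTORYSERVER_OPTS="$HADOOP_JOB_HISTORYSERVER_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9988 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```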
[16:46:59] compression, i assume [16:47:06] well [16:47:08] decompression [16:47:19] yep might be [16:47:26] elukey: hm, no, there are too many misunderstandings there. The to-delete-otto stuff is outdated now, we can wait for Andrew on Monday then [16:47:39] the data comes over compressed, and the receiving node needs to decompress it [16:47:53] 06Analytics-Kanban, 13Patch-For-Review: Clean up datasets.wikimedia.org - https://phabricator.wikimedia.org/T125854#2976877 (10Milimetric) waiting for @Ottomata, we can untangle this together Monday [16:49:53] 10Analytics, 10Analytics-Cluster, 13Patch-For-Review: Add jmxtrans metrics from Hadoop yarn-mapreduce-historyserver - https://phabricator.wikimedia.org/T156272#2976879 (10elukey) Next step is to understand if we can expect metrics exported via JMX from the JobHistory.. [16:51:49] urandom: I am learning tons of things, thanks a lot :) [16:52:06] elukey: glad to help! [16:52:22] elukey: also, the more you know the better for me (and everyone)! [16:52:27] * urandom is so selfish [16:52:44] * elukey doesn't believe it [16:52:48] :) [17:01:28] * elukey afk for a bi [17:01:31] *bit [17:59:36] mforns: how'd you get bluejeans to work on ubuntu? [17:59:46] it tries to download an rpm... [17:59:48] milimetric, I just join in the browser [17:59:53] yeah, me too [17:59:56] it won't let me [18:00:10] and... now it lets me [18:00:48] xDDDD [18:02:49] I am the first person in technology 2? [18:03:02] oh! wrong one [18:48:27] hahahha, they love data lake?! hahah [18:48:31] milimetric: ^ haha [18:52:25] The idea of saying we will kill the irc change feed within 6 months does make me nervous, iiuc [18:52:35] we should work on this: [18:52:45] streams flow in, swamps refine them, people fish and play in the lake, water evaporates and is shared by the cloud [18:53:22] chasemp: not irc [18:53:25] rcstream (socket.io) [18:53:31] ah ok that makes way more sense [18:53:34] yeah, just the old socket.io 0.9 one [18:53:55] IRC is too deeply embedded, as someone amazingly shared yesterday: https://xkcd.com/1782/ [18:54:18] I get hit w/ that every time my other slack groups complain I can't do integrated polls :) [18:54:38] but yeah, the depth of bots and consumers for the irc feed and people who use it to sanity check random things [18:59:32] people going afk, have a good weekend! [18:59:42] * elukey is going to watch aqs1007-a over the weekend [19:44:41] milimetric: that irc xkcd is so AWESOME! [19:48:09] milimetric mforns fdans nuria sent email with first pass at feedback request wiki [19:48:16] please take a look when you can [19:52:59] PROBLEM - cassandra-a CQL 10.64.0.213:9042 on aqs1007 is CRITICAL: connect to address 10.64.0.213 and port 9042: Connection refused [19:53:11] Have a good weekend a-team ! see you on Monday ! [20:01:18] ashgrigas: thanks very much, gotta run for the weekend but will take a look soon [20:01:34] nuria: awesome job on the QR, really nicely pulling together everything we do [20:01:50] o/ taking off for the weekend [20:07:30] ACKNOWLEDGEMENT - cassandra-a CQL 10.64.0.213:9042 on aqs1007 is CRITICAL: connect to address 10.64.0.213 and port 9042: Connection refused eevans Bootstrapping - The acknowledgement expires at: 2017-01-30 20:06:52.
[20:08:13] ^^^ afaik, there must have been a previous acknowledgement on a clock, and that clock expired [20:08:15] Hi urandom - Thanks for acknowledging [20:08:21] urandom: was trying to gather info [20:08:30] CQL shouldn't be up until the bootstrap finishes [20:08:36] okkkkk :) [20:08:50] makes sense - Thanks again urandom :) [20:08:58] joal: no worries! [20:09:15] there is ~12h remaining on the bootstrap [20:09:41] urandom: Yes, I'm looking at that as well (elukey passed the command) [20:09:46] kk [20:09:47] Back offline ! [20:09:49] Later [20:09:53] joal: enjoy your weekend! [20:10:04] Thanks :) [20:18:28] 10Analytics: Investigate adding user-friendly testing functionality to Reportupdater - https://phabricator.wikimedia.org/T156523#2977618 (10mpopov) [20:23:58] ashgrigas: same mediawiki location? [20:24:10] mforns: Hi! Still around? How's it going? [20:24:27] nuria yes https://www.mediawiki.org/wiki/Wikistats_2.0_Design_Project/RequestforFeedback [20:24:30] k [20:26:43] I have a quick question about T155141... Sorry to bother you here, it's just that the webrequest data we'd use for banner activity from the beginning of the year-end fundraiser is probably going to be purged this weekend... So I was wondering if we might capture it in Druid before that... Thanks in advance!!! [20:26:44] T155141: Productionize banner impressions druid/pivot dataset - https://phabricator.wikimedia.org/T155141 [20:26:45] ashgrigas: looks good overall, let's wait to hear from coms to add how to leave feedback (talk page? elsewhere? that should be clear but I am not sure right now what the best method is). Also we just need a small description per screen on the design objective of each [20:28:44] (FR banners went up on November 29) [20:32:44] nuria sounds good - i just put the talk page for now, and will add the descriptions [20:33:29] AndyRussG, hey! [20:33:40] :) [20:33:44] I'm leaving for the weekend, but have 5 minutes :] [20:34:34] mforns: K hopefully less minutes are needed--I can probably run the query myself--just wanted to know if I could go ahead and do so with the version in the latest patch [20:35:46] AndyRussG: sorry but I'm not sure it's possible, data is continuously purged from our system and changes on pivot loading are not yet final so i am not even sure nov 29th would be available when loading code is finalized at some point next week. [20:35:48] AndyRussG, I loaded the month of december with the latest patch [20:36:16] AndyRussG: we sure do not want to have data loaded with significantly different code [20:36:23] I see region is still in there [20:36:35] nuria: yeah indeed makes sense [20:37:02] And now it has minutely resolution, no? [20:37:21] AndyRussG, the data I loaded between yesterday and today with the newest code is in another data set called: banner_activity [20:37:32] In pivot it is the last "empty box" [20:37:32] OK [20:37:49] I would just try to add in the last days of November [20:38:09] So region and minutely are both doable performance-wise, do you think? [20:38:10] AndyRussG, I see [20:38:29] yes, it looks fine [20:38:43] Ah OK great [20:39:09] AndyRussG, I can launch a job for the last days of november, which day did the big FR start? [20:39:15] 29? [20:39:19] Yep! [20:39:28] so, 29 and 30? [20:39:54] Yep! [20:40:30] (November 31st we weren't able to fundraise...) [20:40:35] (just kidding...) [20:40:41] xD [20:40:47] hehehe, ok, launched [20:40:52] 0090565-161121120201437-oozie-oozi-C [20:40:58] Fantastic, thanks so much!!
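(For reference, the progress of a launched coordinator like the one above can be checked from the Oozie CLI; a small sketch, assuming OOZIE_URL is already set in the environment:)
```
# Check the status of the backfill coordinator mforns launched above
# (assumes OOZIE_URL points at the Oozie server).
oozie job -info 0090565-161121120201437-oozie-oozi-C
```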
[20:41:02] it will take like 1 hour [20:41:20] no problem :] [20:41:34] cool! [20:41:59] are you able to see the new data set in pivot (last box)? [20:42:37] uuuuh not sure /me tries to switch pivot module in brain back on [20:43:49] just go to pivot.wikimedia.org, you'll see 7 boxes, the last one (empty) corresponds to banner_activity, the data set generated with the latest code [20:44:09] in short we will add a name and description to the box, so that it's recognizable [20:44:45] or alternatively, you can go directly to: https://pivot.wikimedia.org/#banner_activity_minutely [20:45:20] K yes got it... [20:46:06] cool! I'm leaving for the weekend, have a nice weekend people :] [20:46:49] mforns: thx again, all the best! [20:46:54] Have fun :) [20:47:04] bye! you too :] [21:00:52] 06Analytics-Kanban: Create AQS endpoint to serve legacy pageviews - https://phabricator.wikimedia.org/T156391#2977766 (10Nuria) [21:01:45] 06Analytics-Kanban: Create AQS endpoint to serve legacy pageviews - https://phabricator.wikimedia.org/T156391#2973277 (10Nuria) [21:02:42] 06Analytics-Kanban, 10Pageviews-API: Monthly aggregate endpoint returns unexpected results and invalid timestamp - https://phabricator.wikimedia.org/T156312#2970596 (10Nuria) First thing I'll do is to load test data on beta so we can actually test the fix. [21:05:37] nuria: also thanks ^ ! [22:01:40] 10Analytics, 10EventBus, 13Patch-For-Review, 06Services (done), 05WMF-deploy-2017-01-24_(1.29.0-wmf.9): EventBus produces non-canonical page urls - https://phabricator.wikimedia.org/T155066#2977915 (10Pchelolo) 05Open>03Resolved The MW change's been deployed - resolving. [22:03:52] 06Analytics-Kanban, 06Discovery, 06Discovery-Analysis (Current work), 03Interactive-Sprint, 13Patch-For-Review: Add Maps tile usage counts as a Data Cube in Pivot - https://phabricator.wikimedia.org/T151832#2977918 (10Deskana) p:05Normal>03Low Given the upcoming pause of Interactive-related work, I'd... [22:14:33] 10Analytics-Tech-community-metrics: Provide equivalent of "SCR: Code review users vs. Code review committers" in Kibana - https://phabricator.wikimedia.org/T151558#2978073 (10Aklapper) p:05Triage>03Low [22:14:41] 10Analytics-Tech-community-metrics: Provide equivalent of "SCR: People uploading patchsets vs.
Reviewers per month" in Kibana - https://phabricator.wikimedia.org/T151559#2978074 (10Aklapper) p:05Triage>03Low [22:14:51] 10Analytics-Tech-community-metrics: Provide equivalent of korma's "Gerrit review queue" (listing the 'worst' SCR repositories) in Kibana - https://phabricator.wikimedia.org/T151488#2978075 (10Aklapper) p:05Triage>03Low [22:14:59] 10Analytics-Tech-community-metrics: Provide equivalent of "SCR: Age of open changesets (monthly snapshots)" in Kibana - https://phabricator.wikimedia.org/T151557#2978076 (10Aklapper) p:05Triage>03Normal [22:16:56] 10Analytics-Tech-community-metrics: Provide equivalent of "SCR: Distribution of open changesets (by date of submission)" in Kibana - https://phabricator.wikimedia.org/T151556#2978194 (10Aklapper) p:05Triage>03Lowest [22:17:45] 10Analytics-Tech-community-metrics: Provide equivalent of "SCR: Oldest open Gerrit changesets without code review" in Kibana - https://phabricator.wikimedia.org/T151560#2978198 (10Aklapper) p:05Triage>03Low [22:18:44] 10Analytics-Tech-community-metrics: Have "Last Attracted Developers" information for Gerrit (already exists for Git) - https://phabricator.wikimedia.org/T151161#2809332 (10Aklapper) p:05Normal>03High [22:36:57] 10Analytics, 10Analytics-EventLogging: Remove ad-hoc UA logging from existing schemas - https://phabricator.wikimedia.org/T61832#2978710 (10MarkTraceur)