[08:42:52] Analytics-Kanban: Investigate why cassandra per-article-daily oozie jobs fail regularly - https://phabricator.wikimedia.org/T140869#2482997 (JAllemandou) Reloading of 2016-07-19 worked, and 2016-07-20 worked straight away. weird. [08:46:05] Analytics-Kanban: Continue New AQS Loading - https://phabricator.wikimedia.org/T140866#2483011 (JAllemandou) After changing compression from lz4 to deflate and reloading a month of data (January 2016), we are down to about 120Gb per instance, which is way better than 250G. Proceeding with loading February 2... [09:00:20] Hi addshore o/ [09:00:24] hio! [09:00:48] addshore: I'm sorry to bug you, but I think there are a couple of modifications needed in the patches nuria_ merged yesterday [09:00:56] :S [09:00:58] oooh, okay! [09:01:08] addshore: Sooorry :( [09:01:23] addshore: I meant to review the patches yesterday but didn't make the time [09:01:27] Anyway [09:01:48] One thing: in the scala code (or more precisely the SQL code in the scala code) [09:02:11] In pageview_hourly, data is partitioned by year/month/day/hour [09:02:32] In webrequest, there is one more partition: webrequest_source/year/month/day/hour [09:02:56] yes! but the data we want out is still only daily data! (does it need the hour still)? [09:03:00] webrequest_source is the varnish type sending the request (we currently have text, upload, misc and maps) [09:04:45] From a data size perspective: text ~60%, upload 40%, misc and maps are negligible [09:05:06] And, last but not least, pageviews are only visible in the text partition [09:05:41] Now you can see where I'm heading: you should restrict your partition to webrequest_source = 'text', in order to prevent scanning 40% of useless data :) [09:05:44] add stashbot [09:05:47] addshore: --^ [09:05:51] ooh, I did not spot that webrequest_source is an actual partition!
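The partition advice above boils down to this: webrequest_source is a partition column just like year/month/day/hour, so putting `webrequest_source = 'text'` in the WHERE clause lets Hive skip the upload/misc/maps partitions entirely. A minimal sketch of building such a query — the function and the SELECT body are illustrative, not the actual refinery SQL:

```python
# Illustrative sketch of partition pruning on wmf.webrequest: every
# partition column (webrequest_source, year, month, day) appears in the
# WHERE clause, so Hive only scans the 'text' partition for the given day.
# The SELECT body is a placeholder, not the real ArticlePlaceholder query.

def build_webrequest_query(year, month, day):
    """Build a daily webrequest query restricted to the text partition."""
    return (
        "SELECT uri_path, COUNT(*) AS requests "
        "FROM wmf.webrequest "
        "WHERE webrequest_source = 'text' "  # pageviews only live here
        f"AND year = {year} AND month = {month} AND day = {day} "
        "GROUP BY uri_path"
    )

query = build_webrequest_query(2016, 7, 19)
```

Without the webrequest_source predicate the query would still be correct, just ~40% slower, which is exactly the point made in the chat.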
[09:06:15] That's no big deal, it would work, but for better perf, partition restriction first :) [09:06:26] Yep, even with my limited knowledge of the innards of hadoop I can tell that definitely makes sense! [09:07:51] Now the second thing: I can tell your oozie has not been tested :) [09:08:04] Cause it'd fail ;) [09:08:45] Lines 31/32 of coordinator.xml, I'll let you check addshore ;) [09:09:07] pageview_table >.> [09:09:21] and actually, lines 53, 55, 59 as well :) [09:09:52] (PS1) Addshore: Query using webrequest_source in ArticlePlaceholder query [analytics/refinery/source] - https://gerrit.wikimedia.org/r/300235 (https://phabricator.wikimedia.org/T138500) [09:09:59] And finally, line 17 [09:11:34] addshore: When doing changes like the pageview -> webrequest one in oozie, a global search over your folder for 'pageview' can help spot missed bits [09:11:49] (PS1) Addshore: Fix pageview -> webrequest in articleplaceholder coordinator [analytics/refinery] - https://gerrit.wikimedia.org/r/300236 (https://phabricator.wikimedia.org/T138500) [09:11:54] I thought I did, but apparently I failed :/ [09:11:59] huhuhu :) [09:14:13] addshore: Another change needed, in data handling [09:14:34] addshore: Have you been given an explanation of the dataset as input/output thing in oozie? [09:16:26] (CR) Joal: [C: -1] "One change in data for input." (1 comment) [analytics/refinery] - https://gerrit.wikimedia.org/r/300236 (https://phabricator.wikimedia.org/T138500) (owner: Addshore) [09:16:38] okay, in a meeting for 15 mins now, back in a bit! [09:16:50] addshore: sure, let me know when back [09:23:12] (CR) Joal: [C: 2] "LGTM ! Merging." [analytics/refinery/source] - https://gerrit.wikimedia.org/r/300235 (https://phabricator.wikimedia.org/T138500) (owner: Addshore) [09:28:25] right, joal back! :) I'll have a look at your comments!
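joal's tip about globally searching the job folder for leftover 'pageview' strings can be sketched as a small recursive grep (any real `grep -r pageview .` does the same job; the directory layout here is hypothetical):

```python
# Sketch of the 'global search over your folder' tip: after renaming
# pageview -> webrequest in an oozie job, walk the job directory and list
# every remaining occurrence, so misses like the coordinator.xml lines
# called out above are spotted before submitting.
import os

def find_occurrences(root, needle):
    """Return (path, line_number, line) for every line containing needle."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    for number, line in enumerate(f, start=1):
                        if needle in line:
                            hits.append((path, number, line.rstrip()))
            except (OSError, UnicodeDecodeError):
                continue  # skip unreadable or binary files
    return hits
```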
[09:28:30] sure :) [09:29:40] (PS2) Addshore: Fix pageview -> webrequest in articleplaceholder coordinator [analytics/refinery] - https://gerrit.wikimedia.org/r/300236 (https://phabricator.wikimedia.org/T138500) [09:29:44] joal: ^^ [09:30:06] addshore: yessir :) [09:30:51] addshore: the dataset names can easily be mistaken, so triple checking against the actual dataset definition is usually useful :) [09:31:09] addshore: Do you mind me asking for a test before merging? [09:37:13] joal: yes, that's fine! :) [09:39:23] joal: can you think of a good page / subpage where I should post the list of commands that I use to test this stuff? [09:39:31] for other people to look at in the future and for me to refer to? :) [09:39:53] hm addshore, https://wikitech.wikimedia.org/wiki/Analytics/Cluster/Oozie ? [09:44:35] https://hue.wikimedia.org/oozie/list_oozie_coordinator/0032959-160630131625562-oozie-oozi-C [10:03:13] *twiddles thumbs while it runs* [10:03:52] addshore: This gives you a feel for how different data sizes matter :) [10:03:58] yup! [10:04:22] addshore: When feasible, ALWAYS go for pageview_hourly instead of webrequest :) [10:22:13] looks like it is still going joal! [10:22:58] addshore: It has finished, but hue is not updating [10:23:12] hmmm, it's also not in graphite :/ [10:23:49] addshore: status in oozie CLI says killed --> Issue [10:30:54] *looks around for the error* [10:31:10] error code JA018 [10:33:30] I just tested it back with spark-submit joal and I get...
[10:33:32] Diagnostics: File does not exist: hdfs://analytics-hadoop/user/addshore/.sparkStaging/application_1468526822215_19126/hive-site.xml [10:33:33] java.io.FileNotFoundException: File does not exist: hdfs://analytics-hadoop/user/addshore/.sparkStaging/application_1468526822215_19126/hive-site.xml [10:34:53] addshore: When using spark-submit, use the same conf parameters as the ones oozie would use [10:35:50] addshore: java.lang.ClassCastException: [10:35:52] java.lang.String cannot be cast to java.lang.Long [10:36:41] Wait, I think I see the issue [10:36:50] val data = hiveContext.sql(sql).collect().map(r => (r.getString(0), r.getString(0), r.getLong(1))) [10:36:52] should be [10:36:58] val data = hiveContext.sql(sql).collect().map(r => (r.getString(0), r.getString(1), r.getLong(2))) ?? [10:37:52] addshore: correct [10:38:26] addshore: I won't say it again today, but: untested code shouldn't be merged :) [10:38:44] * addshore keeps forgetting what he has tested and what he has not :/ [10:38:59] addshore: This means you want to do too many things at the same time :) [10:39:16] Right, give me a few more mins to test this one! [10:39:28] (PS1) Addshore: Fix CastException in ArticlePlaceholderMetrics [analytics/refinery/source] - https://gerrit.wikimedia.org/r/300248 (https://phabricator.wikimedia.org/T138500) [10:39:38] * addshore is doing too many things at the same time ;) [10:39:41] always! [10:39:50] addshore: being fast is sometimes slower than being slow, when being fast involves repairing :) [10:46:29] * joal stops giving advice he doesn't follow himself [10:49:14] :D [10:49:28] so what is wrong with my spark submit now? [10:49:34] afaik this was working before..
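The ClassCastException above comes from reading row columns at the wrong indices: the buggy line reads column 0 twice and then treats the string in column 1 as a long. The real fix is the Scala one-liner quoted in the chat; this Python stand-in (with made-up row contents) only reproduces the failure mode:

```python
# Minimal stand-in for a Spark SQL Row with typed getters, to show why
# (getString(0), getString(0), getLong(1)) blows up while
# (getString(0), getString(1), getLong(2)) works. Row contents are made up.

class Row:
    def __init__(self, *values):
        self._values = values

    def get_string(self, i):
        value = self._values[i]
        if not isinstance(value, str):
            raise TypeError(f"column {i} is not a string: {value!r}")
        return value

    def get_long(self, i):
        value = self._values[i]
        if not isinstance(value, int):
            raise TypeError(f"column {i} is not a long: {value!r}")
        return value

# Shape of one result row: (metric name, wiki, count) -- hypothetical.
row = Row("articleplaceholder", "enwiki", 42)

def buggy(r):
    # Reads column 0 twice, then reads the string in column 1 as a long:
    # the Python analogue of the ClassCastException in the chat.
    return (r.get_string(0), r.get_string(0), r.get_long(1))

def fixed(r):
    # Consecutive indices 0, 1, 2 match the column order of the query.
    return (r.get_string(0), r.get_string(1), r.get_long(2))
```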
[10:49:38] https://www.irccloud.com/pastebin/FkZqcXLj/ [10:50:41] addshore: a local hive-site.xml file does not work in cluster mode [10:51:02] you should provide the hdfs file from refinery (the same way it's done in oozie) [10:51:39] hdfs://analytics-hadoop/wmf/refinery/current/oozie/util/hive/hive-site.xml IIRC [10:51:57] ${refinery_directory}/oozie/util/hive/hive-site.xml [10:52:12] or, change deploy-mode from cluster to client [10:54:16] okay, changed it to client, and also noticed we changed the namespace parameter to "graphite-namespace" [11:03:58] okay, spark worked, now to try oozie [11:04:37] https://hue.wikimedia.org/oozie/list_oozie_coordinator/0033057-160630131625562-oozie-oozi-C/ [11:17:54] joal: right, everything worked and everything landed in graphite from both the spark and the oozie job :) [11:18:04] addshore: Great news :) [11:18:22] addshore: so, what should I merge? [11:18:53] https://gerrit.wikimedia.org/r/#/c/300236/ and https://gerrit.wikimedia.org/r/#/c/300248/ and they are ready to go! [11:20:03] (CR) Joal: [C: 2] "Tested, merging !" [analytics/refinery/source] - https://gerrit.wikimedia.org/r/300248 (https://phabricator.wikimedia.org/T138500) (owner: Addshore) [11:20:35] (CR) Joal: [C: 2 V: 2] "Tested, merging !" [analytics/refinery] - https://gerrit.wikimedia.org/r/300236 (https://phabricator.wikimedia.org/T138500) (owner: Addshore) [11:20:40] addshore: Merged ! [11:20:49] awesome! [11:20:49] Now we should deploy that :) [11:20:58] that would be great! [11:25:26] addshore: Will do that in a moment [11:25:53] awesome, it only needs to run from the 19th onwards :) [11:26:06] okey [11:31:41] I will be back later!! [11:55:00] addshore: Forgot one more thing: since there are changes in the refinery-source repo, the jar version will be bumped from 0.0.32 to 0.0.33 [11:55:21] (PS1) Joal: Update changelog.md for version 0.0.33 [analytics/refinery/source] - https://gerrit.wikimedia.org/r/300250 [11:55:41] (CR) Joal: [C: 2] "Self merging for deploy."
[analytics/refinery/source] - https://gerrit.wikimedia.org/r/300250 (owner: Joal) [11:59:28] (Merged) jenkins-bot: Update changelog.md for version 0.0.33 [analytics/refinery/source] - https://gerrit.wikimedia.org/r/300250 (owner: Joal) [12:57:55] addshore: I'm sorry, I got an issue with the new deploy process again :( [12:58:04] I'll wait for madhuvishy's arrival to try and fix it [13:43:34] Analytics, Analytics-Cluster, Analytics-Kanban, Deployment-Systems, and 3 others: Deploy analytics-refinery with scap3 - https://phabricator.wikimedia.org/T129151#2483356 (Ottomata) The problem with pwstore is the same as I had here: https://phabricator.wikimedia.org/T132177#2341946 The last I... [13:50:48] Analytics-Kanban: Mediawiki changes to publish data for analytics schemas - https://phabricator.wikimedia.org/T138268#2483361 (Ottomata) Ja this one: T137287 [13:53:19] Analytics-Kanban, EventBus, Services, User-mobrovac: Improve schema update process on EventBus production instance - https://phabricator.wikimedia.org/T140870#2483365 (Ottomata) Hi! Working on this today, there are two problems here (aside from documentation and access). 1. eventlogging-service... [14:19:50] ottomata is back ! [14:19:53] hi ottomata :) [14:20:08] HIIII!
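The spark-submit failure discussed above has two fixes: switch --deploy-mode to client (the driver then runs locally and can read a local hive-site.xml), or stay in cluster mode and ship the hive-site.xml that lives in HDFS under the refinery deploy. A sketch of assembling the arguments — the job class and jar names are placeholders; the HDFS path is the one from the chat:

```python
# Sketch of the two spark-submit options for the hive-site.xml problem.
# In cluster mode the driver runs on a YARN worker and cannot see a local
# hive-site.xml, so the HDFS copy from the refinery deploy is shipped
# with --files; in client mode the local file is visible and nothing
# extra is needed. Job class and jar names are placeholders.

HIVE_SITE = ("hdfs://analytics-hadoop/wmf/refinery/current"
             "/oozie/util/hive/hive-site.xml")

def spark_submit_args(deploy_mode, job_class, jar):
    """Assemble a spark-submit argv for the given deploy mode."""
    args = [
        "spark-submit",
        "--master", "yarn",
        "--deploy-mode", deploy_mode,
        "--class", job_class,
    ]
    if deploy_mode == "cluster":
        args += ["--files", HIVE_SITE]  # ship the HDFS hive-site.xml
    args.append(jar)
    return args
```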
[14:51:24] a-team - Lino is sick :( I have a GP appointment at 5:30, so I'll most probably miss meetings tonight [14:51:34] e-team: Sending e-scrum [14:51:48] good health to the little one, joal [14:52:06] Thanks milimetric [14:52:12] you get the private demo all the time anyway, so you're not missing much but I'm happy to re-enact it for you :) [14:52:23] milimetric: He's been over 40deg since last night [14:52:28] :) [14:52:36] oof, yeah, I remember this, when I was a kid I was always sick [14:52:54] I remember nothing but hallucinating horse stampedes and shrinking and blowing up all through my early years [14:54:50] milimetric: Maaan, there are people paying for products to get that ;) [14:55:12] What I remember from fever is: feeling BAAAAAAAAAAAAaaaaAAAAAAAAAD [14:55:38] yeah, maybe the horses were there to distract me from feeling bad, never thought of it that way :) [14:57:44] :) [14:58:28] a-team: it looks like the meeting is at a different hangout, so we should use that in case people outside our team join: https://hangouts.google.com/hangouts/_/wikimedia.org/nuria [14:58:30] msg milimetric will join at 8:05, sorry [14:58:35] np [14:58:44] I can talk really fast when everyone joins :) [14:58:55] k [15:01:07] omw [15:01:18] ah wait not in batcave [15:06:25] having trouble joining hangouts... [15:58:55] joal: okay! and thanks again! [16:00:48] joal: do I need to bump the spark_job_jar version in refinery for the job then (in coordinator.properties)?
[16:07:05] Analytics-Kanban, EventBus, Services, User-mobrovac: Improve schema update process on EventBus production instance - https://phabricator.wikimedia.org/T140870#2483782 (Ottomata) a:Ottomata [16:10:09] (PS2) Milimetric: Normalize project parameter [analytics/aqs] - https://gerrit.wikimedia.org/r/299824 (https://phabricator.wikimedia.org/T136016) (owner: Nuria) [16:14:39] (CR) Milimetric: "+2 on the change and refactoring of the tests, -1 on the style changes in aqsUtil.js, some of them mess up the code too much, I'd prefer t" [analytics/aqs] - https://gerrit.wikimedia.org/r/299824 (https://phabricator.wikimedia.org/T136016) (owner: Nuria) [16:16:05] Analytics-Kanban, EventBus, Services, User-mobrovac: Improve schema update process on EventBus production instance - https://phabricator.wikimedia.org/T140870#2483836 (Nuria) @mobrovac Please file a request for access for eventbus services team should be able to: ssh to machines, do restart and d... [16:16:13] (CR) Milimetric: Normalize project parameter (1 comment) [analytics/aqs] - https://gerrit.wikimedia.org/r/299824 (https://phabricator.wikimedia.org/T136016) (owner: Nuria) [16:17:03] Analytics-Kanban, Patch-For-Review: Translate the analytics-release-test job to YAML config in integration/config {hawk} - https://phabricator.wikimedia.org/T132182#2190797 (hashar) @madhuvishy has added the necessary python glue in Jenkins Job Builder. Aced the CI configuration https://gerrit.wikimedia... [16:19:10] Analytics-Kanban, EventBus, Services, User-mobrovac: Improve schema update process on EventBus production instance - https://phabricator.wikimedia.org/T140870#2483847 (Ottomata) +1. Probably should have a new group `eventbus-admins`. 
[16:20:30] (PS1) Addshore: Prepend wikidatawiki dbname to sql queries [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300299 [16:20:46] (PS1) Addshore: Switch apiLogScanner.sh to php file [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300300 [16:21:25] (PS1) Addshore: Prepend wikidatawiki dbname to sql queries [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300301 [16:21:25] Analytics: Compile a request data set for caching research and tuning - https://phabricator.wikimedia.org/T128132#2483856 (Nuria) [16:21:34] (CR) Addshore: [C: 2] Prepend wikidatawiki dbname to sql queries [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300301 (owner: Addshore) [16:21:43] (CR) Addshore: [C: 2] Prepend wikidatawiki dbname to sql queries [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300299 (owner: Addshore) [16:21:53] (Merged) jenkins-bot: Prepend wikidatawiki dbname to sql queries [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300301 (owner: Addshore) [16:22:02] (Merged) jenkins-bot: Prepend wikidatawiki dbname to sql queries [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300299 (owner: Addshore) [16:22:05] (CR) Addshore: [C: 2] Switch apiLogScanner.sh to php file [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300300 (owner: Addshore) [16:22:12] (PS1) Addshore: Switch apiLogScanner.sh to php file [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300303 [16:22:18] (CR) Addshore: [C: 2] Switch apiLogScanner.sh to php file [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300303 (owner: Addshore) [16:28:44] Analytics, Reading-Admin, Zero: Country mapping routine for proxied requests - https://phabricator.wikimedia.org/T116678#1755493 (Nuria) Analytics uses X-Client_IP and that is what we geocode so there is no action for the team here that I can see, let us know otherwise. Moving to radar. 
[16:29:00] Analytics, Reading-Admin, Zero: Country mapping routine for proxied requests - https://phabricator.wikimedia.org/T116678#2483889 (Nuria) a:Nuria [16:31:08] (CR) Addshore: [C: 2] "Not sure why the gate submit didn't run then....." [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300300 (owner: Addshore) [16:31:14] (CR) Addshore: [C: 2] "Not sure why the gate submit didn't run then....." [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300303 (owner: Addshore) [16:32:51] Analytics-Dashiki, Analytics-Kanban: Dashiki should load stale data if new data is not available due to network/api conditions - https://phabricator.wikimedia.org/T138647#2483895 (Nuria) Add Banner to warn user that data is stale or that we have network issues. Research how easy this is. [16:37:45] Analytics-Kanban: Wikistats 2.0. Edit Reports: Setting up a pipeline to source Historical Edit Data into hdfs {lama} - https://phabricator.wikimedia.org/T130256#2483920 (Milimetric) [16:37:47] Analytics: Scale MySQL edit data extraction - https://phabricator.wikimedia.org/T134791#2483919 (Milimetric) [16:42:15] Analytics-Wikistats: Unexpected increase in traffic for 4 languages in same region, on smaller projects - https://phabricator.wikimedia.org/T136084#2483969 (Nuria) This is harder than it seems, as banning requests per IP might ban (or tag as spider) a bunch of lawful bot traffic, for ex: all verizon iphone use...
[16:42:41] (CR) Addshore: [C: 2 V: 2] "The non merging issue should have been fixed by https://gerrit.wikimedia.org/r/#/c/300308/" [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300300 (owner: Addshore) [16:42:44] (CR) Addshore: [C: 2 V: 2] "The non merging issue should have been fixed by https://gerrit.wikimedia.org/r/#/c/300308/" [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300303 (owner: Addshore) [16:45:02] Analytics-Kanban: Bot from an Azure cloud cluster is causing a false pageview spike - https://phabricator.wikimedia.org/T137454#2368793 (Nuria) If we want to implement tagging as "possible bot" (or "bot") based on request ratio, first we need to research whether we will be tagging a bunch of mobile traffic... [16:45:47] Analytics-Kanban: Bot from an Azure cloud cluster is causing a false pageview spike (can we identify as bot?) - https://phabricator.wikimedia.org/T137454#2483994 (Nuria) [16:49:39] (PS1) Addshore: betafeatures also ignore labtestwiki db [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300310 [16:49:51] (PS1) Addshore: betafeatures also ignore labtestwiki db [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300311 [16:49:57] (CR) Addshore: [C: 2] betafeatures also ignore labtestwiki db [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300311 (owner: Addshore) [16:50:08] (CR) Addshore: [C: 2] betafeatures also ignore labtestwiki db [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300310 (owner: Addshore) [16:50:35] (Merged) jenkins-bot: betafeatures also ignore labtestwiki db [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300311 (owner: Addshore) [16:56:22] (Merged) jenkins-bot: betafeatures also ignore labtestwiki db [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300310 (owner: Addshore) [16:57:07] Analytics-Cluster, Analytics-Kanban, Deployment-Systems, scap, and 2 others: Deploy analytics-refinery with scap3 - https://phabricator.wikimedia.org/T129151#2096617
(Nuria) [16:59:00] Analytics-Kanban, Patch-For-Review: Translate the analytics-release-test job to YAML config in integration/config {hawk} - https://phabricator.wikimedia.org/T132182#2190797 (madhuvishy) :D Thanks @hashar! For the barnstar, and all the help :) a-team, this is all merged now. [16:59:56] Analytics-Kanban, Continuous-Integration-Config, Patch-For-Review: Add JJB support for Jenkins Maven Release Plugin {hawk} - https://phabricator.wikimedia.org/T132175#2190696 (madhuvishy) This is merged upstream, and also cherry-picked and available in our version of JJB. [17:04:04] Analytics: Compile a request data set for caching research and tuning - https://phabricator.wikimedia.org/T128132#2484111 (Nuria) We will have flatfile(s) on datasets.wikimedia.org. How much data can we handle on an http endpoint? Seems that this data will compress pretty well, let's start with 1G. [17:04:54] Analytics-Kanban: Compile a request data set for caching research and tuning - https://phabricator.wikimedia.org/T128132#2064831 (Nuria) [17:06:53] Analytics-Kanban: Scale MySQL edit data extraction - https://phabricator.wikimedia.org/T134791#2277164 (Nuria) [17:07:36] Analytics-Kanban: Remove outdated docs regarding dashboard info - https://phabricator.wikimedia.org/T137883#2382422 (Nuria) [17:14:46] Analytics: Make top pages for WP:MED articles - https://phabricator.wikimedia.org/T139324#2428005 (Nuria) This task will be an ad hoc query to get these numbers. There is a longer task of "adding top counts for wiki projects to pageview API" [17:15:02] Analytics: Adding top counts for wiki projects to pageview API - https://phabricator.wikimedia.org/T141010#2484223 (Nuria) [17:15:30] Analytics: Adding top counts for wiki projects to pageview API - https://phabricator.wikimedia.org/T141010#2484235 (Nuria) Example: User wants top pages for WikiProject:Medicine. Add WikiProject:Medicine as a project modifier to en.wikipedia.org queries on the pageview API.
Need to think this through and... [17:15:41] Analytics: Adding top counts for wiki projects (ex: WikiProject:Medicine) to pageview API - https://phabricator.wikimedia.org/T141010#2484236 (Nuria) [17:17:50] Analytics-Kanban: Make top pages for WP:MED articles - https://phabricator.wikimedia.org/T139324#2428005 (Nuria) [18:06:14] mobrovac: once an admin group gets created let's have a fast sync up to be on the same page when it comes to ops for eventbus cc ottomata , probably at some point next week, 20 mins should be enough cc gwicke [18:07:32] nuria_: sure, but i'll be out next week [18:07:43] so the week after that is the soonest for me [18:07:44] mobrovac: noted [18:20:18] Hello people, checking emails and IRC, but afaik it seems all good, right? [18:20:21] :) [18:22:25] ottomata: o/ [18:36:00] ottomata: I fixed https://gerrit.wikimedia.org/r/#/c/299719, we can definitely use analytics-deploy. I added 'analyticsdeploy' because I didn't know the puppet code well enough. As soon as Yuvi's key is fixed in the pwstore I'll create the key and update the keyholder [18:36:17] after that the main problem would be how to test it [18:36:32] not sure if trying it in labs would be useful or a waste of time [18:37:10] we can maybe disable puppet on tin, analytics1027 and upgrade only, say, stat1002 [18:37:27] see if everything is changed correctly [18:37:48] upgrade tin, do a little deploy to stat1002 [18:37:50] etc. [18:48:30] elukey: that sounds good. or stat1004 [18:48:33] it gets deployed there too [18:48:42] actually you could probably puppetize the deploy conditionally [18:48:53] use scap on stat1004, leave trebuchet until we know it all works for the others [18:50:42] ah can we do something like that?? [19:04:46] will try to investigate tomorrow.. [19:04:56] going afk! [19:04:58] byyeeee [19:05:04] joal: hope that Lino is ok! [19:05:18] * elukey hugs joal [19:05:21] o/ [19:23:03] byyee!
[19:42:45] joal: sorry about the merge yesterday but totally missed the text partition thing (cc addshore ) [19:52:03] ottomata: no bother nuria_ :) [19:52:10] oops, sorry ottomata [19:58:41] * joal hugs elukey back - Thanks mate, Lino will get better soon I'm sure :) [20:15:31] Analytics-Kanban: Mediawiki changes to publish data for analytics schemas - https://phabricator.wikimedia.org/T138268#2484994 (Jdforrester-WMF) So, should this be duped into T137287? Or just marked as a parent of that one? [20:41:08] Analytics-Kanban, MW-1.28-release-notes, Patch-For-Review: Update mediwiki hooks to generate data for new event-bus schemas - https://phabricator.wikimedia.org/T137287#2485143 (Nuria) [20:41:35] Analytics-Kanban, MW-1.28-release-notes, Patch-For-Review: Update mediwiki hooks to generate data for new event-bus schemas - https://phabricator.wikimedia.org/T137287#2363796 (Nuria) [20:41:37] Analytics-Kanban: Mediawiki changes to publish data for analytics schemas - https://phabricator.wikimedia.org/T138268#2395185 (Nuria) [20:42:16] Analytics-Kanban, MW-1.28-release-notes, Patch-For-Review: Update mediwiki hooks to generate data for new event-bus schemas - https://phabricator.wikimedia.org/T137287#2363796 (Nuria) [20:42:18] Analytics-Kanban: Wikistats 2.0. 
Edit Reports: Setting up a pipeline to source Historical Edit Data into hdfs {lama} - https://phabricator.wikimedia.org/T130256#2485149 (Nuria) [20:42:53] Analytics-Kanban: Mediawiki changes to publish data for analytics schemas - https://phabricator.wikimedia.org/T138268#2395185 (Nuria) [20:42:55] Analytics-Kanban, MW-1.28-release-notes, Patch-For-Review: Update mediwiki hooks to generate data for new event-bus schemas - https://phabricator.wikimedia.org/T137287#2363796 (Nuria) [20:44:02] joal: you should be good to deploy now - i'm happy to talk about the problem whenever you're around [20:44:16] also *hugs* hope lino is doing alright [21:40:34] (PS1) Addshore: Improve all script output [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300427 [21:40:57] (PS1) Addshore: Improve all script output [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300428 [21:42:15] (CR) Addshore: [C: 2] Improve all script output [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300427 (owner: Addshore) [21:42:18] (CR) Addshore: [C: 2] Improve all script output [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300428 (owner: Addshore) [21:42:36] (Merged) jenkins-bot: Improve all script output [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300427 (owner: Addshore) [21:42:41] (Merged) jenkins-bot: Improve all script output [analytics/wmde/scripts] - https://gerrit.wikimedia.org/r/300428 (owner: Addshore) [21:55:23] Analytics, Editing-Analysis, Performance-Team, VisualEditor, Graphite: Statsv down, affects metrics from beacon/statsv (e.g. 
VisualEditor, mw-js-deprecate) - https://phabricator.wikimedia.org/T141054#2485526 (Krinkle) [22:38:48] Analytics-Cluster, Operations: stat1002 - puppet fails to git pull refinery_source - https://phabricator.wikimedia.org/T141062#2485674 (Dzahn) [22:39:14] Analytics-Cluster, Operations: stat1002 - puppet fails to git pull refinery_source - https://phabricator.wikimedia.org/T141062#2485686 (Dzahn) [22:54:29] Analytics, Analytics-Cluster: Create Daily & Monthly pageview dump with country data - https://phabricator.wikimedia.org/T90759#2485715 (ggellerman) [23:53:27] Analytics, Editing-Analysis, Performance-Team, VisualEditor, Graphite: Statsv down, affects metrics from beacon/statsv (e.g. VisualEditor, mw-js-deprecate) - https://phabricator.wikimedia.org/T141054#2485931 (Jdforrester-WMF) Aha, data has begun to trickle back in. Thanks!