[00:04:14] Quarry, Patch-For-Review, cloud-services-team (Kanban): Prepare Quarry for multiinstance wiki replicas - https://phabricator.wikimedia.org/T264254 (Bstorm) I think that to move this forward, we need to deploy the patch on a testing server and run it in parallel. I also wonder if branching the repo wo...
[00:06:59] Quarry, cloud-services-team (Kanban): Do some checks of how many Quarry queries will break in a multiinstance environment - https://phabricator.wikimedia.org/T267989 (Bstorm)
[00:54:14] RECOVERY - Check the last execution of monitor_refine_eventlogging_legacy_failure_flags on an-launcher1002 is OK: OK: Status of the systemd unit monitor_refine_eventlogging_legacy_failure_flags https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers
[02:54:09] Analytics, Datasets-Archiving, Datasets-General-or-Unknown, Internet-Archive: Mediacounts dumps missing since February 9, 2021 - https://phabricator.wikimedia.org/T274617 (Hydriz)
[05:34:02] Analytics-Radar, Datasets-Archiving, Research: Make HTML dumps available - https://phabricator.wikimedia.org/T182351 (bd808) >>! In T182351#6825141, @Ottomata wrote: > Templates are stored in wikitext (right...are they?). If so, I wonder if [[ https://wikitech.wikimedia.org/wiki/Analytics/Data_Lake/...
[07:11:26] goood morning
[07:15:34] Analytics-Radar, Datasets-Archiving, Research: Make HTML dumps available - https://phabricator.wikimedia.org/T182351 (jeremyb-phone) if you're going to do site js/css then might as well do vector js/css as well. and this may be going too far but some other things to consider: * vector wasn't always t...
[07:19:26] Analytics-Radar, Datasets-Archiving, Research: Make HTML dumps available - https://phabricator.wikimedia.org/T182351 (jeremyb-phone) a couple more things * magic words that return date/time should return an old date/time * make sure that articles that exist now are red links if they didn't exist bac...
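
For reference, the recovered check above watches one of the Analytics systemd timers documented at the linked wikitech page; a minimal sketch of inspecting it by hand, assuming the usual matching .timer/.service pair on an-launcher1002 (the unit name comes from the alert, the pair layout is an assumption):

    # On an-launcher1002; unit name taken from the alert above, .timer/.service pair assumed.
    systemctl status monitor_refine_eventlogging_legacy_failure_flags.timer
    systemctl status monitor_refine_eventlogging_legacy_failure_flags.service
    # Recent output of the monitoring run:
    sudo journalctl -u monitor_refine_eventlogging_legacy_failure_flags.service --since "2 days ago"
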
[07:19:35] Analytics, Analytics-Kanban: Move the puppet codebase from cdh to bigtop - https://phabricator.wikimedia.org/T274345 (elukey) p:Triage→Medium
[07:46:59] !log force a manual run of refinery-druid-drop-public-snapshots on an-launcher1002 (3d before its natural start) - controlled execution to see how druid + 3xdataset replication reacts
[07:47:01] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[07:49:52] I see some timeouts in the broker on druid1003
[07:49:56] err druid1004
[07:50:55] broker metrics start to report higher latencies
[07:52:18] and stats.wikimedia.org not working
[07:52:22] PROBLEM - aqs endpoints health on aqs1007 is CRITICAL: /analytics.wikimedia.org/v1/edits/per-page/{project}/{page-title}/{editor-type}/{granularity}/{start}/{end} (Get daily edits for english wikipedia page 0) timed out before a response was received https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:52:36] PROBLEM - aqs endpoints health on aqs1004 is CRITICAL: /analytics.wikimedia.org/v1/edits/per-page/{project}/{page-title}/{editor-type}/{granularity}/{start}/{end} (Get daily edits for english wikipedia page 0) timed out before a response was received https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:53:44] PROBLEM - aqs endpoints health on aqs1009 is CRITICAL: /analytics.wikimedia.org/v1/edits/per-page/{project}/{page-title}/{editor-type}/{granularity}/{start}/{end} (Get daily edits for english wikipedia page 0) timed out before a response was received https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:53:44] PROBLEM - aqs endpoints health on aqs1005 is CRITICAL: /analytics.wikimedia.org/v1/edits/per-page/{project}/{page-title}/{editor-type}/{granularity}/{start}/{end} (Get daily edits for english wikipedia page 0) is CRITICAL: Test Get daily edits for english wikipedia page 0 returned the unexpected status 504 (expecting: 200) https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:53:45] PROBLEM - aqs endpoints health on aqs1006 is CRITICAL: /analytics.wikimedia.org/v1/edits/per-page/{project}/{page-title}/{editor-type}/{granularity}/{start}/{end} (Get daily edits for english wikipedia page 0) is CRITICAL: Test Get daily edits for english wikipedia page 0 returned the unexpected status 504 (expecting: 200) https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:53:50] PROBLEM - aqs endpoints health on aqs1008 is CRITICAL: /analytics.wikimedia.org/v1/edits/per-page/{project}/{page-title}/{editor-type}/{granularity}/{start}/{end} (Get daily edits for english wikipedia page 0) is CRITICAL: Test Get daily edits for english wikipedia page 0 returned the unexpected status 504 (expecting: 200) https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:54:35] !log roll restart of druid brokers on druid-public - locked after scheduled datasource deletion
[07:54:40] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[07:56:04] RECOVERY - aqs endpoints health on aqs1009 is OK: All endpoints are healthy https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:56:10] RECOVERY - aqs endpoints health on aqs1005 is OK: All endpoints are healthy https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:56:10] RECOVERY - aqs endpoints health on aqs1006 is OK: All endpoints are healthy https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:56:20] RECOVERY - aqs endpoints health on aqs1008 is OK: All endpoints are healthy https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:57:18] RECOVERY - aqs endpoints health on aqs1007 is OK: All endpoints are healthy https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
[07:57:30] RECOVERY - aqs endpoints health on aqs1004 is OK: All endpoints are healthy https://wikitech.wikimedia.org/wiki/Services/Monitoring/aqs
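
The failing check above exercises the per-page edits route of AQS; a quick manual probe of the same route through the public REST API looks roughly like the sketch below (the wikimedia.org base URL and the example parameter values are assumptions, not taken from the alert):

    # Public form of the edits/per-page route named in the alerts; all parameter values are examples.
    curl -s 'https://wikimedia.org/api/rest_v1/metrics/edits/per-page/en.wikipedia.org/Earth/all-editor-types/daily/20210201/20210210'
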
[08:20:04] elukey@druid1004:~$ ps -eLf | grep "org.apache.druid.cli.Main server historical" | wc -l
[08:20:07] 455
[08:20:11] this is something that doesn't make sense to me
[08:53:58] I am wondering if a more conservative drop script could help
[08:54:34] for example, dropping max X segments in a batch, wait, etc..
[09:59:24] iiiinteresting, all druid broker timeouts are related to a single historical, the one on druid1003
[09:59:28] err druid1004
[10:03:17] Analytics: Druid datasource drop triggers segment reshuffling by the coordinator - https://phabricator.wikimedia.org/T270173 (elukey) Today it rehappened, and I noticed from the following that the broker timeouts are all related to one historical: ` elukey@cumin1001:~$ sudo cumin 'A:druid-public' 'grep "ERR...
[10:08:15] so the brokers all pile up on one historical
[10:09:21] and queries are for mediawiki_history_reduced_2021_01
[10:09:23] as expected
[10:49:12] I periodically find https://github.com/apache/druid/issues/325#issuecomment-32744317
[10:49:24] I have never really got it completely
[10:49:32] but I might have a theory
[10:49:58] we have druid.broker.http.numConnections=20 configured
[10:51:12] and 5 brokers, so a total of 5*20 total parallel cons that a single historical can receive
[10:53:01] on the historical, we have druid.server.http.numThreads=120
[10:53:19] that is 5*20+20, more or less similar to what upstream suggests
[10:53:35] on the broker though we have druid.processing.numThreads=10
[10:53:51] and that old gh issues says
[10:53:52] [num of brokers] * broker:druid.processing.numThreads > historical:druid.server.http.numThreads
[10:54:16] otherwise CLEARLY things lock up quickly
[10:55:17] and joal I think gave a thumbs up :D
[10:55:54] so what I think is that we there are not enough threads to process subqueries returned by the historicals
[10:56:08] so connections need to wait on the broker side if there is a slowdown
[10:56:58] we could try something like druid.processing.numThreads=25
[10:57:04] for brokers
[11:00:27] also
[11:00:28] druid.server.http.numThreads on the Broker should be set to a value slightly higher than druid.broker.http.numConnections on the same Broker.
[11:02:00] ETOOMANYTUNABLES
[11:09:39] ok https://gerrit.wikimedia.org/r/c/operations/puppet/+/663800
[11:09:45] let's see if this helps
[11:11:41] ok joal when you have time lemme know what you think about --^ :)
[11:35:34] * elukey lunch!
[12:40:44] * joal looks at his thumb, wondering if it should be up or down
[12:42:51] ok - thumbs up it is :)
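
Putting the numbers from the discussion together, a minimal sketch of the broker-side runtime properties involved (the property names and the 20 / 120 / 10 → 25 values are the ones quoted above; the broker http thread value and the file layout are illustrative assumptions, not the actual change in the puppet patch 663800):

    # Broker runtime.properties sketch, values as discussed above
    druid.broker.http.numConnections=20   # 5 brokers * 20 = up to 100 connections hitting a single historical
    druid.processing.numThreads=25        # proposed bump from 10: 5 * 25 = 125 > 120, per the rule quoted above
    druid.server.http.numThreads=25       # illustrative value; should sit slightly above numConnections on the same broker

    # Historical runtime.properties, current value for reference
    druid.server.http.numThreads=120      # roughly 5 brokers * 20 connections plus headroom
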
[14:19:18] Analytics, Analytics-Visualization: Archive #Analytics-Visualization (which seems to be about Limn)? - https://phabricator.wikimedia.org/T274647 (Aklapper)
[15:12:06] heya teamm
[15:12:28] holaaaa
[15:13:08] reading scrollback elukey, can I do somethingf?
[15:13:39] mforns: nono all done, it was a big bla bla bla between myself and IRC :D
[15:13:54] the final result is the code review above
[15:13:57] I am rolling it out
[15:14:00] hope that it helps
[15:14:03] k
[15:16:24] !log roll restart druid broker on druid-public to pick up new settings
[15:16:27] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[15:16:34] mforns: if you have time we can fix the entropy alerts
[15:16:37] elukey: if you give a +1 to https://gerrit.wikimedia.org/r/c/analytics/refinery/+/663245/ I will deploy it to stop alerts. Despite Joseph's comment, he told me it was OK to merge.
[15:16:45] hehe, you read my mind!
[15:17:07] (CR) Elukey: [C: +1] Replace UNION ALL with UNION to unbreak data_quality_stats job [analytics/refinery] - https://gerrit.wikimedia.org/r/663245 (https://phabricator.wikimedia.org/T274322) (owner: Mforns)
[15:17:12] thanks!
[15:17:17] np!
[15:17:28] mforns: we can do a quick refinery deploy if you want
[15:17:59] (CR) Mforns: [V: +2 C: +2] "Since Joseph told me in person to disregard his comment, and that the fix was OK, and I got a +1 from Luca, I'm merging this!" [analytics/refinery] - https://gerrit.wikimedia.org/r/663245 (https://phabricator.wikimedia.org/T274322) (owner: Mforns)
[15:18:49] elukey: sure, I will do a ref deploy now
[15:18:55] <3
[15:19:02] elukey: do you know if there's something in ref-source to deploy too?
[15:21:15] mforns: I knew only the 1.1 release to remove cdh deps, but I think it was already deployed
[15:21:28] a quick refinery scap deploy is fine I think
[15:21:34] ok
[15:21:36] worst case we deploy early next week
[15:21:42] doing
[15:25:38] Analytics: Repackage spark without hadoop, use provided hadoop jars - https://phabricator.wikimedia.org/T274384 (Ottomata) Some findings: - spark-2.4.4-bin-without-hadoop.tgz does not include any Hive support. It also seems to be missing some parts of spark-hadoop specific packages that I think we need, e...
[15:26:08] !log started deployment of analytics-refinery
[15:26:10] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[15:30:44] Analytics, Patch-For-Review: Druid datasource drop triggers segment reshuffling by the coordinator - https://phabricator.wikimedia.org/T270173 (elukey) After checking again a ton of docs, I tried new settings for the brokers, let's see if they work. I also followed up in the user@druid mailing list but s...
[15:31:37] Analytics: HDFS Namenode: use a separate port for Block Reports and Zookeeper failover - https://phabricator.wikimedia.org/T273629 (elukey) Next step: deploy new settings to the main hadoop cluster and roll restart HDFS daemons (tentative schedule - Mon 15th)
[15:31:52] Analytics, Analytics-Kanban: HDFS Namenode: use a separate port for Block Reports and Zookeeper failover - https://phabricator.wikimedia.org/T273629 (elukey)
[15:47:39] Analytics-Clusters, Patch-For-Review: Convert labsdb1012 from multi-source to multi-instance - https://phabricator.wikimedia.org/T269211 (elukey) @razzi lets work on a schedule, we are getting close to the middle of the month and we have only two weeks to do the migration :) Some notes: * if you ssh to...
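
On the "Repackage spark without hadoop" findings in T274384 above: the upstream-documented way to run a "Hadoop free" Spark build against the Hadoop jars already provided on the host is to set SPARK_DIST_CLASSPATH in spark-env.sh. A minimal sketch, with no claim that the Analytics packaging ends up doing exactly this:

    # conf/spark-env.sh sketch for a spark-x.y.z-bin-without-hadoop build
    # Put the locally installed Hadoop jars on Spark's classpath:
    export SPARK_DIST_CLASSPATH=$(hadoop classpath)
    # The without-hadoop tarball ships no Hive support (as the task notes), so the Spark Hive
    # integration jars would still have to be provided separately; how is what the task is working out.
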
[15:49:20] Analytics: Repackage spark without hadoop, use provided hadoop jars - https://phabricator.wikimedia.org/T274384 (elukey) +1
[15:57:24] Analytics, Patch-For-Review: Decide to move or not to PrestoSQL/Trino - https://phabricator.wikimedia.org/T266640 (elukey) @Ottomata https://github.com/prestodb/presto/pull/15655 got merged by upstream, but it will be released in 0.248 and I am not sure when they have scheduled it. I uploaded the 0.246 d...
[16:00:35] elukey: the scap deploy is taking much more than usual, seems OK, but slowwww
[16:00:48] 1 to go
[16:01:52] ack :)
[16:07:30] finished, it took 42 minutes...
[16:09:27] Analytics, Analytics-Kanban: Check data currently stored on thorium and drop what it is not needed anymore - https://phabricator.wikimedia.org/T265971 (elukey) Definitely better now: ` elukey@an-launcher1002:~$ sudo -u hdfs kerberos-run-command hdfs hdfs dfs -du -h /wmf/data/archive/backup/misc/thorium...
[16:09:57] Analytics, Analytics-Kanban: Check data currently stored on thorium and drop what it is not needed anymore - https://phabricator.wikimedia.org/T265971 (elukey) @Milimetric can we sync next week to double check and finally drop?
[16:23:26] factorio, yea?
[16:23:30] everyone have it installed a-team?
[16:23:41] no...... :[
[16:23:48] I can watch
[16:23:49] how we doin' voice, discord?
[16:23:52] I am going to join bc to chat but probably will not play :)
[16:23:53] mforns! Why not?
[16:24:02] what is happening! :)
[16:24:05] hehehe
[16:24:20] I looked at it, and I found it too complicated for my brain
[16:24:22] we can do voice on hangouts, but turn off video, otherwise I think my game will struggle with the fps
[16:24:37] I can testify that it works fine on my laptop with everything turned off
[16:24:53] mforns: it has the smoothest learning curve, but come hang out :)
[16:24:55] (on linux, there's not even an install process, just download and go)
[16:25:12] yeah, it's easy, you just walk around and bang on rocks as far as I can tell
[16:25:19] aha...
[16:25:40] ok, I'm joining the batcave and spinning up the game so I can make sure it works. fdans how do I join your server?
[16:28:18] going to get a coffee and then I'll join bc
[16:28:42] Starting the download here
[16:32:00] !log finished deployment of analytics-refinery
[16:32:04] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[16:37:49] milimetric: are you joining bc?
[16:38:06] we are here
[16:38:30] ahhh we are in the meeting that you created
[16:38:38] oh shit sorry
[17:04:52] mforns: heya - talking work in the non-game place ;)
[17:05:07] mforns: have you deploy fixed the issue with the data-quality job?
[17:05:13] what?? :]
[17:05:23] joal: yes, I hope alarms stop now
[17:06:32] mforns: have you restarted jobs?
[17:06:40] heh, doing now...
[17:06:49] Ah - sorry :)
[17:08:51] !log reboot presto workers for kernel upgrade
[17:08:52] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[17:09:07] joal: no, no, you reminded me when you asked about the deploy-fix!
[17:09:15] Ah
[17:10:51] mforns: dumb question - how have you chosen the date for restart?
[17:11:50] joal: for the daily bundle, 2021-02-10; for the hourly 2021-02-09T00
[17:12:26] ack mforns
[17:12:27] joal: does that make sense?
[17:12:51] it does mforns - No sure if you noticed mforns but daily bundle succeeded for one day (https://hue.wikimedia.org/oozie/list_oozie_coordinator/0019132-210107075406929-oozie-oozi-C/)
[17:13:05] * joal wonders :S
[17:13:16] joal: yes, the daily bundle was not using UNION ALL...
[17:13:20] AH!
[17:13:26] makes sense :)
[17:13:29] great
[17:13:31] but still, I wonder why it failed...
[17:13:41] That perfect mforns - thanks for explanations :)
[17:36:12] Analytics, Analytics-Visualization, Project-Admins: Archive #Analytics-Visualization (which seems to be about Limn)? - https://phabricator.wikimedia.org/T274647 (DannyS712)
[17:37:50] Quarry: Add a possibility to delete a draft - https://phabricator.wikimedia.org/T135908 (Halfak) FWIW, I think there's a big difference between "delete" and "archive". Delete breaks links and hides past activity. Archive gets stuff I don't want to see out of the way. I think "archive" is the right metap...
[17:43:24] !log Rerun wikidata-json_entity-weekly-wf-2021-02-01
[17:43:26] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[17:47:40] !log Rerun wikidata-specialentitydata_metrics-wf-2021-2-10
[17:47:42] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[17:48:21] !log Rerun wikidata-articleplaceholder_metrics-wf-2021-2-10
[17:48:22] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:31:17] Analytics-Clusters: Balance Kafka topic partitions on Kafka Jumbo to take advantage of the new brokers - https://phabricator.wikimedia.org/T255973 (razzi)
[18:31:58] !log rebalance kafka partitions for __consumer_offsets
[18:32:00] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:33:37] (CR) Mforns: [V: +2 C: +2] "LGTM!" [analytics/refinery] - https://gerrit.wikimedia.org/r/663020 (https://phabricator.wikimedia.org/T273826) (owner: Nettrom)
[18:34:13] !log rebalance kafka partitions for atskafka_test_webrequest_text
[18:34:14] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:34:29] (CR) Mforns: "This will become effective on our next refinery deployment (on tuesdays). Thanks!" [analytics/refinery] - https://gerrit.wikimedia.org/r/663020 (https://phabricator.wikimedia.org/T273826) (owner: Nettrom)
[18:38:23] going afk people, have a nice weekend!
[18:38:31] ciao elukey !
[18:38:35] o/
[18:41:08] Analytics-Clusters: Balance Kafka topic partitions on Kafka Jumbo to take advantage of the new brokers - https://phabricator.wikimedia.org/T255973 (razzi)
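
The two rebalance log entries above move partitions of __consumer_offsets and atskafka_test_webrequest_text onto the expanded Jumbo broker set; the general shape of such a reassignment with the stock Kafka tooling is sketched below (broker ids, the ZooKeeper address and file names are placeholders, and this may not be the exact procedure or wrapper used on the cluster):

    # topics.json lists the topics to move (names taken from the log entries above):
    # {"version":1,"topics":[{"topic":"__consumer_offsets"},{"topic":"atskafka_test_webrequest_text"}]}

    # Generate a candidate plan spreading partitions over the full broker list (ids are placeholders):
    kafka-reassign-partitions.sh --zookeeper zk-placeholder:2181 \
        --topics-to-move-json-file topics.json --broker-list "1001,1002,1003,1004,1005,1006" --generate

    # Save the proposed plan as reassignment.json, then execute it and check progress:
    kafka-reassign-partitions.sh --zookeeper zk-placeholder:2181 \
        --reassignment-json-file reassignment.json --execute
    kafka-reassign-partitions.sh --zookeeper zk-placeholder:2181 \
        --reassignment-json-file reassignment.json --verify
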
[19:02:50] (PS2) Milimetric: Update syntax for new hive version [analytics/refinery] - https://gerrit.wikimedia.org/r/663372 (https://phabricator.wikimedia.org/T274322)
[19:03:01] (CR) Milimetric: [V: +2 C: +2] Update syntax for new hive version [analytics/refinery] - https://gerrit.wikimedia.org/r/663372 (https://phabricator.wikimedia.org/T274322) (owner: Milimetric)
[19:03:43] a-team: gonna deploy to launch the mediarequest per file cassandra job
[19:03:54] k :]
[19:03:56] the HQL worked fine... I don't see why it would fail with oozie
[19:04:09] (is there anything else for me to deploy?
[19:04:14] milimetric: how did you solve the union all thing?
[19:04:18] looks like changes to the whitelist will go out
[19:04:24] mforns: it doesn't seem to fail in this case
[19:04:33] milimetric: I deployed refinery before factorio, and nothing else was there
[19:04:33] (you were saying it fails for some jobs and not others)
[19:04:39] k
[19:04:40] ok ok
[19:05:07] milimetric: did you test the query with insert overwrite?
[19:05:33] mforns: yeah, https://gerrit.wikimedia.org/r/c/analytics/refinery/+/663372/2/oozie/cassandra/daily/mediarequest_per_file.hql
[19:05:37] ok ok
[19:06:00] :]
[19:07:26] btw it's my ops week, but Monday is a holiday, for me, Andrew, & Raz-zi, so all ops duty people will be out
[19:08:23] milimetric: don't worry, I can look after ops, at least in the eu-afternoon
[19:09:00] milimetric: I forgot, I merged a change to the EL sanitization include-list, so it should be included in your deployment
[19:09:07] but, no extra action item
[19:11:57] Analytics, Growth-Scaling, Growth-Team, Product-Analytics: Growth: delete data older than 90 days - https://phabricator.wikimedia.org/T273821 (mforns) I just merged the patch removal of the growth schemas from the include-list (T273826). When that is deployed, I will delete the 4 tables from the...
[19:17:31] (CR) Mforns: [V: +2 C: +2] "LGTM!" [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/661108 (https://phabricator.wikimedia.org/T273454) (owner: Awight)
[19:18:59] (CR) Mforns: "Sorry for the delay." [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/661108 (https://phabricator.wikimedia.org/T273454) (owner: Awight)
[19:19:37] !log deployed refinery with query syntax fix for the last broken cassandra job and an updated EL whitelist
[19:19:39] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[19:20:02] milimetric: did the scap deploy take forever for you too?
[19:20:12] not longer than normal for me
[19:20:17] maybe 10 min?
[19:20:21] oh.. it took like 40 mins for me
[19:20:33] anyway, good news!
[19:21:25] :)
[19:21:52] crossing fingers nothing else blows up. But as lex pointed out, the monthly jobs will probably blow up too
[19:23:05] oh, yes
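
For context on the "union all thing" discussed above: the 663245 fix swapped UNION ALL for UNION in the data_quality_stats query, and the two operators differ only in how they treat duplicate rows. A minimal HiveQL illustration with made-up tables, not the actual query:

    -- UNION ALL keeps every row from both sides, duplicates included.
    SELECT metric, value FROM table_a
    UNION ALL
    SELECT metric, value FROM table_b;

    -- UNION (i.e. UNION DISTINCT) deduplicates the combined result.
    SELECT metric, value FROM table_a
    UNION
    SELECT metric, value FROM table_b;
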
[19:24:42] arg, messed up
[19:25:00] (PS19) Bstorm: multiinstance: Attempt to make quarry work with multiinstance replicas [analytics/quarry/web] - https://gerrit.wikimedia.org/r/632804 (https://phabricator.wikimedia.org/T264254)
[19:25:15] milimetric: can I help?
[19:25:26] no, I just accidentally forgot some debug text
[19:25:55] k
[19:26:12] (PS1) Milimetric: Remove errant code [analytics/refinery] - https://gerrit.wikimedia.org/r/663883
[19:26:34] (CR) Milimetric: [V: +2 C: +2] Remove errant code [analytics/refinery] - https://gerrit.wikimedia.org/r/663883 (owner: Milimetric)
[19:26:44] (CR) Bstorm: multiinstance: Attempt to make quarry work with multiinstance replicas (1 comment) [analytics/quarry/web] - https://gerrit.wikimedia.org/r/632804 (https://phabricator.wikimedia.org/T264254) (owner: Bstorm)
[19:35:04] (PS20) Bstorm: multiinstance: Attempt to make quarry work with multiinstance replicas [analytics/quarry/web] - https://gerrit.wikimedia.org/r/632804 (https://phabricator.wikimedia.org/T264254)
[19:48:56] woo hoo, I think's it's working
[20:09:43] Analytics, Analytics-Kanban, Patch-For-Review: Clean up issues with jobs after Hadoop Upgrade - https://phabricator.wikimedia.org/T274322 (Milimetric)
[20:11:29] took a look at cassandra and it seems to be handling the writes ok
[20:21:13] loving the response luca got on bigtop-users :)
[21:55:32] (PS21) Bstorm: multiinstance: Attempt to make quarry work with multiinstance replicas [analytics/quarry/web] - https://gerrit.wikimedia.org/r/632804 (https://phabricator.wikimedia.org/T264254)
[22:53:28] (PS22) Bstorm: multiinstance: Attempt to make quarry work with multiinstance replicas [analytics/quarry/web] - https://gerrit.wikimedia.org/r/632804 (https://phabricator.wikimedia.org/T264254)
[23:18:50] (PS23) Bstorm: multiinstance: Attempt to make quarry work with multiinstance replicas [analytics/quarry/web] - https://gerrit.wikimedia.org/r/632804 (https://phabricator.wikimedia.org/T264254)
[23:21:56] Analytics-Radar, MediaWiki-API, Patch-For-Review, Platform Team Initiatives (Modern Event Platform (TEC2)), User-Addshore: Run ETL for wmf_raw.ActionApi into wmf.action_* aggregate tables - https://phabricator.wikimedia.org/T137321 (Mholloway) a:EvanProdromou→Mholloway
[23:31:59] Analytics-Radar, MediaWiki-API, Patch-For-Review, Platform Team Initiatives (Modern Event Platform (TEC2)), User-Addshore: Run ETL for wmf_raw.ActionApi into wmf.action_* aggregate tables - https://phabricator.wikimedia.org/T137321 (Mholloway) I picked up @Tgr's old patch and updated it to co...
[23:34:25] (CR) Bstorm: "Ok, at this point, I've found a few bugs and made this work on https://quarry-dev.wmflabs.org." [analytics/quarry/web] - https://gerrit.wikimedia.org/r/632804 (https://phabricator.wikimedia.org/T264254) (owner: Bstorm)