[00:00:41] 10Analytics, 10Dumps-Generation, 10ORES, 10Scoring-platform-team: Produce dump files for ORES scores - https://phabricator.wikimedia.org/T209739 (10awight)
[00:00:49] awight: for persisting events, if they abide by a schema and they are on a kafka topic (which they probably are if you are using changeprop), you just have to make sure they follow the guidelines and the current code can be used to persist them
[00:02:12] RoanKattouw: just rebooted the box and i see events coming in from apps, let me look at mysql
[00:05:01] OK. I just changed my prefs on beta to cause a PrefUpdate event
[00:05:31] RoanKattouw: i think it is working .. one sec
[00:07:43] 10Analytics, 10ORES, 10Scoring-platform-team: Purge ORES scores from Hadoop and begin backfill when model version changes - https://phabricator.wikimedia.org/T209742 (10awight)
[00:11:15] 10Analytics, 10ORES, 10Scoring-platform-team: Wire ORES scoring events into Hadoop - https://phabricator.wikimedia.org/T209732 (10awight) >>! In T209732#4754975, @Nuria wrote: > @awight FYI that events need to abide by a schema that can be persisted to sql: https://wikitech.wikimedia.org/wiki/Analytics/Syste...
[00:11:24] 10Analytics, 10ORES, 10Scoring-platform-team: Wire ORES scoring events into Hadoop - https://phabricator.wikimedia.org/T209732 (10awight)
[00:11:30] RoanKattouw: events seem to be rolling in
[00:11:30] 10Analytics, 10Analytics-Kanban, 10EventBus, 10ORES, and 4 others: Modify revision-score schema so that model probabilities won't conflict - https://phabricator.wikimedia.org/T197000 (10awight)
[00:12:04] https://www.irccloud.com/pastebin/dehnSNcB/
[00:14:38] Yeah, looks like something is happening
[00:14:50] max(timestamp) is still a few hours behind in some of those tables, but I'll wait for more to come in
[00:23:39] 10Analytics, 10Wikipedia-iOS-App-Backlog, 10iOS-app-Bugs, 10iOS-app-feature-Analytics, 10iOS-app-v6.2-Beluga-On-A-Pogo-Stick: Many errors on "MobileWikiAppiOSSearch" and "MobileWikiAppiOSUserHistory" - https://phabricator.wikimedia.org/T207424 (10greg) This UBN! has not had activity for about a week...
[00:29:24] 10Analytics, 10Anti-Harassment: Add ipblocks_restrictions table to Data Lake - https://phabricator.wikimedia.org/T209549 (10TBolliger) From some discussions, I believe we'll want to add `ipblocks_restrictions` to `filtered_tables.txt` so my team can compare monthly trends. Is this work for my team to do, or is...
[00:37:03] jdlrobson: I totally forgot about that fix you sent, my bad. I just really hesitate to work on that repo, since the base repo it inherits from is so broken. Mine was just a really rough proof of concept for limn on top of the very broken passport-oauth. I think the right thing to do is to look around for a better oauth middleware (one has to exist) and then make it work with mediawiki
[01:04:34] 10Analytics, 10Analytics-Kanban, 10New-Readers, 10Patch-For-Review: Instrument the landing page - https://phabricator.wikimedia.org/T202592 (10Nuria) I can see data climbing up, but your event widget is empty; please take a look.
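The max(timestamp) check mentioned above (seeing whether an EventLogging table is still a few hours behind) amounts to comparing the newest event timestamp against the clock. A minimal sketch, assuming MediaWiki-style `YYYYMMDDHHMMSS` UTC timestamp strings; the function name and example values are hypothetical, not from the log:

```python
from datetime import datetime, timedelta

def lag_from_max_timestamp(max_ts, now=None):
    """Return how far behind the newest event is, given the result of
    a query like SELECT MAX(timestamp) FROM <some EventLogging table>,
    where timestamps are MediaWiki-style YYYYMMDDHHMMSS strings in UTC."""
    now = now or datetime.utcnow()
    newest = datetime.strptime(max_ts, "%Y%m%d%H%M%S")
    return now - newest

# Example: newest event at 20:14 UTC, checked at 00:14 the next day
lag = lag_from_max_timestamp("20181120201400", now=datetime(2018, 11, 21, 0, 14))
# lag is a timedelta of 4 hours, i.e. the table is "a few hours behind"
```

Whether a given lag is acceptable depends on the pipeline's batching interval; a table that is a few hours behind may simply be waiting for the next batch to land.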
Bulk of traffic comes from fb mobile
[07:28:53] PROBLEM - Check if the Hadoop HDFS Fuse mountpoint is readable on notebook1004 is CRITICAL: connect to address 10.64.36.107 port 5666: Connection refused
[08:09:04] forced remount --^
[08:09:13] (should be recovered in a bit)
[08:29:05] RECOVERY - Check if the Hadoop HDFS Fuse mountpoint is readable on notebook1004 is OK: OK
[09:30:25] PROBLEM - Check if the Hadoop HDFS Fuse mountpoint is readable on notebook1004 is CRITICAL: connect to address 10.64.36.107 port 5666: Connection refused
[10:00:31] RECOVERY - Check if the Hadoop HDFS Fuse mountpoint is readable on notebook1004 is OK: OK
[10:10:57] 10Quarry, 10Google-Code-in-2018: Create api health point for monitoring - https://phabricator.wikimedia.org/T205151 (10D3r1ck01)
[10:11:56] 10Quarry, 10Google-Code-in-2018: Create api health point for monitoring - https://phabricator.wikimedia.org/T205151 (10Framawiki) Imported as https://codein.withgoogle.com/tasks/4737312928825344/, a task that I'll mentor.
[23:04:32] PROBLEM - Check if the Hadoop HDFS Fuse mountpoint is readable on notebook1004 is CRITICAL: connect to address 10.64.36.107 port 5666: Connection refused
[23:34:38] RECOVERY - Check if the Hadoop HDFS Fuse mountpoint is readable on notebook1004 is OK: OK