[00:52:11] Machine-Learning-Team, ORES, Graphite, good first task: Look at additional uWSGI metrics for potential use in the ORES dashboard - https://phabricator.wikimedia.org/T182915 (Carlwcheung) Hi, is this ticket something I can still look into?
[09:35:15] Machine-Learning-Team, ORES, Graphite: Look at additional uWSGI metrics for potential use in the ORES dashboard - https://phabricator.wikimedia.org/T182915 (Aklapper)
[15:56:24] Machine-Learning-Team, ORES, Graphite: Look at additional uWSGI metrics for potential use in the ORES dashboard - https://phabricator.wikimedia.org/T182915 (ACraze) Open→Invalid Thanks for the interest @Carlwcheung, this is a very old ticket. We actually moved from Graphite/StatsD over to Pr...
[15:57:32] Hi-ai!
[15:57:48] reposting here an article I posted in the analytics chan: https://medium.com/riselab/feature-stores-the-data-side-of-ml-pipelines-7083d69bff1c
[16:28:46] No team meeting today, I have other meetings and will reschedule
[16:30:36] ack
[16:51:23] sounds good
[16:53:11] joal: that article is a great overview of how feature stores fit into the ML workflow, thanks for sharing. I can see a lot of benefits for us in having access to a feature store in the near future.
[17:09:16] it seems like getting query latency low enough for prediction serving is a crucial part of making a feature store 'production-ready'
[17:10:28] that will be tough with the ORES models since the features get extracted in real time
[17:12:04] although we could probably experiment with a caching mechanism that utilizes the feature store
[17:16:20] Yeah, there is way too much preprocessing done in ORES. Amir mentioned that without the prediction cache a single prediction would take like 4 seconds.
[17:17:39] We need to rethink that model completely and instead have all that preprocessing done and ready in a feature store
[17:22:34] I think there is an interesting sweet spot to find between intermediate data materialization for reusability and processing at query time - it will be fun :)
[17:56:08] That is true. We'll also be a lot more agile (getting models into production, etc.) if we don't have to set up a big prediction caching system, at least not for most models
[18:26:35] artificial-intelligence, Documentation, Machine-Learning-Team (Active Tasks): Experiment with on-wiki model documentation - https://phabricator.wikimedia.org/T276398 (calbon) Oh awesome, thanks Isaac. Where do you host the Python code?
[18:41:17] artificial-intelligence, Documentation, Machine-Learning-Team (Active Tasks): Experiment with on-wiki model documentation - https://phabricator.wikimedia.org/T276398 (Isaac) > Where do you host the Python code? This is a prototype so we're just running this off stat1004 using [[https://github.com/jtm...
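
[Editor's sketch of the caching experiment floated at 17:12: a minimal Python illustration, not anything from the ORES codebase. InMemoryFeatureStore, extract_features, and predict are all hypothetical names standing in for a real low-latency feature store, ORES's slow (~4s) real-time feature extraction, and a model's scoring call.]

from typing import Callable, Optional


class InMemoryFeatureStore:
    """Hypothetical stand-in for a feature store (a real one would sit on a low-latency KV backend)."""

    def __init__(self) -> None:
        self._features: dict[int, dict] = {}

    def get(self, rev_id: int) -> Optional[dict]:
        # Return precomputed features for a revision, or None on a miss.
        return self._features.get(rev_id)

    def put(self, rev_id: int, features: dict) -> None:
        self._features[rev_id] = features


def score(rev_id: int,
          store: InMemoryFeatureStore,
          extract_features: Callable[[int], dict],
          predict: Callable[[dict], float]) -> float:
    """Serve one prediction, preferring materialized features over live extraction."""
    features = store.get(rev_id)
    if features is None:
        # Cache miss: fall back to the expensive real-time extraction path,
        # then materialize the result so later requests skip this step.
        features = extract_features(rev_id)
        store.put(rev_id, features)
    return predict(features)

[The point of the sketch is the sweet spot mentioned at 17:22: preprocessing is materialized once per revision and reused, so only first-time requests pay the extraction cost, and no separate prediction-caching system is needed per model.]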