[00:20:07] 10Analytics, 10Developer-Advocacy, 10Product-Analytics, 10Documentation: Develop EventLogging schema for documentation feedback gadget - https://phabricator.wikimedia.org/T211638 (10chelsyx) Hey @srishakatux , to help us prioritize this task, can you provide and organize the task description in the followi... [00:56:12] (03CR) 10Nettrom: "> Patch Set 3: Verified+2 Code-Review+2" [analytics/refinery] - 10https://gerrit.wikimedia.org/r/471038 (https://phabricator.wikimedia.org/T208332) (owner: 10Nettrom) [00:58:15] PSA: in case someone was still using bast4001 to connect to the cluster (like i did until a moment ago ;) be aware that it was switched off today and you should switch to one of those listed here: https://wikitech.wikimedia.org/wiki/Bastion (cf. https://phabricator.wikimedia.org/T178592 ) [01:09:55] 10Analytics, 10Growth-Team, 10Product-Analytics, 10Patch-For-Review: Add EditAttemptStep properties to the schema whitelist - https://phabricator.wikimedia.org/T208332 (10Neil_P._Quinn_WMF) This data is now flowing into Hive: ` select to_date(dt) as date, count(*) as events from editattemptstep whe... [01:14:01] 10Analytics, 10Growth-Team, 10Product-Analytics, 10Patch-For-Review: Add EditAttemptStep properties to the schema whitelist - https://phabricator.wikimedia.org/T208332 (10Nuria) Whitelist changes are applied since the date they get merged, older data is not revised so whatever was sanitized before the mer... [01:16:42] milimetric: see ^ re bastions [01:17:18] HaeB: ah, sorry didn't think to ping when I ran into the problem, thank you! [02:01:38] 10Analytics, 10Research, 10WMDE-Analytics-Engineering, 10User-Addshore, 10User-Elukey: Phase out and replace analytics-store (multisource) - https://phabricator.wikimedia.org/T172410 (10Neil_P._Quinn_WMF) >>! In T172410#4803991, @elukey wrote: > To follow up what I wrote (after a chat with the data persi... 
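Nuria's explanation above (a whitelist change only takes effect from the date it is merged; already-sanitized older data is not revised) can be sketched as a tiny rule. This is an illustrative sketch only, not the refinery's actual sanitization code; the function and variable names are invented:

```python
from datetime import date

# Hypothetical sketch of the rule Nuria describes: a field newly added to
# the whitelist is only retained for events dated on or after the merge
# date of the whitelist change; older events were already sanitized and
# are not re-processed retroactively.
def field_retained(event_date: date, whitelist_merge_date: date) -> bool:
    return event_date >= whitelist_merge_date
```

So a query like Neil's above would only see the newly whitelisted EditAttemptStep properties for dates on or after the merge.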
[02:07:10] (03PS1) 10Milimetric: Add tooltip Vue directive and use it [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/478824 (https://phabricator.wikimedia.org/T177950) [06:12:01] 10Analytics, 10Operations, 10Security-Team, 10WMF-Legal, 10Software-Licensing: Can exfat be used in WMF production? - https://phabricator.wikimedia.org/T210667 (10Legoktm) >>! In T210667#4795435, @JBennett wrote: > Thanks everyone for their thoughtful consideration. I have no issues nor do I see a co... [08:25:18] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Presto cluster online and usable with test data pushed from analytics prod infrastructure accessible by Cloud (labs) users - https://phabricator.wikimedia.org/T204951 (10JAllemandou) \o/ !!! That's super great :) Let's discuss a one-off way to copy data ov... [08:39:05] Need to go back to the creche, Nae is not well :S Will look deeper into the clickstream error when she sleeps [08:46:42] ack joal [09:15:13] 10Analytics, 10Operations, 10Performance-Team, 10Traffic: Only serve debug HTTP headers when x-wikimedia-debug is present - https://phabricator.wikimedia.org/T210484 (10Gilles) @Anomie very good point, I think it will be very hard for someone to find out about such a whitelist. Things will work for them on... [09:17:22] 10Analytics, 10Operations, 10Performance-Team, 10Traffic: Only serve debug HTTP headers when x-wikimedia-debug is present - https://phabricator.wikimedia.org/T210484 (10Gilles) >>! In T210484#4779199, @TheDJ wrote: > what about ?debug=true ? We already vary on that right ? might as well vary which set of... [09:19:03] 10Analytics, 10Operations, 10Performance-Team, 10Traffic: Only serve debug HTTP headers when x-wikimedia-debug is present - https://phabricator.wikimedia.org/T210484 (10Gilles) >>! In T210484#4794749, @fdans wrote: > Analytics needs x-analytics in every request, not only in debugging ones but we don't need...
[09:37:16] (03CR) 10Fdans: Adds logic and configuration for project families (032 comments) [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/464583 (https://phabricator.wikimedia.org/T205665) (owner: 10Fdans) [09:52:49] 10Analytics, 10Research, 10WMDE-Analytics-Engineering, 10User-Addshore, 10User-Elukey: Phase out and replace analytics-store (multisource) - https://phabricator.wikimedia.org/T172410 (10elukey) >>! In T172410#4812629, @Neil_P._Quinn_WMF wrote: >>>! In T172410#4803991, @elukey wrote: >> To follow up what... [09:53:35] fdans: o/ - not sure if you got my ping but you can deploy aqs now [09:53:58] yessss sorry luca I was out then, thank you elukey !!! [09:54:02] :) [10:20:33] (03PS7) 10Fdans: Adds logic and configuration for project families [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/464583 (https://phabricator.wikimedia.org/T205665) [10:21:26] (03CR) 10Fdans: "Addressed Nuria's comment by adding a check when loading data and changing breakdowns in case they violate the check" [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/464583 (https://phabricator.wikimedia.org/T205665) (owner: 10Fdans) [11:58:41] 10Analytics, 10Readers-Web-Backlog: % of "none" referers seems too high - https://phabricator.wikimedia.org/T195880 (10phuedx) a:05phuedx>03None Not sure why I assigned this to myself… [12:02:25] * elukey lunch!
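fdans's commit message above describes "adding a check when loading data and changing breakdowns". A minimal sketch of that kind of guard, assuming a hypothetical per-project-family set of supported breakdowns (the names and default here are invented for illustration, not Wikistats 2's actual configuration):

```python
DEFAULT_BREAKDOWN = "total"  # hypothetical default view

def checked_breakdown(selected: str, supported: set) -> str:
    """Keep the selected breakdown only if the current project family
    supports it; otherwise fall back to the default, as the check
    fdans describes does when data is loaded or breakdowns change."""
    return selected if selected in supported else DEFAULT_BREAKDOWN
```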
[12:21:18] (03PS9) 10Michael Große: Update metric's items and properties automatically [analytics/wmde/toolkit-analyzer] - 10https://gerrit.wikimedia.org/r/475807 (https://phabricator.wikimedia.org/T209399) [12:38:01] just upgraded aqs1004 to a new nodejs 6 version (security upgrades) [12:38:04] all good [13:12:10] 10Analytics, 10Analytics-Kanban, 10User-Elukey: Return to real time banner impressions in Druid - https://phabricator.wikimedia.org/T203669 (10elukey) @AndyRussG ping :) [13:55:14] 10Analytics, 10Product-Analytics: As a user of Superset I would like it to be up-to-date so I'm not blocked by bugs that have already been fixed - https://phabricator.wikimedia.org/T211606 (10elukey) We discussed this during the Analytics standup and we have a proposal: we could start with creating a tracking... [14:00:48] 10Analytics, 10Operations, 10Research-management, 10User-Elukey: GPU upgrade for stats machine - https://phabricator.wikimedia.org/T148843 (10elukey) Very good news, finally stat1005 is ready for experimenting with GPU drivers etc. I am completely ignorant about the subject so if anybody has time/patience pl...
[14:00:59] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Move users from stat1005 to stat1007 - https://phabricator.wikimedia.org/T205846 (10elukey) [14:01:14] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Move users from stat1005 to stat1007 - https://phabricator.wikimedia.org/T205846 (10elukey) [14:08:42] 10Analytics, 10Product-Analytics: As a user of Superset I would like it to be up-to-date so I'm not blocked by bugs that have already been fixed - https://phabricator.wikimedia.org/T211606 (10mpopov) I'll check with the team at our meeting later today and let you know :) [14:28:29] ottomata: o/ - last week I forgot to show you https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/477995/ [14:28:37] (03PS1) 10Rafidaslam: Add nap.wikisource to whitelist.tsv [analytics/refinery] - 10https://gerrit.wikimedia.org/r/478942 (https://phabricator.wikimedia.org/T210752) [14:29:00] 10Analytics, 10Analytics-Kanban: Grafana, icinga, prometheus in cloud-analytics project - https://phabricator.wikimedia.org/T211640 (10Ottomata) [14:29:14] after that (and a little fix) the hadoop workers started to complete their first puppet run without any issue [14:33:08] contain!?!? [14:33:10] elukey: i wonder... 
i had a problem with prometheus .jar and .yaml stuff [14:33:27] especially on initialization of a new cluster [14:33:36] since when first provisioning an HA cluster [14:33:44] you turn on first journalnodes, then master, then standby, then workers [14:33:59] there are several levels that will fail, including things like ensuring certain files or dirs exist in hdfs [14:34:04] those all take a LONG time to timeout [14:34:33] but, the worst part was that puppet couldn't start the individual daemon processes [14:34:53] because it was trying to do so before the prometheus::jmx_exporter got evaluated (which installs the .jar and the .yaml config file) [14:35:02] so the -javaagent on each proc had references to files that didn't exist [14:35:04] so the daemons couldn't start [14:35:30] and, because the entire (failing) puppet run takes so long to complete (because of the exec timeouts) [14:35:52] i'd have to wait forever to hopefully get the prometheus stuff installed and then get a successful daemon start [14:35:55] or i'd have to cancel puppet [14:36:03] i usually cancelled puppet because i was impatient [14:36:15] but then puppet would never get around to installing prometheus jmx stuff! :p [14:36:30] so I just commented out the -javaagent stuff til I finished installing the cluster [14:36:40] ,... [14:36:40] and [14:36:45] it might not matter anyway [14:36:48] i think I knew this but: https://phabricator.wikimedia.org/T211640 [14:36:49] :( [14:36:52] wait a sec, did you get it yesterday or a long time ago? [14:37:06] elukey: this was when setting up cloud-analytics in labs yesterday [14:37:22] usually we don't turn on monitoring_enabled in labs [14:37:26] so we don't encounter this problem [14:37:35] (when setting up a brand new hadoop cluster) [14:37:45] ah wait I fixed the issue for the workers [14:38:04] was it for the masters? [14:38:22] yes....but i think workers too........maybe not though, now i'm not sure [14:38:38] hmm, yeah i think maybe so.
because puppet was able to start journalnodes [14:38:43] it was the master step where i couldn't get anything to work [14:38:45] all the new workers that I have installed in prod went up at first puppet run [14:39:20] lemme find the fix [14:39:38] https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/477998/1/modules/profile/manifests/hadoop/worker.pp [14:39:51] reading about contain [14:39:52] OHHHH [14:39:53] 10Analytics, 10Analytics-Kanban, 10DBA, 10User-Banyek: Migrate dbstore1002 to a multi instance setup on dbstore100[3-5] - https://phabricator.wikimedia.org/T210478 (10Banyek) While the section install schema is being discussed on T172410, I created and mounted the data directories on the new hosts. [14:39:54] i did not know that [14:39:57] huh [14:40:08] i think i thought that was what include/require did by default! [14:40:39] elukey: really????? why did that matter? [14:41:28] because I think that puppet 4+ follows the order of declaration [14:41:35] after moving it, no issues [14:41:57] huh [14:41:59] oook [14:42:00] crazy [14:42:21] elukey: what do you think we should do about https://phabricator.wikimedia.org/T211640 ? [14:42:48] really sorry about the time wasted yesterday for the masters/journalnodes :( [14:42:52] that's ok! [14:42:54] reading [14:43:07] hahaha you can't be sorry! a lot of it is my code too :p [14:43:53] yeah but I was the one adding the prometheus stuff :D [14:44:17] I have no idea if anybody monitors VMs inside openstack, we might be the first ones [14:45:01] keeping a prometheus replica only for those might be really problematic [14:46:17] as a starter we could add what Andrew suggests [14:46:25] yeah, that will work as a starter [14:46:40] but i think we need to figure that out very soon, before we start using the thing [14:46:57] i wonder if there's some way to get at least prometheus stats into prod ? :D [14:47:44] we could ask Filippo if anything is available but I doubt it [14:48:03] (btw reading your article: wow!
"if you delete the backing directories/files for a Hive database, the metadata for the Hive database is automatically cleaned up, see the figure below.") [14:49:57] yeah hops seems really nice :) [14:50:04] yeah too nice! [14:58:42] !log Restart clickstream job after having repaired hive mediawiki-tables partitions [14:58:43] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [15:00:56] (03PS1) 10Joal: Update clickstream job to wait for hive partitions [analytics/refinery] - 10https://gerrit.wikimedia.org/r/478952 [15:02:08] heya teaaaaaam :] [15:03:00] yoohoo [15:09:45] 10Analytics, 10Analytics-Kanban, 10DBA, 10Data-Services, and 3 others: Create materialized views on Wiki Replica hosts for better query performance - https://phabricator.wikimedia.org/T210693 (10Banyek) on labsdb1010 I started to create materialized views from all the ` comment` views, except enwiki - as we... [19:48:02] 10Analytics, 10Research: Provide data dumps in the Analytics Data Lake - https://phabricator.wikimedia.org/T186559 (10Nuria) [19:48:04] 10Analytics, 10Analytics-Kanban, 10Research, 10Patch-For-Review: Automate XML-to-parquet transformation for XML dumps (oozie job) - https://phabricator.wikimedia.org/T202490 (10Nuria) 05Open>03Resolved [19:48:16] joal: what i do not understand is the partition change in the clickstream job though [19:48:35] joal: the int versus long makes sense and so does the jar change [19:49:51] nuria: In refinery/oozie/mediawiki/history/datasets_raw.xml - There are 2 different dataset-types per table: for instance mw_archive_table, and mw_archive_table_partitioned [19:50:17] nuria: The first one depends on an _SUCCESS file to be present, while the second depends on a _PARTITIONED file to be present [19:51:05] nuria: The first triggers jobs that read data in avro-format (for instance mediawiki-history), while the second triggers jobs using hive partitions over the data (clickstream for instance) [19:51:52] nuria: The first one
(_SUCCESS) is written by sqoop, and the second (_PARTITIONED) is written by the oozie-mediawiki-history-load-coord job, which repairs tables once data is present [19:52:54] (03CR) 10Nuria: [C: 04-1] Add tooltip Vue directive and use it (031 comment) [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/478824 (https://phabricator.wikimedia.org/T177950) (owner: 10Milimetric) [19:53:42] joal: ah i see, and what happened to oozie-mediawiki-history-load-coord? [19:54:30] nuria: failed because of change_tag (you merged that: https://gerrit.wikimedia.org/r/c/analytics/refinery/+/478146) [19:54:57] nuria: the concern was, for clickstream, that it was triggered because data was here, even if hive tables were not present [19:55:08] Therefore the patch [19:55:35] joal: ah i see, the job could not access tables because partitions had not been set up [19:55:53] correct ! Data was here, but partitions not repaired in hive [19:56:15] With the patch, the job will now wait for partitions to be added (through oozie) [19:56:56] joal: ok, got it, and my mistake: should have asked this before CR [19:57:06] no worries nuria :) [19:58:33] (03CR) 10Milimetric: Add tooltip Vue directive and use it (031 comment) [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/478824 (https://phabricator.wikimedia.org/T177950) (owner: 10Milimetric) [20:04:19] (03CR) 10Nuria: [C: 04-1] Add tooltip Vue directive and use it (031 comment) [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/478824 (https://phabricator.wikimedia.org/T177950) (owner: 10Milimetric) [20:05:05] nuria: I know what you mean by the separate bundle, but in this case that will be included on the dashboard as well as the detail page, because main.js always executes [20:05:22] so it wouldn't save anything, it would just cause another trip [20:05:48] milimetric: it will save load, tooltips are likely to be plentiful [20:06:10] but that bundle would have to be loaded before Vue starts rendering, that's what I'm saying [20:06:23]
milimetric: the one with text, why? [20:06:53] because there's a line in main.js that uses the tooltip directive, which uses the config [20:07:47] if we wanted to do something async, I'd have to make an API and refactor the tooltip directive to work after a promise is resolved [20:07:59] I'm not 100% sure that would work, but it probably would [20:10:08] milimetric: i think we need to do a bit more work here so tooltips are not accessed like the rest of config, as config is required for ui to work but tooltips are not [20:10:51] milimetric: a less obvious approach, but otherwise perf is just going to suffer, parsing a bunch of text before rendering that the user might not use at all [20:15:37] nuria: oh I agree with you, I think it's a good point. What I'm saying is that the place to put them would be on meta, via the Dashiki config, accessed via an API like annotations. Otherwise, we'd be building a third way to pull in async stuff, and I think that's bad. So what I was saying is that we can launch it as is, with the first few tooltips which will be small, and in parallel work on the Dashiki solution [20:16:00] or if you think we can do the Dashiki thing pretty fast (as in, if you have time to review), then I'm happy going that way from the start [20:16:27] nuria: and I just realized something awesome about internationalization that maybe we should chat about in the cave when you have a minute [20:17:03] (sorry to interrupt) whenever someone has a second....is it true that webrequest logs for gerrit.wikimedia.org would be in hadoop? [20:17:28] chasemp: probably! [20:17:33] wmf.webrequest table [20:17:38] where webrequest_source='misc' [20:17:38] * chasemp nods [20:17:52] ottomata: misc is GOOOOOONE !
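The two dataset flags joal walks through above (sqoop writes _SUCCESS when the raw avro data lands; the oozie mediawiki-history-load coordinator writes _PARTITIONED once the Hive partitions are repaired) amount to a two-stage readiness check. A rough sketch of that distinction, with illustrative function names and paths:

```python
import os

# Sketch of the two readiness flags joal describes for the mediawiki raw
# datasets:
#   _SUCCESS     -> raw avro data is on HDFS (written by sqoop); enough for
#                   jobs like mediawiki-history that read the files directly.
#   _PARTITIONED -> Hive partitions have been repaired (written by the load
#                   coordinator); required by jobs like clickstream that
#                   read the data through Hive partitions.
def dataset_ready(table_dir: str, reads_via_hive: bool) -> bool:
    flag = "_PARTITIONED" if reads_via_hive else "_SUCCESS"
    return os.path.exists(os.path.join(table_dir, flag))
```

The clickstream patch discussed above effectively switches the job's dependency from the first flag to the second, so it no longer starts before the Hive tables are usable.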
[20:17:54] 10Analytics, 10Developer-Advocacy, 10Product-Analytics, 10Documentation: Develop EventLogging schema for documentation feedback gadget - https://phabricator.wikimedia.org/T211638 (10srishakatux) [20:17:54] I believe it's behind the regular varnish layers now, or so I thought, but it's a topsy-turvy maze [20:17:55] oh [20:17:57] right. [20:17:57] ah :) [20:18:01] so in text [20:18:11] yup :) [20:18:19] so text and select by host I guess? [20:18:30] ya uri_host i guess ? [20:18:37] 10Analytics, 10Developer-Advocacy, 10Product-Analytics, 10Documentation: Develop EventLogging schema for documentation feedback gadget - https://phabricator.wikimedia.org/T211638 (10srishakatux) Thanks @chelsyx for sharing this information! I've updated the task description accordingly. [20:18:49] nothing in turnilo :( bad sign [20:19:12] hmmmMMMMmmmm [20:19:26] chasemp: triple checking in spark [20:19:33] tx joal [20:19:51] if missing, I guess then I wonder why [20:22:25] nope, nothing chasemp :( [20:22:40] I looked for webrequest_source = 'text' and year = 2018 and month = 12 and day = 11 and hour = 17 and uri_host like '%gerrit%' [20:22:49] ok tx, I'm not sure what to make of that [20:23:00] I would have thought in misc and if that's dead now then hm [20:23:09] I wonder if lost in the shuffle or next collected? [20:23:15] s/next/never [20:23:17] chasemp: Does gerrit actually go through varnish at all?
it used to not for a long time, but I believe it does now [20:23:40] it could be I'm fooling myself there [20:23:49] hum [20:23:58] it's defined in hieradata/role/common/cache/text.yaml [20:24:39] and hieradata/role/common/trafficserver/backend.yaml: replacement: http://gerrit.wikimedia.or [20:26:30] 10Analytics-Tech-community-metrics, 10Developer-Advocacy (Oct-Dec 2018): Have "Last Attracted Developers" information for Gerrit automatically updated / Integrate new demography panels in GrimoireLab product - https://phabricator.wikimedia.org/T151161 (10Aklapper) [20:32:54] (03PS3) 10Milimetric: Add tooltip Vue directive and use it [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/478824 (https://phabricator.wikimedia.org/T177950) [20:41:59] ottomata: I have a gap in knowledge - how does Camus work? [20:42:28] Like, how does it write wmf_raw.webrequest, and know, for example, about x-analytics, and how to write it into that table? [20:43:11] milimetric: it doesn't do anything with hive [20:43:34] yeah, I just don't have the language to talk about it otherwise :) [20:43:36] all it does is read from kafka, read the timestamp from the message, and write it to files in directories partitioned by hours [20:43:58] wmf_raw.webrequest is made on top of those files by oozie + hive [20:43:59] ok, then how does kafka know to read x_analytics from varnish? Varnish kafka? [20:44:15] how does x_analytics get into the message in kafka you mean? [20:44:32] varnishkafka is configured with a log message formatter [20:45:09] aha, https://github.com/wikimedia/varnishkafka/blob/master/varnishkafka.conf.example#L101 [20:45:13] yup [20:45:15] ok, so where's that in puppet [20:45:17] and specifically [20:45:17] https://github.com/wikimedia/puppet/blob/production/modules/profile/manifests/cache/kafka/webrequest.pp#L141 [20:45:32] there we go, ok, that's what I didn't know, thank you [20:49:15] yw!
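ottomata's summary of Camus above (read from Kafka, read the timestamp from each message, write it to files in directories partitioned by hour) reduces to mapping a message timestamp to an hourly output path. A sketch with an illustrative directory layout (not necessarily Camus's exact scheme):

```python
from datetime import datetime, timezone

def hourly_partition(base: str, topic: str, ts: float) -> str:
    """Map a Kafka message timestamp (epoch seconds) to an hour-partitioned
    output directory, in the spirit of what Camus does when landing data."""
    t = datetime.fromtimestamp(ts, tz=timezone.utc)
    return (f"{base}/{topic}/year={t.year}"
            f"/month={t.month:02d}/day={t.day:02d}/hour={t.hour:02d}")
```

Hive partitions like the one joal queried above (year=2018/month=12/day=11/hour=17) are then created on top of these directories by oozie + hive, as ottomata notes, not by Camus itself.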
[20:49:27] 10Analytics, 10Operations, 10Performance-Team, 10Traffic: Only serve debug HTTP headers when x-wikimedia-debug is present - https://phabricator.wikimedia.org/T210484 (10Milimetric) >>! In T210484#4812997, @Gilles wrote: >>>! In T210484#4794749, @fdans wrote: >> Analytics needs x-analytics in every request,... [20:51:21] ottomata: I wanted to know so I could answer G1lles ^ [20:58:23] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Update sqoop to work with the new schema - https://phabricator.wikimedia.org/T210541 (10Milimetric) Update: big sqoop of all wikis finished, but had weird hanging problem at the end. It's possible it was writing _SUCCESS flags in all the directories and I... [21:15:04] (03PS1) 10Hashar: Jenkins job validation (DO NOT SUBMIT) [analytics/wikistats] - 10https://gerrit.wikimedia.org/r/479047 [21:15:39] (03Abandoned) 10Hashar: Jenkins job validation (DO NOT SUBMIT) [analytics/wikistats] - 10https://gerrit.wikimedia.org/r/479047 (owner: 10Hashar) [21:29:29] (03PS4) 10Nettrom: Add EditAttemptStep schema to whitelist [analytics/refinery] - 10https://gerrit.wikimedia.org/r/471038 (https://phabricator.wikimedia.org/T208332) [21:31:01] nuria: clickstream manual run confirmed [21:31:17] Gone for tonight team - see you tomorrow [21:37:16] chasemp: did you find what you were looking for? [21:40:30] nuria: not yet, I think gerrit (cobalt) is possibly in a place where, while system logs are shipped to logstash, since gerrit is (it seems) only behind varnish for avatars/wmfusercontent.org, the webrequest logs are never consumed off box [21:40:39] thinking about it now [21:52:49] ottomata: (unrelated to above) saw your call for help re: monitoring insanity in cloud land...yeah...idk what you'll want to do, maybe being in prod public with consuming the data from dumps is the better option.
no opinion between them, but one thought just to put it out there is toolforge does have checks that page via prod icinga and while it's an extra bit of logic it hasn't been terrible. There is a [21:52:49] toolschecker host that has nginx uwsgi so a call to a REST URI triggers a check within the project [21:53:05] i.e. shows OK or CRIT or doesn't return [21:53:32] if you wanted to stay in cloud land, it's possibly not a terrible compromise but I'll leave the math on that to you modules/icinga/manifests/monitor/toollabs.pp [22:17:51] chasemp: thanks, yeah it's unclear, we have tons of stuff that just works in prod [22:17:55] gotta recreate it now in cloud [22:18:16] e.g. we will probably need to commit our grafana dashboards as json and somehow get them in grafana-labs too [22:22:08] (03CR) 10Nuria: [C: 032] "Ok, +2 and merging code" [analytics/refinery] - 10https://gerrit.wikimedia.org/r/471038 (https://phabricator.wikimedia.org/T208332) (owner: 10Nettrom) [22:22:24] (03CR) 10Nuria: [V: 032 C: 032] Add EditAttemptStep schema to whitelist [analytics/refinery] - 10https://gerrit.wikimedia.org/r/471038 (https://phabricator.wikimedia.org/T208332) (owner: 10Nettrom) [22:29:40] milimetric: and to add extension to vagrant you just clone the repo under extensions? [22:30:56] nuria: yep, exactly, ssh://nuria@gerrit.wikimedia.org:29418/mediawiki/extensions/Dashiki [22:44:50] 10Analytics, 10Product-Analytics: Superset Updates - https://phabricator.wikimedia.org/T211706 (10kzimmerman) Thank you, @Nuria!
[23:01:45] 10Analytics, 10Analytics-Kanban: Set up 5 dedicated VMs on the cloudvirtan hardware in the cloud-analytics project - https://phabricator.wikimedia.org/T211599 (10Andrew) 05Open>03Resolved [23:01:50] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Presto cluster online and usable with test data pushed from analytics prod infrastructure accessible by Cloud (labs) users - https://phabricator.wikimedia.org/T204951 (10Andrew) [23:05:45] milimetric: STILL updating vagrant, it's OK, it has only taken an hour, time enough to see the engagement survey meeting [23:10:19] byeee team :] [23:11:01] ciaoo mforns [23:51:01] 10Analytics, 10Analytics-Kanban, 10Product-Analytics: Bug: can't make a YoY time series chart in Superset - https://phabricator.wikimedia.org/T210687 (10Nuria) Per our conversation @fdans to coordinate with @elukey to test whether we can run the 28 version on python 2.7 which is what we are running on debian...