[01:50:57] 10Analytics, 10Analytics-Wikistats: Wikistats Bug: "Anonymous Editor" is a broken link - https://phabricator.wikimedia.org/T206968 (10Yair_rand)
[01:58:45] 10Analytics, 10Analytics-Wikistats: Wikistats Bug: "Anonymous Editor" is a broken link - https://phabricator.wikimedia.org/T206968 (10Zoranzoki21) Nothing works there. I think stats.wikimedia.org should be redirected to xtools.wmflabs.org
[02:05:45] 10Analytics, 10Analytics-Wikistats: Wikistats Bug: "Anonymous Editor" is a broken link - https://phabricator.wikimedia.org/T206968 (10Zoranzoki21) stats.wikimedia.org is (or will be) superseded by https://stats.wikimedia.org/v2/
[02:12:34] 10Analytics, 10Analytics-Wikistats: Wikistats Bug: "Anonymous Editor" is a broken link - https://phabricator.wikimedia.org/T206968 (10MusikAnimal) > stats.wikimedia.org is (or will be) superseded by https://stats.wikimedia.org/v2/ This is what I told Zoranzoki21 on IRC, and clarified that XTools does not prov...
[02:42:03] 10Analytics, 10Analytics-Wikistats: Wikistats Bug: "Anonymous Editor" is a broken link - https://phabricator.wikimedia.org/T206968 (10Yair_rand) Yes, sorry, I meant Wikistats 2, at the linked address. Thank you.
[05:08:08] 10Analytics, 10Operations, 10ops-eqiad: Degraded RAID on dbstore1002 - https://phabricator.wikimedia.org/T206965 (10Marostegui)
[07:23:24] morning!
[07:23:31] so first recurrent systemd timer deployed
[07:23:32] NEXT LEFT LAST PASSED UNIT ACTIVATES
[07:23:35] Mon 2018-10-15 08:15:00 UTC 51min left n/a n/a refinery-drop-webrequest-raw-partitions.timer refinery-drop-webrequest-raw-partitions.service
[07:23:44] so this one in cron was scheduled every four hours
[07:23:59] \o/ ! Good morning elukey :)
[07:24:05] the systemd timer is *-*-* 00/4:15:00
[07:24:18] basically every day, starting from midnight, every 4 hours at minute 15
[07:24:28] and it seems correctly scheduled from above
[07:25:23] awesome
[07:25:40] and the unit is
[07:25:41] [Service]
[07:25:41] User=hdfs
[07:25:41] Environment=PYTHONPATH=${PYTHONPATH}:/srv/deployment/analytics/refinery/python
[07:25:44] ExecStart=/srv/deployment/analytics/refinery/bin/refinery-drop-webrequest-partitions -d 31 -D wmf_raw -l /wmf/data/raw/webrequest -w raw
[07:26:21] I'll keep it monitored, if it works we can swap all the others
[07:26:51] \o/
[07:27:02] :)
[07:33:57] ah joal aqs1006 has a broken disk FYI (one of the cassandra partitions) but it seems to be working fine
[07:34:57] elukey: I have seen that this weekend - from what you said, RAID is doing its job?
[07:35:17] yep exactly, but of course we'd need a new disk asap :)
[07:35:23] Of course
[07:40:17] 10Analytics, 10Analytics-Data-Quality, 10Product-Analytics: mediawiki_history datasets have null user_text for IP edits - https://phabricator.wikimedia.org/T206883 (10JAllemandou) The `event_user_text` field in `wmf.mediawiki_history` is what can be called the "current" value of the username, by opposition t...
[07:40:19] 10Analytics, 10Analytics-Data-Quality, 10Product-Analytics: mediawiki_history datasets have null user_text for IP edits - https://phabricator.wikimedia.org/T206883 (10JAllemandou) 05Open>03declined
[07:49:21] 10Analytics, 10Analytics-Kanban, 10User-Elukey: JVM pauses cause Yarn master to failover - https://phabricator.wikimedia.org/T206943 (10elukey)
[07:55:02] elukey: also, about druid-0.12 - I'm assuming we want to do some testing in labs, right?
[07:55:03] joal: feeling better today?
[07:55:40] elukey: yes :) Thank you :)
[07:55:43] ah yes whenever we have time! Not urgent :)
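(For context on the schedule discussed above: the OnCalendar expression `*-*-* 00/4:15:00` fires every day at minute 15 past 00:00, 04:00, 08:00, and so on. A sketch of what the full timer unit could look like — only the unit name and OnCalendar value are confirmed by the log, the remaining directives are assumed boilerplate:)

```
# refinery-drop-webrequest-raw-partitions.timer (sketch; OnCalendar is taken
# from the log above, everything else is assumed)
[Unit]
Description=Periodic execution of refinery-drop-webrequest-raw-partitions.service

[Timer]
# Every day, every 4 hours starting at midnight, at minute 15:
# 00:15, 04:15, 08:15, 12:15, 16:15, 20:15 UTC
OnCalendar=*-*-* 00/4:15:00

[Install]
WantedBy=multi-user.target
```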
[07:55:51] even later on during the week
[08:33:16] joal: if you are ok I'd start raising Xmx/Xms to 4G (now 2G) for the Yarn Resource Manager
[08:33:30] we have like 25G of spare memory on the masters :D
[08:33:52] and then I'd check later on if we can change the GC type
[08:34:00] but one thing at a time to avoid mixing things
[08:34:01] Works for me elukey :)
[08:39:20] (03PS4) 10Joal: Update DataFrameToHive for dynamic partitions [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/465202 (https://phabricator.wikimedia.org/T164020)
[08:43:43] (03PS2) 10Joal: Add webrequest_subset_tags transform function [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/465206 (https://phabricator.wikimedia.org/T164020)
[08:45:57] is the refinery-drop-druid-snapshots alert a false one?
[08:46:47] if len(snapshots_to_remove) == 0:
[08:46:47] sys.exit('There are no datasources to remove (total available {})'.format(len(matched_snapshots)))
[08:47:12] so in this case in theory it should have exited with 0 and a message
[08:48:24] ah no it exits with 1
[08:48:43] elukey@an-coord1001:~$ cat test
[08:48:43] import sys
[08:48:44] sys.exit("hello!")
[08:48:49] elukey@an-coord1001:~$ python3 test
[08:48:49] hello!
[08:48:49] elukey@an-coord1001:~$ echo $?
[08:48:49] 1
[08:51:00] pfff :(
[08:52:00] it would have sent the email anyway, buuut it needs to be fixed before moving to timers.. will do it later on
[08:54:02] ah yeah it makes sense, it raises a SystemExit exception
[08:58:11] (03PS1) 10Elukey: refinery-drop-druid-snapshots: avoid exit 1 when no ds available [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467291 (https://phabricator.wikimedia.org/T172532)
[08:59:11] (03PS2) 10Elukey: refinery-drop-druid-snapshots: avoid exit 1 when no ds available [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467291 (https://phabricator.wikimedia.org/T172532)
[09:00:34] (03CR) 10Joal: [C: 031] "LGTM!" [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467291 (https://phabricator.wikimedia.org/T172532) (owner: 10Elukey)
[09:04:48] 2018-10-15T09:04:34 INFO There are no datasources to remove (total available 3: ['mediawiki_history_reduced_2018_09', 'mediawiki_history_reduced_2018_08', 'mediawiki_history_reduced_2018_07_no_ids'])
[09:07:22] (03CR) 10Elukey: "Modified the script on an-coord1001, leads to:" [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467291 (https://phabricator.wikimedia.org/T172532) (owner: 10Elukey)
[09:07:27] MOAR TIMERZ !
[09:07:39] ahahah
[09:07:56] (03CR) 10Elukey: [C: 032] refinery-drop-druid-snapshots: avoid exit 1 when no ds available [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467291 (https://phabricator.wikimedia.org/T172532) (owner: 10Elukey)
[09:11:03] (03CR) 10Elukey: [V: 032 C: 032] refinery-drop-druid-snapshots: avoid exit 1 when no ds available [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467291 (https://phabricator.wikimedia.org/T172532) (owner: 10Elukey)
[09:12:33] all right so an-master* are running with the updated Xmx/Xms settings for yarn
[09:12:36] let's see how it goes
[09:20:58] 10Quarry: Provide a way to add hyperlink in Quarry results/output - https://phabricator.wikimedia.org/T74874 (10Seb35) >>! In T74874#4664854, @Tgr wrote: > > It's easy to construct the URLs in SQL, though. The JS patch adds a clickable link, URLs constructed in SQL would not be clickable.
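(On the exit-code surprise debugged above: `sys.exit()` with a string argument prints the string to stderr and exits with status 1. A minimal sketch of the pattern behind the fix in Gerrit change 467291 — variable names and the message mirror the quoted snippet, the placeholder values are illustrative, and the rest of the script is omitted:)

```python
import sys

# Illustrative placeholders; in the real script these come from scanning
# the available druid snapshots.
snapshots_to_remove = []
matched_snapshots = ['mediawiki_history_reduced_2018_09',
                     'mediawiki_history_reduced_2018_08']

if len(snapshots_to_remove) == 0:
    # sys.exit('message') raises SystemExit('message'), which prints the
    # message to stderr and exits with status 1 -- cron/systemd then treat
    # the run as failed. Print the message and exit 0 explicitly instead.
    print('There are no datasources to remove (total available {})'
          .format(len(matched_snapshots)))
    sys.exit(0)
```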
[09:31:20] dsaez: o/
[09:31:41] sorry to bother you but I have some disk consumption issues on notebook1003
[09:31:47] your home is ~70G.. all used?
[10:05:05] elukey: we should do a ranking of people with the biggest storage usage
[10:05:23] * fdans hides his 19GB
[10:08:22] fdans: you don't have 19G on notebook1003
[10:08:39] elukey: ah sorry, I was looking at stat1005
[10:08:56] and I did a ranking of home dir sizes, this is why I found Diego :)
[10:11:00] I have been saying for a while that we should limit people using stat/notebook hosts :D
[10:15:31] in my previous job we had a daily cron which mailed the top 5 consumers of disk space in our build infrastructure, worked wonders :-)
[10:17:45] :D
[10:21:47] joal: qq about camus - on friday I was trying to add a camus job to import eventlogging's eventlogging-client-side topic into hdfs
[10:22:17] but then I checked the message decoder (to grab the timestamp afaict) and the json one that we use everywhere might not work
[10:22:33] since data from that topic is tab/space delimited no?
[10:22:43] should I implement a new java class to do the job?
[10:24:00] elukey: hm - I shall review this, but it is possible
[10:43:09] 10Analytics, 10Analytics-Wikistats, 10User-Elukey: Git push and pull don't complete - https://phabricator.wikimedia.org/T206331 (10ezachte) p:05Low>03Normal Updating priority as I need to check-in bug fixes, which is way overdue. thanks.
[10:45:06] 10Analytics, 10EventBus, 10Core Platform Team Kanban (Done with CPT), 10Services (done): Revision visibility change event sets a wrong performer - https://phabricator.wikimedia.org/T206277 (10mobrovac) 05Open>03Resolved p:05Low>03Normal
[10:45:48] 10Analytics, 10Analytics-Wikistats, 10User-Elukey: Git push and pull don't complete - https://phabricator.wikimedia.org/T206331 (10elukey) >>! In T206331#4665933, @ezachte wrote: > Updating priority as I need to check-in bug fixes, which is way overdue. thanks. Erik did you see my questions above? It is f...
[10:46:49] (03PS4) 10Fdans: [wip] Add change_tag to mediawiki_history sqoop [analytics/refinery] - 10https://gerrit.wikimedia.org/r/465416
[11:00:24] joal: hellooo you here? :)
[11:02:31] Hi fdans! Here I am
[11:02:33] :)
[11:02:45] joal: do you have a second in the bc?
[11:02:58] yessir, omw
[11:20:01] 10Analytics: Refactor refinery/oozie/mediawiki/load to use a loop instead of repeating code - https://phabricator.wikimedia.org/T207012 (10JAllemandou)
[11:37:32] (03PS5) 10Fdans: Add change_tag to mediawiki_history sqoop [analytics/refinery] - 10https://gerrit.wikimedia.org/r/465416
[11:46:00] 10Analytics, 10Analytics-EventLogging, 10EventBus, 10Core Platform Team (Modern Event Platform (TEC2)), and 2 others: Develop a library for JSON schema backwards incompatibility detection - https://phabricator.wikimedia.org/T206889 (10mobrovac) We could perhaps hack around it in the following way. Each tim...
[12:16:26] 10Analytics, 10Analytics-Kanban: Clickstream dataset for Persian Wikipedia only includes external values - https://phabricator.wikimedia.org/T191964 (10Ladsgroup) Hey, Six months have passed and it's not fixed. Can I take a look at code? Maybe I can help.
[12:25:03] 10Analytics, 10Analytics-EventLogging, 10EventBus, 10Core Platform Team (Modern Event Platform (TEC2)), and 2 others: Decide whether to use schema references in the schema registry - https://phabricator.wikimedia.org/T206824 (10mobrovac) I wholeheartedly agree that doing copy/pasta for shared parts of the...
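(On the "top 5 disk consumers" report idea mentioned earlier in this stretch, a rough sketch of how such a cron'd script could look — the base path, the cutoff of 5, and the privileges assumption are all hypothetical:)

```python
#!/usr/bin/env python3
"""Rank home directories by disk usage and print the top consumers.
A sketch only: assumes it runs with enough privileges to stat every home."""
import os
import subprocess

def home_sizes(base='/home'):
    sizes = []
    for user in sorted(os.listdir(base)):
        path = os.path.join(base, user)
        # 'du -sk <path>' prints "<size-in-KB>\t<path>"
        result = subprocess.run(['du', '-sk', path],
                                capture_output=True, text=True)
        if result.stdout:
            sizes.append((int(result.stdout.split()[0]), user))
    return sorted(sizes, reverse=True)

if __name__ == '__main__':
    for kb, user in home_sizes()[:5]:
        print('{:8.1f}G  {}'.format(kb / (1024 * 1024), user))
```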
[12:26:31] 10Analytics, 10Analytics-Wikistats, 10User-Elukey: Git push and pull don't complete - https://phabricator.wikimedia.org/T206331 (10ezachte) Luca, the commands I use are in my first comment. So both 'git pull' and 'git push' hang. To be sure I checked my aliases: nope, these are the original commands. So I...
[12:30:43] 10Analytics, 10Analytics-Wikistats, 10User-Elukey: Git push and pull don't complete - https://phabricator.wikimedia.org/T206331 (10elukey) I'd need to know the following data: 1) path of the git repository that doesn't work (or more than one if it doesn't in multiple places). 2) user that runs the git pull/...
[12:40:09] (03CR) 10Joal: [C: 04-1] "One comment about a comment, and something else I forgot: the datasets.xml files! In refinery/oozie/mediawiki/history, datasets.xml and da" (031 comment) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/465416 (owner: 10Fdans)
[12:46:53] 10Analytics, 10Analytics-EventLogging, 10EventBus, 10Core Platform Team (Modern Event Platform (TEC2)), and 2 others: Decide whether to use schema references in the schema registry - https://phabricator.wikimedia.org/T206824 (10CCicalese_WMF)
[12:49:07] 10Quarry: Provide a way to add hyperlink in Quarry results/output - https://phabricator.wikimedia.org/T74874 (10Seb35) I tested [[https://pypi.python.org/pypi/sqlparse|sqlparse]], with the idea to solve T188538 with the same library, but it seems a bit overkill to just retrieve the comment after the field names,...
[13:01:56] PROBLEM - Check if the Hadoop HDFS Fuse mountpoint is readable on notebook1003 is CRITICAL: Return code of 255 is out of bounds
[13:14:36] 10Analytics, 10Analytics-Wikistats, 10User-Elukey: Git push and pull don't complete - https://phabricator.wikimedia.org/T206331 (10ezachte) Yes, it's me who issues the command. > You should already be using the proxy commands I don't know really. I'm more or less blank on anything configuration-wise, espec...
[13:36:06] heya team :]
[13:36:27] yoohoo
[13:37:50] Hi. A little question about EventLogging. When I'm calling EventLogging::logEvent() in PHP, do I always have to give the schema revision id as the second parameter?
[13:38:28] If I have EventLoggingSchemas in extension.json, it specifies the revision ids. Where are these numbers used then?
[13:40:24] milimetric: maybe you know some of ^ ?
[13:42:26] aharoni: no, you don't specify the revision number. When you specify it in extension.json, it registers that schema with that revision and uses it in the logEvent call
[13:44:54] milimetric: what you say makes sense, but I'm curious how it works. When I look at function logEvent() in EventLogging.php, the documentation doesn't say this,
[13:45:20] and when I use codesearch to look for calls to logEvent(), I see that all the extensions send a revId argument.
[13:45:26] https://codesearch.wmflabs.org/search/?q=EventLogging%3A%3AlogEvent&i=nope&files=&repos=
[13:45:27] aharoni: wait... hang on
[13:45:32] I think I have confused js with php
[13:45:40] I need PHP.
[13:47:02] yeah, ok, no the php method apparently needs the revId... let me make sure there's no magic one sec
[13:47:10] this is the signature:
[13:47:12] https://www.irccloud.com/pastebin/4KPQiwIh/
[13:47:36] yep, that's what I'm looking at.
[13:47:51] aharoni: yeah, there's no other magic
[13:48:09] aharoni: when you go to a schema page, like Schema:Edit, you see the php logging code: EventLogging::logEvent( 'Edit', 17541122, $event );
[13:48:19] click on those code brackets on the top right
[13:49:13] aharoni: yeah, I was confused because in JS it just uses the registered schemas, which should theoretically be available in PHP as well, but apparently... they're not
[13:49:24] milimetric: actually, I think I did find another magic:
[13:49:38] milimetric: see https://gerrit.wikimedia.org/g/mediawiki/extensions/Popups/+/87112444c44e014548af25fe24cc8f41d0c1b596/includes/EventLogging/MWEventLogger.php#54
[13:51:00] aharoni: yeah, you could use that
[13:51:04] ah well, it's not really magic.
[13:51:25] perhaps the EventLogging extension could give this to everybody.
[13:51:28] I'm just now starting to work on the EL extension, I've landed a js patch. So maybe I'll add a method that does what you pointed to there, generically
[13:51:32] yeah
[13:58:26] milimetric: thanks!
[14:01:54] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10Performance-Team (Radar): Explore NavigationTiming by faceted properties - EventLogging refine - https://phabricator.wikimedia.org/T166414 (10mforns) Just a note that I'm deleting the Druid dataset temporarily, to apply some renames and productionize t...
[14:01:56] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10Readers-Web-Backlog (Tracking): Ingest data into druid for readingDepth schema - https://phabricator.wikimedia.org/T205562 (10mforns) Just a note that I'm deleting the Druid dataset temporarily, to apply some renames and productionize the final job. Wi...
[14:01:58] 10Analytics, 10Analytics-Kanban, 10Page-Issue-Warnings, 10Product-Analytics, and 3 others: Ingest data from PageIssues EventLogging schema into Druid - https://phabricator.wikimedia.org/T202751 (10mforns) Just a note that I'm deleting the Druid dataset temporarily, to apply some renames and productionize t...
[14:02:01] RECOVERY - Check if the Hadoop HDFS Fuse mountpoint is readable on notebook1003 is OK: OK
[14:04:08] 10Analytics, 10Analytics-Wikistats, 10User-Elukey: Git push and pull don't complete - https://phabricator.wikimedia.org/T206331 (10elukey) Thanks Erik, it should work now :) If you type the following you can check if your git repo is using a proxy: ``` ezachte@stat1005:/home/ezachte/wikistats$ git config h...
[14:05:00] !log swapped cobalt's ip with gerrit.wikimedia.org's one in analytics-in(4|6) firewall filters on the eqiad routers for https://phabricator.wikimedia.org/T206331#4666622. This should not cause git pulls to fail but let me know in case it does.
[14:05:03] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[14:05:15] ottomata: --^ o/
[14:06:27] elukey, heloooo, can you please restart Turnilo, whenever possible? no rush though, and if I have permissions to do it already, I will do it!
[14:07:25] done!
[14:07:38] you'll have permissions after today's sre meeting
[14:08:50] elukey, <3
[14:09:03] nice elukey
[14:11:44] 10Analytics, 10Analytics-EventLogging, 10EventBus, 10Core Platform Team (Modern Event Platform (TEC2)), and 2 others: CI Support for Schema Registry - https://phabricator.wikimedia.org/T206814 (10Ottomata) > There is though a JSON linting plugin, we might be able to convert YAML to JSON and have a set of...
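(On the idea in the last comment above — converting YAML schemas to JSON so JSON-based linting can be reused in CI — a sketch of what the conversion step could look like; it assumes PyYAML is available, and the CLI shape and file names are made up:)

```python
#!/usr/bin/env python3
"""Convert a YAML schema file to JSON so JSON-based linters/tests can run
against it. A sketch of the CI idea discussed above, not the real tooling."""
import json
import sys

import yaml  # PyYAML, assumed available in the CI image

def yaml_to_json(yaml_path, json_path):
    with open(yaml_path) as src:
        data = yaml.safe_load(src)
    with open(json_path, 'w') as dst:
        json.dump(data, dst, indent=2, sort_keys=True)

if __name__ == '__main__':
    # Usage: yaml2json.py schema.yaml schema.json (hypothetical file names)
    yaml_to_json(sys.argv[1], sys.argv[2])
```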
[14:11:46] (03CR) 10Mforns: [C: 032] "+2ing, given that I already have a +1 and this is a new job (won't break anything), so I can deploy for final productionization." [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/465532 (https://phabricator.wikimedia.org/T206342) (owner: 10Mforns)
[14:12:54] (03PS6) 10Fdans: Add change_tag to mediawiki_history sqoop [analytics/refinery] - 10https://gerrit.wikimedia.org/r/465416
[14:13:19] (03CR) 10Fdans: "@Joal I think only datasets_raw.xml is needed for this change right?" (031 comment) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/465416 (owner: 10Fdans)
[14:14:52] 10Analytics, 10Analytics-Kanban, 10User-Elukey: Upgrade to Druid 0.12.3 - https://phabricator.wikimedia.org/T206839 (10Ottomata) +1 to this, thanks!
[14:16:58] 10Analytics, 10Analytics-EventLogging: It should be possible to see the number of server-side validation errors for a schema in Grafana - https://phabricator.wikimedia.org/T206959 (10Ottomata) And/or/also: {T205437}
[14:17:49] (03Merged) 10jenkins-bot: Refactor EventLoggingToDruid to use whitelists and ConfigHelper [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/465532 (https://phabricator.wikimedia.org/T206342) (owner: 10Mforns)
[14:19:18] !log Started refinery-source deployment
[14:19:19] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[14:22:30] 10Analytics, 10Operations, 10monitoring, 10Patch-For-Review: Eventstreams graphite disk usage - https://phabricator.wikimedia.org/T160644 (10fgiunchedi) 05Open>03Resolved We're doing good space wise now: ``` # du -hcs /var/lib/carbon/whisper/eventstreams/ 4.8G /var/lib/carbon/whisper/eventstreams/ ```
[14:22:48] (03PS1) 10Mforns: Update changelog.md for v0.0.77 [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/467348
[14:23:15] (03CR) 10Mforns: [V: 032 C: 032] "Merging to deploy" [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/467348 (owner: 10Mforns)
[14:24:07] 10Quarry: Implement SQL Query Validator in Quarry - https://phabricator.wikimedia.org/T188538 (10Seb35) A possible solution would be to use CodeMirror and the [[https://codemirror.net/doc/manual.html#addon_lint|linting addon]] since it is already used in Quarry, but there is no support for linting the SQL syntax...
[14:26:36] 10Analytics, 10Analytics-EventLogging, 10EventBus, 10Core Platform Team (Modern Event Platform (TEC2)), and 2 others: Decide whether to use schema references in the schema registry - https://phabricator.wikimedia.org/T206824 (10Ottomata) > expanding a schema with common parts during the commit process seem...
[14:47:56] !log Finished refinery-source deployment
[14:47:57] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[14:49:02] (03PS1) 10Fdans: Get rid of critical vulnerabilities in the aqs project [analytics/aqs] - 10https://gerrit.wikimedia.org/r/467398 (https://phabricator.wikimedia.org/T206474)
[14:52:17] !log Started refinery deployment with scap
[14:52:18] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[14:55:02] joal: hmm, I think aqs's npm install is failing on master
[14:55:13] WAT?
[14:55:39] joal: can you check on your machine? before panicking :D
[14:56:03] (03CR) 10Joal: "Indeed, datasets-raw.xml only - But associated changes in coordinator.xml needed as well (input dataset and output dataset)" [analytics/refinery] - 10https://gerrit.wikimedia.org/r/465416 (owner: 10Fdans)
[14:56:12] i will fdans :)
[14:56:53] joal: good news is that it doesn't fail with this change :P
[14:56:54] https://gerrit.wikimedia.org/r/#/c/analytics/aqs/+/467398/
[14:57:07] mforns: lemme know if you see any weird scap issue, shouldn't be any but I changed one firewall setting
[14:57:23] elukey, ok, for now all ok
[14:57:27] super
[14:58:11] fdans: might depend on node version?
[14:58:15] fdans: works for me
[14:58:29] joal: yes, it fails on 6.4.1
[14:58:59] I have 6.4.1 as well :(
[14:59:05] oh damn
[15:02:22] fdans: how does it fail?
[15:02:31] I'll check too
[15:03:09] joal: sqlite is failing for me
[15:03:48] fdans: I get a "is installed via remote" message about sqlite
[15:04:20] fdans: npm install doesn't fail for me, just gives warnings about security vulnerabilities
[15:07:50] https://usercontent.irccloud-cdn.com/file/R4KK1sdD/Screen%20Shot%202018-10-15%20at%204.04.08%20PM.png
[15:09:12] * elukey installs iTerm2 on fdan's laptop
[15:12:04] 10Analytics, 10Analytics-Kanban, 10Operations, 10Traffic, and 2 others: Add Accept header to webrequest logs - https://phabricator.wikimedia.org/T170606 (10Ottomata) Documentation updated at https://wikitech.wikimedia.org/wiki/Analytics/Data_Lake/Traffic/Webrequest
[15:12:53] elukey: omg no thanks
[15:13:24] fdans: you don't use iTerm2?
[15:13:28] what do you use?
[15:13:35] apple's terminal
[15:13:58] yuck doood y?
[15:14:01] (03CR) 10Milimetric: [C: 04-1] Add change_tag to mediawiki_history sqoop (033 comments) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/465416 (owner: 10Fdans)
[15:14:12] ottomata: iterm2 doesn't do the things i want
[15:14:17] like what?
[15:14:35] I can't remember but every time I give it a try I'm like ugh I can't be bothered with this
[15:14:46] it's a bit like my experience with ubuntu
[15:15:15] huh but split screens and cool colors and highlighting and hot keys way betterrrrrrr :)
[15:15:26] well the diff is you have iTerm2 and ubuntu experts now, you can proceed either way with confidence
[15:17:00] it turns out that I have it installed
[15:17:38] apple's terminal seems something from the 80s
[15:19:48] elukey: what if i change the font?
[15:20:57] !log Finished refinery deployment with scap and refinery-deploy-to-hdfs
[15:20:58] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[15:28:26] 10Analytics, 10Analytics-EventLogging: In PHP EventLogging::logEvent() should automatically read the schema revision id from extension.json - https://phabricator.wikimedia.org/T207034 (10Reedy)
[15:38:12] 10Analytics, 10Analytics-Kanban: Cleanup refinery artefact folder from old jars - https://phabricator.wikimedia.org/T206687 (10Ottomata) a:03Ottomata
[15:38:46] (03PS1) 10Ottomata: Remove unused refinery jars older than 0.0.70 [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467404 (https://phabricator.wikimedia.org/T206687)
[15:42:08] (03PS2) 10Ottomata: Remove unused refinery jars older than 0.0.70 [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467404 (https://phabricator.wikimedia.org/T206687)
[15:44:27] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Cleanup refinery artefact folder from old jars - https://phabricator.wikimedia.org/T206687 (10Ottomata) I grepped for '-0.0.' in oozie/ and in puppet modules/profile/{manifests/templates}/analytics, compiled a list of referenced jar versions: 0.0.24 0.0.2...
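(A sketch of the kind of scan described in the comment just above: walk oozie/ and the puppet profiles, collect every refinery jar version still referenced, and use the set to decide which artifacts are safe to delete. The paths and the regex here are assumptions, not the exact commands used:)

```python
import os
import re

# Matches e.g. refinery-job-0.0.57.jar and captures "0.0.57".
JAR_VERSION_RE = re.compile(r'refinery-[a-z0-9.-]*?-(0\.0\.\d+)\.jar')

def referenced_versions(roots):
    """Collect refinery jar versions referenced anywhere under the given trees."""
    found = set()
    for root in roots:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                try:
                    with open(os.path.join(dirpath, name),
                              errors='ignore') as f:
                        found.update(JAR_VERSION_RE.findall(f.read()))
                except OSError:
                    continue  # unreadable file, skip
    return found

# Illustrative usage: any jar in artifacts/ whose version is NOT in this
# set is a candidate for removal.
# still_used = referenced_versions(['oozie/', 'modules/profile/'])
```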
[15:45:28] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Cleanup refinery artifact folder from old jars - https://phabricator.wikimedia.org/T206687 (10Ottomata)
[15:51:37] 10Analytics, 10Analytics-EventLogging: In PHP EventLogging::logEvent() should automatically read the schema revision id from extension.json - https://phabricator.wikimedia.org/T207034 (10fdans) p:05Triage>03Normal
[15:55:22] 10Analytics: Refactor refinery/oozie/mediawiki/load to use a loop instead of repeating code - https://phabricator.wikimedia.org/T207012 (10fdans) p:05Triage>03Normal
[15:59:10] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Create .deb package for Presto - https://phabricator.wikimedia.org/T203115 (10Ottomata)
[15:59:12] 10Analytics, 10Analytics-Kanban: Presto cluster online and usable with test data pushed from analytics prod infrastructure accessible by Cloud (labs) users - https://phabricator.wikimedia.org/T204951 (10Ottomata)
[16:02:00] elukey: data lake node name bikeshed oh :o
[16:02:10] an-presto? no.
[16:02:12] an-datalake?
[16:02:14] maybe
[16:02:18] datalake1001?
[16:02:18] ahahahah
[16:02:20] probably confusing
[16:02:22] * elukey runs away
[16:02:26] since we also kinda call 'mw history' data lake
[16:02:37] I'm making a task to set up 5 of the newly received nodes for it
[16:02:42] labslake1001
[16:02:52] we aren't allowed to say the L word anymore
[16:02:57] I am chatting with Chris about them and about the racking of the rest
[16:03:00] cloudvpsnotlabs1001
[16:03:03] he needs to rack 13 of them asap
[16:03:03] oh ok cool
[16:03:06] 13?
[16:03:10] (no space in the DC)
[16:03:12] oh
[16:03:13] haha great
[16:03:14] yeah he got the first batch
[16:03:16] ahahaha yes
[16:03:19] May i join in bikeshedding?--^
[16:03:37] joseph1001.eqiad.wmnet ?
[16:03:39] plz!
[16:03:41] :-P
[16:03:42] I would love it
[16:04:32] guys - If we repurpose them, we'll rename them, right?
[16:04:45] So why not presto100X.eqiad.wmnet?
[16:05:04] I'd vote for an-datalake100X
[16:05:06] i think because they aren't just presto
[16:05:12] they will have a hadoop cluster too
[16:05:17] Ah, true
[16:05:22] and it will be more confusing if/when we add presto to analytics-cluster
[16:05:38] elukey: we could just assume this is a new hadoop cluster
[16:05:41] the current one is called 'analytics'
[16:05:56] elukey: I don't like datalake, it already has a meaning that drives confusion - let's not add some more to it maybe?
[16:05:59] we could call this one 'datalake'?
[16:06:02] yeah i agree
[16:06:14] but, whatever it is, we could adopt the same naming convention we just chose for hadoop
[16:06:17] e.g. -master -worker
[16:06:24] but we just need something other than 'an' for the start
[16:06:29] hdp-public?
[16:06:29] I am fine with whatever name we pick :)
[16:06:39] i'm not! i dunno what is best...
[16:06:41] will think some
[16:08:22] joal: weloveoozie1001.eqiad.wmnet?
[16:08:52] :D
[16:08:53] ok I am going to stop
[16:08:53] ahhahah
[16:08:53] (it seems a name of a summer party btw)
[16:11:43] 10Analytics, 10Analytics-Wikistats: Wikistats Bug: "Anonymous Editor" is a broken link - https://phabricator.wikimedia.org/T206968 (10fdans) Let's change the endpoint so that instead of "Anonymous User" we return null, and we format those without a link in the UI.
[16:11:54] 10Analytics, 10Analytics-Wikistats: Wikistats Bug: "Anonymous Editor" is a broken link - https://phabricator.wikimedia.org/T206968 (10fdans) p:05Triage>03High
[16:14:11] 10Analytics, 10Analytics-EventLogging: It should be possible to see the number of server-side validation errors for a schema in Grafana - https://phabricator.wikimedia.org/T206959 (10fdans) p:05Triage>03Normal
[16:17:33] 10Analytics, 10Analytics-Wikistats, 10Internet-Archive: Allow WikiStats2 to be archived by the wayback machine - https://phabricator.wikimedia.org/T206836 (10fdans) p:05Triage>03Low
[16:18:28] 10Analytics, 10Analytics-EventLogging: It should be possible to see the number of server-side validation errors for a schema in Grafana - https://phabricator.wikimedia.org/T206959 (10Milimetric) @phuedx I agree with showing this in grafana, but wanted to check about the client-side changes I was making. I was...
[16:19:06] 10Quarry: Implement SQL Query Validator in Quarry - https://phabricator.wikimedia.org/T188538 (10zhuyifei1999) The problem is, this isn't really [[https://en.wikipedia.org/wiki/SQL#Interoperability_and_standardization|standardized SQL]], but a MySQL / MariaDB variant. Upstreaming this sounds unlikely.
[16:19:18] 10Analytics, 10Analytics-Kanban: Splits on top metrics when selections are present on url - https://phabricator.wikimedia.org/T206822 (10fdans) p:05Triage>03High
[16:19:28] 10Analytics, 10Analytics-Kanban: Splits on top metrics when selections are present on url - https://phabricator.wikimedia.org/T206822 (10fdans) a:03fdans
[16:20:33] 10Analytics, 10Analytics-Wikistats, 10Russian-Sites: Wikistats summary for Report Card has English/Russian Wikipedia-Zero traffic as separate entries instead of added to English/Russian Wikipedia mobile traffic - https://phabricator.wikimedia.org/T146777 (10fdans) 05Open>03declined Wikipedia Zero is bein...
[16:23:19] 10Analytics, 10Analytics-Kanban, 10User-Elukey: Upgrade to Druid 0.12.3 - https://phabricator.wikimedia.org/T206839 (10fdans) Let's remember to test turnilo against this version.
[16:30:02] 10Analytics, 10goodfirstbug: Productionize job for Global Innovation Index from Hadoop Geowiki data - https://phabricator.wikimedia.org/T190535 (10fdans) p:05Low>03Normal
[16:35:36] 10Analytics, 10Analytics-Wikistats: Automate creation of sqoop list of wikis to import data for from sitematrix - https://phabricator.wikimedia.org/T190700 (10mforns) p:05High>03Normal
[16:37:09] 10Analytics, 10Analytics-Kanban, 10Analytics-Wikistats: Beta Release: Wikistats: support annotations in graphs - https://phabricator.wikimedia.org/T178015 (10mforns)
[16:37:13] 10Analytics, 10Analytics-Wikistats: Read Dashiki annotations into Wikistats - https://phabricator.wikimedia.org/T194702 (10mforns) 05Open>03declined
[16:38:24] 10Analytics: Alarms in Eventlogging hadoop sanitization - https://phabricator.wikimedia.org/T198910 (10mforns) p:05High>03Normal
[16:38:36] 10Analytics: Alarms on Webrequest data processing and pageview volume - https://phabricator.wikimedia.org/T198985 (10mforns) p:05High>03Normal
[16:39:24] 10Analytics, 10Operations, 10Traffic, 10User-Elukey: Refactor kafka_config.rb and kafka_cluster_name.rb in puppet to avoid explicit hiera calls - https://phabricator.wikimedia.org/T177927 (10mforns) p:05Normal>03Low
[16:42:15] 10Analytics, 10Analytics-Wikistats: Improve Annotations on Wikistats - https://phabricator.wikimedia.org/T207057 (10Milimetric) p:05Triage>03Low
[16:50:08] 10Analytics, 10Analytics-Kanban, 10Analytics-Wikistats: Beta Release: Wikistats: support annotations in graphs - https://phabricator.wikimedia.org/T178015 (10Milimetric) a:03Milimetric
[16:52:19] 10Analytics, 10Analytics-Kanban, 10Analytics-Wikistats: Add a tooltip to all non-obvious concepts like split categories, abbreviations - https://phabricator.wikimedia.org/T177950 (10Milimetric)
[16:54:19] 10Analytics, 10Analytics-Kanban, 10Analytics-Wikistats: Wikistats Bug: "Anonymous Editor" is a broken link - https://phabricator.wikimedia.org/T206968 (10Nuria) a:03JAllemandou
[16:55:15] ottomata, elukey, can you please review this last version of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/465692/ and merge if appropriate? :]
[16:55:29] whenever you have time of course
[17:02:58] elukey: do we need this conditional anymore? https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/465692/3/modules/profile/manifests/analytics/refinery/job/druid_load.pp
[17:03:08] 10Analytics, 10Operations, 10SRE-Access-Requests, 10Patch-For-Review: Allow Analytics team members to restart Turnilo and Superset - https://phabricator.wikimedia.org/T206217 (10Dzahn) approved in SRE meeting
[17:03:44] ottomata: I'd say no, the permanent fix is to avoid MAILTO=analytics-alerts@ in labs, but I am getting rid of all the crons :D
[17:03:55] so feel free to remove it
[17:06:28] ok, will remove
[17:10:51] * elukey going out for a run!
[17:28:56] ottomata, will change the EL2Druid job to rename start_date and end_date to since and until, and redeploy
[17:29:18] ok cool, thanks mforns sorry I didn't catch that before
[17:29:25] no prob
[17:29:33] (03CR) 10Nuria: [C: 032] Remove unused refinery jars older than 0.0.70 [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467404 (https://phabricator.wikimedia.org/T206687) (owner: 10Ottomata)
[17:33:00] 10Analytics, 10Analytics-Kanban, 10Operations, 10Traffic, and 2 others: Add Accept header to webrequest logs - https://phabricator.wikimedia.org/T170606 (10Nuria) 05Open>03Resolved
[17:43:49] (03PS1) 10Mforns: Rename start_date and end_date to since until in EventLoggingToDruid.scala [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/467422 (https://phabricator.wikimedia.org/T206342)
[17:51:43] ottomata, here's the rename, can you check please? https://gerrit.wikimedia.org/r/#/c/analytics/refinery/source/+/467422/ then I will redeploy!
[17:52:26] (03CR) 10Ottomata: [C: 032] Rename start_date and end_date to since until in EventLoggingToDruid.scala [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/467422 (https://phabricator.wikimedia.org/T206342) (owner: 10Mforns)
[17:52:33] thx! :]
[17:54:02] 10Analytics, 10Operations, 10SRE-Access-Requests, 10Patch-For-Review: Allow Analytics team members to restart Turnilo and Superset - https://phabricator.wikimedia.org/T206217 (10Dzahn) 05Open>03Resolved a:03Dzahn This should resolve the ticket. Please reopen if something doesn't work.
[17:54:03] elukey, sorry for late response, you have my last response on your email
[18:07:13] (03CR) 10Nuria: [C: 031] "Thanks for taking care of this." [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467112 (owner: 10Joal)
[18:10:21] 10Analytics, 10Pageviews-API: Adding top counts for wiki projects (ex: WikiProject:Medicine) to pageview API - https://phabricator.wikimedia.org/T141010 (10Nuria) a:05Shizhao>03None
[18:16:21] ottomata: have a minute for DataFrameToHive field-metadata?
[18:22:05] dsaez: yep yep thanks a lot for the patience!
[18:24:57] a-team: all of you should now be able to restart superset/turnilo
[18:25:18] woohoo!
[18:25:20] \o/ !
[18:25:26] Thanks ops-sharing-ops :)
[18:25:32] :)
[18:26:39] joal: yes!
[18:26:39] bc
[18:26:48] * elukey afk!
[18:28:24] ottomata: OMW !
[18:36:20] (03PS1) 10Nuria: Removing error messages from whitelist for schema UploadWizardExceptionFlowEvent [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467440 (https://phabricator.wikimedia.org/T136851)
[18:41:56] (03PS1) 10Mforns: Update changelog.md for v0.0.78 [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/467441
[18:42:04] !log Started refinery-source deployment
[18:42:06] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:42:41] (03CR) 10Mforns: [V: 032 C: 032] "Deploying" [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/467441 (owner: 10Mforns)
[18:46:40] (03CR) 10Framawiki: [C: 032] Store and show query execution time [analytics/quarry/web] - 10https://gerrit.wikimedia.org/r/465659 (https://phabricator.wikimedia.org/T126888) (owner: 10Framawiki)
[18:47:22] (03Merged) 10jenkins-bot: Store and show query execution time [analytics/quarry/web] - 10https://gerrit.wikimedia.org/r/465659 (https://phabricator.wikimedia.org/T126888) (owner: 10Framawiki)
[18:55:00] 10Quarry: Show the execution time in the table of queries - https://phabricator.wikimedia.org/T71264 (10Framawiki)
[18:55:05] 10Quarry, 10Patch-For-Review: Include query execution time - https://phabricator.wikimedia.org/T126888 (10Framawiki) 05Open>03Resolved a:03Framawiki Queries now show execution time near the resultset on the status page.
[19:09:35] !log Finished refinery-source deployment
[19:09:36] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[19:10:40] 10Analytics, 10Analytics-EventLogging: It should be possible to see the number of server-side validation errors for a schema in Grafana - https://phabricator.wikimedia.org/T206959 (10phuedx) >>! In T206959#4667281, @Milimetric wrote: > I was thinking people just set debug to true and left that on while debuggi...
[19:10:41] !log Started refinery deployment with scap and refinery-deploy-to-hdfs
[19:10:42] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[19:21:43] (03CR) 10Ottomata: [V: 032] Remove unused refinery jars older than 0.0.70 [analytics/refinery] - 10https://gerrit.wikimedia.org/r/467404 (https://phabricator.wikimedia.org/T206687) (owner: 10Ottomata)
[19:21:55] mforns: ^ next time you deploy hopefully it won't take so lon!
[19:21:56] long*
[19:22:04] aha
[19:22:38] ok, oh I could have included this patch... sorry for not asking
[19:24:39] :) s'ok no no sorries!
[19:24:40] next time!
[19:24:45] its not in refinery-source, so easier to try it
[19:26:00] ottomata, ... problemsss
[19:26:46] deploy-local failed: {u'full_cmd': u'/usr/bin/git fat pull', u'stderr': u'error: unable to write file artifacts/org/wikimedia/analytics/refinery/refinery-job-spark-2.1-0.0.57.jar\nfatal: Unable to write new index file\nTraceback (most recent call last):\n File "/usr/bin/git-fat", line 708, in \n fat.cmd_pull(sys.argv[2:])\n File "/usr/bin/git-fat", line 508, in cmd_pull\n
[19:26:47] self.checkout()\n File "/usr/bin/git-fat", line 485, in che
[19:26:58] I think there's no space for jars...
[19:27:26] rolling back
[19:27:32] /o/
[19:27:34] done
[19:28:07] well, will restart refinery-deploy with newest changes...
[19:28:19] no? cc joal ottomata
[19:33:23] HMMM
[19:33:46] mforns: sorry! here wasn't looking at irc for a sec
[19:33:52] which node?
[19:33:58] did it say it failed on?
[19:34:04] hm, lost the log...
[19:34:10] mforns: you can try a git pull on deployment
[19:34:12] I'm redeploying with latest changes
[19:34:14] yeah
[19:34:15] cool
[19:34:19] yes, that's what I did
[19:34:29] deploying to all groups now
[19:34:32] cool
[19:38:04] ottomata, finished fine (5mins)
[19:44:27] great
[19:45:49] !log Finished refinery deployment with scap and refinery-deploy-to-hdfs
[19:45:50] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[20:02:55] mforns: qq about your puppet match
[20:02:58] patch*
[20:03:03] since => 6, until => 5
[20:03:04] yep
[20:03:07] this will look for only a single hour?
[20:03:08] right?
[20:03:11] yes
[20:03:13] what if the cron is disabled, or delayed
[20:03:13] ?
[20:03:15] is that ok?
[20:03:19] ah...
[20:03:20] will it end up skipping things?
[20:03:50] if it is delayed for more than 59 minutes, yes
[20:03:59] that can def be true, what if we are doing a cluster restart?
[20:04:06] or camus is paused?
[20:04:06] aha
[20:04:10] or refine is broken
[20:04:11] makes sense
[20:04:24] refine solves this by looking for a long period and finding stuff that needs to be refined
[20:04:34] but ottomata, this job uses refine, but it does not use refineTarget
[20:04:37] oh
[20:04:52] right because you can't target the stuff in druid
[20:04:53] so, if I put since 20 until 5, then there will be a lot of reloading
[20:04:59] how are you going to know what needs to be loaded?
[20:05:21] it just loads the next hour
[20:05:35] right, but that's not going to work! we'll have a lot of manual reloading and checking to do if we do it that way! :)
[20:05:46] hmmmm
[20:06:05] we can reload a couple hours every time
[20:06:05] can you query druid for datasets?
[20:06:21] I think that's still not really good enough, assume that camus is broken over the weekend
[20:06:22] mm not sure
[20:06:45] yes, you're right...
[20:06:45] we fix it on monday, then refine finds it all and creates the partitions in the event db
[20:07:10] hmmmmmmmmmmmmmmmmmmmmmmmmm
[20:07:20] the job could use refineTarget no?
[20:07:43] even if it has to unpack refine targets...
[20:07:51] I think RefineTarget would have to know about druid
[20:08:00] I see...
[20:08:01] since the destination of this job is druid, right?
[20:08:07] yes
[20:08:21] RefineTarget could be used
[20:08:30] to gather all the existing datasets in a time range
[20:08:37] but you'd want to filter them for ones that need to be loaded into druid
[20:09:02] hm, no I think it's what you say, query Druid for existing data
[20:09:40] ok, another idea, that might help also in compaction
[20:10:01] yeah, i guess once you collect your source datasets (with your hive query or with RefineTarget), then you query druid for what it already has in that dataset, and filter out ones it already has
[20:10:16] prob should also have a --force or --ignore-failure-flag config too, so we can reload into druid if we need to
[20:10:16] every week, we have a second job that loads the whole week in daily segments, instead of hourly
[20:10:57] would that be two different druid datasets?
[20:11:04] for each EL table?
[20:11:04] no, the same
[20:11:05] oh
[20:11:21] and the idea is that hourly is flaky, but weekly should always be good?
[20:11:35] query granularity is still the same, but segment granularity would change, it's the approach we use with pretty much all other datasources
[20:11:43] yea, exactly
[20:12:15] hm
[20:12:28] i guess, i'd worry that weekly could have the same problem though, just less often
[20:12:34] yea
[20:12:37] depends on when the weekly job runs, and when some failure (camus or other) happens
[20:12:56] yes
[20:14:16] we could run weekly loading with since=14days,until=7days
[20:14:38] mforns: yeah i guess so....
[20:14:39] but then we would drag potential hourly issues for 7 days...
[20:14:52] yeah, and i'd expect EL druid users to want their data pretty soon
[20:14:58] aha, sure
[20:15:01] even a week delayed would be annoying
[20:15:24] we could also have a daily job...
[20:15:31] anytime a failure happens, we'd have to either choose: wait a week and users can deal with it, or: manually re-load once we fix error
[20:15:32] but this all starts to be hacky
[20:16:10] although we could encapsulate this logic inside the eventlogging_to_druid_job.pp
[20:17:04] mforns: you could extend RefineTarget into DruidTarget, and implement shouldRefine to query druid?
[20:17:08] might end up being hacky too
[20:17:22] weekly job since14days,until7days ; daily job since3days,until2days (to account for weekend holes) ; + hourly job
[20:17:24] might be best to do like we talked about before, and just filter out existing druid datasets
[20:18:38] ottomata, maybe just determine the earliest missing segment and load from there on
[20:19:29] but weekly or monthly job is probably going to be needed, in case the data is there but is incomplete for the hour or corrupt
[20:19:58] well, will read about querying Druid for data existence
[20:20:38] mforns: aye, refine can detect incompleteness using mtimes
[20:20:52] you might be able to do the same if you can get a druid dataset load mtime or ctime or something
[20:21:06] what are mtimes?
[20:23:29] ah! modification times ok
[20:29:47] ottomata, fuuu, this will take me some time, 1 week maybe... but I should start the deletion script/sparkjob. You think we can just set since=192 (8 days) and until=168 (7 days) for now, so users will get data with 1 week of lag, and get to this change after I finish the deletion script?
[20:33:09] mforns: sure
[20:33:23] mforns: maybe just do daily for now?
[20:33:33] since=48 until=24 ?
[20:33:53] hm, but this will lead to maintenance rework for us
[20:33:54] most of the time camus + refine will complete within 4ish hours
[20:34:02] yes yes, if things break but if they don't it should work
[20:34:04] but weekend...
[20:34:19] yeah a week ago is less likely to break
[20:34:21] ok up to you :)
[20:35:32] ok, 3 days seems a compromise: since 96 until 72
[20:35:44] leaves time for weekend issues
[20:36:08] :)
[20:39:41] ottomata, I made the change, but let's wait until tomorrow, and discuss with the team post-standup no?
[20:40:59] maybe nuria doesn't like the temporary hack...
[20:41:03] ok
[20:41:29] mforns: need to read backscroll, was in meetings up to 1 hr ago
[20:45:39] mforns: is there a code changeset I should look at?
[20:46:00] nuria https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/465692/
[20:46:34] nuria, the refinery-source code I already merged, but is unused still
[20:47:22] mforns: what is the temporary hack then?
[20:47:54] using since=96, until=72 for eventlogging_to_druid_jobs
[20:47:58] mforns: running loading into druid for data that is 3 days old?
[20:48:05] yes
[20:48:26] mforns: I see, what is the resaon to do that?
[20:48:30] *reason
[20:49:07] because if we load a recent hour, we expose the job to easy failure, due to missing input data (camus problems, restarts, outages, etc.)
[20:49:29] and then we'd have to manually fix those datasources by launching import jobs manually
[20:49:30] mforns: but we can trust the success flag no?
[20:49:46] mforns: of the refine jobs?
[20:49:47] this job does not use refineTarget, hence does not use success flags
[20:49:58] just consumes the data you specify in params
[20:50:14] refineTarget does not support Druid
[20:51:44] mforns: does that mean that the data is 3 days behind from what is being ingested?
[20:51:52] mforns: at real time?
[20:52:24] it means the last data point that shows up in Druid/Turnilo will be 3 days in the past
[20:52:59] mforns: ya, seems too high of a delay for data that is actually available probably 4 hours after it is produced
[20:53:04] mforns: no?
[20:53:24] nuria, yes totally
[20:54:00] the correct way to do that would be to query Druid for existing data, and update whatever is missing from Druid and available in Hive
[20:54:03] mforns: we are adding the time delay to guard against a malfunction in the refine pipeline?
[20:54:35] that was the idea of a temporary fix, that would give me time to work on the deletion script, before I continue with this
[20:55:33] because the change to make it work properly will take me some time, and I didn't want to push back the deletion script any more
[20:56:12] the alternative is to park EventLoggingToDruid until the deletion script is done, and let users wait
[20:56:29] mforns: deletion script?
[20:56:50] data directory/hive partition purging script/sparkjob
[20:57:06] to delete EL data older than 90 days
[20:57:08] oh oh sorry, not related to el2druid, right? just trying to work on that
[20:57:30] what do you mean?
[20:57:31] mforns: i see, i think we need better ways to link these two workflows together. My 2 cents on this (not sure if ottomata disagrees) would be to load data 4 hours old and if things fail reload (near term) and work (mid term) in a way to join the two workflows
[21:00:00] mforns: sorry i thought you were talking about some el2druid deletion thing i didn't know about
[21:00:03] you are just trying to free up your time :)
[21:00:28] ottomata, yes
[21:00:29] mforns: what would be the plan to link both workflows long term?
[21:00:44] nuria, what do you mean with both workflows?
[21:01:18] maybe we should bc
[21:01:24] i could bc :)
[21:01:29] mforns: ok, bc it is
[21:43:09] (03CR) 10Nuria: Keep recently added navtiming + survey fields (031 comment) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/466607 (https://phabricator.wikimedia.org/T187299) (owner: 10Gilles)
[22:39:40] 10Quarry: Example queries for Quarry - https://phabricator.wikimedia.org/T207098 (10Harej)
[22:41:37] 10Quarry: Tool-tips explaining the parts of a query in Quarry - https://phabricator.wikimedia.org/T207099 (10Harej)
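(As a coda to the "query Druid for existing data" thread above: Druid's coordinator API can list the intervals currently served per datasource, which is one way a loading job could compute missing hours instead of relying on a fixed since/until window. A rough sketch; the coordinator host and datasource name are placeholders and error handling is minimal:)

```python
import requests  # assumed available

COORDINATOR = 'http://druid-coordinator.example.org:8081'  # placeholder host

def served_intervals(datasource):
    """Return the set of ISO-8601 intervals Druid serves for a datasource."""
    url = '{}/druid/coordinator/v1/datasources/{}/intervals'.format(
        COORDINATOR, datasource)
    resp = requests.get(url)
    resp.raise_for_status()
    return set(resp.json())

# A loading job could diff these intervals against the hours present in
# Hive and (re)load only the hours Druid is missing, rather than trusting
# a fixed since/until window.
```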