[06:23:22] 10Analytics, 10Product-Analytics, 10Reading-analysis: Assess impact of ua-parser update on core metrics - https://phabricator.wikimedia.org/T193578#4193121 (10Tbayer) >>! In T193578#4173107, @Nuria wrote: .. > mmm.. the research for this metric about bots was plentiful and much of the "user" marked traffic i...
[09:08:31] 10Analytics, 10Product-Analytics, 10Reading List Service, 10Reading-Infrastructure-Team-Backlog, and 3 others: [EPIC] Reading List Sync service analytics - https://phabricator.wikimedia.org/T191859#4193348 (10Tgr) I see the quarterly check-in slides had some data on reading list sizes. One thing that would...
[09:09:39] so the first test of building Druid 0.11 from maven with hadoop 2.6.0 deps (vs 2.7.3) failed miserably when building the indexer stuff
[09:09:52] apparently due to java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.elapsedMillis()
[09:10:07] 2.7.3 works of course
[09:49:44] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: Upgrade Druid clusters to 0.11 - https://phabricator.wikimedia.org/T193712#4193396 (10elukey) While testing, opened a gh issue: https://github.com/druid-io/druid/issues/5763 One of the options described in http://druid.io/docs/0.11.0/oper...
[09:52:50] what a mess
[10:33:56] * elukey lunch + errand!
[11:54:18] hey team, I'm seeing that a geoeditors job failed this morning, mediawiki-geoeditors-monthly-wf-2018-01
[11:54:28] but hue says it has succeeded
[11:54:39] did any one of you re-run it? :]
[11:56:47] (not in sal)
[12:31:59] 10Analytics, 10Product-Analytics, 10Reading-analysis: Assess impact of ua-parser update on core metrics - https://phabricator.wikimedia.org/T193578#4193812 (10fdans) Sampling one hour of activity in all Wikimedia projects. Total requests: **385M** Requests with a different ua_parsed dictionary: **173M** D...
[12:46:51] mforns: I sent an email just now, sorry about that, I ran it before creating the tables, so I killed it, created them, ran it again, it's fine now
[12:47:26] https://pivot.wikimedia.org/#mediawiki-geoeditors-monthly is live, so I'm gonna start deleting the geowiki stuff
[12:48:58] !log re-run webrequest-load-wf-text-2018-5-8-17 via hue
[12:49:00] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[12:49:30] I think that pageview hourly is also blocked by ---^
[12:50:46] fdans: o/ - from stat1005:
[12:50:46] cp: cannot copy a directory, '/usr/share/GeoIP/.', into itself, '/usr/share/GeoIP/archive/2018-05-09'
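A note on the cp failure above: the archive step copies /usr/share/GeoIP/. into /usr/share/GeoIP/archive/2018-05-09, a destination that lives inside the source tree, which cp refuses to do. A minimal sketch of a safer archive step in Python (an illustration only, not the actual cron job; only the paths come from the error message):

    import os
    import shutil
    from datetime import date

    SRC = '/usr/share/GeoIP'
    DST = os.path.join(SRC, 'archive', date.today().isoformat())

    # skip the archive/ subtree so the copy never recurses into its own output
    shutil.copytree(SRC, DST, ignore=shutil.ignore_patterns('archive'))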
[12:58:28] (03PS1) 10Milimetric: Rename druid datasource to underscores [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432072
[12:58:39] (03CR) 10Milimetric: [V: 032 C: 032] Rename druid datasource to underscores [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432072 (owner: 10Milimetric)
[12:58:57] !log deploying very simple change just to rename druid datasource
[12:58:58] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[13:08:59] FYI reimaging analytics1032/3 to Stretch!
[13:09:40] o.
[13:09:41] o/
[13:13:22] !log deployed refinery
[13:13:23] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[13:17:43] elukey: sorry! was at lunch, gimme 10min
[13:18:58] you are completely fine, whenever you have time :)
[13:22:27] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Reimage the Debian Jessie Analytics worker nodes to Stretch. - https://phabricator.wikimedia.org/T192557#4193964 (10ops-monitoring-bot) Script wmf-auto-reimage was launched by elukey on neodymium.eqiad.wmnet for hosts: ``` ['analytics1032.eqiad.wmnet', 'an...
[13:23:36] !log re-run webrequest-load-wf-misc-2018-5-9-12 via hue
[13:23:39] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[13:34:36] hmmm, is it normal that there's no pageview-hourly-coord in hue?????
[13:35:57] mforns: there is one IIRC, lemme check
[13:36:26] if it is for the email to alerts@, I think it is due to one webrequest-text hour that failed and was not restarted
[13:36:30] (already done)
[13:37:13] I see
[13:37:14] mforns: is it possible that now it is prefixed by "cassandra-etc..." ?
[13:37:19] oh
[13:37:35] I just discovered it
[13:37:55] elukey, the cassandra ones seem to be cassandra loading jobs, no?
[13:38:00] where is the aggregation job?
[13:38:28] I checked in hive and wmf.pageview_hourly has data as expected until now, but...
[13:38:36] good point
[13:39:48] yeah I said something stupid, there must be a pageview-hourly-coord
[13:40:15] mforns: ah there you go! it is on the next page :P
[13:40:34] so you go to the coordinator tab, filter for pageview-hourly-coord and then hit "next" on the bottom right
[13:40:37] then you'll see it
[13:41:05] as expected there is one still waiting for webrequest data
[13:41:08] whaaaaaat!
[13:41:12] yeah :D
[13:41:17] I always forget about that
[13:41:20] why does it show only 1 per page???
[13:42:14] what do you mean?
[13:43:55] after filtering with "pageview-hourly-coord" it was showing only 1 running coordinator
[13:44:08] when I clicked "next page"
[13:44:13] then it shows the other one
[13:44:28] so it shows only 1 coordinator per page?
[13:44:54] and more: it says "Showing 1 to 1 of 60 (filtered from 50 entries)"
[13:45:36] it's a bug!
[13:45:38] (03PS1) 10Milimetric: Index new metric in druid [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432076
[13:46:59] mforns: hue is not the best interface ever
[13:47:22] BTW elukey, the cron job that you merged should be triggering right now, no?
[13:47:39] (03PS2) 10Milimetric: Index new metric in druid [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432076
[13:48:37] (03PS3) 10Milimetric: Index new metric in druid [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432076 (https://phabricator.wikimedia.org/T194207)
[13:48:59] (03CR) 10Milimetric: [V: 032 C: 032] "gotta deploy this again forgot a field - doh" [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432076 (https://phabricator.wikimedia.org/T194207) (owner: 10Milimetric)
[13:49:02] mforns: shouldn't it run at the beginning of every hour?
[13:49:30] !log deploying refinery again, forgot to index a new metric in the new datasource, sorry
[13:49:31] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[13:49:32] ah yes, since I merged it, you mean
[13:49:34] checking
[13:51:21] mforns: https://yarn.wikimedia.org/cluster/app/application_1523429574968_106249
[13:51:57] thanks elukey :/
[13:52:02] :(
[13:52:51] lookin
[13:54:09] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: Reimage the Debian Jessie Analytics worker nodes to Stretch. - https://phabricator.wikimedia.org/T192557#4194052 (10elukey)
[13:56:39] 10Analytics, 10Analytics-Kanban, 10User-Elukey: Restart Analytics hosts for Java 8 Security upgrades - https://phabricator.wikimedia.org/T194268#4194056 (10elukey) p:05Triage>03High
[13:57:31] 10Analytics, 10Analytics-Kanban, 10User-Elukey: Restart Analytics hosts for Java 8 Security upgrades - https://phabricator.wikimedia.org/T194268#4194091 (10elukey)
[13:57:41] nuria_: fyi, I added https://wikitech.wikimedia.org/wiki/Analytics/Systems/Refine#Refined_data_types because I saw you answered AndyRussG's q yesterday
[13:59:21] !log beginning upgrade of Kafka main-eqiad cluster from 0.9.0.1 to 1.1.0 - T167039
[13:59:27] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[13:59:27] T167039: Upgrade Kafka on main cluster with security features - https://phabricator.wikimedia.org/T167039
[14:26:10] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: Reimage the Debian Jessie Analytics worker nodes to Stretch. - https://phabricator.wikimedia.org/T192557#4194213 (10ops-monitoring-bot) Completed auto-reimage of hosts: ``` ['analytics1032.eqiad.wmnet', 'analytics1033.eqiad.wmnet'] ``` an...
[14:41:21] !log finished deploying refinery with proper geoeditors druid indexing template
[14:41:26] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[14:42:44] PROBLEM - Kafka MirrorMaker main-eqiad_to_jumbo-eqiad max lag in last 10 minutes on einsteinium is CRITICAL: 2.685e+07 gt 1e+05 https://grafana.wikimedia.org/dashboard/db/kafka-mirrormaker?var-datasource=eqiad+prometheus/ops&var-lag_datasource=eqiad+prometheus/ops&var-mirror_name=main-eqiad_to_jumbo-eqiad
[14:42:46] PROBLEM - Kafka MirrorMaker main-eqiad_to_eqiad max lag in last 10 minutes on einsteinium is CRITICAL: 2.685e+07 gt 1e+05 https://grafana.wikimedia.org/dashboard/db/kafka-mirrormaker?var-datasource=eqiad+prometheus/ops&var-lag_datasource=eqiad+prometheus/ops&var-mirror_name=main-eqiad_to_eqiad
[14:42:59] eqiad to eqiad
[14:43:01] ....
[14:43:04] why.
[14:43:05] looking
[14:43:11] oh also jumbo
[14:43:11] hm
[14:43:12] those should work
[14:43:13] looking
[14:43:50] bouncing those... not sure yet
[14:49:04] RECOVERY - Kafka MirrorMaker main-eqiad_to_jumbo-eqiad max lag in last 10 minutes on einsteinium is OK: (C)1e+05 gt (W)1e+04 gt 0 https://grafana.wikimedia.org/dashboard/db/kafka-mirrormaker?var-datasource=eqiad+prometheus/ops&var-lag_datasource=eqiad+prometheus/ops&var-mirror_name=main-eqiad_to_jumbo-eqiad
[14:49:05] RECOVERY - Kafka MirrorMaker main-eqiad_to_eqiad max lag in last 10 minutes on einsteinium is OK: (C)1e+05 gt (W)1e+04 gt 1 https://grafana.wikimedia.org/dashboard/db/kafka-mirrormaker?var-datasource=eqiad+prometheus/ops&var-lag_datasource=eqiad+prometheus/ops&var-mirror_name=main-eqiad_to_eqiad
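For context on the MirrorMaker alerts above: "max lag" is the newest offset on the broker minus the offset the mirror's consumer group has committed, so a stuck mirror shows lag climbing into the millions, and the recovery just means the bounced process caught back up. A rough way to spot-check lag for one partition from Python, assuming the kafka-python client; the broker host, group id and topic below are guesses for illustration, not values taken from this log:

    from kafka import KafkaConsumer, TopicPartition

    # hypothetical names; check puppet/grafana for the real ones
    consumer = KafkaConsumer(
        bootstrap_servers='kafka-jumbo1001.eqiad.wmnet:9092',
        group_id='kafka-mirror-main-eqiad_to_jumbo-eqiad',
        enable_auto_commit=False,
    )
    tp = TopicPartition('eqiad.mediawiki.revision-create', 0)
    latest = consumer.end_offsets([tp])[tp]    # newest offset on the broker
    committed = consumer.committed(tp) or 0    # what the mirror has consumed
    print('lag:', latest - committed)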
[14:50:46] 10Analytics-Kanban, 10Patch-For-Review: Checklist for geowiki pipeline - https://phabricator.wikimedia.org/T190409#4194269 (10Milimetric)
[14:51:45] (03PS1) 10Milimetric: Make the comment match the code [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432087
[14:52:05] (03CR) 10Milimetric: [V: 032 C: 032] Make the comment match the code [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432087 (owner: 10Milimetric)
[14:54:30] fdans: hola, yt?
[14:54:55] nuria_: hello!
[14:55:44] fdans: one thing we need to look at is the UAs classified as bots and see whether there are significant changes
[14:56:30] nuria_: sure, will run the query now and add that to the phab task
[14:56:40] fdans: also you need to test the java code before deploying it, https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/Hive/QueryUsingUDF#Testing_changes_to_existing_UDF
[14:57:04] fdans: testing the java version needs to happen before
[14:57:10] fdans: we call it good
[14:57:33] nuria_: yes but this is after your code review, right?
[14:57:45] or should we test with the change open?
[14:59:11] fdans: once the code is reviewed we need to build a local jar with those changes, we can push to archive before too
[14:59:54] sounds good
[16:04:28] yo mforns howbout sum standuping?
[16:06:40] milimetric: friendly reminder that we need to redo the superset dashboards on top of the new datasource on druid since we renamed it
[16:17:42] nuria_: already done
[16:17:51] also, a-team, joseph's not around so someone's gotta do scrum of scrums
[16:18:03] milimetric, it's my ops week
[16:18:10] oh, doh, sorry
[16:18:17] no prob :]
[16:18:23] I actually meant you, but didn't see you at standup
[16:18:28] you ok to go?
[16:18:33] (to SoS?)
[16:18:39] milimetric, sure!
[16:18:43] ok, cool
[16:18:54] oh, I was at the CDP meeting
[16:18:58] (I am)
[16:19:00] gotcha
[16:19:11] thought we were skipping it
[16:19:20] and nuria_, the list of stuff we did is: https://phabricator.wikimedia.org/T190409, in case you wanted to double check
[16:19:38] right, that would make sense, but we all just did it anyway
[16:19:39] oh, now I saw fdans's ping... sorry
[16:19:49] are you guys still there?
[16:34:31] Hello! If EL data doesn't pass validation, does it mean this data only exists in hdfs, not mysql?
[16:36:05] chelsyx: it will not exist in either
[16:36:17] chelsyx: if it does not pass validation it does not get persisted
[16:36:31] chelsyx: both hdfs and mysql have ONLY valid events
[16:37:33] nuria_: so the situation is, our developer specified the wrong revision number, so the data we are sending has one more field than the revision we are sending to
[16:38:40] chelsyx: you know you can test events on beta? (test environment)
[16:38:45] chelsyx: is this on prod?
[16:39:02] nuria_: not on prod yet
[16:39:46] chelsyx: ok, then validation can be tested on beta, is this mobile apps?
[16:39:59] nuria_: yep
[16:40:13] nuria_: ok, let me test it now. I will get back to you
[16:40:31] chelsyx: the app needs to publish events to the beta beacon
[16:40:33] https://wikitech.wikimedia.org/wiki/Analytics/Systems/EventLogging/TestingOnBetaCluster
[16:56:42] !log disabled 0.9 MirrorMaker on kafka102[023], enabled 1.x MirrorMaker on kafka-jumbo*
[16:56:53] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[16:59:55] mforns: remind me, is whitelisting now implemented on hive? (re https://phabricator.wikimedia.org/T193176 )
[17:01:26] HaeB, we merged it today! But it has problems, I'm looking into them right now. I hope to see it working in EL this week
[17:04:13] cool, i mainly wanted to make sure we don't lose the entire event.popups table ;)
[17:11:54] 10Analytics, 10Analytics-Kanban, 10EventBus, 10Patch-For-Review, 10Services (watching): Upgrade Kafka on main cluster with security features - https://phabricator.wikimedia.org/T167039#4194612 (10Ottomata)
[17:23:19] nuria_: I tested on the beta cluster, it failed :( I will ask them to fix it. Thanks!
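The failed beta test above is exactly what the validation discussion earlier predicts: EventLogging validates each event against the schema revision named in its capsule, and an event carrying a field that revision doesn't declare is rejected before it reaches either the MySQL or the HDFS sink. A toy illustration of the failure mode using the jsonschema library (the real validator lives in the eventlogging codebase; the field names here are made up):

    import jsonschema

    # simplified stand-in for a schema revision that declares one field
    schema = {
        'type': 'object',
        'properties': {'session_token': {'type': 'string'}},
        'additionalProperties': False,  # any extra field fails validation
    }

    event = {'session_token': 'abc123', 'new_field': 42}  # one field too many

    try:
        jsonschema.validate(event, schema)
    except jsonschema.exceptions.ValidationError as err:
        # rejected here, so the event is persisted to neither MySQL nor HDFS
        print('invalid event:', err.message)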
[17:24:47] nuria_: Also, we fixed a bug for MobileWikiAppSearch and adjusted the sampling rate. We will send ~5 events/sec to this schema, I hope that's ok?
[17:34:48] HaeB, given the nature of HDFS (append only) it's difficult to update the event database and apply sanitization to existing data. Because of this, sanitization in Hive actually creates another dataset named event_sanitized, which will hold the sanitized data indefinitely.
[17:35:19] when that database is up to date, then we'll start working on the job that will delete the non-sanitized data after 90 days.
[17:35:52] in the meantime, we can make sure event.popups is in the whitelist and we keep everything we can
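To make the event_sanitized idea above concrete, here is a minimal sketch of whitelist-based sanitization; it is not the refinery job, just an illustration, and the table and field names are invented:

    # per-table whitelist: only these columns survive into event_sanitized
    WHITELIST = {
        'popups': {'wiki', 'event_action', 'event_linkInteractionToken'},
    }

    def sanitize(table, row):
        """Drop every column that is not whitelisted for this table."""
        allowed = WHITELIST.get(table, set())
        return {col: val for col, val in row.items() if col in allowed}

    # sanitized rows go to event_sanitized.<table> and are kept indefinitely,
    # while the raw event.<table> data is meant to be deleted after 90 days
    print(sanitize('popups', {'event_action': 'hover', 'ip': '1.2.3.4'}))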
[19:16:59] 10Analytics-Kanban: Refinery Hive python utils don't support month=2018-02 style partitions - https://phabricator.wikimedia.org/T194304#4195007 (10Milimetric) [19:17:07] 10Analytics-Kanban: Refinery Hive python utils don't support month=2018-02 style partitions - https://phabricator.wikimedia.org/T194304#4195007 (10Milimetric) p:05Triage>03Normal [19:17:15] (03PS1) 10Milimetric: Expand support for specifying dates in partitions [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432154 (https://phabricator.wikimedia.org/T194304) [19:19:18] (03CR) 10Milimetric: "This can be tested with the following command, which breaks before this fix:" [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432154 (https://phabricator.wikimedia.org/T194304) (owner: 10Milimetric) [19:30:25] (03CR) 10Ottomata: Expand support for specifying dates in partitions (031 comment) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432154 (https://phabricator.wikimedia.org/T194304) (owner: 10Milimetric) [19:31:38] (03CR) 10Ottomata: Expand support for specifying dates in partitions (031 comment) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432154 (https://phabricator.wikimedia.org/T194304) (owner: 10Milimetric) [19:35:19] (03CR) 10Milimetric: Expand support for specifying dates in partitions (031 comment) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432154 (https://phabricator.wikimedia.org/T194304) (owner: 10Milimetric) [19:35:35] milimetric: [19:35:43] dateutil.parser.parse('2018-02') [19:35:44] hi [19:36:13] oh wait, did I read your thing wrong [19:36:15] rereading [19:36:22] (sorry, trying to listen to Victoria too) [19:36:27] hm,i do get a weird hour here? [19:36:28] In [3]: parser.parse('2018-02') [19:36:28] Out[3]: datetime.datetime(2018, 2, 9, 0, 0) [19:36:33] dunno where that 9 comes from [19:36:50] ottomata: oh, right, yeah, this wouldn't work [19:37:02] oh wait, it would? [19:37:09] oh yeah, it would [19:37:11] yeah, i'm just building something that looks like a date [19:37:17] from a limited list of keys [19:37:21] and then using dateuilt parser to infer the dt [19:37:24] it usually works pretty well [19:37:26] sorry, you're right [19:37:35] although, i dunno why i have that 9 in my returned datetime [19:37:44] day=9? [19:39:23] ottomata: but is dateutil installed on the machines? [19:39:29] I don't have it locally, so it's not default [19:39:55] stat1005 [19:39:56] In [1]: import dateutil [19:40:03] [@stat1005:/home/otto] $ dpkg -l | grep dateutil [19:40:03] ii python-dateutil 2.5.3-2 all powerful extensions to the standard datetime module [19:40:03] ii python3-dateutil 2.5.3-2 all powerful extensions to the standard datetime module [19:41:08] heh, yeah, it seems to be a bug that copies the month to the day [19:41:18] oh? [19:41:51] oh, no it's just always 9... maybe some timezone cra? [19:41:55] In [18]: parser.parse('2018-10') [19:41:55] Out[18]: datetime.datetime(2018, 10, 9, 0, 0) [19:42:17] wait no, how would that be timezones, it's the day [19:42:24] I should stop talking while in meetings [19:42:29] I'll just make it work, thanks for the tip :) [19:42:56] haha ok [19:56:35] 10Analytics, 10Analytics-Kanban, 10EventBus, 10Wikimedia-Logstash, and 2 others: EventBus HTTP Proxy service does not report errors to logstash - https://phabricator.wikimedia.org/T193230#4163559 (10Ottomata) Yahoo! 
[19:56:35] 10Analytics, 10Analytics-Kanban, 10EventBus, 10Wikimedia-Logstash, and 2 others: EventBus HTTP Proxy service does not report errors to logstash - https://phabricator.wikimedia.org/T193230#4163559 (10Ottomata) Yahoo! https://logstash.wikimedia.org/app/kibana#/dashboard/default?_g=h@44136fa&_a=h@9f18fa5
[20:00:47] bearloga: So I used your instructions at https://github.com/wikimedia-research/Discovery-Interactive-Adhoc-Usage#re-run-instructions , and after eventually getting all the R packages to install, I now have this error:
[20:00:55] Failed to connect to database: Error: Access denied for user 'catrope'@'10.64.53.31' (using password: NO)
[20:01:05] 10Analytics, 10Analytics-Kanban, 10EventBus, 10Wikimedia-Logstash, and 2 others: EventBus HTTP Proxy service does not report errors to logstash - https://phabricator.wikimedia.org/T193230#4195138 (10Pchelolo) HA! gelf as the solution? I've told you!!!
[20:01:29] I believe the way my stat1006 access works is that there's a .my.cnf file on stat1006 that has the right credentials, and that my username probably doesn't work when connecting to the DB directly
[20:01:44] How do I tell this code what user+pass to use?
[20:03:34] (03PS2) 10Milimetric: Expand support for specifying dates in partitions [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432154 (https://phabricator.wikimedia.org/T194304)
[20:08:41] 10Analytics, 10Analytics-Kanban, 10EventBus, 10Wikimedia-Logstash, and 2 others: EventBus HTTP Proxy service does not report errors to logstash - https://phabricator.wikimedia.org/T193230#4195152 (10Ottomata) Haha, yup, the library was better and just easier to work with.
[20:08:58] 10Analytics, 10Analytics-Kanban, 10EventBus, 10Wikimedia-Logstash, and 2 others: EventBus HTTP Proxy service does not report errors to logstash - https://phabricator.wikimedia.org/T193230#4195153 (10Ottomata) Thanks for the tip! :)
[20:09:07] that's a funky bug milimetric
[20:09:49] I know, I looked it up and found nothing, ottomata
[20:11:51] bearloga: nm I figured it out (edited the code)
[20:14:06] oh milimetric even better and simpler:
[20:14:16] parser.parse('-'.join(self.values()), fuzzy=True, default=datetime(2000,1,1,0,0))
[20:14:19] no need for the datetime_keys
[20:14:21] with fuzzy=True
[20:14:22] and
[20:14:30] by providing a default datetime
[20:14:40] oh nice
[20:14:44] the date values from it are used if they aren't found in the parsed one
[20:15:24] hm, so self.values joins in order properly all the time?
[20:15:37] because it's an ordered dict
[20:15:43] ya
[20:15:47] cool
[20:17:45] ha milimetric if you wanted to get TOO FANCY (don't do it) i guess you could even subclass parserinfo
[20:17:45] http://dateutil.readthedocs.io/en/stable/parser.html#dateutil.parser.parserinfo
[20:17:50] for hive partition parsing
[20:17:52] but don't do it!
[20:17:54] :)
[20:18:08] hey, I didn't want to get fancy at all, as you could see from my original solution
[20:18:24] :)
[20:18:52] haha
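Putting ottomata's suggestion together: take the ordered partition key/value pairs, join the values into something date-like, and let dateutil infer the datetime, with fuzzy=True tolerating stray text and default pinning the components a partition doesn't specify. A standalone sketch of the approach (simplified; not the actual refinery code from the patch above):

    from collections import OrderedDict
    from datetime import datetime
    from dateutil import parser

    def partition_datetime(spec):
        # 'year=2018/month=2' -> OrderedDict([('year', '2018'), ('month', '2')])
        parts = OrderedDict(kv.split('=', 1) for kv in spec.split('/'))
        # join values in partition order; `default` fills missing components
        return parser.parse('-'.join(parts.values()), fuzzy=True,
                            default=datetime(2000, 1, 1, 0, 0))

    print(partition_datetime('year=2018/month=2'))        # 2018-02-01 00:00:00
    print(partition_datetime('year=2018/month=2/day=9'))  # 2018-02-09 00:00:00
    print(partition_datetime('month=2018-02'))            # 2018-02-01 00:00:00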
[20:26:08] (03PS3) 10Milimetric: Expand support for specifying dates in partitions [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432154 (https://phabricator.wikimedia.org/T194304)
[20:27:04] (03PS4) 10Milimetric: Expand support for specifying dates in partitions [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432154 (https://phabricator.wikimedia.org/T194304)
[20:39:18] 10Analytics, 10Pageviews-API: Add nyc.wikimedia to pageviews whitelist - https://phabricator.wikimedia.org/T194309#4195274 (10MusikAnimal)
[20:47:31] (03CR) 10Ottomata: [C: 031] Expand support for specifying dates in partitions [analytics/refinery] - 10https://gerrit.wikimedia.org/r/432154 (https://phabricator.wikimedia.org/T194304) (owner: 10Milimetric)
[21:15:34] 10Analytics, 10Analytics-Wikistats: Wikistats 2.0: allow to view stats for all language versions (a.k.a. Project families) - https://phabricator.wikimedia.org/T188550#4195387 (10Milimetric) p:05Normal>03High
[21:15:43] 10Analytics, 10Analytics-Wikistats: Wikistats 2.0: allow to view stats for all language versions (a.k.a. Project families) - https://phabricator.wikimedia.org/T188550#4011848 (10Milimetric) (upping priority for comms)
[21:52:40] PROBLEM - Check if the Hadoop HDFS Fuse mountpoint is readable on stat1005 is CRITICAL: Return code of 255 is out of bounds
[21:55:21] yeah, icinga, tell me about it, that thing blows up nonstop
[22:00:18] looking
[22:00:19] :)
[22:23:00] RECOVERY - Check if the Hadoop HDFS Fuse mountpoint is readable on stat1005 is OK: OK
[22:25:32] just needed a little hammering
[23:47:07] 10Quarry, 10Epic: [Epic] Make Quarry a robust data exploration tool - https://phabricator.wikimedia.org/T194330#4195825 (10Harej)