[00:47:42] 10Analytics: Estimate how long a new Dashiki Layout for Qualtrics Survey data would take - https://phabricator.wikimedia.org/T184627#3951217 (10egalvezwmf) Thanks @Nuria That makes sense to me and I can add it to the plan. @mcruzWMF and I talked about what this project might look like for next fiscal: * Part 1:... [02:01:31] 10Analytics, 10Operations, 10Research, 10Traffic, and 6 others: Referrer policy for browsers which only support the old spec - https://phabricator.wikimedia.org/T180921#3951340 (10Tgr) @Nuria the new config is live now (although it will only take effect gradually due to Varnish caching). Can you check if t... [02:34:57] 10Analytics, 10Patch-For-Review: Add json linting test for schemas in mediawiki/event-schemas - https://phabricator.wikimedia.org/T124319#3951395 (10Legoktm) This should be done in the repo itself, not in CI-infra. [02:38:54] 10Analytics, 10Continuous-Integration-Infrastructure, 10Patch-For-Review: Add json linting test for schemas in mediawiki/event-schemas - https://phabricator.wikimedia.org/T124319#3951401 (10Legoktm) [06:50:29] hello people [06:50:46] already seen that druid weird error message, when the overlord is confused [06:51:04] I am sure that there are some race conditions in there, so looking forward to upgrade it [07:03:41] # Puppet Name: json_refine_netflow [07:03:42] 45 * * * * /usr/local/bin/json_refine_netflow >> /var/log/refinery/json_refine_netflow.log 2>&1 [07:03:45] \o/ [08:32:16] aaaaaand it failed [08:34:22] ah nice, yarn application -status gives me what caused the trouble [08:34:25] without checking logs [08:34:26] User class threw exception: java.lang.RuntimeException: Regex (netflow)/hourly/(\d+)/(\d+)/(\d+)/(\d+) did not match hdfs://analytics-hadoop/wmf/data/raw/event/hourly/2018/02/03/07 when attempting to extract capture group keys [08:34:39] whaaaattt [08:41:33] elukey: Need to care Lino today, but from you posted, looks like you didn't put correct input path: dfs://analytics-hadoop/wmf/data/raw/EVENT/hourly/2018/02/03/07 [08:42:08] elukey: also, we experienced an issue with Druid overlord yesterday - Indexation task was set to fail at overlord level, but actually finished succesfully from midlle-manager perspective [08:42:54] elukey: The fat that overlord was confused makes the overall thing actually failing: new segments are available in hdfs but not present in coordinator [08:43:09] elukey: need to go, let's discuss when he take the afternoon nap [08:44:06] joal: yep yep fixing json refine atm, didn't check druid [08:47:24] so the netflow dir contains directly "hourly" etc.. [08:47:40] meanwhile the others are parent-dir/tablename/hourly/etc.. [08:47:52] /mnt/hdfs/wmf/data/raw contains netflow and it is my base path in json refine [08:47:55] mmmm [08:48:24] elukey: your base path should be /mnt/hdfs/wmf/data/raw [08:48:44] and json-refine will catch only netflow given the regex (netflow)/ .... [08:50:16] it is /mnt/hdfs/wmf/data/raw [08:50:32] profile::analytics::refinery::job::json_refine_job { 'netflow': [08:50:33] # This is imported by camus_job { 'netflow': } [08:50:33] input_base_path => '/wmf/data/raw', [08:50:48] this is why I am wondering why it picks up event [08:51:40] anyhow, I am writing in here my thoughts, you can answer later on [08:52:02] didn't mean to get your answers now, later is super fine! 
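A minimal Scala sketch of the failure elukey pastes above (illustration only, not the actual RefineTarget code): the capture-group regex from the error message finds a match in the netflow partition path but nothing in the sibling event path, so any code that assumes a match and then extracts capture group keys can only throw. The object and value names here are made up.

    import scala.util.matching.Regex

    object RegexMismatchSketch {
      def main(args: Array[String]): Unit = {
        // The regex quoted in the job error message above.
        val inputPathRegex: Regex = raw"(netflow)/hourly/(\d+)/(\d+)/(\d+)/(\d+)".r

        val netflowPath = "hdfs://analytics-hadoop/wmf/data/raw/netflow/hourly/2018/02/03/07"
        val eventPath   = "hdfs://analytics-hadoop/wmf/data/raw/event/hourly/2018/02/03/07"

        // The netflow path contains a match and yields the capture groups...
        println(inputPathRegex.findFirstMatchIn(netflowPath).map(_.subgroups))
        // -> Some(List(netflow, 2018, 02, 03, 07))

        // ...the event path does not, so extracting capture group keys from it
        // ends in an exception like the one seen in the job status.
        val m = inputPathRegex.findFirstMatchIn(eventPath).getOrElse(
          throw new RuntimeException(
            s"Regex ${inputPathRegex.regex} did not match $eventPath when attempting to extract capture group keys"))
        println(m.subgroups)
      }
    }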
[09:28:19] so what I think is happening is that json refine expects that in /wmf/data/raw it will find the netflow dirs to check [09:28:37] for example, eventlogging uses the event dir [09:29:25] in there there are only dirs matching the regex [09:31:17] I think that the issue is in https://github.com/wikimedia/analytics-refinery-source/blob/master/refinery-job/src/main/scala/org/wikimedia/analytics/refinery/job/RefineTarget.scala#L354 [09:32:20] I don't understand a lot of scala but inputDatasetPaths seems to be used without filtering out values [09:37:49] in the netflow case: it grabs subdirs from /wmf/data/raw and then try to apply the refinement, assuming that all the subdirs need to match the regex (like in the event dir) [10:05:32] what if we extend the filter applying the inputpath regex too? [10:05:32] // We only care about input partition paths that actually exist, [10:05:35] // so filter out those that don't. [10:05:38] .filter(_.inputExists()) [10:06:19] inputPathRegex regex I mean [10:31:25] something like .filter(path => path matches inputPathRegex) ? [10:32:49] or maybe better applying it to pastPartitionPaths ? [12:25:27] (03PS1) 10Elukey: RefineTarget: filter out unneeded inputDatasetPath's directories [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/408790 (https://phabricator.wikimedia.org/T181036) [12:29:15] (03CR) 10jerkins-bot: [V: 04-1] RefineTarget: filter out unneeded inputDatasetPath's directories [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/408790 (https://phabricator.wikimedia.org/T181036) (owner: 10Elukey) [12:31:17] I mean, Luca, you don't know scala and you try to submit patches [12:58:25] * elukey lunch! [13:30:25] (03PS2) 10Joal: Update RefineTarget inputBasePath matches [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/408790 (https://phabricator.wikimedia.org/T181036) (owner: 10Elukey) [13:30:38] elukey: When you're back you cam ry that --^ [13:31:08] elukey: Would you mind also maybe restarting overlords in druid-public, for me to re-launch indexation? [13:33:49] joal: ah nice thanks! I need to read some scala :P [13:34:06] I am currently adding ip6 to jumbo, will restart overlords in a bit is it fine? [13:34:25] sure [13:34:42] elukey: I suggest you use an IDE that shows compilation errors for scala - it helps :) [13:36:14] joal: I am happy that I understood the code without making a horrible patch, I also need to study the language a bit [13:36:14] seems interesting! [13:37:19] good morning a-team! [13:37:46] hola fdans :) [13:42:19] o/ [13:43:01] now joal, I am waiting for Brandon to check so I can work on druid [13:44:24] o elukey [13:44:28] Hi fdans [13:49:22] !log restart overlord/middlemanager on druid1005 [13:49:28] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [13:52:24] joal: druid1006 is now the overlord leader, mind to test again indexation? [13:55:26] !log Manually restarted druid indexation after weird failure of mediawiki-history-reduced-wf-2018-01 [13:55:27] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [13:55:31] Done elu [13:55:35] Done elukey [13:56:34] let's see how it goes :) [13:57:03] joal: is it an easy explanation about the "case" in filter() or do I need to study first? 
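joal's answer to that question follows below; in the meantime, here is roughly the shape of the filter elukey is proposing, written so it compiles. This is a sketch only and not the content of https://gerrit.wikimedia.org/r/408790 — the helper name filterDatasetPaths and the use of findFirstIn are assumptions about how such a filter could be wired in.

    import org.apache.hadoop.fs.Path
    import scala.util.matching.Regex

    // Hypothetical helper: keep only the dataset paths under inputBasePath whose
    // string form contains a match for inputPathRegex, so unrelated sibling
    // directories (e.g. /wmf/data/raw/event while refining netflow) are skipped
    // instead of breaking capture-group extraction later on.
    def filterDatasetPaths(paths: Seq[Path], inputPathRegex: Regex): Seq[Path] =
      paths.filter(p => inputPathRegex.findFirstIn(p.toString).isDefined)

findFirstIn is used rather than full-string matching because the candidate paths are absolute hdfs:// URIs, while the regex only describes the dataset/partition suffix.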
[13:57:49] elukey: match in scala operates with case [13:58:02] so "x match y" makes no sense in scala [13:58:02] mornin' yall [13:58:05] heya milimetric [13:58:26] you'd write: "x match { case y => bkah } [13:59:18] Then scala allows for shortcuting "x match" at the beginning of function [13:59:25] MAkes sense? [13:59:27] I thought I tried it with spark-shell and it seemed to work (filtering a list) [13:59:43] anyhow, it makes few sense, again need to study :D [13:59:44] thanks! [13:59:49] :) [14:00:24] milimetric: o/ - I saw your invite but today was really busy, still working on Kafka Jumbo and had no time to check debezium :( [14:00:38] oh ok elukey [14:00:50] we can talk when you have time, I'll just focus on the sqoop code then [14:01:54] thanks a lot, sorry :( [14:05:36] 10Analytics-Cluster, 10Analytics-Kanban, 10Patch-For-Review: Add IPv6 addresses for kafka-jumbo hosts - https://phabricator.wikimedia.org/T185262#3952463 (10elukey) Looks good! ``` elukey@neodymium:~$ sudo cumin 'kafka-jumbo100*' 'ip addr | grep inet6' 6 hosts will be targeted: kafka-jumbo[1001-1006].eqiad.... [14:05:46] 10Analytics-Cluster, 10Analytics-Kanban, 10Patch-For-Review: Add IPv6 addresses for kafka-jumbo hosts - https://phabricator.wikimedia.org/T185262#3952465 (10elukey) a:05Ottomata>03elukey [14:06:41] hewo? [14:07:29] ottomata: hello! I wanted to add ipsec today but ipv6 was needed first, so I did it with Brandon [14:07:38] all good [14:07:39] milimetric: elukey i got a meeting scheduled [14:07:41] oh great! [14:07:46] yall coming to your meeting? [14:08:09] hah elukey https://gerrit.wikimedia.org/r/#/c/405891/ [14:08:10] :) [14:08:31] saw it only later on sorry :( [14:08:35] np [14:08:44] you guys invited me to a meeting now, ya? [14:09:12] I thought it was only me and Dan, sorry [14:09:28] I can join if you guys want but didn't prepare myself, was a bit busy today [14:09:33] milimetric: what do you think? [14:09:41] oh ok, no prob, i just joined cause i was invited! [14:09:52] I donno [14:10:02] I'm not prepared either [14:10:24] can't really prepare, we're talking about what our best guess is going forward [14:10:48] yeah I meant simply reading what it is that we are talking about :D [14:10:48] but if you'd rather read up a bit first, then definitely do that, and set up a meeting when you have time elukey [14:11:28] in general, when we set up these informal meetings, just move them if you need, they're editable [14:11:45] milimetric: if you have patience to give me an introduction I can definitely try to nod and simulate to have something working in my brain :) [14:12:30] nah, I'm just reading about it too, but I'm not even sure what exactly the criteria is that I'm supposed to use to evaluate what I'm reading [14:12:40] like, it's mostly ops considerations [14:12:53] maybe IRC / async is better for now, actually [14:13:08] ok, so I think we have to answer two questions at the same time [14:14:46] Question 1: our users want both raw data and clean OLAP data, how much do we want to be involved in cleaning and generating OLAP data, and how much do we want to leave up to them? This is very complicated. [14:16:06] Question 2: considering Question 1, how do we want to import mediawiki database data into our world? Real-time? Batch? Lambda? Lambda + cleaning? Tungsten? [14:16:43] elukey + ottomata: is that a fair formulation of what we need to talk about? 
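To make joal's match/case explanation from earlier in the afternoon concrete (the 13:57-14:00 exchange above), a small self-contained Scala example — illustration only, not refinery code. It shows the explicit "x match { case ... }" form, and the shorthand where a block of cases is itself a function, so the leading "x match" can be dropped when the cases are passed to collect or filter.

    object MatchCaseSketch {
      def main(args: Array[String]): Unit = {
        val hourlyRegex = raw"(\w+)/hourly/(\d+)/(\d+)/(\d+)/(\d+)".r
        val paths = Seq(
          "netflow/hourly/2018/02/03/07",
          "event/hourly/2018/02/03/07",
          "not-a-partition"
        )

        // Explicit form: "x match { case y => ... }"
        def datasetName(path: String): Option[String] = path match {
          case hourlyRegex(name, _, _, _, _) => Some(name)
          case _                             => None
        }
        println(paths.map(datasetName))
        // -> List(Some(netflow), Some(event), None)

        // Shorthand form: the case block is a function by itself, so "x match"
        // is omitted when handing it to collect (or, with a Boolean body, filter).
        println(paths.collect { case hourlyRegex(name, _, _, _, _) => name })
        // -> List(netflow, event)
      }
    }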
^ [14:17:39] once we're on the same page with that, we need to pick the best technology considering mostly operations and documentation [14:18:08] because no matter what we do, even heavy OLAP cleaning, the dev effort is temporary [14:18:17] elukey: from he look of it, druid1005's overlord was misbehaving - indexing task is still alive, hadoop job moves forward, looking good [14:19:48] joal: ack! [14:20:52] milimetric: I have some time now, bc? [14:21:19] omw [14:21:41] elukey: in da cave [14:24:12] ok i'm coming too [14:30:19] 10Analytics-Tech-community-metrics, 10Developer-Relations (Jan-Mar-2018): Automatically sync mediawiki-identities/wikimedia-affiliations.json DB dump file with the data available on wikimedia.biterg.io - https://phabricator.wikimedia.org/T157898#3952539 (10Qgil) This task has high priority but it is not assign... [14:39:07] 10Analytics-Tech-community-metrics, 10Developer-Relations (Jan-Mar-2018): Automatically sync mediawiki-identities/wikimedia-affiliations.json DB dump file with the data available on wikimedia.biterg.io - https://phabricator.wikimedia.org/T157898#3952599 (10Aklapper) a:03Acs AFAIK Albertinisg left. The plan i... [14:40:02] 10Analytics-Tech-community-metrics, 10Developer-Relations (Jan-Mar-2018): Provide Hatstall to fix syncing / updating of identities on the wikimedia.biterg.io production instance - https://phabricator.wikimedia.org/T157898#3952606 (10Aklapper) [14:42:33] 10Analytics-Tech-community-metrics, 10Developer-Relations (Jan-Mar-2018): Provide Hatstall to fix syncing / updating of identities on the wikimedia.biterg.io production instance - https://phabricator.wikimedia.org/T157898#3952616 (10Aklapper) [14:54:50] err... here's how Yelp does it, some custom python binlog replicator: https://engineeringblog.yelp.com/2016/08/streaming-mysql-tables-in-real-time-to-kafka.html [14:55:02] not sure about that, python is kinda slow [14:56:41] fdans: you going to scrum of scrums today? [14:58:29] milimetric: yessss [14:58:49] awesome, thx [15:03:16] 10Analytics-Tech-community-metrics, 10Developer-Relations (Jan-Mar-2018): One account (in "gerrit_top_developers" widget) counted as two accounts (in "gerrit_main_numbers" widget) - https://phabricator.wikimedia.org/T184741#3893944 (10Qgil) This task is not assigned to anyone. Is it committed to this quarter? [15:10:37] joal: you there? [15:10:55] what do you think about having the cassandra standup on Tue same time? [15:11:12] I'd like to send an email to Eric and Filippo about it [15:14:00] ottomata: https://gerrit.wikimedia.org/r/#/c/408790/ - what do you think about it? [15:17:08] elukey: not sure i understand [15:21:10] good, I probably wrote a stupid patch [15:21:26] haha [15:21:32] well i don't really understand the problem [15:21:41] why is it matching on hdfs://analytics-hadoop/wmf/data/raw/event/hourly/2018/02/03/07 [15:21:42] and [15:22:04] OHHH [15:22:08] because there are dirs in there [15:22:09] rigiht [15:22:10] ok [15:22:11] hm [15:22:12] yeah! [15:22:16] but it throws an exception? [15:22:31] event is the first one, tries to apply the regex and raise the exception afait [15:22:36] *afaict [15:22:39] huh, interesting [15:22:49] strange that a non match is an exception.. 
[15:23:31] elukey: move the filter up to line 351 [15:23:35] OH [15:23:36] but you can't [15:23:39] because the regex matches the whole hting [15:23:41] growl [15:23:59] that kinda sucks, because it is going to run the regex against all subdirs + hourly paths since [15:24:01] h [15:24:04] hmm [15:24:36] not if we reduce the subdirs path no? [15:24:41] (what I am trying to do ) [15:25:01] ahh the regex yes [15:25:06] it will run on all [15:25:09] partitionPathsSince generates [15:25:09] yeah [15:26:05] elukey: it hink your patch is good but maybe.... we should put netflow into its own dir? [15:26:18] this is the first time we've tried to do it like this, use camus and json refine to import from a more top level spot [15:26:28] all other camus jobs import topics into their own dirs [15:26:35] maybe we should have a /wmf/data/raw/ops [15:26:36] ? [15:26:37] dir? [15:27:14] but then say we put another "opsy" topic in there, not related to netflow, it would cause the same issue right? [15:27:27] no [15:27:31] hmm [15:27:32] no [15:27:39] as long as they can be part of the same json refine job [15:27:39] hm [15:27:59] yeah it'd have to be json, and it would have to be the same partition scheme, and have to be refiend to the same output path & hive database [15:28:10] well, hm [15:28:10] no [15:28:11] well [15:28:12] yes. [15:28:12] hm [15:28:14] hah [15:28:15] haha [15:28:28] ahahhaha [15:28:35] (03CR) 10Ottomata: [C: 031] Update RefineTarget inputBasePath matches [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/408790 (https://phabricator.wikimedia.org/T181036) (owner: 10Elukey) [15:33:15] oh ops sync [15:33:16] elukey: should we? [15:36:02] ah sorry! [15:36:06] I was doing coffee [15:36:19] joining [15:56:54] ottomata: Question about event.meta.id, “The unique ID of this event; should match the dt field.” —Is this actually enforced? I’m planning to use Python’s built-in uuid.uuid1(), which does the timestamp math internally. Do I have to extract the UUID’s timestamp to make sure meta.dt matches? [16:00:41] a-team, I'm in a call with Lenovo, my computer needs repair, will be a bit late to stand-up... sorry [16:01:14] I feel you mforns, I gotta take mine in, and leave it there for a whole DAY!!! [16:01:22] mmmmm [16:07:34] awight: looks like it is not enforced exactly! [16:07:34] https://github.com/wikimedia/mediawiki-extensions-EventBus/blob/master/EventBus.php#L320-L321 [16:07:52] hehe okay then, I’ll pretend I didn’t see the comment. [16:07:58] so i suppose if you generate the uuid1 and the dt at about the same time, that's good enough [16:08:16] 10Analytics-Kanban, 10Discovery-Analysis, 10MobileApp, 10Wikipedia-Android-App-Backlog, 10Patch-For-Review: Bug behavior of QTree[Long] for quantileBounds - https://phabricator.wikimedia.org/T184768#3952942 (10Nuria) Ok, i think i figured out how is all this plugged together, i need to do a bit more tes... [16:09:27] 10Analytics-Cluster, 10Analytics-Kanban: Move webrequest varnishkafka and consumers to Kafka jumbo cluster. - https://phabricator.wikimedia.org/T185136#3952945 (10Ottomata) [16:09:38] 10Analytics-Cluster, 10Analytics-Kanban: Move webrequest varnishkafka and consumers to Kafka jumbo cluster. 
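On awight's meta.id / meta.dt question above: the producer under discussion uses Python's uuid.uuid1(), but to keep the examples here in one language this Scala/JVM sketch shows the underlying point — a version-1 UUID embeds its own 100-nanosecond timestamp, so if id and dt are generated at about the same time they agree to within clock precision, without any explicit enforcement. The UUID string, the dt value and the helper names are made up; the offset constant is the standard gap between the UUID epoch (1582-10-15) and the Unix epoch.

    import java.time.{Duration, Instant}
    import java.util.UUID

    object Uuid1TimestampSketch {
      // 100-ns intervals between 1582-10-15 (UUID epoch) and 1970-01-01 (Unix epoch).
      val GregorianToUnixOffset = 0x01b21dd213814000L

      // Decode the embedded timestamp of a version-1 UUID as a java.time.Instant.
      def uuid1Instant(id: UUID): Instant = {
        require(id.version == 1, "only version-1 UUIDs carry a timestamp")
        Instant.ofEpochMilli((id.timestamp() - GregorianToUnixOffset) / 10000)
      }

      def main(args: Array[String]): Unit = {
        val id = UUID.fromString("9e8d4f6e-0c2a-11e8-842d-0242ac120002") // example v1 UUID
        val dt = Instant.parse("2018-02-07T16:07:00Z")                   // example meta.dt
        println(s"uuid1 timestamp: ${uuid1Instant(id)}")
        println(s"meta.dt:         $dt")
        println(s"skew:            ${Duration.between(uuid1Instant(id), dt)}")
      }
    }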
- https://phabricator.wikimedia.org/T185136#3907314 (10Ottomata) [16:10:47] (03CR) 10Elukey: [C: 032] Update RefineTarget inputBasePath matches [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/408790 (https://phabricator.wikimedia.org/T181036) (owner: 10Elukey) [16:19:50] mforns: "Customers should immediately stop using affected computers, and schedule an appointment to have the laptop inspected for unfastened screws." [16:19:53] jeez! [16:21:25] milimetric, xDDD we should use this https://tinyurl.com/yb976ro3 [16:21:41] accelerated planned obsolescence! [16:21:49] hehehe [16:22:44] lol, yea [16:37:26] 10Analytics-Tech-community-metrics, 10Developer-Relations (Jan-Mar-2018): One account (in "gerrit_top_developers" widget) counted as two accounts (in "gerrit_main_numbers" widget) - https://phabricator.wikimedia.org/T184741#3953136 (10Aklapper) a:03Lcanasdiaz Reflecting https://gitlab.com/Bitergia/c/Wikimedi... [16:40:49] (03CR) 10Milimetric: Launch top pageviews by country on dashboard (031 comment) [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/405708 (https://phabricator.wikimedia.org/T185510) (owner: 10Fdans) [16:42:56] (03PS5) 10Fdans: Launch top pageviews by country on dashboard [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/405708 (https://phabricator.wikimedia.org/T185510) [16:44:09] (03CR) 10Fdans: [V: 032 C: 032] Launch top pageviews by country on dashboard [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/405708 (https://phabricator.wikimedia.org/T185510) (owner: 10Fdans) [16:48:02] elukey: check this out [16:48:04] [@stat1005:/home/otto] $ dhive --date '3 days ago' [16:48:04] year=2018/month=2/day=4/hour=16 [16:48:13] [@stat1005:/home/otto] $ dcamus --date 'jan 1' [16:48:13] 2018/01/01/00 [16:48:27] just some sneaky date aliases [16:48:29] is it an alias? [16:48:30] wowwww [16:48:30] ya [16:53:54] elukey: +1 for asking to move cassandra-standup [16:54:41] elukey, ottomata: I'd also ask if we could move that ops-sync maybe after standup? On wednesdays I really can't make it before 18:00 [16:56:16] elukey: I confirm a successful indexation :) hanks for having restarted the beast [16:56:28] yeahhh [16:58:33] joal: ayyy elukey and I accidentally did it today at 10:30 [16:58:42] ??? [16:58:52] (1.5h ago) i could do that, but often standup goes over Aa [16:58:58] true [16:59:05] hm [16:59:43] 10Analytics: Record and aggregate page previews - https://phabricator.wikimedia.org/T186728#3953218 (10Tbayer) [17:03:20] 10Analytics: Record and aggregate page previews - https://phabricator.wikimedia.org/T186728#3953218 (10Tbayer) [17:04:59] 10Analytics-Kanban, 10Analytics-Wikistats, 10Patch-For-Review: UI not querying daily granularity for 3-month and 1-month periods - https://phabricator.wikimedia.org/T186075#3953254 (10Nuria) [17:13:27] Heya nuria_ - I notice you have 3 spark-shells running in parallel on the cluster - They use small resources, but I wonder if it was intentional [17:14:40] joal: i have two intentionally yes, but can shut those down [17:15:00] nuria_: no problem of having mutiples - just wondering :) [17:22:31] 10Analytics, 10Analytics-EventLogging: uBlock blocks EventLogging - https://phabricator.wikimedia.org/T186572#3948086 (10Legoktm) This was discussed back in December 2014, see the thread starting with https://lists.wikimedia.org/pipermail/analytics/2014-December/002864.html At the time the consensus was that... 
[18:03:10] 10Analytics-Tech-community-metrics, 10Upstream: Make Perceval not index IP address accounts (and consider removing MediaWiki accounts from our DB) - https://phabricator.wikimedia.org/T149482#3953444 (10Aklapper) 05Open>03declined I changed my mind and I don't think this is needed or feasible, now that we p... [18:03:51] ok, logging off now, gonna go have this laptop looked at [18:04:04] text me or ping me on hangouts if you need me [18:08:14] 10Analytics-Tech-community-metrics, 10Developer-Relations: Consider enabling GitHub backend in wikimedia.biterg.io to cover canonical Wikimedia repositories not in Gerrit - https://phabricator.wikimedia.org/T186736#3953477 (10Aklapper) 05Open>03stalled p:05Triage>03Lowest [18:08:26] 10Analytics-Tech-community-metrics, 10Developer-Relations: Consider enabling GitHub backend in wikimedia.biterg.io to cover canonical Wikimedia repositories not in Gerrit - https://phabricator.wikimedia.org/T186736#3953477 (10Aklapper) [18:19:26] 10Analytics, 10Operations, 10Research, 10Traffic, and 6 others: Referrer policy for browsers which only support the old spec - https://phabricator.wikimedia.org/T180921#3953540 (10Nuria) Will do, let's let it bake a bit and i shall check. [18:43:35] (03PS1) 10Fdans: Release 2.1.7 [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/408846 [18:44:22] (03CR) 10Fdans: [V: 032 C: 032] Release 2.1.7 [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/408846 (owner: 10Fdans) [18:48:00] (03Merged) 10jenkins-bot: Release 2.1.7 [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/408846 (owner: 10Fdans) [18:50:36] (03PS1) 10Fdans: Release 2.1.7 [analytics/wikistats2] (release) - 10https://gerrit.wikimedia.org/r/408847 [18:51:18] (03CR) 10Fdans: [V: 032 C: 032] Release 2.1.7 [analytics/wikistats2] (release) - 10https://gerrit.wikimedia.org/r/408847 (owner: 10Fdans) [18:52:31] (03PS1) 10Milimetric: [WIP] Saving in case laptop catches on fire, nothing interesting yet [analytics/refinery] - 10https://gerrit.wikimedia.org/r/408848 [19:15:57] ok team, clickstream finished - cluster catchup is done for this month [19:16:12] we're ready to go next week with j8 [19:31:22] \o/ [19:32:31] :D [19:51:14] fdans: yt? [19:56:21] 10Analytics-Tech-community-metrics, 10Developer-Relations (Jan-Mar-2018): Have "Last Attracted Developers" information for Gerrit (already exists for Git) automatically updated - https://phabricator.wikimedia.org/T151161#3953917 (10Aklapper) a:05Dicortazar>03Lcanasdiaz [20:02:44] nuria_: yep! just back from lunch [20:19:00] 10Analytics, 10Operations, 10hardware-requests, 10ops-eqiad: Decommission kafka1018 - https://phabricator.wikimedia.org/T182955#3953998 (10RobH) All decommissioning should be tagged with #hw-requests. [20:24:54] fdans: let's talk about deployment in a bit [20:30:35] nuria_: 5pm pst? [20:39:58] 10Analytics, 10Operations, 10hardware-requests, 10ops-eqiad: Decommission kafka1018 - https://phabricator.wikimedia.org/T182955#3954040 (10RobH) a:03RobH [20:45:01] 10Analytics, 10Operations, 10hardware-requests, 10ops-eqiad: Decommission kafka1018 - https://phabricator.wikimedia.org/T182955#3954043 (10RobH) So I cannot see kafka1018 on the switch stack in row D. @Cmjohnson, I cannot actually finish the non-interrupt steps, since the port isn't noted. The host is cu... 
[20:54:16] 10Analytics, 10Operations, 10hardware-requests, 10ops-eqiad: Decommission kafka1018 - https://phabricator.wikimedia.org/T182955#3954057 (10RobH) a:05RobH>03Cmjohnson [20:55:27] 10Analytics, 10Operations, 10hardware-requests, 10ops-eqiad: Decommission kafka1018 - https://phabricator.wikimedia.org/T182955#3839641 (10RobH) Ok, ready for on-site wipe and unracking (plus the tracing and disabling of the switch port) [21:00:08] nuria_: wrong time, i meant 1pm pst [21:00:23] i added instead of substracting from central time [21:04:57] fdans: I think there are couple ux issues that need fixing [21:05:02] fdans: sorry [21:05:05] fdans: for example [21:05:14] https://usercontent.irccloud-cdn.com/file/6PqYOhGI/Screen%20Shot%202018-02-07%20at%2012.23.37%20PM.png [21:05:37] table download does looks strange with that formatting [21:06:36] and also there is a split nob but you cannot split by any dimension [21:06:54] https://usercontent.irccloud-cdn.com/file/xTSLMXkH/Screen%20Shot%202018-02-07%20at%2012.23.44%20PM.png [21:07:06] let me know what you think fdans [21:07:42] nuria_: the split is a thing with top metrics, which don't have breakdowns [21:08:27] we could hide filter and split for tops by default, I agree that it looks a bit odd [21:08:39] fdans: ok, let's do that [21:08:43] nuria_: by table download you mean the csv? [21:09:13] fdans: more the screen view of it: https://usercontent.irccloud-cdn.com/file/6PqYOhGI/Screen%20Shot%202018-02-07%20at%2012.23.37%20PM.png [21:09:20] oh right [21:09:38] yeah there's no filter applied to the numbers [21:10:04] or there is one, but it isn't applying the thousands [21:30:05] (03PS1) 10Joal: [WIP] Update sqoop-mediawiki-tables script [analytics/refinery] - 10https://gerrit.wikimedia.org/r/408930 (https://phabricator.wikimedia.org/T186541) [21:55:11] 10Analytics-Kanban, 10Patch-For-Review: Launch top per country pageviews on UI - https://phabricator.wikimedia.org/T185510#3954256 (10Nuria) {F13224007} Please see screenshot i think table view needs a bit of work with formatting [21:58:32] 10Analytics: Estimate how long a new Dashiki Layout for Qualtrics Survey data would take - https://phabricator.wikimedia.org/T184627#3954272 (10Nuria) Sounds good. [22:46:01] bye team! cya [23:43:20] fdans: see my e-mail on localization , for it to be applied you need to have the english locale set: numeral.options.currentLocale = "en" [23:43:31] fdans: I will submit a patch