[08:08:29] 10Analytics, 10ChangeProp, 10Citoid, 10ContentTranslation-CXserver, and 10 others: Node 6 upgrade planning - https://phabricator.wikimedia.org/T149331#2784925 (10Arrbee) [09:42:13] * elukey reads russian comments in https://github.com/yandex/ClickHouse/blob/master/debian/rules [09:42:36] * elukey blames joal [09:42:37] :D [09:44:02] * elukey restarts druid for openjdk updates [10:37:19] Hi elukey :) [10:37:32] o/ [10:37:36] Sorry for the russian [10:37:42] ahahah Moritz found it [10:38:05] so Clickhouse is writte in cpp! [10:38:09] *written [10:38:32] yeah, russian stuff: no bells and whistles on UI, but definitely performant ;) [10:54:27] joal: https://gerrit.wikimedia.org/r/#/c/320574/ is going to be merged [10:54:43] it will be a no-op but please let me know if you see any issue [11:11:26] elukey: I don't think of any issue [11:11:46] elukey: I am not aware of any non-analytics node using hive-server [11:13:33] looks all fine, no dropped traffic in the logging rules I enabled to double-check [11:14:20] super thanks! [11:53:42] AFK for some time [12:01:58] AFK as well! [12:42:04] 10Analytics-General-or-Unknown, 10Graphite, 06Operations, 06Performance-Team, 07Wikimedia-Incident: statsv outage on 2016-11-09 - https://phabricator.wikimedia.org/T150359#2785460 (10elukey) Thanks for pinging me, I didn't notice the problem since (afaics from #wikimedia-operations) statsv on hafnium did... [14:38:29] 10Analytics-Tech-community-metrics: Data on korma not getting updated - https://phabricator.wikimedia.org/T149984#2785707 (10Aklapper) 05Open>03Resolved Seems to have fixed itself somehow. There are updates again in https://github.com/Bitergia/mediawiki-identities/commits/master/wikimedia-affiliations.json... [14:46:30] 06Analytics-Kanban, 13Patch-For-Review: Find a strategy for integration testing - https://phabricator.wikimedia.org/T147442#2785744 (10elukey) I tried to come up with a simple prototype to automate the checks that I usually do, and the final result was good in my opinion. The tool is not generic and a lot of t... [15:25:54] 10Analytics-General-or-Unknown, 10Graphite, 06Operations, 06Performance-Team, 07Wikimedia-Incident: statsv outage on 2016-11-09 - https://phabricator.wikimedia.org/T150359#2785852 (10Krinkle) @elukey @Ottomata Also see the syslog paste in the task description. The log ends with the exception that, I beli... [15:27:09] 10Analytics-General-or-Unknown, 10Graphite, 06Operations, 06Performance-Team, 07Wikimedia-Incident: statsv outage on 2016-11-09 - https://phabricator.wikimedia.org/T150359#2785853 (10elukey) Yes I added the comment about the statsv source code line after checking that one :) [15:28:45] hi ottomata :], batcave or other? [15:29:17] err, batcave, so we don't miss standup [15:46:42] 10Analytics, 10EventBus, 10Wikimedia-Stream: Improve tests for KafkaSSE - https://phabricator.wikimedia.org/T150436#2785920 (10Ottomata) [15:48:47] 10Analytics, 10EventBus, 10Wikimedia-Stream: Tests for swagger spec stream routes in EventStreams - https://phabricator.wikimedia.org/T150439#2785967 (10Ottomata) [15:52:22] 10Analytics, 10EventBus, 10Wikimedia-Stream, 10service-template-node, 06Services (watching): Tests for swagger spec stream routes in EventStreams - https://phabricator.wikimedia.org/T150439#2785992 (10mobrovac) [16:01:39] ottomata: standdduppppp [16:01:47] ottomata, elukey [16:02:33] !! 
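(A rough sketch of what a test build of ClickHouse from that upstream debian/ directory could look like, ahead of the "Pkg work, build for our OS distro" packaging task discussed later; the branch, the build-dependency handling and the later apt import step are assumptions, not something agreed in this log:)

    # Fetch upstream, which already ships the debian/rules elukey was reading
    git clone https://github.com/yandex/ClickHouse.git && cd ClickHouse
    # Install the build dependencies declared in debian/control (assumes devscripts/equivs are available)
    sudo mk-build-deps --install --remove debian/control
    # Build unsigned binary packages against the local distro
    dpkg-buildpackage -us -uc -b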
[16:02:33] 06Analytics-Kanban, 10EventBus, 10Wikimedia-Stream, 13Patch-For-Review: Productionize and deploy Public EventStreams - https://phabricator.wikimedia.org/T143925#2786019 (10Ottomata) (Changed my mind again, let's discuss the domain vs path vs nginx vs varnish routing stuff here.) Over in https://gerrit.wik... [16:27:49] 06Analytics-Kanban, 10EventBus, 10Wikimedia-Stream, 13Patch-For-Review: Productionize and deploy Public EventStreams - https://phabricator.wikimedia.org/T143925#2786146 (10BBlack) We're definitely not doing the custom nginx path-routing thing. It's just too much of an edge-case, and I don't want to have t... [16:39:21] 10Analytics: Puppetize clickhouse - https://phabricator.wikimedia.org/T150343#2786177 (10Nuria) clickhouse is a columnar datastore that we are using as an aid to run complex SQL queries on the edit data "lake" that we have as a result of the edit reconstruction project. It is similar to Druid but faster for co... [16:41:36] 10Analytics: Puppetize clickhouse - https://phabricator.wikimedia.org/T150343#2786178 (10Nuria) Pkg work, build for our OS distro, import pkgs uploading packages to wikimedia puppet code Test deployment [16:43:58] 06Analytics-Kanban: Puppetize clickhouse - https://phabricator.wikimedia.org/T150343#2782769 (10Nuria) [16:50:18] 10Analytics, 10EventBus, 10Wikimedia-Stream: Improve tests for KafkaSSE - https://phabricator.wikimedia.org/T150436#2786187 (10Nuria) Several Options: - kafka broker tests could be run just locally - mocks could be added to test jenkins with kafka Or: - we could beef up jenkins kafka setup [16:51:37] 06Analytics-Kanban, 10EventBus, 10Wikimedia-Stream: Improve tests for KafkaSSE - https://phabricator.wikimedia.org/T150436#2785920 (10Nuria) [16:54:02] 06Analytics-Kanban: (subtask) Marcel's standard metrics - https://phabricator.wikimedia.org/T150024#2771937 (10Nuria) - writing metrics to denormalize table according to Reserach's description - vetting data - iterate and troubleshoot per metric [17:07:50] joal; deploying cluster then? [17:07:58] Joal; let me dig those wikis [17:10:02] joal: we are deploying just refinery source, correct? [17:17:22] (03PS1) 10Nuria: Changelog v0.0.37 [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/320797 [17:19:02] joal/ottomata : chnagelog: https://gerrit.wikimedia.org/r/#/c/320797/ [17:19:14] nuria: deploying refinery source, then refinery (for jars to be on cluster), then restart load job [17:19:53] actually before deployng refinery, upgrading jar version number in load conf [17:19:58] Then deploy refinery [17:20:02] then restart load [17:20:43] nuria: only that iOS patch ... 
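(A condensed sketch of the deploy sequence joal just outlined — refinery-source first, then the jar-version bump in the load job config, then refinery via scap and an HDFS sync, then the load bundle restart; apart from the paths that also appear later in this log, the commands and the previous version number are illustrative assumptions:)

    # 1. Release refinery-source (changelog 0.0.37 above), producing the new jars
    # 2. Bump the jar version the load job points at (property format is an assumption)
    sed -i 's/0\.0\.36/0.0.37/' oozie/webrequest/load/bundle.properties
    # 3. Deploy refinery from tin, then sync the artifacts to HDFS
    #    (the HDFS step is logged further down as "deployed v0.0.37 of refinery to hdfs")
    scap deploy 'Refinery update to 0.0.37'
    # 4. Kill and resubmit the webrequest load bundle so new runs pick up the new jar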
How sad ;) [17:21:24] (03CR) 10Joal: [C: 032 V: 032] "LGTM" [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/320797 (owner: 10Nuria) [17:22:42] mforns / joal: there's an interesting finding: the denorm table is a bit slower for some of my queries, especially surviving new editors [17:22:52] the reason is that the raw sqooped data is partitioned by wiki [17:23:00] milimetric, yes happened to me as well [17:23:05] with several queries [17:23:07] if we plan to use hive long term, it may make sense to partition denorm [17:23:14] or to think on it more [17:23:17] milimetric: actually denormalized should be slower for almost every given that fact [17:23:40] yeah, it's slower for most, some it's pretty fast when looking at only a few columns [17:23:45] (thanks parquet) [17:23:49] yup [17:24:20] milimetric: we could keep projet partitioning if needed [17:25:01] were we planning to remove wiki_db partition? [17:25:11] (03Merged) 10jenkins-bot: Changelog v0.0.37 [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/320797 (owner: 10Nuria) [17:26:56] the partition is annoying for other reasons, like if someone did want to run cross-wiki queries, they'd have to deal with a little annoyance [17:27:09] but they could say like "where wiki_db > ''"... I donno [17:27:14] we should ask neil and them [17:32:08] milimetric, mforns : I don't really see bad aspects on partitioning by wiki [17:32:27] milimetric: the little trick for when querying for all, but that's all [17:32:51] milimetric, but if we add other partitions, then the wiki_db WHERE condition is not mandatory any more no? [17:33:28] mforns: other partitions? [17:33:46] joal, I mean partition by other fields as well [17:34:18] hard to know what to partition by, though, maybe event_type? [17:34:23] hm ... Which fields would you view as interesting partitions? event_entity for instance? [17:34:24] that seems like it'd be in most queries [17:34:39] right milimetric, same question [17:34:42] yeah, event_entity and event_type seem logical [17:34:58] ottomata: you there? [17:35:06] even if you do want a query over both of those, you'd want to be explicit about it. Good point marcel! [17:35:08] then I see a concern: to much partitionning - too many small files [17:35:21] * milimetric goes to get fries with sugar sauce and fried chicken [17:35:52] mmmmm... [17:36:15] oh, maybe wiki_db, event_entity, then it's all of the history for each main type for each wiki, that's not too many files [17:36:28] 800 * 5-ish [17:36:52] event_entity by itself may not be enough though, because revision alone is too big [17:39:41] * elukey goes afk! :) [17:40:43] mforns: But adding event_type doesn't help for revision [17:40:56] they all are create ... [17:41:01] joal, totally, hehehe [17:42:54] joal, milimetric, maybe we can add the field 'year' [17:52:17] mforns: we culd [17:52:40] mforns: we need a more detailled analysis of data size per wiki per year per month potentially [17:52:52] joal, aha [17:54:44] mforns: And also a better understanding of our clients requests: no point partitioning by year if most requests use full history [17:54:58] nuria: everything ok on deploy side? [17:54:59] joal, makes total sense [17:55:48] elukey: hiy [17:55:49] was lunchin [17:56:31] is tomorrowa work holiday? milimetric ? 
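(To make the partitioning trade-off above concrete, a minimal sketch of a wiki_db/event_entity-partitioned Parquet layout plus the full-scan trick milimetric mentions; the table and column names are hypothetical, not the team's actual denormalized schema:)

    hive <<'EOF'
    -- Hypothetical layout: one partition per (wiki_db, event_entity), stored as Parquet
    CREATE TABLE mediawiki_history_denorm (
        event_type      STRING,
        event_timestamp STRING,
        user_id         BIGINT
    )
    PARTITIONED BY (wiki_db STRING, event_entity STRING)
    STORED AS PARQUET;

    -- Per-wiki queries prune down to a few partitions/files ...
    SELECT COUNT(*) FROM mediawiki_history_denorm
    WHERE wiki_db = 'enwiki' AND event_entity = 'revision';

    -- ... while cross-wiki queries can use the "wiki_db > ''" trick to satisfy
    -- Hive's strict-mode requirement for a partition predicate
    SELECT COUNT(*) FROM mediawiki_history_denorm
    WHERE wiki_db > '' AND event_entity = 'revision';
    EOF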
[17:56:44] 06Analytics-Kanban, 10EventBus, 10Wikimedia-Stream, 13Patch-For-Review: Productionize and deploy Public EventStreams - https://phabricator.wikimedia.org/T143925#2786471 (10GWicke) From the API product perspective, it would be preferable to integrate event streams into the uniform REST API. The main benefit... [17:58:46] ottomata: yeah, veteran's day [17:58:59] veterans'? [17:59:08] oh [17:59:10] iinteresting! [17:59:17] i might save that day off then [18:08:33]  [18:08:39] oops :) [18:10:52] ottomata, hoal: got an aerror when deploying about no space left on device: [18:10:58] https://www.irccloud.com/pastebin/EvfTa6Fj/ [18:11:19] on 1004? cc ottomata [18:11:20] !! [18:11:21] looking [18:12:09] hm, there is plenty of space.... [18:13:37] nuria: you are deploying from tin? [18:14:20] yes [18:14:25] 1002 worked fine [18:14:41] but ottomata it is not clear where is the no space left on device error, is it? [18:15:08] https://www.irccloud.com/pastebin/L8Zk7KNk/ [18:16:46] ah an27 [18:16:47] sorry [18:16:51] yeah [18:17:00] / is full, looking [18:17:09] weird [18:17:38] nuria: Please log in chan when deploying, for future rememberings ;° [18:17:42] ;) [18:18:03] joal: ah right, will add remainder to wiki [18:19:40] ottomata: ok, let me know [18:19:54] ottomata: and i will retry and update SAL and restart jobs [18:20:21] nuria: i just cleared out some old refinery deploys [18:20:31] looks like we need to expand /srv or something [18:20:42] a refinery deploy is almost 3G [18:20:48] and scap keeps the past 6 deploys [18:20:57] ottomata: k retrying [18:21:05] ottomata: ah i was about to ask that [18:21:13] ottomata: it deletes past deployments then [18:21:26] ja, or something [18:21:37] ottomata: ok now it worked [18:22:07] !log deployed v0.0.37 of refinery-source https://gerrit.wikimedia.org/r/#/c/320797/ [18:22:08] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [18:26:10] Thanks for the log nuria :) [18:26:46] nuria: quick question here : Have you updated the jar version in load job before deploying refinery? [18:34:23] joal: ah no. [18:34:52] joal: i was thinking about that and i still do not see why jobs need to hold on to versioned jars rather than "current" [18:35:46] nuria: for non-backward compatibility I guess [18:35:55] joal:but we never do that [18:36:01] nuria: We discussed that with ottomata long time ago [18:36:13] joal:all our chnages are backwards compatible cc ottomata [18:36:16] *changes [18:36:27] joal: so i see no benefit of having that additional step [18:37:04] joal: i certainly never needed it on a service where you always do backwards compatible changes, i know this is not a service but still [18:37:11] nuria: I prefer to view this as an explicit-versioning more than an extra deploy step [18:37:16] ottomata: could we link to "current" hdfs:///wmf/refinery/current [18:37:32] nuria: We could [18:38:02] joal: but what is the value of that when we log deployments explicitily [18:38:14] joal: and we keep changes backwards compatible [18:38:23] i think the issue is with oozie, and not swapping out things out from under long running jobs [18:38:24] nuria: different jobs use different jar versins for instance [18:38:31] there may be jobs that we (or other teams) have running in oozie [18:38:46] that have been running for many months [18:38:48] joal: is that good? 
doesn't seem like it is [18:39:03] joal: more if scap is deleting deployments and only keeps teh last 6 ones [18:39:07] nuria: I don't see this as bad [18:39:16] i think its better to be explicit about the code that the job is running, rather than swapping things out between runs of that job [18:39:22] nuria: scap != hdfs deploy [18:39:24] nuria: the deployments are in hdfs [18:39:24] yeah [18:39:29] ah true [18:39:34] its just local copies on each node that are being removed [18:40:08] i just do not see any value of doing that a job that has started will not have jars swapped under until a jvm is started again [18:41:15] ottomata: and if all changes are backwards compatible and deployments are logged why do we need the jar versioning? [18:41:38] nuria: the 'jvms' are started over and over [18:41:41] each workflow [18:41:47] instance [18:42:02] oozie reads its job configuration, and then submits a new hadoop job for each run [18:42:06] hm ... I prefer explicit versioning for understanding when things change more easily (at least in my mind) -- But I can't no technical reason for which we couldn't use current [18:42:39] so each new hadoop job picks up the jars specified in the oozie config [18:42:56] if oozie config says current, AND current changes, then the next workflow instance will get the new current [18:43:14] but, we version jars anyway. so there are two levels here [18:43:32] there's the refinery path we use when we submit the jobs. AND there are the refinery jar versions that are in the job properties too [18:44:06] so, both of those might be overkill, dunno. we do occasionally prune old jar versions from refinery [18:44:32] if we happen to deploy a new current that is missing a jar version that a running job is configured to use, AND that job points at refinery current [18:44:35] then its next run will fail [18:44:42] ottomata: We could use linked versions in jobs - no change nor restart needed [18:44:47] that seems a little unlikely to happen, especially if we are careful when we prune jars [18:45:12] yeah, but then we really are swapping out jar versions out from under running jobs [18:45:14] that i don't like [18:45:46] ottomata: different runs of the same job will pull different jars [18:46:08] IF they are configured with refinery current and the linked jar, then yes [18:46:10] ottomata: right? but why is that not desirable? [18:46:28] nuria: say discovery has some oozie job that uses refinery-hive [18:46:31] ottomata: otherwise it seems we do not trust our changes to truly be backwards compatible [18:46:36] and they link to current version [18:46:52] they aren't really, i mean, we change pageview definitions [18:47:00] even if funcitonally its compatible [18:47:12] i'd rather know explicitly what version of the definition a job is using [18:47:56] ottomata: that prevent us from deploying bugfixes with 1 deploy to all running jobs that use it [18:48:06] but, yeah, i don't trust them to be truly backwards compatible, especially since other teams use them [18:48:17] indeed, but it also keeps us from introducing new bugs into jobs we don't control [18:48:17] ottomata: but see, that is what is not true [18:48:51] ottomata: changes are backwards compatible, we take care that they are expect bugs, of course [18:49:01] ottomata: bug fixes are not backwards compatible [18:49:54] i tihnk explicit version dependencies are a good thing. we do that for everything in production. 
we dependencies are uprgraded explicitly, not automatically [18:49:55] ottomata: right now, with the current state of affairs if another job uses our java code and we have a bug they are stucked with it for ever [18:50:15] not forever, but ja, there are manual steps now [18:50:19] nuria: our past few changes have been only bug correction, so right, in those cases no big deal - Maybe at some point we'll want to change API ? [18:50:29] ottomata: ok, i can see that for the dependencies of your application but not the application itself b [18:50:31] nuria: i think it could be ok to use refinery current, but i woudlnt' want to get rid of the explicit jar dependency version in the oozie jobs [18:51:10] joal: additions to api are backwards compatible [18:51:16] joal: just like in a service [18:51:22] joal: deletions are not [18:51:34] but anyways seems that both of you like the system [18:51:43] so we can keep it as it [18:51:53] nuria: It's more about explicit info than anything else I guess [18:52:26] joal: i need to update the jar version in the loads jobs and do the deployment again, correct? [18:52:39] correct nuria [18:52:57] joal: so when we deploy we are updating to a version that it yet doesn't exist [18:53:16] joal: right? [18:53:20] nope [18:53:46] We deploy 2 changes conjointtly [18:53:47] ah sorry no, because it 's teh source version [18:53:51] yes [18:55:14] joal: ok, so i will update the jobs and then i need to redeploy refinery? [18:55:31] joal: so i need to repeat the scap step? [18:55:46] correct nuria [18:55:53] scap steps + HDFS steps [18:56:24] joal: ok [18:56:41] (actually in this specific case, HDFS step is not needed since the change only involves a oozie propertie file that is read locally, but I'd rather have every repo in sync) [19:00:19] (03PS1) 10Nuria: Bumping up refine job to version 37 [analytics/refinery] - 10https://gerrit.wikimedia.org/r/320820 [19:02:45] ottomata: chnaging job https://gerrit.wikimedia.org/r/320820 [19:05:49] (03CR) 10Ottomata: [C: 032 V: 032] Bumping up refine job to version 37 [analytics/refinery] - 10https://gerrit.wikimedia.org/r/320820 (owner: 10Nuria) [19:27:39] nuria: things ok? [19:42:53] joal: yes, doing tin deploy now [19:50:45] nuria: I'll dicsonnect soon, do you need anything before I leave? [19:50:58] joal: no, i do not think so, thank you [19:51:28] k nuria, don't forget to kill/relaunch oozie load bundle :) [19:58:26] joal: ya that is the last thing, hdfs deployment just finished [19:58:37] cool :) [19:58:42] nuria: log log log ;) [19:58:50] :D [19:59:09] !log deployed v0.0.37 of refinery to hdfs [19:59:10] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [20:06:49] 10Analytics, 10Beta-Cluster-Infrastructure: Set up a fake Pageview API endpoint for the beta cluster - https://phabricator.wikimedia.org/T150483#2787190 (10Tgr) [20:10:15] Disconnecting a-team, see you online next week [20:10:25] bye joal :] [20:10:59] mforns: If you want to compute some more metrics, go with clickhouse ;) [20:11:07] 10Analytics, 10Beta-Cluster-Infrastructure, 10PageViewInfo: Deploy WikimediaPageViewInfo extension to beta cluster - https://phabricator.wikimedia.org/T129602#2787212 (10Tgr) 05Open>03Resolved a:03Tgr The extension has been deployed; I split the part about setting up a mock Pageview API endpoint to {T1... [20:11:38] joal, can you repeat how to? 
I lost my notes [20:11:48] ssh tunnel [20:12:00] 10Analytics, 10Beta-Cluster-Infrastructure: Set up a fake Pageview API endpoint for the beta cluster - https://phabricator.wikimedia.org/T150483#2787190 (10Tgr) >>! In T129602#2787212, @Tgr wrote: > [using live enwiki data in an extension on the beta cluster] is fine for testing the extension but not useful fo... [20:14:52] ottomata: if i kill the refine job using oozie CLI do i need to restart it ? or will it restart on its own [20:18:55] 10Analytics-General-or-Unknown, 10Graphite, 06Operations, 06Performance-Team, 07Wikimedia-Incident: statsv outage on 2016-11-09 - https://phabricator.wikimedia.org/T150359#2787229 (10Gilles) a:03Gilles [20:22:37] nuria: you'll need to restart it if you kill it [20:22:54] more correctly: resubmit it [20:24:32] ottomata: is this the refine one? https://hue.wikimedia.org/oozie/list_oozie_coordinator/0022558-161020124223818-oozie-oozi-C/? [20:26:59] heh, that one says 'load' [20:28:24] oh hm [20:28:25] hang on [20:28:28] maybe we combined them... [20:29:22] ah we did [20:29:33] nuria: you want to restart the whole bundle, not just hte coordinator [20:29:36] so [20:29:37] this one [20:29:37] https://hue.wikimedia.org/oozie/list_oozie_bundle/0014988-161020124223818-oozie-oozi-B [20:30:10] ottomata: that one is DEAD now [20:30:23] ahem .. i hope i know how to restart [20:30:23] did you kill it? [20:30:25] it just died [20:30:27] yes [20:30:30] ok cool [20:30:30] :) [20:31:45] nuria: lemme know if you need help [20:33:09] ottomata: something like: oozie job -Duser=$USER -Drefinery_directory=hdfs://analytics-hadoop/wmf/refinery/2015-01-07T19.35.18Z--838a594 -run -config /srv/deployment/analytics/refinery/oozie/webrequest/refine/bundle.properties [20:33:12] ottomata: ? [20:34:07] a, but you'll want to override start time [20:34:35] set it to start at the time the one you killed had left off [20:34:38] looks like Thu, 10 Nov 2016 21:00:00 [20:34:55] you also want queue_name production [20:34:57] so [20:35:58] maybe [20:36:13] sudo -u hdfs oozie job \ [20:36:13] -Duser=$USER \ [20:36:13] -Dstart_time=2016-11-04T21:00Z \ [20:36:13] -Dqueue_name=production \ [20:36:13] -Drefinery_directory=hdfs://analytics-hadoop$(hdfs dfs -ls -d /wmf/refinery/2016* | tail -n 1 | awk '{print $NF}') \ [20:36:13] -oozie $OOZIE_URL -run -config /srv/deployment/analytics/refinery/oozie/webrequest/refine/bundle.properties [20:36:18] oop, 11-10 [20:36:22] sudo -u hdfs oozie job \ [20:36:22] -Duser=$USER \ [20:36:22] -Dstart_time=2016-11-10T21:00Z \ [20:36:22] -Dqueue_name=production \ [20:36:22] -Drefinery_directory=hdfs://analytics-hadoop$(hdfs dfs -ls -d /wmf/refinery/2016* | tail -n 1 | awk '{print $NF}') \ [20:36:22] -oozie $OOZIE_URL -run -config /srv/deployment/analytics/refinery/oozie/webrequest/refine/bundle.properties [20:36:52] oh but [20:36:58] the path is now webrequest/load [20:37:03] sudo -u hdfs oozie job \ [20:37:03] -Duser=$USER \ [20:37:03] -Dstart_time=2016-11-10T21:00Z \ [20:37:03] -Dqueue_name=production \ [20:37:03] -Drefinery_directory=hdfs://analytics-hadoop$(hdfs dfs -ls -d /wmf/refinery/2016* | tail -n 1 | awk '{print $NF}') \ [20:37:03] -oozie $OOZIE_URL -run -config /srv/deployment/analytics/refinery/oozie/webrequest/load/bundle.properties [20:37:04] right? 
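(For reference, ottomata's resubmission pasted above as one copy-pastable block with comments; this only restates the command from the log, it is not an independently verified recipe:)

    # Resubmit the webrequest load bundle after the 0.0.37 deploy.
    # start_time resumes where the killed bundle left off (2016-11-10T21:00Z);
    # refinery_directory resolves to the most recently deployed refinery snapshot on HDFS.
    sudo -u hdfs oozie job \
      -Duser=$USER \
      -Dstart_time=2016-11-10T21:00Z \
      -Dqueue_name=production \
      -Drefinery_directory=hdfs://analytics-hadoop$(hdfs dfs -ls -d /wmf/refinery/2016* | tail -n 1 | awk '{print $NF}') \
      -oozie $OOZIE_URL -run \
      -config /srv/deployment/analytics/refinery/oozie/webrequest/load/bundle.properties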
[20:37:35] ottomata: yes [20:37:39] https://www.irccloud.com/pastebin/cmsZXKB5/ [20:38:10] aye [20:42:14] ottomata: i think all is in order: https://hue.wikimedia.org/oozie/list_oozie_bundle/0026850-161020124223818-oozie-oozi-B [20:43:02] ja looks good i think [20:57:36] ottomata: many thanks for your help, will check ios pageviews tomorrow [20:58:03] 10Analytics-Dashiki, 06Analytics-Kanban, 13Patch-For-Review: Migrate from bower to yarn and clean up folder hierarchy - https://phabricator.wikimedia.org/T147884#2787366 (10Nuria) 05Open>03Resolved [20:58:20] 06Analytics-Kanban, 13Patch-For-Review: Update code in refinery-source and refinery to remove old-aqs compatibility - https://phabricator.wikimedia.org/T147841#2787367 (10Nuria) 05Open>03Resolved [20:58:33] 06Analytics-Kanban, 10EventBus, 06Operations, 13Patch-For-Review: setup/install/deploy kafka1003 (WMF4723) - https://phabricator.wikimedia.org/T148849#2787369 (10Nuria) 05Open>03Resolved [20:58:48] 06Analytics-Kanban, 13Patch-For-Review: Totals doesn't show up on first page load in Vital Signs - https://phabricator.wikimedia.org/T150027#2787371 (10Nuria) 05Open>03Resolved [20:59:17] 10Analytics-Cluster, 06Analytics-Kanban, 13Patch-For-Review: Update Kafka MirrorMaker jmxtrans attributes and make MirrorMaker grafana dashboard - https://phabricator.wikimedia.org/T143320#2787372 (10Nuria) 05Open>03Resolved [21:00:15] 10Analytics: Varnishkafka testing framework - https://phabricator.wikimedia.org/T147432#2787375 (10Nuria) [21:00:17] 06Analytics-Kanban, 13Patch-For-Review: Find a strategy for integration testing - https://phabricator.wikimedia.org/T147442#2787374 (10Nuria) 05Open>03Resolved [21:07:13] yup ! :)
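(An equivalent command-line check for the "all is in order" verification done in Hue above — a small sketch using the standard oozie CLI, with the bundle id taken from the Hue URL in the log:)

    # Show the status of the resubmitted bundle and its coordinators
    oozie job -oozie $OOZIE_URL -info 0026850-161020124223818-oozie-oozi-B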