[01:09:13] Analytics, Engineering-Community, MediaWiki-API, Research-and-Data, ECT-July-2015: Metrics about the use of the Wikimedia web APIs - https://phabricator.wikimedia.org/T102079#1469605 (Tgr) [01:09:21] Analytics, MediaWiki-extensions-General-or-Unknown: Track hook usage counts - https://phabricator.wikimedia.org/T106450#1469607 (Tgr) [09:50:23] Analytics-Backlog: Change Icinga graphite alert for EventLogging delay - https://phabricator.wikimedia.org/T106495#1470189 (JAllemandou) NEW [09:52:05] Analytics-Backlog: Sanitize aggregated data presented in VitalSign using K-Anonymity {musk} [8 pts] - https://phabricator.wikimedia.org/T104485#1470196 (JAllemandou) Open>declined [09:53:21] Analytics-Backlog: Sanitize aggregated data presented in VitalSign using K-Anonymity {musk} [8 pts] - https://phabricator.wikimedia.org/T104485#1418485 (JAllemandou) Team has decided not to K-anonymize this granularity of pageview aggregation. No very precise user-related dimension is released, so no anonymi... [09:55:35] Analytics-EventLogging, Analytics-Kanban: Make EventLogging alerts based on Kafka metrics {stag} - https://phabricator.wikimedia.org/T106254#1470209 (JAllemandou) Will this task solve https://phabricator.wikimedia.org/T106495? If so, we should document them as dups. [11:11:04] Analytics: Update http://reportcard.wmflabs.org/ (and Wikistats) with June 2015 dump data - https://phabricator.wikimedia.org/T106505#1470346 (Tbayer) NEW [11:11:34] Analytics: Update http://reportcard.wmflabs.org/ (and Wikistats) with June 2015 dump data - https://phabricator.wikimedia.org/T106505#1470356 (Tbayer) [11:20:20] PROBLEM - Difference between raw and validated EventLogging overall message rates on graphite1001 is CRITICAL 20.00% of data above the critical threshold [30.0] [11:22:21] RECOVERY - Difference between raw and validated EventLogging overall message rates on graphite1001 is OK Less than 15.00% above the threshold [20.0] [11:29:11] Analytics: Update http://reportcard.wmflabs.org/ (and Wikistats) with June 2015 dump data - https://phabricator.wikimedia.org/T106505#1470429 (Tbayer) [11:30:32] Analytics: Update http://reportcard.wmflabs.org/ (and Wikistats) with June 2015 dump data - https://phabricator.wikimedia.org/T106505#1470346 (Tbayer) [14:22:06] Analytics-Tech-community-metrics, ECT-July-2015: Exclude third-party / pulled upstream code repositories from metrics - https://phabricator.wikimedia.org/T103984#1470772 (Dicortazar) According to the work done in {T104845}, I've updated the list of Gerrit repositories. I didn't find some of the ones liste... [14:59:14] Analytics-EventLogging, Analytics-Kanban: Make EventLogging alerts based on Kafka metrics {stag} - https://phabricator.wikimedia.org/T106254#1470843 (Ottomata) Hm, I'm not really sure. I think so, because these metrics would then be based on messages per second in a topic, which we get all at once from J... [15:15:24] ottomata, how much of a PITA is it to set up a hive client on a machine? 
[15:15:53] the search team wants to set up a big 64-bit machine to hammer on experimental approaches to searching against actual logs and I'd like to get a hive client on it if it comes to fruition so I can steer them further towards "store logs in HDFS" :d [15:16:26] Ironholds: hive cliedoes not [15:16:36] uhh [15:16:46] Ironholds: yup, will get better at typing at some point [15:16:49] ahah [15:16:59] Ironholds: hive client is probably not what you're after [15:17:10] hive doesn't work without hadoop :( [15:17:48] Ironholds: We could try and load some search logs on the cluster and show the team how to work with them in hive [15:17:56] hm, uhhh, Ironholds, this would be a machine in prod? [15:18:15] ottomata, I'd like to steer them away from prod precisely for that problem (or..prodlem?) [15:18:16] the hadoop puppet stuff is pretty generic, so they could install a single node hadoop thing. but i don't think that would be a very good test of hive's abilities [15:18:18] i.e., the firewalling [15:18:22] nonono [15:18:31] I'm not saying "set up parallel cluster" [15:18:55] I'm saying "stick machine in analytics cluster so we can access hive data and not break everyone else's scripts by running massively parallelised java on stat1002 to test language detection" :D [15:19:25] by hive client I just mean "this machine'd need a way to run queries against the hadoop cluster otherwise it's pointless and we should just stick it in prod and rely on the tarballed logs and the pain that goes along with it" [15:19:44] Ironholds: hive doesn't run on stat1002 [15:19:47] hive runs on the cluster [15:20:04] doesn't matter if you launch the job from stat1002 or somewhere else [15:20:11] (i mean, it can kiiinda matter, but not really) [15:20:13] as long as it's in the analytics cluster rather than prod? [15:20:27] so the answer is "zero effort it's cluster-wide by default" [15:20:34] i don't understand [15:20:40] okay. Hypothetical. [15:20:51] if you are trying to test hive, and don't want to interfere with others, you will need another hadoop cluster [15:20:59] no, no [15:21:05] alright, let's hypothetical this [15:21:18] or...use case it. [15:21:22] we want to be able to test and experiment with new search processes [15:21:29] for this we need search logs to test against. [15:21:38] We *also* want search logs to live in Hadoop so Oliver's life is easier. [15:21:59] we *also also* don't want to break everyone's stat1002 or 1003 scripts due to us locking the machine with some untested, experimental, java thing [15:22:30] solution: new machine living in the analytics cluster which search can experiment on. They can access the search logs in Hadoop from it, but if they break their local machine with said experimental java, will not make researchers cry. [15:22:41] think of it as stat1013 [15:22:46] ah, sure, ok [15:22:47] that's fine. [15:22:51] cool! [15:23:10] so I'm reading that any machine under those manifests/in that cluster has the ability to access hive by default, as part of the config. Yes? [15:23:11] Ironholds: how are you going to get search logs? Is there a plan for that? that ticket hasn't been updated in a while [15:23:29] I'm going to wait until Dan's back on Thursday and then kick him until he assigns some engineers to it ;p [15:23:34] Ironholds: any machine in the cluster that has the hive::client role applied will have access to hadoop and hive.
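For context on what "access to hadoop and hive" means here: a host with the hive::client role gets the Hive CLI plus the cluster configuration, and any query it submits runs as jobs on the Hadoop cluster, not on the box itself. A minimal sketch of what that looks like from such a machine, assuming the search logs have already been loaded into a Hive table; the database, table, and column names below are hypothetical placeholders, not a real schema.

    # Minimal sketch: submit a Hive query from a host that has the
    # hive::client role applied. The query executes on the cluster,
    # not on this machine.
    import subprocess

    # Hypothetical table of search request logs stored in HDFS.
    query = """
        SELECT query_string, COUNT(*) AS hits
        FROM search.request_logs
        WHERE year = 2015 AND month = 7 AND day = 22
        GROUP BY query_string
        ORDER BY hits DESC
        LIMIT 20;
    """

    # `hive -e` runs a single statement and prints the results to stdout.
    output = subprocess.check_output(['hive', '-e', query])
    print(output.decode('utf-8'))

As ottomata says above, which host launches the query matters very little; what matters is that the host is inside the analytics cluster and carries the client config, which is exactly what the hive::client role provides.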
[15:23:36] should be easy to do [15:23:46] in the meantime I'm putting together the table schema I'd like and pinging it back and forth with erik to make sure we can actually include those fields [15:23:51] because then it's just implementation [15:23:51] Ironholds: if we are going to make MediaWiki send to Kafka directly, which I think is a good idea [15:23:55] we'll need a Kafka PHP client. [15:23:58] yup [15:24:01] and I tried to hack on that at the hackathon [15:24:04] and it is hard [15:24:05] I saw your notes on the ticket and suggested Trey take a look at it [15:24:07] because: hhvm [15:24:08] aye ok [15:24:09] cool [15:24:10] I'll put you two in touch [15:24:28] ok cool [15:26:09] awesome; thankya! [15:28:06] wait, wasn't analytics planning on putting EL data in Hadoop? Does that not involve kafka+mediawiki? [15:28:49] it does, which is one of the reasons i was hacking on it [15:28:59] but it isn't the top priority at the moment, client side is more important [15:30:43] ottomata: standup :) [15:30:57] danke! [15:35:59] Analytics-Kanban: Troubleshoot EventLogging validation alerts - https://phabricator.wikimedia.org/T105167#1470910 (kevinator) [15:36:00] Analytics-Backlog: Change Icinga graphite alert for EventLogging delay - https://phabricator.wikimedia.org/T106495#1470909 (kevinator) [15:49:48] Analytics-Kanban: Troubleshoot EventLogging validation alerts [3 pts] - https://phabricator.wikimedia.org/T105167#1470945 (ggellerman) [15:52:08] Analytics-Backlog, Analytics-EventLogging: Deploy EventLogging on Kafka to eventlog1001 (aka production!) {stag} - https://phabricator.wikimedia.org/T106260#1470949 (Ottomata) [15:52:15] Analytics-Backlog, Analytics-EventLogging: Send raw server side events to Kafka using a PHP Kafka Client {stag} - https://phabricator.wikimedia.org/T106257#1470951 (Ottomata) [15:52:21] Analytics-Backlog, Analytics-EventLogging: Send raw client side events to Kafka using varnishkafka instead of varnishncsa {stag} - https://phabricator.wikimedia.org/T106255#1470952 (Ottomata) [15:52:32] Analytics-Backlog, Analytics-EventLogging: Make EventLogging alerts based on Kafka metrics {stag} - https://phabricator.wikimedia.org/T106254#1470953 (Ottomata) [15:52:37] Analytics-Backlog, Analytics-EventLogging: Regularly purge EventLogging data in Hadoop {stag} - https://phabricator.wikimedia.org/T106253#1470954 (Ottomata) [15:55:47] ottomata, aha [15:57:57] Ironholds: ?
[15:58:34] re: top priority, client-side, etc [16:15:26] ottomata, jgage : sorry for having missed the end of the meeting [16:24:45] np [17:20:11] Analytics-Cluster, Analytics-Kanban, Patch-For-Review: Add Pageview aggregation to Python {musk} [13 pts] - https://phabricator.wikimedia.org/T95339#1471576 (kevinator) [17:20:53] Analytics-Kanban, Analytics-Visualization, Patch-For-Review: Update Vital Signs UX for aggregations {musk} [13 pts] - https://phabricator.wikimedia.org/T95340#1471580 (kevinator) [17:20:54] Analytics-Cluster, Analytics-Kanban: {musk} Pageviews in Vital Signs - https://phabricator.wikimedia.org/T101120#1471579 (kevinator) [17:20:56] Analytics-Kanban, Analytics-Visualization: {Epic} Community reads pageviews per project in Vital Signs {crow} - https://phabricator.wikimedia.org/T95336#1471581 (kevinator) [17:23:59] Analytics-Kanban: Troubleshoot EventLogging validation alerts {oryx} [3 pts] - https://phabricator.wikimedia.org/T105167#1471599 (kevinator) [17:30:47] Analytics-Kanban: Troubleshoot EventLogging validation alerts {oryx} [3 pts] - https://phabricator.wikimedia.org/T105167#1471624 (kevinator) verifying this is done... It's not an EventLogging problem. new task created to adjust Icinga alerts: T106495 [17:30:57] Analytics-Backlog: Change Icinga graphite alert for EventLogging delay - https://phabricator.wikimedia.org/T106495#1470189 (kevinator) [17:30:58] Analytics-Kanban: Troubleshoot EventLogging validation alerts {oryx} [3 pts] - https://phabricator.wikimedia.org/T105167#1471626 (kevinator) Open>Resolved [17:33:24] kevinator: btw the PV forecasting app is up again at https://ewulczyn.shinyapps.io/pageview_forecasting [17:33:51] Analytics-Kanban: Python Aggregator: Solve inconsistencies in data ranges when using --all-projects flag - https://phabricator.wikimedia.org/T106554#1471651 (mforns) NEW a:mforns [17:38:26] Analytics-Kanban: Python Aggregator: Solve inconsistencies in data ranges when using --all-projects flag - https://phabricator.wikimedia.org/T106554#1471670 (kevinator) [17:38:27] Analytics-Cluster, Analytics-Kanban, Patch-For-Review: Add Pageview aggregation to Python {musk} [13 pts] - https://phabricator.wikimedia.org/T95339#1471671 (kevinator) [17:38:45] Analytics-Kanban: Python Aggregator: Solve inconsistencies in data ranges when using --all-projects flag {slug} - https://phabricator.wikimedia.org/T106554#1471672 (kevinator) [17:39:23] Analytics-Kanban: Python Aggregator: Solve inconsistencies in data ranges when using --all-projects flag {slug} - https://phabricator.wikimedia.org/T106554#1471651 (kevinator) [17:40:47] Analytics-Cluster, Analytics-Kanban: Generate test data for Pageview API {slug} [5 pts] - https://phabricator.wikimedia.org/T101785#1471679 (kevinator) The team has decided not to k-anonymize the data delivered for the API because we are not exposing any geolocation and therefore cannot identify any editor. 
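Back to the server-side EventLogging piece discussed at 15:23 and filed as T106257 above: the hard part is that the client has to be PHP (and HHVM-friendly) to live inside MediaWiki, but what it has to do is conceptually small, namely serialize the raw event and produce it to a Kafka topic. Below is a rough Python equivalent using pykafka, purely to illustrate the shape of the work; the broker address, topic name, and event payload are made up, and the real implementation would be PHP.

    import json
    from pykafka import KafkaClient

    # Placeholder broker and topic -- the real values are not in this log.
    client = KafkaClient(hosts='kafka-broker.eqiad.wmnet:9092')
    topic = client.topics['eventlogging-server-side-raw']

    # A made-up raw server-side event, prior to schema validation.
    event = {
        'schema': 'ExampleSchema',
        'revision': 1,
        'wiki': 'testwiki',
        'event': {'action': 'save'},
    }

    # A sync producer blocks until the broker acknowledges the message,
    # which keeps the sketch simple at the cost of throughput.
    with topic.get_sync_producer() as producer:
        producer.produce(json.dumps(event).encode('utf-8'))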
[17:43:20] Analytics-Backlog, Analytics-Cluster: Test storage strategy 3 {slug} [5 pts] - https://phabricator.wikimedia.org/T101788#1471700 (kevinator) [17:43:21] Analytics-Backlog, Analytics-Cluster: Test PostgreSQL as a storage strategy {slug} [5 pts] - https://phabricator.wikimedia.org/T101787#1471701 (kevinator) [17:43:23] Analytics-Cluster, Analytics-Kanban: Generate test data for Pageview API {slug} [5 pts] - https://phabricator.wikimedia.org/T101785#1471699 (kevinator) Open>Resolved [17:43:25] Analytics-Backlog, Analytics-Cluster: Test Cassandra as a storage strategy {slug} [5 pts] - https://phabricator.wikimedia.org/T101786#1471702 (kevinator) [17:46:22] Analytics-Kanban: Vet data in intermediate aggregate {wren} [8 pts] - https://phabricator.wikimedia.org/T102161#1471720 (kevinator) [17:46:33] Analytics-Kanban: Vet data in intermediate aggregate {wren} [8 pts] - https://phabricator.wikimedia.org/T102161#1471721 (kevinator) Open>Resolved [17:47:09] Analytics-Kanban: Bug: puppet not running on wikimetrics1 instance, Vital Signs stale {musk} [5 pts] - https://phabricator.wikimedia.org/T105047#1471723 (kevinator) Open>Resolved [17:54:37] Analytics-Backlog, Analytics-EventLogging: Make EventLogging code mark new tables for purging as default - https://phabricator.wikimedia.org/T106558#1471733 (mforns) NEW [18:12:17] DarTar, allow me to be the first to say "HAH IT'S A SHINY APP TELL ME MORE ABOUT HOW PYTHON IS THE FUTURE" [18:12:23] and also the first to say "ooh that looks really cool" [18:12:46] lolz [18:13:06] are you referring to the PV forecast app? [18:14:56] yup! [18:41:42] madhuvishy: hiyaaa [18:41:49] ottomata: heyy [18:41:58] just the person i was looking for [18:42:04] did you run into any weird unicode issues when working with pykafka? [18:42:13] ummm no not yet [18:42:17] i haven't successfully run it (on analytics1010) yet, and i'm getting error: argument for 's' must be a string [18:42:26] huh [18:42:30] no [18:42:31] i'm pretty sure it's some utf8 encoding thing [18:42:33] what is s [18:42:52] dunno! [18:42:54] i'm trying to run it on analytics1004 [18:42:56] trying to figure that out [18:42:57] yeah? [18:43:03] how goes? [18:43:10] and it's trying to connect to local zookeeper [18:43:19] and failing [18:43:30] what's the right zk url? [18:43:51] conf1001.eqiad.wmnet:2181/kafka/eqiad [18:43:55] will do [18:44:14] aah [18:44:29] madhuvishy: btw, i've edited your code to do [18:44:29] kafka_consumer_args = { [18:44:30] k: str(v) for k, v in kafka_consumer_args.items() [18:44:30] if k in inspect.getargspec(BalancedConsumer.__init__).args [18:44:30] } [18:44:46] ohh i was doing PyKafkaClient [18:44:47] right [18:44:51] that makes sense [18:45:15] AH i think it is consumer group [18:45:59] hm, still problems, will figure this out, lemme know if you get farther than me! [18:47:29] ottomata: okay [18:54:03] off for today lads! [18:54:11] See ya tomorrow :) [18:56:21] laters! [18:58:11] ottomata: what is the consumer group you are using?
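The snippet ottomata pastes just above (18:44) filters an arbitrary configuration dict down to just the keyword arguments that pykafka's BalancedConsumer constructor accepts, coercing every value to a native str on the way through. A self-contained version of the same idea is sketched below; the config keys and values are hypothetical, since the actual EventLogging consumer config is not shown in this log.

    import inspect
    from pykafka.balancedconsumer import BalancedConsumer

    # Hypothetical consumer config, e.g. parsed from an EventLogging URI.
    # Values may arrive as unicode, and some keys are not consumer kwargs.
    kafka_consumer_args = {
        u'consumer_group': u'eventlogging-group',
        u'zookeeper_connect': u'conf1001.eqiad.wmnet:2181/kafka/eqiad',
        u'unrelated_option': u'this key gets dropped',
    }

    # Keep only the keys BalancedConsumer.__init__ actually takes, and
    # wrap each value in str() -- the unicode workaround discussed later
    # in the log (19:04).
    valid_args = inspect.getargspec(BalancedConsumer.__init__).args
    kafka_consumer_args = {
        k: str(v)
        for k, v in kafka_consumer_args.items()
        if k in valid_args
    }

    print(kafka_consumer_args)
    # {'consumer_group': 'eventlogging-group',
    #  'zookeeper_connect': 'conf1001.eqiad.wmnet:2181/kafka/eqiad'}

(inspect.getargspec is the Python 2 era call used in the chat; on current Python 3 the equivalent is inspect.getfullargspec.)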
[18:58:25] mine fails at 2015-07-22 18:46:33,913 (MainThread) Attempting to discover offset manager for consumer group 'eventlogging-group' [19:00:28] madhuvishy: [19:00:36] i suspect that this kafka client does not work with our version of kafka [19:00:47] ottomata: huh [19:00:48] [2015-07-22 19:00:39,337] 2449722969 [kafka-processor-9092-1] ERROR kafka.network.Processor - Closing socket for /2620:0:861:105:8a43:e1ff:fec2:94e8 because of error [19:00:49] kafka.common.KafkaException: Wrong request type 10 [19:00:49] at kafka.api.RequestKeys$.deserializerForKey(RequestKeys.scala:57) [19:00:49] at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:53) [19:00:49] at kafka.network.Processor.read(SocketServer.scala:353) [19:00:49] at kafka.network.Processor.run(SocketServer.scala:245) [19:00:49] at java.lang.Thread.run(Thread.java:745) [19:01:09] ummm [19:02:42] Offset Commit/Fetch API [19:02:42] These APIs allow for centralized management of offsets. Read more https://cwiki.apache.org/confluence/display/KAFKA/Offset+Management. As per comments on https://issues.apache.org/jira/browse/KAFKA-993 these API calls are not fully functional in releases until Kafka 0.8.1.1. It will be available in the 0.8.2 release. [19:02:44] https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommit/FetchAPI [19:02:55] we are on 0.8.1.1 [19:02:58] so, uhhh [19:03:06] that sentence is a little weird [19:03:07] hm. [19:03:09] welp! [19:03:14] oh [19:03:16] upgrading kafka might be a blocker>.>>>.... [19:03:22] :| [19:03:32] it is on the todo, but will take a while [19:03:33] sigh. [19:03:52] welp! [19:03:52] but what's with the consumer group thing? [19:03:59] madhuvishy, that is a unicode thing [19:04:02] aah [19:04:05] wrap any args you pass to pykafka in str() [19:04:13] okayy [19:04:15] will try that [19:04:33] well crap crackers. [19:04:40] ok, i'm going to work on upgrading kafka asap then [19:04:42] yeahh [19:04:56] can't promise much speed. first step is to upgrade servers to jessie using 0.8.1.1 [19:05:23] right [19:05:26] alright, i have to head to lunch, will ping in a bit [19:05:59] k [19:14:53] Analytics, Analytics-Kanban: Too few page views for June/July 2015 - https://phabricator.wikimedia.org/T106034#1472117 (kevinator) Hi @ezachte, I don't think this is an error in the data. Here are some of the combined factors that are causing the drop: * seasonality is causing a downward trend * The HTTP... [19:15:17] Analytics, Analytics-Kanban, Research-and-Data: Too few page views for June/July 2015 - https://phabricator.wikimedia.org/T106034#1472119 (DarTar) [19:18:26] Analytics, Analytics-Kanban, Research-and-Data: Too few page views for June/July 2015 - https://phabricator.wikimedia.org/T106034#1472130 (DarTar) @ezachte, I second @kevinator – we just discussed this briefly – I don't think there's any data loss or anything suggesting we should regenerate PV dumps, b... [19:22:58] Analytics-Cluster, operations: Build new latest stable (0.8.2.1?) Kafka package and upgrade Kafka brokers - https://phabricator.wikimedia.org/T106581#1472141 (Ottomata) NEW a:Ottomata [19:23:37] Analytics-Cluster, operations: Build new latest stable (0.8.2.1?)
Kafka package and upgrade Kafka brokers - https://phabricator.wikimedia.org/T106581#1472162 (Ottomata) [19:23:39] Analytics-EventLogging, Analytics-Kanban, Patch-For-Review: Prep work for Eventlogging on Kafka {stag} - https://phabricator.wikimedia.org/T102831#1472161 (Ottomata) [19:24:21] Analytics-EventLogging, Analytics-Kanban, Patch-For-Review: Prep work for Eventlogging on Kafka {stag} - https://phabricator.wikimedia.org/T102831#1374728 (Ottomata) We want to use pykafka to use BalancedConsumer. This client requires a later version of Kafka than we have. Now we have to figure out... [19:27:47] Analytics-Kanban: Restart Pentaho - https://phabricator.wikimedia.org/T105107#1472166 (Tbayer) [19:29:01] Analytics-Cluster, Ops-Access-Requests, operations: Sudo permissions for hdfs user madhuvishy on analytics-hadoop - https://phabricator.wikimedia.org/T104020#1472168 (RobH) Please note that @madhuvishy already has an existing shell account and has signed L3. As such, this request still requires both... [19:30:34] Analytics-Kanban: Restart Pentaho - https://phabricator.wikimedia.org/T105107#1472176 (Tnegrin) Hi folks -- Tilman has a time sensitive request that he needs the pentaho data for. If we can't restart the interface, can we give him access to the underlying database? thanks, -Toby [19:44:26] be back shortly! [19:48:00] Analytics, Analytics-Kanban, Research-and-Data: Too few page views for June/July 2015 - https://phabricator.wikimedia.org/T106034#1472297 (ezachte) - Seasonality Granted, there is a seasonal component, but 2015 drop is twice largest earlier drop May->Jun MoM May->June for all Wikipedias combined 20... [20:03:15] Analytics, Analytics-Kanban, Research-and-Data: Too few page views for June/July 2015 - https://phabricator.wikimedia.org/T106034#1472421 (ezachte) To expand on that example: http://stats.grok.se/en/201507/London_Bridge receives between 500-800 views per day. Last time I checked about 10 of those wer... [20:12:21] hi madhuvishy. [20:12:29] hey leila :) [20:13:14] so re estimating uniques: I'm reading some prior work, I'll spend a day on it next week to see if we can figure out the error in the current estimates. [20:13:35] we can get to it some time next week, on or after Wednesday, madhuvishy. Does that work? [20:14:01] leila: sure [20:14:08] yes that works [20:14:15] great. :-) [20:15:58] Analytics-Kanban: Restart Pentaho - https://phabricator.wikimedia.org/T105107#1472478 (kevinator) p:Triage>High [20:16:42] hello milimetric. [20:16:52] I'm coming after y'all one by one. ;-) [20:17:34] leila: milimetric is sick today [20:17:45] oh, sorry to hear it kevinator. [20:18:37] kevinator: do you think there is a chance the two of you can reschedule your meeting on July 28 at 9:30am? I need to add milimetric to a meeting that has many parties involved and this is the only time that seems to work for everyone except him [20:18:40] :-( hopefully he's feeling better tomorrow [20:19:52] yes I can reschedule. I'm curious tho... what is the meeting? [20:19:54] akh! forget it kevinator. Toby is OoO on that day. recalculating. ... [20:20:18] the meeting is the opt-out/opt-in discussion for research purposes, kevinator. [20:20:30] ok [20:20:31] kevinator: please don't move your meeting. I'm looking at everyone's schedule again. [20:20:41] sorry for the confusion, didn't see the OoO part. [20:20:41] ok, I have not updated it.
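To make the failure mode from the 18:58-19:05 exchange concrete, here is roughly the BalancedConsumer setup being attempted, as a hedged sketch: the broker address and topic name are placeholders (only the ZooKeeper connect string and consumer group appear in the log), and string arguments are wrapped in str() per ottomata's workaround. Against the 0.8.1.1 brokers the offset-manager discovery step fails, which is why the Kafka upgrade became a blocker; against 0.8.2+ brokers it should work.

    from pykafka import KafkaClient

    # Placeholder broker address -- the actual broker hostnames are not in this log.
    client = KafkaClient(hosts='kafka-broker.eqiad.wmnet:9092')

    # Hypothetical topic name.
    topic = client.topics['eventlogging-raw']

    # str() on every argument: pykafka at this point chokes on unicode args.
    # On Kafka 0.8.1.1 this call is where things fall over: the consumer
    # tries to discover the offset manager (offset commit/fetch API), the
    # broker rejects the request ("Wrong request type 10") and closes the
    # socket. With 0.8.2+ brokers the loop below should receive messages.
    consumer = topic.get_balanced_consumer(
        consumer_group=str('eventlogging-group'),
        zookeeper_connect=str('conf1001.eqiad.wmnet:2181/kafka/eqiad'),
    )

    while True:
        message = consumer.consume()
        if message is not None:
            print(message.offset, message.value)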
[20:37:16] Analytics, Analytics-Kanban, Research-and-Data: Too few page views for June/July 2015 - https://phabricator.wikimedia.org/T106034#1472572 (DarTar) @ezachte see T102431 for more context on the recent HTTPS rollout, I don't think there has been a public-facing report of this data yet. For some countries... [20:39:15] Analytics, Analytics-Kanban, Research-and-Data: Too few page views for June/July 2015 - https://phabricator.wikimedia.org/T106034#1472575 (DarTar) > these data are already from hadoop, but indeed using an old definition, compatible with Domas' initial choices yes, I'm referring to T44259 [21:20:22] ottomata: around? [21:24:16] ye [21:24:17] s [21:24:40] madhuvishy: [21:25:04] ottomata: so i don't think the error i get has anything to do with unicode [21:25:22] https://www.irccloud.com/pastebin/kEJD7xFg/ [21:25:50] ah, you might be right, although i got around that, [21:25:50] _discover_offset_manager [21:26:17] i was also getting pykafka.exceptions.SocketDisconnectedError [21:26:20] along this code path [21:26:38] so i started editing the installed pykafka code and printing stack traces and exceptions [21:26:44] then i started looking at logs on brokers [21:26:51] and saw that invalid protocol number 10 [21:26:52] or whatever [21:27:05] which i'm pretty sure is because our version of kafka doesn't have this in the protocol [21:36:46] ottomata: aah [21:37:02] okayy [22:09:10] Analytics-Cluster, operations: Build new latest stable (0.8.2.1?) Kafka package and upgrade Kafka brokers - https://phabricator.wikimedia.org/T106581#1473094 (Ottomata) [22:09:13] Analytics-Cluster, operations: Build Kafka 0.8.1.1 package for Jessie and upgrade Brokers to Jessie. - https://phabricator.wikimedia.org/T98161#1473093 (Ottomata) [22:09:16] Analytics-Cluster, operations: Build Kafka 0.8.1.1 package for Jessie and upgrade Brokers to Jessie. - https://phabricator.wikimedia.org/T98161#1473099 (Ottomata) [22:17:02] Analytics-Cluster, operations: Build Kafka 0.8.1.1 package for Jessie and upgrade Brokers to Jessie. - https://phabricator.wikimedia.org/T98161#1473136 (Ottomata) I was able to build a 0.8.1.1 Jessie package using Alex's debianization. I did it by adding the noted missing dependency jars to ext/ and cha... [22:28:46] Analytics-Cluster, Fundraising Tech Backlog, operations: Verify kafkatee use for fundraising logs on erbium - https://phabricator.wikimedia.org/T97676#1473174 (awight) @ellery: Ping, we're waiting to see if you think we should care about the statistical differences between udp2log and kafkatee? [22:41:53] Analytics-Cluster, Fundraising Tech Backlog, operations: Verify kafkatee use for fundraising logs on erbium - https://phabricator.wikimedia.org/T97676#1473197 (awight) a:AndyRussG>awight [23:02:45] halfak: around? [23:13:38] good night folks! see you tomorrow [23:29:24] Analytics, Analytics-Cluster, Fundraising Tech Backlog, operations: Verify kafkatee use for fundraising logs on erbium - https://phabricator.wikimedia.org/T97676#1473365 (awight) [23:29:48] Analytics-Backlog, Performance-Team, Release-Engineering, operations, Varnish: Verify traffic to static resources from past branches does indeed drain - https://phabricator.wikimedia.org/T102991#1473368 (ori) To investigate this, @Krinkle and I collected 10 minutes' worth of requests to `poweredby... [23:36:15] o/ Mdhu [23:36:25] *madhuvishiy [23:36:46] madhuvishy, ugh. Typing is hard.
[23:36:53] Had a nap, not booted back up yet [23:37:09] halfak: that's okay, i had some questions, but sending email [23:37:11] Analytics-Backlog, Release-Engineering, operations, Varnish: Verify traffic to static resources from past branches does indeed drain - https://phabricator.wikimedia.org/T102991#1473421 (ori) [23:38:25] madhuvishy, OK. wikilabels related? [23:38:36] halfak: naah. EL [23:41:38] sent [23:45:58] * halfak reads [23:47:17] Analytics-Cluster, operations, ops-eqiad, Patch-For-Review: rack new hadoop worker nodes - https://phabricator.wikimedia.org/T104463#1473511 (Cmjohnson) 1042-1045 have base install w/out puppet certs.