[00:23:17] ori, mediawiki seems not to want to work, is my Vagrant 1.6.5 too old? [00:23:32] ori: i just get Class undefined: Cdb\Writer [00:23:51] with latest master [00:24:09] run vagrant ssh, cd to /vagrant/mediawiki, run 'composer install' [00:25:21] ori: composer eh? nice, done, let me rerun puppet [00:25:47] i'm not sure why this isn't handled better, it's a side-effect of the 'librarization' push [00:27:45] ori: deja vu, i remember when we did this at my prior job but composer did not exist yet [00:29:15] ori: still nothing in the logs but not working, i will ask on dev [01:20:05] (PS2) Bmansurov: Check ownership before adding tag to cohort [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/182391 [01:20:12] (CR) jenkins-bot: [V: -1] Check ownership before adding tag to cohort [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/182391 (owner: Bmansurov) [01:22:02] (PS3) Bmansurov: Check ownership before adding tag to cohort [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/182391 [14:17:34] Ironholds: https://github.com/twitter/AnomalyDetection [14:55:38] milimetric: Can you please do your $MAGIC again? I have been trying for 10 minutes and I cannot join the Batcave :-/ [14:55:49] It's hanging at "Trying to join the call. Please wait ..." [14:55:56] :( [14:55:57] Ha! [14:55:59] Thanks. [14:56:10] it seems your ":(" worked :-) [14:56:24] The batcave is empty (it's early) but now I'm in. [14:56:28] Thanks! [14:57:24] ?? lol, coming in to talk about this [14:57:28] Multimedia, Analytics: Track image context and pass information onto X-Analytics - https://phabricator.wikimedia.org/T85922#959429 (Gilles) It might be possible for thumb.php to serve that header based on a GET parameter, yes, which would avoid any varnish frontend magic. The downside is that we would have mu... [15:32:45] Multimedia, MediaWiki-extensions-MultimediaViewer, Analytics: Make upload.wikimedia.org serve images with Timing-Allow-Origin header - https://phabricator.wikimedia.org/T76020#959586 (Joe) Open>Resolved [15:37:25] nuria, real quick, i am doing two things at once, fixing this graphite thing [15:37:26] but [15:37:30] what about just using dt? [15:37:37] it should hash pretty uniformly, right? [15:37:43] every second, new hash [15:37:54] d? [15:38:11] ottomata: wait dt stands for ....? [15:38:16] Analytics-EventLogging: Missing extension page - https://phabricator.wikimedia.org/T43378#959657 (Aklapper) [15:39:11] Analytics-Engineering, Analytics-Wikimetrics: Epic: WikimetricsUser deletes user from cohort - https://phabricator.wikimedia.org/T76421#959702 (mforns) [15:39:32] Analytics-EventLogging: Missing extension page - https://phabricator.wikimedia.org/T43378#959703 (Nuria) I am not sure what this ticket is about ... [15:39:36] Analytics-Dashiki, Analytics-Wikimetrics: Vital Signs user sees annotations on graphs [13 pts] - https://phabricator.wikimedia.org/T78151#959704 (mforns) Open>Resolved [15:39:51] ottomata: unix timestamp? [15:41:10] nuria, yeah, request timestamp, but it is a string format [15:41:11] Analytics-Dashiki: Failure to retrieve a metric json file should not break the UI - https://phabricator.wikimedia.org/T85233#959758 (mforns) a:mforns [15:42:05] ottomata: that will also not work as our traffic has oscillations [15:42:10] prominent ones [15:42:50] well, it depends on how the hashing of the request timestamp is done i imagine, but say [15:43:10] bucketing to the hour will produce uneven bucketing too right?
[15:44:32] nuria, yeah, traffic fluctuations don't vary by second though [15:44:46] it's not bucketing to an hour [15:45:24] it is hashed, remember [15:45:31] ok, so all events in an hour will "hash" differently and thus be bucketed differently correct? [15:45:59] no [15:46:09] it is a second value [15:46:11] echo 2014-12-30T18:06:37 | md5sum [15:46:11] f77072c73c6316a19f3493c36976dbc9 [15:46:15] (probably not md5, but ja) [15:46:29] so, every request in that second would go in the same bucket [15:46:37] f77072c73c6316a19f3493c36976dbc9 % 64 [15:46:39] whatever that is [15:47:19] vs [15:47:46] echo 2014-12-30T18:06:38 | md5sum [15:47:46] b6419820f1f228f61189bb10708f018d [15:48:35] ottomata: i think that is an ok strategy [15:48:55] i mean, it is an easy choice, because it has so much variation, but i dunno, seems weird still [15:49:28] ottomata: what other choices do we have? [15:50:39] ottomata: or rather, why does it seem weird? [15:58:58] just feels a little contrived, i guess? dunno. it is so arbitrary. might as well pick a random number [16:14:37] nuria: Today I'll be around for about another hour. So if there's some way I can help you with the EventLogging thing, let me know. [16:15:01] qchris: found the issue, patch on the way [16:15:08] k. cool. [16:22:36] qchris: issue is that events are not sending the 'schema' [16:22:45] qchris: thus they do not validate [16:23:02] qchris: as ahem .... they have nothing to validate against [16:23:06] Mhmm. Not good. [16:23:16] qchris: not the best for backfilling no [16:23:25] But luckily, there are not tons of schemas, so one [16:23:32] can match by their attributes. [16:23:32] qchris: dan could merge the patch but we cannot deploy it until ori wakes up [16:24:05] qchris: ya, not impossible but not as easy as it might be otherwise [16:24:36] I cannot deploy MW either :-/ [16:26:23] ottomata: for the partition key, let's consult with qchris: do you think using timestamp is strange? [16:28:18] * qchris reads backlog [16:29:13] I think dt would be ok. [16:29:24] One does not need perfect hashing, just 64 buckets. [16:29:36] One thing though ... [16:29:53] hashing dt would make adjacent requests have the same hash. [16:30:08] Not sure if that's ideal for sampling. [16:30:36] Because if you sample by buckets afterwards, dt is far from continuous within the bucket. [16:31:21] Meh. That's bikeshedding. [16:31:36] Without a clear use-case that we build against, I guess everything is fine. [16:31:46] And once the use cases arise, one would tune [16:32:03] or change the column by which to bucket. [16:32:36] hm, that is true, if you are trying to sample, you would get results in the same second more often [16:32:40] that could bias sampling [16:33:00] What about CONCAT(hostname, sequence) ? [16:33:07] it will take multiple fields [16:33:13] that's a good idea i suppose [16:33:13] sure [16:34:45] qchris: sequence being what thing? [16:34:56] the sequence number of the request. [16:35:01] It's a number that is [16:35:02] ahhhh [16:35:07] increasing for each request. [16:35:13] yaya what comes from kafka, right? [16:35:16] And it's a per cache number. [16:35:26] that IS better [16:35:35] from varnishkafka, yes. [16:36:01] from the theoretical point of getting a random sample [16:36:29] As caches in the same cluster and same datacenter typically have about the same sequence number (after all, caches are typically fairly load balanced) [16:36:42] I'd add the hostname to the mix. [16:36:51] To avoid bias.
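
(Editor's aside: a minimal Python sketch of the bucketing idea being weighed above, not the actual refinery code. The 64-bucket count and md5 hashing are taken from the conversation ("probably not md5, but ja"); the cache hostname and sequence numbers below are made up for illustration.)

    import hashlib

    NUM_BUCKETS = 64

    def bucket(key):
        """Map an arbitrary string key onto one of 64 buckets."""
        return int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16) % NUM_BUCKETS

    # Bucketing on the request timestamp (dt) alone: every request that arrives
    # within the same second hashes to the same bucket, which is the sampling
    # concern raised above.
    print(bucket('2014-12-30T18:06:37'))  # all requests from this second share a bucket
    print(bucket('2014-12-30T18:06:38'))  # the next second usually lands elsewhere

    # Bucketing on hostname + sequence number instead: adjacent requests from the
    # same cache host spread across buckets, so sampling a single bucket is less biased.
    print(bucket('cp1055' + str(123456789)))  # hypothetical cache host and sequence
    print(bucket('cp1055' + str(123456790)))

(Either key yields 64 roughly equal buckets; the difference is only in how requests that are close together in time get distributed, which matters if one bucket is later used as a sample.)
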
[16:37:30] ottomata: I think the seq number is a better alternative [16:38:17] ja i like [16:38:18] will do [16:38:19] thanks qchris [16:38:29] yw :-P [16:40:52] nuria: About the EL thing ... is that waiting for ori, or is there something I can help with to get it done? [16:59:52] moving locations, back shortly. [17:14:59] qchris: we would need to deploy the fix [17:15:30] but we have no deploy permits (neither... ahem... do we want them) [17:15:31] nuria: Hahaha. That's great timing. Just wanted to close the IRC client. [17:15:46] qchris: I was having 2nd breakfast [17:15:53] Sooooo ... the one deployment period just ended. [17:16:42] Would it be ok to wait for the next lightning deploy, or should we escalate with greg-g? [17:16:49] I'd say it's fine to wait. [17:16:52] i'm on it [17:17:00] ta-tachannnnn [17:17:13] In comes the superhero :-D [17:17:16] Thanks ori. [17:17:31] the ta-ta-chann was the soundtrack.... [17:18:23] Ok, then I'll call it a day. [17:18:50] See you tomorrow then. [17:18:53] * qchris waves. [17:21:06] done [17:21:08] thanks guys [17:23:16] * greg-g looks in, walks away slowly [17:28:07] ori: did you deploy? [17:30:55] nuria: you said in a previous mail on the analytics mailing list, related to setting up eventlogging-devserver, that I can see the events on the console. What do you mean by console? Is it the output received when running 'eventlogging-devserver' ? [17:31:24] tuxilina: the browser console [17:32:33] tuxilina: running the devserver is just an aid for development, the system by which events get sent has nothing to do with the server, it lives in the browser [17:33:00] nuria: oh I see. does the output also get logged into some files? is it related to the variable $wgEventLoggingFile ? [17:33:22] nuria: I understand now, thanks for making it clear. [17:33:23] tuxilina: no, it does not, output does not get logged [18:07:53] When logging an event in the browser console, I keep getting 'Validation error against schema: Unknown schema: <schema>', where <schema> is something from https://meta.wikimedia.org/wiki/Special:PrefixIndex/Schema: . Should I set anything else besides $wgEventLoggingSchemaIndexUri = 'http://meta.wikimedia.org/w/index.php'; ? [18:12:50] wikimedia/mediawiki-extensions-EventLogging#319 (wmf/1.25wmf14 - 98b6778 : Reedy): The build passed. [18:12:50] Change view : https://github.com/wikimedia/mediawiki-extensions-EventLogging/commit/98b6778e4de7 [18:12:50] Build details : http://travis-ci.org/wikimedia/mediawiki-extensions-EventLogging/builds/46222823 [18:13:21] tuxilina, well, does the schema you're logging exist? :P [18:17:56] Ironholds, I have tried a few from https://meta.wikimedia.org/wiki/Special:PrefixIndex/Schema: Isn't the check made against these ones? [18:18:42] I'd hope so! You might want to ask on the analytics mailing list :) [18:33:36] tuxilina: does your event include the schema? [18:37:23] nuria: I did something like this http://pastebin.com/ejskv0wi where Test is from here https://meta.wikimedia.org/wiki/Schema:Test [18:39:45] tuxilina: you cannot log events directly, as there are a lot more things that go into the event, you need to use the provided EL client in javascript. Please look for other examples in the mediawiki codebase. Events include all this data: https://meta.wikimedia.org/wiki/Schema:EventCapsule [18:42:26] nuria: thank you! I will look into it more [18:48:31] nuria: I see the schema being passed with events in production - so I assume the fix was deployed?
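
(Editor's aside: the two EventLogging problems above — raw events arriving without a 'schema' field, and tuxilina's "Unknown schema" error — fail at the same step. Below is a simplified Python sketch of schema-based validation; it is not the real EventLogging server code, and the schema store and Test schema shown are illustrative only.)

    import jsonschema

    # Stand-in for the cache of schemas fetched from meta.wikimedia.org.
    schema_store = {
        ('Test', 123): {
            'type': 'object',
            'properties': {'OtherMessage': {'type': 'string'}},
            'required': ['OtherMessage'],
        },
    }

    def validate_capsule(capsule):
        """Validate the inner event against the schema named in the capsule."""
        key = (capsule.get('schema'), capsule.get('revision'))
        if key not in schema_store:
            # With no (known) schema name there is nothing to validate against,
            # so every such event is rejected -- the failure mode discussed above.
            raise ValueError('Unknown schema: %r' % (capsule.get('schema'),))
        jsonschema.validate(capsule.get('event', {}), schema_store[key])

    # A bare event posted without the capsule fields fails before any field is checked:
    try:
        validate_capsule({'event': {'OtherMessage': 'hi'}})
    except ValueError as err:
        print(err)  # Unknown schema: None
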
[18:48:34] I saw it was merged [18:49:01] do you have a link to the graph c-r-isti-a-n uses? [18:49:09] milimetric: i know as much as you man, i was looking at errors and i do not see every single event failing so yeah [18:49:12] :) [18:49:17] ok, good [18:49:23] Analytics-Cluster: Respect X-Forwarded-For only from trustworthy sources - https://phabricator.wikimedia.org/T56783#960302 (dr0ptp4kt) [18:49:23] milimetric: ya, it's our graphs in graphite [18:49:54] Analytics-General-or-Unknown: Number of Wikipedia Zero increasing drastically in mid March 2014 - https://phabricator.wikimedia.org/T64848#960310 (dr0ptp4kt) [18:50:19] Analytics-General-or-Unknown: Count X-CS=502-16 only if it came through an Opera Mini PROXY - https://phabricator.wikimedia.org/T58118#960314 (dr0ptp4kt) [18:50:27] Analytics-General-or-Unknown: Mobile and zero graphs lack api PageViews - https://phabricator.wikimedia.org/T56782#960317 (dr0ptp4kt) [18:51:57] milimetric: here it is: http://picpaste.com/Screen_Shot_2015-01-07_at_10.50.28_AM-NsMSPgHp.png [19:06:35] Analytics-Wikimetrics: User sees TOS when loging in - https://phabricator.wikimedia.org/T76826#960427 (kevinator) This was accomplished while working on T76107. This task will be merged into that one. [19:07:17] Analytics-Wikimetrics: Wikimetrics User reads disclaimer on website [3 pts] - https://phabricator.wikimedia.org/T76107#789675 (kevinator) [19:07:18] Analytics-Wikimetrics: User sees TOS when loging in - https://phabricator.wikimedia.org/T76826#960431 (kevinator) [19:18:56] nuria: i'm glad we changed the cluster fields [19:18:58] this looks better [19:19:07] ottomata: Buckets look better? [19:19:12] the 64 files for a text hour are each 116M [19:19:17] ottomata: size in buckets right? [19:19:22] the table I was using yesterday varied a lot more [19:19:22] yes [19:19:24] ahammmm niceeee [19:19:31] ok, sounds good [19:20:14] ottomata: i should have thought about request ids, i always forget we have those and they are very handy [19:20:44] ottomata: then we are ready to goooooo and merge your 2nd patch to your changeset [19:21:02] ottomata: afterwards we should document it so we make sure nobody needs to query the raw tables [19:21:14] ok, real quick i want to make sure i understand your comment about is_pageview backwards compatibility [19:21:17] did my response make sense? [19:21:21] https://gerrit.wikimedia.org/r/#/c/182478/7/oozie/webrequest/refine/refine_webrequest.hql [19:22:03] milimetric: Hi, do you know where the last line in this datafile is coming from? http://datasets.wikimedia.org/limn-public-data/mobile/datafiles/thanks-daily.csv [19:22:29] ottomata: ya, sounds good, I was concerned about the number of args in a udf like is_pageview(arg1, arg2, arg3) [19:23:05] milimetric: same problem, but in the 2nd line of this file: http://datasets.wikimedia.org/limn-public-data/mobile/datafiles/diff-activity.csv [19:23:08] i meant "backwards compatible" as in "if we add a fourth arg to the udf it should be optional" [19:23:20] ah, yeah we will have to edit something no matter what there [19:23:34] oh, i dunno, maybe, that would be equivalent to changing the udf [19:23:44] i was thinking that maybe we will want a refinery version field on each request [19:23:53] that indicated the version of the udf that was used to classify [19:23:55] but, meh? [19:23:55] ottomata: no, the way udfs are coded, if we add an optional arg it should work [19:24:06] no.
not if oliver changes out the definition completely [19:24:15] the meaning would change [19:24:18] ottomata: i know, no biggie [19:24:24] ottomata: right, that too [19:24:27] sure, we could keep it backwards compatible, but the meaning would be different [19:24:31] so there's no real reason to [19:24:36] but, anyway, ja, let's deal with that later i think [19:25:29] ottomata: ok, then as soon as you upload a new patchset we are ready to go [19:26:03] (PS9) Ottomata: First draft of refinement phase for webrequest [analytics/refinery] - https://gerrit.wikimedia.org/r/182478 [19:28:31] (CR) Nuria: [C: 2] "Makes more sense this way, agreed. Tested things run OK on 1002." [analytics/aggregator] - https://gerrit.wikimedia.org/r/183148 (owner: QChris) [19:28:49] (Merged) jenkins-bot: Recompute the "total sum" column upon rescaling [analytics/aggregator] - https://gerrit.wikimedia.org/r/183148 (owner: QChris) [19:30:34] (PS10) Ottomata: First draft of refinement phase for webrequest [analytics/refinery] - https://gerrit.wikimedia.org/r/182478 [19:30:43] (CR) Ottomata: [C: 2 V: 2] First draft of refinement phase for webrequest [analytics/refinery] - https://gerrit.wikimedia.org/r/182478 (owner: Ottomata) [19:40:16] (CR) Bmansurov: [C: -1] Update scripts in light of recent changes [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/181428 (owner: Jdlrobson) [19:48:27] !log started first webrequest refine oozie job [19:48:35] nuria: thar she blows! [19:48:40] i just started it with the current hour [19:48:56] it will pick up new data as it rolls in [19:48:57] ottomata: with your new tables? [19:49:01] yup [19:49:06] wmf.webrequest (instead of wmf_raw) [19:49:07] ottomata: niceeeee [19:49:41] ottomata: I think it warrants documenting so everyone knows those are available, it'll reduce cluster utilization a bunch [19:50:00] yes, nuria, i'd like to wait a little bit, there isn't much data there now [19:50:00] but ja [19:50:11] i can start some documentation [19:50:17] right now i want to say it is still experimental [19:51:07] oh HMM [19:51:12] i may have the location in a bad place. [19:51:13] hm [19:51:15] well, i mean, it isn't bad [19:51:27] but the other wmf tables are in /wmf/data/wmf [19:51:43] i left this in the default hive warehouse at /user/hive/warehouse/wmf.db [19:52:12] dunno. i guess christian and I made that decision a while ago, I don't really remember [19:52:21] ottomata: actually, for testing it is good [19:52:43] ottomata: i would move it once the system has "baked" and we know it is good [19:53:11] do you have to restart the cluster every time you deploy a new jar? [19:57:51] restart the cluster?
[19:58:09] no [19:58:13] depending on what the change is, i would have to restart the oozie job [19:58:15] that is all [19:58:17] but maybe even not then [19:58:30] ottomata: ah ok, so jars are "read" when the job starts [19:58:34] I think I should move this data now, nuria, oozie actually won't notice [19:58:41] ottomata: k [19:58:43] well, nuria, it depends on how you start the job [19:58:52] christian had me add some parameters to make this easier [19:58:52] so [19:58:54] i started this with [19:59:02] this command: [19:59:02] oozie job -Duser=$USER \ [19:59:03] -Dstart_time=2015-01-07T17:00Z \ [19:59:03] -Drefinery_directory=hdfs://analytics-hadoop/wmf/refinery/2015-01-07T19.35.18Z--838a594 \ [19:59:03] -run -config /srv/deployment/analytics/refinery/oozie/webrequest/refine/bundle.properties [19:59:14] notice that refinery_directory is set to a particular deploy snapshot in hdfs [19:59:23] rather than the 'current' symlink [19:59:25] that is the default [19:59:34] ottomata: ajajam [19:59:36] this means that this particular job will not pick up a new jar [19:59:43] which i think is better for production stuff [19:59:48] deployments won't break running jobs [20:00:02] ottomata: ok, that was my main concern [20:00:39] nuria, just added this: https://wikitech.wikimedia.org/wiki/Analytics/Cluster/Hive#wmf.webrequest [20:03:00] ottomata: great! I have also added: https://wikitech.wikimedia.org/wiki/Analytics/Cluster/Hive#wmf_raw.webrequest [20:04:17] cool [20:04:32] Analytics-EventLogging: Cannot pass null to EventLogging::logEvent for optional boolean fields - https://phabricator.wikimedia.org/T78325#960635 (kaldari) This issue has bitten us again: https://phabricator.wikimedia.org/T85963 Could someone at least triage this bug? [20:06:10] (PS1) Ottomata: Change location of refined wmf.webequest table to /wmf/data/wmf/webrequest [analytics/refinery] - https://gerrit.wikimedia.org/r/183310 [20:06:17] (PS2) Ottomata: Change location of refined wmf.webequest table to /wmf/data/wmf/webrequest [analytics/refinery] - https://gerrit.wikimedia.org/r/183310 [20:07:18] (CR) Nuria: [C: 1] Change location of refined wmf.webequest table to /wmf/data/wmf/webrequest [analytics/refinery] - https://gerrit.wikimedia.org/r/183310 (owner: Ottomata) [20:10:51] (CR) Ottomata: [C: 2] Change location of refined wmf.webequest table to /wmf/data/wmf/webrequest [analytics/refinery] - https://gerrit.wikimedia.org/r/183310 (owner: Ottomata) [20:10:58] (CR) Ottomata: [V: 2] Change location of refined wmf.webequest table to /wmf/data/wmf/webrequest [analytics/refinery] - https://gerrit.wikimedia.org/r/183310 (owner: Ottomata) [20:12:44] Analytics-EventLogging: if logEvent fails due to not matching Schema requirements, it should return false (or an error) instead of true - https://phabricator.wikimedia.org/T75678#960644 (kaldari) Open>declined a:kaldari [20:22:54] nuria: Hi, do you know how I can regenerate data files in http://mobile-reportcard.wmflabs.org/#other-graphs-tab ? Some files are corrupted but I don't know why. When I generate those locally everything works fine. This one for example: http://datasets.wikimedia.org/limn-public-data/mobile/datafiles/thanks-daily.csv [20:23:22] nuria: The last line in that file should not be there [20:24:04] bmansurov: regenerating the files requires erasing them from stat1003 [20:24:12] because the process reads the files to see what's missing [20:24:47] milimetric: so the process appends the new data?
[20:25:39] it appends or inserts if one particular day was missing [20:25:43] or, at least, it should [20:25:57] there's no magical hidden process btw, that's the logic from generate.py [20:26:03] but it's super confusing to read [20:26:38] there's also a history.json file which keeps a list of the last update date for each file, and that factors in too [20:27:13] milimetric: those files are in stat1003 or limn1.eqiad.wmflabs? [20:27:20] bmansurov: so one problem with re-generating is that some of that data may have been truncated from the logs [20:27:36] Analytics-EventLogging, Analytics-Engineering: Cannot pass null to EventLogging::logEvent for optional boolean fields - https://phabricator.wikimedia.org/T78325#960673 (kevinator) p:Triage>High [20:28:00] bmansurov: the files are computed by the big analytics db box, written to stat1003, then rsynced over to a public website [20:28:22] the only way limn1 interacts with them is that the client code served by limn1 reads them over the internet from datasets.wikimedia.org [20:28:54] (datasets.wikimedia.org is hosted on stat1001 I believe, and that's updated from the rsync on stat1003) [20:29:04] so to change the files, all we have to think about is stat1003 [20:29:55] milimetric: do you remember having such problem before? I don't understand how this may happen [20:31:04] bmansurov: do you have a specific example? I can take a look [20:31:13] in my opinion, generate.py is anything but robust [20:31:39] milimetric: yes, we have 3 problems, but let's look at this one first: http://datasets.wikimedia.org/limn-public-data/mobile/datafiles/thanks-daily.csv [20:32:05] milimetric: last line [20:32:40] heh... uh... no, i've never seen that before [20:32:43] lemme take a look at the sql [20:34:38] milimetric: this one too (line 2): http://datasets.wikimedia.org/limn-public-data/mobile/datafiles/diff-activity.csv [20:34:50] Analytics-Wikimetrics: Valid username is ruled invalid by Wikimetrics - https://phabricator.wikimedia.org/T73923#960691 (mforns) All users cited in the description and comments are recognized as valid now. [20:35:15] Analytics-Wikimetrics: Valid username is ruled invalid by Wikimetrics - https://phabricator.wikimedia.org/T73923#960692 (mforns) Open>Resolved [20:36:45] bmansurov: these problems occur at a weird time - they seem to coincide too well with when Event Logging started having problems last night [20:36:58] but missing data shouldn't cause generate.py to freak out, obviously [20:37:22] milimetric: the problem is with graphs, they don't look right http://mobile-reportcard.wmflabs.org/?#other-graphs-tab [20:37:32] yeah, makes sense, they wouldn't with files like that [20:38:26] bmansurov: I've got two meetings coming up soon that I have to prepare for :( [20:38:33] but I have to take a look at this... grrr [20:38:42] milimetric: ok thanks [20:43:14] Analytics, operations: Fix Varnishkafka delivery error icinga warning - https://phabricator.wikimedia.org/T76342#960714 (Ottomata) I turned on a check_graphite check for this today. I'm going to let this be for a few days, and then hopefully turn off the check_ganglia ones next week. 
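
(Editor's aside: a rough Python sketch of the incremental-update behaviour milimetric describes above for generate.py. It is not the real script, whose logic is more involved; the file paths, the history.json layout, and the fetch_day callback are illustrative only. It shows why a corrupt trailing row has to be deleted from the CSV on stat1003 before the data will be regenerated: the script only adds days it believes are missing.)

    import csv
    import json
    import os

    def existing_days(csv_path):
        """Collect the dates already present in a datafile, if it exists."""
        if not os.path.exists(csv_path):
            return set()
        with open(csv_path) as f:
            rows = csv.reader(f)
            next(rows, None)  # skip the header row
            return {row[0] for row in rows if row}

    def update_datafile(csv_path, history_path, expected_days, fetch_day):
        """Append rows only for the missing days, then record the last update."""
        have = existing_days(csv_path)
        missing = [day for day in expected_days if day not in have]
        with open(csv_path, 'a') as f:
            writer = csv.writer(f)
            for day in missing:
                writer.writerow([day] + fetch_day(day))  # e.g. run the metric query
        history = {}
        if os.path.exists(history_path):
            with open(history_path) as f:
                history = json.load(f)
        history[os.path.basename(csv_path)] = max(expected_days)  # last update date per file
        with open(history_path, 'w') as f:
            json.dump(history, f, indent=2)
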
[20:44:55] Analytics-Engineering: generate.py produced broken CSV files - https://phabricator.wikimedia.org/T86059#960716 (Milimetric) NEW a:Milimetric [20:52:06] Analytics-Wikimetrics, Analytics-Engineering: User reads result of validation after creating a cohort - https://phabricator.wikimedia.org/T76914#960744 (mforns) I think we should update this task, because it does not include a conversation we (Kevin and me, if I remember well) had after resolving task T75350... [20:59:36] bmansurov: fyi i filed this: https://phabricator.wikimedia.org/T86059 [20:59:59] kevinator: https://phabricator.wikimedia.org/T86059 should be considered for tomorrow too, it's a production issue [21:00:04] ok thanks [21:05:13] Analytics-Engineering: generate.py produced broken CSV files - https://phabricator.wikimedia.org/T86059#960716 (bmansurov) Another one: http://datasets.wikimedia.org/limn-public-data/mobile/datafiles/diff-activity.csv ``` Day,Thanks,User page link,Navigate to subject page,Clicks previous or next (beta only)... [21:13:03] Analytics-Engineering: Logstash is not working - https://phabricator.wikimedia.org/T86065#960810 (ggellerman) NEW a:Tnegrin [21:15:12] Analytics-Cluster: Logstash is not working - https://phabricator.wikimedia.org/T86065#960820 (ggellerman) [21:15:34] Analytics-Wikimetrics, Analytics-Engineering: Story: Analytics Eng locks wikimetrics-staging so nobody else can deploy to it. - https://phabricator.wikimedia.org/T86066#960822 (kevinator) NEW [21:17:38] kevinator: this is also a prod issue: https://phabricator.wikimedia.org/T86067 [21:18:06] Analytics-EventLogging: eventlogging dev server broken - https://phabricator.wikimedia.org/T86067#960832 (Nuria) [21:42:35] Analytics-EventLogging: Delete teahouse directory - https://phabricator.wikimedia.org/T86071#960971 (ggellerman) NEW a:kevinator [23:07:13] Analytics-Wikimetrics, Analytics-Engineering: User reads result of validation after creating a cohort - https://phabricator.wikimedia.org/T76914#961133 (kevinator) I like having the final message be the same that we see when we view the members of a cohort: 72 users in 660 projects; 64 invalid entries Whi... [23:26:20] (PS1) Gergő Tisza: Do not count Commons opt-ins as opt-outs [analytics/multimedia] - https://gerrit.wikimedia.org/r/183392 [23:45:42] o/ milimetric [23:59:41] milimetric, if you come and read scrollback, I was hoping to talk to you a bit about Samza -- nothing that can't wait. :)