[00:54:22] (PS1) Neil P. Quinn-WMF: Fix calculation of time-to-read [analytics/limn-ee-data] - https://gerrit.wikimedia.org/r/280371 (https://phabricator.wikimedia.org/T131206)
[01:48:49] Analytics-Kanban: Break down "Other" a little more? - https://phabricator.wikimedia.org/T131127#2159775 (Krinkle)
[05:54:28] (CR) Mforns: "LGTM, but there is a WARNING :]" (1 comment) [analytics/limn-ee-data] - https://gerrit.wikimedia.org/r/280371 (https://phabricator.wikimedia.org/T131206) (owner: Neil P. Quinn-WMF)
[06:02:21] Analytics-Kanban, Patch-For-Review: Dashiki visualization that shows a hierarchy {lama} - https://phabricator.wikimedia.org/T124296#2159968 (mforns) Thanks @Milimetric!
[07:04:21] (PS1) Mforns: [WIP] Change line- and tabular- browser reports to percent [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/280386 (https://phabricator.wikimedia.org/T130406)
[07:08:45] (Abandoned) Mforns: [WIP] Sanitize pageview_hourly table [analytics/refinery/source] - https://gerrit.wikimedia.org/r/260408 (https://phabricator.wikimedia.org/T118838) (owner: Mforns)
[09:14:18] cross-posting from -security: https://cloud.google.com/bigquery/public-data/
[09:14:23] o/
[09:44:52] Analytics-Tech-community-metrics, DevRel-March-2016, Patch-For-Review: What is contributors.html for, in contrast to who_contributes_code.html and sc[m,r]-contributors.html and top-contributors.html? - https://phabricator.wikimedia.org/T118522#2160239 (Aklapper) Open>Resolved >>! In T118522#2119...
[09:45:22] Analytics-Tech-community-metrics, Developer-Relations, DevRel-April-2016, DevRel-March-2016: Play with Bitergia's Kabana UI (which might potential replace our current UI on korma.wmflabs.org) - https://phabricator.wikimedia.org/T127078#2160241 (Aklapper)
[10:15:15] Analytics-Kanban: Cassandra Backfill July {melc} - https://phabricator.wikimedia.org/T119863#2160296 (JAllemandou) a:JAllemandou
[10:39:38] Analytics-Tech-community-metrics: top-contributors should have real names for the main contributors - https://phabricator.wikimedia.org/T124346#2160325 (Lcanasdiaz)
[10:39:40] Analytics-Tech-community-metrics, DevRel-March-2016: Key performance indicator: Top contributors: Find good Ranking algorithm fix bugs on page - https://phabricator.wikimedia.org/T64221#2160326 (Lcanasdiaz)
[10:39:42] Analytics-Tech-community-metrics, DevRel-March-2016, Regression: top-contributors.html only displays seven entries - https://phabricator.wikimedia.org/T129837#2160322 (Lcanasdiaz) Open>Resolved Done http://korma.wmflabs.org/browser/top-contributors.html These are the changes I've just deployed:...
[12:45:03] (PS2) Mforns: [WIP] Change line- and tabular- browser reports to percent [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/280386 (https://phabricator.wikimedia.org/T130406)
[12:49:50] (PS3) Mforns: [WIP] Change line- and tabular- browser reports to percent [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/280386 (https://phabricator.wikimedia.org/T130406)
[12:51:16] * joal is away for a bit
[13:09:14] (PS4) Mforns: [WIP] Change line- and tabular- browser reports to percent [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/280386 (https://phabricator.wikimedia.org/T130406)
[13:09:40] Analytics-Kanban, Patch-For-Review: Browser visualization in dashiki, too many items below the fold - https://phabricator.wikimedia.org/T130144#2160547 (Milimetric) Andre, you rock, thanks :)
[13:28:43] (PS5) Mforns: [WIP] Change line- and tabular- browser reports to percent [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/280386 (https://phabricator.wikimedia.org/T130406)
[13:40:22] ottomata: https://graphite.wikimedia.org/render/?width=951&height=460&target=servers.kafka*.network.nf_conntrack_count
[13:40:35] NICE! just saw that!
[13:40:39] i'm adding it to the grafana dash now
[13:41:13] (PS6) Mforns: Change line- and tabular- browser reports to percent [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/280386 (https://phabricator.wikimedia.org/T130406)
[13:44:13] elukey: https://grafana-admin.wikimedia.org/dashboard/db/kafka
[13:45:07] ottomata: niceeeeeee
[13:47:10] very
[13:48:03] (CR) Mforns: Change line- and tabular- browser reports to percent (2 comments) [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/280386 (https://phabricator.wikimedia.org/T130406) (owner: Mforns)
[13:48:45] nf_conntrack is counting flows from what I've gathered, so src->dst and return dst->src + actual state
[13:49:05] this is why TIME_WAITs on kafka hosts are few but conntracks are 120k
[13:49:08] that was puzzling
[13:49:26] anyhowww this doesn't explain why it went up to 250K during easter
[13:49:37] (CR) Mforns: "This shouldn't be merged until we merge the related reportupdater patch: https://gerrit.wikimedia.org/r/#/c/280201/" [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/280386 (https://phabricator.wikimedia.org/T130406) (owner: Mforns)
[13:49:58] aye
[13:50:02] but at least we can see it now!
[13:50:23] yep yep!
[13:51:01] Analytics-Kanban, Patch-For-Review: Add (and default to) a breakdown in percentages also for the line chart. - https://phabricator.wikimedia.org/T130406#2160571 (mforns) This is ready for review, but it can not be merged until we merge the related RU patch: https://gerrit.wikimedia.org/r/#/c/280201/ Thanks!
[13:54:53] Analytics-Kanban, Patch-For-Review: Add (and default to) a breakdown in percentages also for the line chart. - https://phabricator.wikimedia.org/T130406#2160580 (mforns) The deployment plan is: 1) Merge the RU changes first: https://gerrit.wikimedia.org/r/#/c/280201/ 2) Merge these changes 3) Wait for pup...
[13:58:52] ottomata: cp1043 is running varnish 4 and the new varnishkafka.. I believe that there is a small memleak but overall it works
[13:59:03] (maps)
[13:59:56] AWESOOOOOME!
[13:59:57] looking
[14:01:46] CooOol looking good, I see messages from it in kafka
[14:02:30] hmmm elukey
[14:02:31] just checking
[14:02:38] i'm looking at messages from cp1044 and cp1043
[14:02:45] all the ones from cp1044 have x-analytics set with some values
[14:02:55] yep I was noticing the same
[14:02:56] the cp1043 ones all have "-"
[14:03:03] wanted to ask you why
[14:04:57] let's chat in -traffic
[14:04:57] ok
[15:09:42] hey milimetric, AQS deploy after standup ?
[15:28:30] ottomata: vk module changes merged
[15:30:15] k awesome
[15:33:20] sure joal
[16:15:56] milimetric: I'm taking a few minutes to say hello to Lino, deploy after?
[16:24:27] milimetric: back ! Melissa is gone with Lino without me noticing ...
[16:43:17] ottomata: can we do the whole "take aqs1001 out of LVS, deploy, put it back, take aqs1002 out of LVS, deploy... etc" thing?
[16:43:37] oh I guess it involves stopping puppet as well, so no
[16:44:43] why stopping puppet
[16:44:43] ?
[16:44:59] oh you're right, we didn't change config
[16:45:56] puppet won't start restbase if it's stopped, right?
[16:46:11] *aqs not restbase
[16:46:22] going offline folks, talk with you tomorrow!
[16:46:26] ottomata: but we still need you for LVS drain-stop right?
[16:46:28] nite elukey
[16:47:08] Bye elukey
[16:48:51] ?
[16:48:56] if aqs is stopped
[16:48:56] yeah
[16:48:58] and puppet runs
[16:49:00] it will probably start it
[16:49:10] i'm not sure if you need me for LVS drain...
[16:52:03] it shouldn't though
[16:52:03] ottomata: so how do we take aqs1001 out of LVS then?
[16:52:38] i guess if we take it out of LVS it doesn't matter that puppet starts / stops it
[16:52:42] because it won't serve
[16:52:46] and we can un-deploy if we need
[16:53:08] right
[16:53:09] um
[16:53:15] lets look for docs!
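elukey's observation at 13:48 — that each nf_conntrack entry describes one flow, carrying both the original (src->dst) and reply (dst->src) tuples plus the tracked state, which is why TIME_WAIT socket counts and conntrack counts diverge — can be illustrated with a small parser sketch. The sample lines below are fabricated for illustration, though they follow the `/proc/net/nf_conntrack` line format; this is not code from the log:

```python
# Hedged sketch: each nf_conntrack entry is one *flow*, holding both
# direction tuples and (for TCP) a state token, so conntrack totals
# need not match TIME_WAIT socket counts. Sample lines are made up
# but mimic the /proc/net/nf_conntrack format.

from collections import Counter

def count_states(conntrack_lines):
    """Count tracked-connection states from nf_conntrack-style lines."""
    states = Counter()
    for line in conntrack_lines:
        fields = line.split()
        # TCP entries carry a state token in field 6 (ESTABLISHED,
        # TIME_WAIT, ...); UDP and other protocols have no state field.
        if len(fields) > 5 and fields[2] == 'tcp':
            states[fields[5]] += 1
        else:
            states['(stateless)'] += 1
    return states

sample = [
    'ipv4 2 tcp 6 431999 ESTABLISHED src=10.0.0.1 dst=10.0.0.2 sport=51234 dport=9092 '
    'src=10.0.0.2 dst=10.0.0.1 sport=9092 dport=51234 [ASSURED] mark=0 use=1',
    'ipv4 2 tcp 6 117 TIME_WAIT src=10.0.0.3 dst=10.0.0.2 sport=40000 dport=9092 '
    'src=10.0.0.2 dst=10.0.0.3 sport=9092 dport=40000 mark=0 use=1',
    'ipv4 2 udp 17 29 src=10.0.0.4 dst=10.0.0.2 sport=53000 dport=53 '
    'src=10.0.0.2 dst=10.0.0.4 sport=53 dport=53000 mark=0 use=1',
]
states = count_states(sample)
```

Note that both directions of each connection live in a single entry, and non-TCP flows (UDP, etc.) are tracked too — one reason the 120k conntrack count dwarfs the handful of TIME_WAIT sockets.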
[16:55:07] hmm, i thought someone documented this somewhere
[16:55:43] https://wikitech.wikimedia.org/wiki/LVS doesn't seem to have it
[16:56:37] https://wikitech.wikimedia.org/wiki/Depooling_servers
[16:56:49] can we just do "depool"?
[16:56:55] or is that something completely different
[16:58:39] hm, no, it's not a command on aqs
[16:58:43] aqs100x
[16:58:50] https://wikitech.wikimedia.org/wiki/Conftool#Pooling.2Fdepooling_a_server_from_all_the_related_services
[16:58:51] ?
[16:58:53] thik so
[16:59:07] confctl --find --action set/pooled=no aqs1001.eqiad.wmnet
[16:59:18] can check this
[16:59:19] http://config-master.wikimedia.org/conftool/eqiad/aqs
[16:59:21] after running
[16:59:30] where do we do that?
[16:59:31] i think i have to run that
[16:59:41] i think palladium
[16:59:43] ok
[16:59:44] which you don't have access to
[16:59:48] right
[17:00:02] I'll add this to the deploying docs though
[17:00:34] k
[17:01:36] mforns: yt?
[17:01:45] nuria, yes
[17:01:52] ok, ottomata so do you have time to run through this with us now? or later's fine too
[17:02:02] mforns: question about the percentages stuff,
[17:02:07] aha
[17:02:28] k, updated https://wikitech.wikimedia.org/wiki/Analytics/AQS#Deploying
[17:05:40] milimetric: this is for deploy?
[17:05:41] mforns: even in our case (where the oozie job is calculating percentages, but not storing them) you think is easier to have those being re-calculated by report updater?
[17:05:41] why do you need to depool to deploy?
[17:05:41] ottomata: just in case the deploy fails
[17:05:41] then we'd un-deploy and repool
[17:05:41] nuria, wmf.browser_general has a daily granularity, and RU is aggregating weekly, it can not get ready percentages from wmf.browser_general
[17:05:41] ah ok, that is what it was
[17:05:41] we would need to change browser_general granularity to weekly too
[17:05:41] milimetric: you want to do this every deploy?
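The per-host procedure being worked out above — depool a node with confctl (run on the puppet master, palladium), deploy to just that node, then repool — can be sketched as a dry-run helper. Only the depool invocation at 16:59:07 is taken from the log; the wrapper function, its `dry_run` mode, the `scap deploy --limit` form, and the symmetric `set/pooled=yes` repool action are assumptions for illustration:

```python
# Hedged sketch of the depool -> deploy -> repool loop discussed
# above. The confctl depool command is the one quoted at 16:59:07;
# the repool action (set/pooled=yes), the deploy command, and this
# wrapper itself are hypothetical illustrations, not tooling from
# the log.
import subprocess

def rolling_deploy(hosts, deploy_cmd, dry_run=True):
    """Depool, deploy, and repool each host in turn."""
    executed = []
    for host in hosts:
        steps = [
            ['confctl', '--find', '--action', 'set/pooled=no', host],
            deploy_cmd + ['--limit', host],
            ['confctl', '--find', '--action', 'set/pooled=yes', host],
        ]
        for cmd in steps:
            if dry_run:
                executed.append(cmd)  # collect commands instead of running
            else:
                subprocess.check_call(cmd)  # needs confctl/deploy access
    return executed

cmds = rolling_deploy(['aqs1001.eqiad.wmnet'], ['scap', 'deploy'])
```

In dry-run mode this just returns the command sequence, which matches how the team verified the pool state afterwards via config-master before touching the next host.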
[17:05:41] but, I see browser_general as an intermediate table that we can use for other things, so it is good that it's daily
[17:05:41] i think maybe you just want a canary
[17:05:41] canary?
[17:05:49] https://doc.wikimedia.org/mw-tools-scap/scap3/quickstart/setup.html#targets
[17:05:50] also, it makes sense to transform to percentages as late as possible
[17:07:27] mforns: but the "other" bucketing is happening before
[17:07:59] mforns: what means that you are already discarding data based on a daily percentage calculation
[17:08:06] nuria, yes, for anonymity reasons, and also to reduce the size of the table
[17:08:16] aha
[17:08:18] mforns: i think that is ok for now but we might ned to look into that
[17:08:26] nuria, yes, you're right
[17:08:54] milimetric: i need some food! Maybe canary will work for you? i'll be back in a bit to talk more
[17:08:56] mforns: cause I think that mathematically -when it comes to % calculations- that is incorrect
[17:09:04] k
[17:09:48] nuria, that's why I insist in that we call the "other" row not "other" but "unknown"
[17:10:00] at least in the visualization
[17:10:12] dunno
[17:10:36] nuria, note that there is no discarding of data
[17:10:44] just rebucketing
[17:10:56] mforns: yes, but we discard info
[17:11:18] everything that is below the threshold is not discarded, it is moved to the "other" bucket, which also counts in the percentages calculations
[17:11:26] Right right
[17:12:28] but the bucketing will actually look very different if we did a weekly runs to fill browser-data, actually it will be bigger I think, as we will have more diversity of browsers/os combinations ... not sure 100%
[17:13:56] nuria, I'm not sure about that
[17:14:12] mforns: ya , no me neither
[17:15:56] nuria, the only way I see we could improve the quality (reduce the other group) is having oozie jobs for each report hitting pageview_hourly each week
[17:16:40] so that we wouldn't have to reaggregate the 4 dimensions of browser_general after anonymization
[17:17:50] but this would imply a *lot* of oozie code and repetition of almost identical queries to pageview_hourly
[17:19:37] nuria, ^
[17:20:35] mforns: ya, i just wrote that very thing on a ticket!
[17:21:05] or maybe having 2 intermediate tables: one for browser (browser_general) and one for os (os_general)
[17:21:14] mforns: ya, wrote that too
[17:21:22] mforns: 1 table per ui split actually
[17:21:54] mforns: we might need to do this if due to other bucket being too large data is not actionable
[17:21:58] mforns: we'll see
[17:22:01] aha
[17:22:12] makes sense
[17:22:50] Analytics-Kanban: Break down "Other" a little more? - https://phabricator.wikimedia.org/T131127#2161319 (Nuria) Let me explain why this is happening. We are cutting the long tail for any combination of dimensions smaller than 0.05 See:https://github.com/wikimedia/analytics-refinery/blob/master/oozie/brows... 
[17:22:59] mforns: https://phabricator.wikimedia.org/T131127
[17:23:53] (PS1) Joal: Add manual test scripts in new test folder [analytics/aqs/deploy] - https://gerrit.wikimedia.org/r/280465
[17:24:20] milimetric: --^
[17:24:21] :)
[17:24:42] th
[17:24:43] thx
[17:24:49] like that, we could even deploy the texst script :)
[17:28:30] nuria, thanks! another thing we could do is anonymize in a way similar with what we did with pageview_hourly, I'll explain it in the task
[17:29:15] mforns: ya, i get it, anonymize on a per dimension basis
[17:29:21] nuria, yes
[17:29:23] mforns: not per "record"
[17:29:27] aha
[17:29:39] mforns: i can update if you want
[17:29:47] nuria, already writing
[17:29:50] k
[17:30:36] milimetric: eating, but, ja, will canary work for you?
[17:30:48] is it enough to have scap attempt a single deployment and run checks, before proceeding?
[17:31:07] that'll be nice to setup maybe later, but we can already do that manually
[17:31:18] deploy --limit and check manually
[17:31:45] what I'm thinking is that we should have a way to depool to make sure the deploy doesn't cause errors for 1/3 of the traffic
[17:32:00] if we had another host to deploy to that worked the same as the prod cluster, that'd be cool too
[17:33:24] Analytics-Kanban: Break down "Other" a little more? - https://phabricator.wikimedia.org/T131127#2156750 (mforns) Another action we can take is anonymizing in a less aggressive way: Today, when we find a bucket that is too small, we rewrite all the dimensions of the bucket as 'other'. This is an easier approac...
[17:38:35] (CR) Ottomata: "If you plan on using this for scap checks, you probably want any of the commands to fail, right? I *think* scap just checks the exit val " [analytics/aqs/deploy] - https://gerrit.wikimedia.org/r/280465 (owner: Joal)
[17:43:20] milimetric: ok, i'm ready to do this if you are
[17:43:31] I'm in SoS, it'll be over soon though
[17:43:32] you just need me to do the puppet and conctl part, right?
[17:43:33] ok
[17:43:34] kinda empty witih hackathon
[17:46:21] Analytics-EventLogging, Analytics-Kanban, Scap3 (Scap3-Adoption-Phase1): Stop using global pip install for eventlogging deploy - https://phabricator.wikimedia.org/T131263#2161505 (Ottomata)
[17:53:54] ok ottomata, batcave?
[17:53:59] sho
[17:55:59] mobrovac: yt?
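The long-tail rebucketing nuria and mforns debate above — combinations of dimensions whose share falls below 0.05 are rewritten to "other" before percentages are computed, so nothing is dropped from the totals — can be sketched roughly like this. The function names, data shape, and sample numbers are illustrative stand-ins, not the actual refinery Hive/oozie code linked from T131127:

```python
# Hedged sketch of the "other" rebucketing discussed above: any
# browser/OS combination below a share threshold is folded into a
# single "other" bucket, which still counts toward the percentage
# totals (rebucketing, not discarding). Names and the 0.05 threshold
# mirror the discussion; the real job is a Hive query in
# analytics-refinery, not this Python.

def rebucket_other(counts, threshold=0.05):
    """counts: dict mapping (browser, os) -> view count."""
    total = sum(counts.values())
    bucketed = {}
    for combo, views in counts.items():
        key = combo if views / total >= threshold else ('other', 'other')
        bucketed[key] = bucketed.get(key, 0) + views
    return bucketed

def to_percent(counts):
    # The "other" bucket participates in the total, as mforns notes
    # at 17:11:18.
    total = sum(counts.values())
    return {combo: 100.0 * views / total for combo, views in counts.items()}

counts = {
    ('Chrome', 'Windows'): 6000,
    ('Firefox', 'Linux'): 2500,
    ('RareBrowser', 'RareOS'): 300,  # 3% of total -> rebucketed
    ('Dillo', 'Haiku'): 1200,
}
```

This also makes nuria's concern concrete: because the threshold is applied to *daily* shares, a weekly reaggregation of the already-bucketed table is not the same as bucketing weekly totals directly.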
[17:56:19] do yall have perms to depool your stuff using confctl ?
[17:57:25] nope, i don't
[17:57:25] so when you deploy you just push one at a time and check?
[17:57:25] and revert if it goes wrong?
[17:57:25] what do you do if you want to deploy something new, but test it before its live to real users?
[17:57:35] (Joseph: we're about to deploy if you're around but I'm not pinging you in case you're eating dinner :))
[17:57:59] Analytics-Kanban: Break down "Other" a little more? - https://phabricator.wikimedia.org/T131127#2161563 (Nuria) >This would mean, though, transforming the hive job that creates the intermediate browser_general into a spark job, which we know is loonger to develop. Right, anonymizing per dimension rather than...
[17:58:18] ottomata: milimetric: we have "staging" nodes that are part of prod, but are not reachable from the outside
[17:58:25] so we first test there
[17:58:32] aye ya
[17:58:35] cool, thanks
[17:58:36] then deploy only to one canary node
[17:58:36] sounds like aqs needs something like that
[17:58:49] look at logs nad metrics for 30 mins or so
[17:58:52] and then continue
[17:59:10] but, if it's a real risky one, then we ask someone from ops to depool one node from pybal and lvs
[17:59:18] ok
[18:00:22] !log depooling aqs1001
[18:00:25] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log, Master
[18:00:55] also, before even deploying in staging, we deploy in betacluster and test the integration there
[18:01:55] http://config-master.wikimedia.org/conftool/eqiad/aqs
[18:06:06] !log repooling aqs1001
[18:06:09] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log, Master
[18:10:02] hey madhuvishy i've mostly decided to not do this docker virtualenv thing i was doing
[18:10:02] which means i don't have big changes to make
[18:10:02] but there are a few up there that i've added you as a reviewer too
[18:10:02] they are simple
[18:10:14] ottomata: okay cool. Why did you want a venv in docker?
[18:10:45] mainly because i was thinking of not using deb packages
[18:10:57] and i wanted to see if we could make the docker setup similar to pod
[18:10:59] prod
[18:11:07] we might go down the venv + wheels road one day
[18:11:11] but i think its not time
[18:11:19] so, i'm going to stick with deb packages for now
[18:11:23] especially since we have most of them working
[18:11:33] i'll just need to see if I can get tornado into our trusty apt repo
[18:11:43] aside from that, as long as we don't pip install
[18:11:45] it is fine
[18:57:42] neilpquinn: yt?
[18:58:52] nuria: sorry?
[18:59:12] neilpquinn: Hola
[18:59:42] neilpquinn: i have invited you to a meeting with asaf regarding his editing data requests, have you guys talked about those in the past?
[19:00:40] No, this is the first I've ever heard of it.
[19:04:44] Analytics: Making geowiki data public - https://phabricator.wikimedia.org/T131280#2161972 (Nuria)
[19:05:08] Analytics: Making geowiki data public - https://phabricator.wikimedia.org/T131280#2161985 (Nuria)
[19:06:31] neilpquinn: do you guys look at similar datasets?
[19:06:39] neilpquinn: i.e. editors per country?
[19:16:12] nuria: no, we've never done that since I've been here.
[19:16:42] I believe Dan handled a request from legal for per-country data on zhwiki editors several months ago.
[19:17:28] neilpquinn: ok, do you have a description somewhere at data you have compiled for editing thus far? so we know what type of data you have readily available
[19:20:55] I'm not sure what you mean...I have access to the MariaDB and Hadoop stats clusters, so anything there is theoretically available to me.
[19:21:51] I've created a bunch of different ad-hoc datasets like "edits by editors in the VE A/B test" or "active editors who stopped editing in the last 6 months" but I haven't kept a list.
[19:22:22] hey milimetric, has the deploy gone ok?
[19:22:49] joal: we deployed, started, it created the table, then we undeployed
[19:22:58] arf
[19:23:00] the reason is because icinga would've thrown alerts with 404
[19:23:05] nono, it's good, all good
[19:23:14] so you can fill the table tomorrow and then we can finish the deploy
[19:23:18] hm
[19:23:42] I think I know why
[19:23:53] the swagger spec tester thing is automaticv
[19:23:58] it tests everything that's exposed
[19:24:29] milimetric: in the yaml spec I defined the endpoint as monitored, expecting data for 19700101
[19:24:35] yep
[19:24:41] That's why :)
[19:24:51] I had not consciously handled that :)
[19:25:00] I know, but it's all good, basically we know the deploy works and doesn't break anything in pageviews
[19:25:02] Sorry for the errors
[19:25:06] the manual tests were all fine except that 404
[19:25:10] it's not a problem!
[19:25:17] ok cool :)
[19:25:32] k, good night, we'll deploy tomorrow after you insert
[19:25:37] Tomorroe I'll check the table structure, confirm everything is good, and start manual feeding
[19:25:59] milimetric: I think it'll be better to have the data inserted using your script
[19:26:26] I'll actually add the script to set up test data in the aqs-deploy folder, with your permission
[19:26:36] yeah, of course
[19:27:28] Have a good end of day milimetric, thanks for the deploy (ottomata too !)
[19:27:40] Bye a-team, tomoooooooorrow
[19:28:23] nite
[19:35:29] laters!
[19:47:34] (CR) Neil P. Quinn-WMF: Fix calculation of time-to-read (1 comment) [analytics/limn-ee-data] - https://gerrit.wikimedia.org/r/280371 (https://phabricator.wikimedia.org/T131206) (owner: Neil P. Quinn-WMF)
[19:51:49] ottomata: okay cool
[19:52:02] :)
[20:29:59] madhuvishy: gimme reviiewwwwws :)
[20:30:20] ottomata: was at lunch then 1:1 :) seeing nowww
[20:30:23] yeehaw
[20:30:27] 3 eventlogging, 2 puppet :)
[20:37:28] a-team: this is the thread where the services team is talking about the monitoring that would've prevented Monday night's bug: https://github.com/wikimedia/restbase/pull/581
[20:51:56] ottomata: the temp file for EL service thing - is trying to have the service read events from an open file right?
[20:53:11] yes
[20:53:17] i wanted to be able to test the full flow of events
[20:53:37] eventlogging comes with a file writer, so that was the easiest way to test that
[20:53:43] well
[20:53:48] madhuvishy: no not read events from an open file
[20:54:03] the change allows the Event.factory method to know how to read from an open file itself
[20:54:03] ah
[20:54:07] rather than having to read the strings out first
[20:54:12] with that added
[20:54:36] it just made testing easier, i could just pass the open file to Event.factory and then assert that the events were what I expect
[20:55:30] oh
[20:55:37] okay i think i understand
[20:58:37] madhuvishy: the hasattr was recommended from some stackoverflows, i could also wrap it in a try/except
[20:58:43] but, this is duck typing! :p
[20:59:03] ottomata: i agree - i'm just asking what happens if it does throw
[20:59:10] then it throws!
[20:59:22] if that's fine then it's okay i guess
[21:00:11] i was pointing out that it could throw
[21:01:18] madhuvishy:
[21:01:20] i could add
[21:01:21] and hasattr(f.read, '__call__')
[21:02:06] ottomata: hmmmm I thought of that - not sure if its clean though
[21:02:17] hm
[21:02:17] https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Exceptions
[21:03:09] ok madhuvishy i'll change it to a try
[21:03:14] ottomata: ya that looks better to me - although we'd just log the exception and move on?
[21:03:14] it seems that is more usual
[21:03:19] ya
[21:03:35] ah, and madhuvishy no, it shoudl not be elif
[21:03:42] each conditional is down the chain of conversion
[21:03:51] whaa
[21:03:52] if a file -> convert to strings/bytes
[21:03:59] interesting
[21:04:00] if has a decode method -> convert to utf8
[21:04:16] if finally a string, parse json
[21:04:20] if now a list -> convert to lists of events
[21:04:30] that wasn't obvious
[21:04:45] will comment
[21:07:26] ottomata: wish we could write (-> data read-file read-bytes read-json read-list) like we can in clojure :D
[21:08:45] hehe
[21:10:48] madhuvishy: hm
[21:16:57] would this make more sense madhuvishy?
[21:16:57] https://gist.github.com/ottomata/992369016fada4c17830d48cdd6216ac
[21:19:05] ottomata: looking
[21:22:20] ottomata: yeah looks good to me - I might add a note that says 'data' is being transformed further in each steps
[21:22:23] step*
[21:22:48] k
[21:25:00] k madhuvishy https://gerrit.wikimedia.org/r/#/c/278979/9
[21:26:22] oh i got test failuers hang on
[21:26:24] ottomata: ah i think
[21:26:35] you can't do AttributeError, TypeError
[21:26:53] it will thing TypeError is the name of the exception object
[21:27:26] (AttributeError, TypeError)
[21:27:29] is right
[21:27:29] hmm oh python 3 thing
[21:28:42] yes
[21:31:01] madhuvishy: jenkins is happier now
[21:31:09] ottomata: okay I +2-ed
[21:31:19] feel free to merge in whatever order you want
[21:31:24] because you have 3 things
[21:32:27] yeah the 3rd we shoudl wait until the puppet stuff is merged
[21:32:36] danke!
[21:32:43] would appreciate review on the puppet stuff too
[21:32:59] 2 changes
[21:33:00] https://gerrit.wikimedia.org/r/#/c/280486/
[21:33:09] https://gerrit.wikimedia.org/r/#/c/280497/
[21:35:08] ottomata: cool looking
[21:35:33] ottomata: are we making any changes for the puppet autoload module layout stuff?
[21:36:03] (not asking based on looking at your patch, just in general)
[21:36:04] autoload module layout?
[21:36:09] ya
[21:36:12] oh like for the role classes?
[21:36:15] yup
[21:36:21] haven't been thinking about it
[21:36:25] maybe eventually!
[21:39:13] https://phabricator.wikimedia.org/T119042 stuff
[21:39:15] okay
[22:15:58] Analytics, Wikipedia-iOS-App-Product-Backlog, iOS-app-feature-ANALYTICS, iOS-app-v5.0.2-Rubiks: Fix iOS uniques in mobile_apps_uniques_daily after 5.0 launch - https://phabricator.wikimedia.org/T130432#2163469 (JMinor) Yes, I'm not sure what we can do here, besides make a bunch of guesses. We can m...
[22:16:13] Analytics, Wikipedia-iOS-App-Product-Backlog, iOS-app-feature-ANALYTICS: Fix iOS uniques in mobile_apps_uniques_daily after 5.0 launch - https://phabricator.wikimedia.org/T130432#2163470 (JMinor)
[22:16:32] Analytics, Wikipedia-iOS-App-Product-Backlog, iOS-app-feature-ANALYTICS: Fix iOS uniques in mobile_apps_uniques_daily after 5.0 launch - https://phabricator.wikimedia.org/T130432#2135848 (JMinor) a:JMinor>None
[23:23:40] a-team: This got published! https://blog.wikimedia.org/2016/03/30/unique-devices-dataset/
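The deliberately non-elif conversion chain ottomata describes at 21:03–21:04 (file -> string contents, bytes -> utf-8 str, str -> parsed JSON, JSON list -> events), plus the `(AttributeError, TypeError)` tuple catch madhuvishy corrects at 21:27, can be sketched roughly as below. `factory` here is a simplified stand-in for eventlogging's `Event.factory`, not the real implementation from the gerrit patch:

```python
# Hedged sketch of the cascading (deliberately non-elif) conversion
# chain described at 21:03-21:04. Each `if`/`try` step transforms
# `data` one stage further down the chain; a later step can therefore
# act on the result of an earlier one, which is why elif would be
# wrong. `factory` is a simplified stand-in for eventlogging's
# Event.factory.
import io
import json

def factory(data):
    # If it's a file-like object, read its contents (duck typing via
    # try/except, as settled on in the review above).
    try:
        data = data.read()
    except AttributeError:
        pass  # not file-like; continue with data unchanged

    # If it's bytes, decode to utf-8 text. Note the tuple form: in
    # Python 3, `except AttributeError, TypeError` is invalid syntax,
    # hence (AttributeError, TypeError).
    try:
        data = data.decode('utf-8')
    except (AttributeError, TypeError):
        pass  # no usable .decode; already text

    # If it's now a string, parse it as JSON.
    if isinstance(data, str):
        data = json.loads(data)

    # If it's now a list, treat each item as an event dict.
    if isinstance(data, list):
        data = [dict(event) for event in data]

    return data

events = factory(io.StringIO('[{"event": "click"}, {"event": "view"}]'))
```

Passing an open file, raw bytes, or a JSON string all land in the same place, which is what made testing the full flow of events easy in the discussion above.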