[08:13:37] Analytics-Kanban, Operations, Traffic, HTTPS: OCG failing with new GlobalSign intermediate workaround - https://phabricator.wikimedia.org/T148076#2715534 (MoritzMuehlenhoff) Beside ocg we have a other precise/trusty systems not using nodejs 4: - sca1* still has it installed, but the only remainin... [08:15:25] joal: o/ [08:15:32] elukey: \o [08:15:40] if you have time I'd have some newbie questions about hive/beeline [08:15:41] :) [08:15:53] you're definitely not a newbie ;) [08:15:59] I have time :) [08:16:02] in making queries yessss [08:16:18] :) [08:16:20] so I already killed two yarn apps that I sent because they were taking ages [08:16:29] then I went for a simpler one [08:16:30] select count(*) from webrequest where webrequest_source = 'upload' and year = 2016 and month = 10 and day = 13 and hour = 8 and hostname = 'cp3045.esams.wmnet' and dt = '-' and user_agent = 'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)'; [08:16:41] yesterday a query like this one took minutes [08:16:48] meanwhile now it takes ages [08:17:00] elukey: Have you ckeck cluster load? [08:17:10] afaics it looks normal [08:17:23] but I also saw Hadoop job information for Stage-1: number of mappers: 73; number of reducers: 1 [08:17:37] so not sure if the reducers needs to be tuned? [08:17:39] elukey: How do you check cluster load? [08:18:17] ehm now that I double check in Yarn the green line is at 96% [08:18:22] I checked only default [08:18:23] :) [08:18:32] ahhhh snap [08:18:35] What is default? [08:18:50] root.default, the orange line [08:19:03] ah, right [08:19:05] in https://yarn.wikimedia.org/cluster/scheduler [08:19:10] Let's discuss queues in yarn :) [08:19:23] scheduler UI is definitely the right place to look at [08:19:59] We have 5 queues in our resource management system, one them being deprecated - So we have 4 useful queues [08:20:33] They are, in decreasing prioty order: essential, production, priority, default [08:21:02] and we dispach user or regular jobs in these queues [08:21:04] so IIRC [08:21:34] production is for our hadoop jobs, meanwhile default is for things like user queries (like mine and yours in spark shell) [08:21:38] elukey: camus jobs (only thos ones) get's run in esential [08:21:47] ah didn't know this [08:21:59] this essential queue has preemption capactiy to make sure camus doesn't get late [08:22:14] and it has the most by default resource allocation for the same reason [08:22:38] and camus is running now [08:22:47] production queue receives all hdfs-user launched jobs (basically, oozie) [08:23:09] this queue doesn't have preemption but still has a big protion of the resources [08:23:12] I got this one at least :D [08:23:33] priority is a queue to allow fasttracking some user jobs [08:24:01] when reqgular user - default queue - is full, you can fasttrack important jobs using the priority queue [08:25:05] I don't like the idea that resource doesn't get shared equaly, but some job for Fundraising for instance are sometime to be fasttracked (and ellery doesn't have production rights) [08:25:06] this makes complete sense [08:25:29] I mean the whole explanation [08:25:36] so it is not my query that is slow [08:25:42] but the cluster is under load :D [08:25:53] now all of us regular users end up in the default queue, with it's small amount of resources if prod jobs are running [08:26:13] Indeed - Cluster is loaded [08:26:58] You can even know who loads the cluster - Look at the currently running job list, then running containers column [08:27:08] 
yep yep [08:27:15] You quickly scan for that and queue column, and that's it :) [08:27:37] thanksssssss [08:27:42] No prob [08:28:13] elukey: When you have small queries, you can cheat and run them in prod or priority queue [08:29:04] how can I do it? [08:30:56] elukey: SET mapred.job.queue.name=priority; IIRC [08:45:10] joal: https://goo.gl/IoFy58 [08:46:16] I am trying to see if yesterday's impact is visible in pivot [08:46:19] elukey: Mac users are mostly on Safari :) [08:46:36] yes but the impact was mostly for Mac Os sierra [08:46:43] and around 4/5pm you can see a dop [08:46:45] *dip [08:47:58] elukey: right [08:48:24] as I understood it all webkit based browsers (chrome, safari, opera) were impacted on Sierra [08:48:40] yeah but Safari seeems the only heavy impacted [08:48:41] elukey: When looking at a wider time range (a few days), the dip is less of an evidence, but it's true it's here [08:49:04] elukey: I think the mostly impacted is AppleMail (pink one) [08:49:24] I see impact on safari, chrome and AppleMail [08:49:34] from an explanation by mark, that is an intermediary certificate that got revoked and most cert libs do not validate it except for the recent OS X Sierra [08:49:57] wow [08:50:00] Interesting [08:50:07] (CR) Hashar: "check experimental" [analytics/mediawiki-storage] - https://gerrit.wikimedia.org/r/315660 (owner: Hashar) [08:51:33] (CR) Hashar: "Nuria mentioned on my dashiki change that you are in the process of migrating from npm/bower to "yarn" ( https://gerrit.wikimedia.org/r/#/" [analytics/mediawiki-storage] - https://gerrit.wikimedia.org/r/315664 (https://phabricator.wikimedia.org/T148023) (owner: Hashar) [08:52:57] (CR) Hashar: [C: 1] "I dont know bower at all, but it seems the lib published on https://libraries.io/bower/mediawiki-storage has metadata filled from /bower.j" [analytics/mediawiki-storage] - https://gerrit.wikimedia.org/r/315660 (owner: Hashar) [08:53:21] joal: pick that with a grain of salt though cause I really have no idea how CA/certs work [08:53:43] hashar: I actually can't do otherwise given my understanf of this :) [08:54:04] s/understanf/understanding [08:54:20] mforns_: good morning, I think the patch I made for analytics/mediawiki-storage can land and you will get some basic CI coverage :D [08:54:37] mforns_: the dashiki patch is a different story, hitting a wall when trying to require(lodash) :/ [08:55:02] joal: if you are curious, you can try poking bblack this afternoon. He is an excellent teacher :] [08:56:03] hashar: Why not! Thanks for the idea, I'll probably suggest elukey, as he know the man already :) [09:04:29] (CR) Hashar: "Nuria wrote:" [analytics/dashiki] - https://gerrit.wikimedia.org/r/315659 (https://phabricator.wikimedia.org/T148019) (owner: Hashar) [09:08:41] Analytics-Kanban, Operations, Traffic, HTTPS: OCG failing with new GlobalSign intermediate workaround - https://phabricator.wikimedia.org/T148076#2715606 (akosiaris) >>! In T148076#2714544, @Volans wrote: > FYI it's worth noticing that the upgrade of NodeJS for this service looks a bit broken by... 
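Putting joal's queue tip together with elukey's earlier query gives a runnable sketch. The property name is as joal recalls it above ("IIRC"); depending on the Hive/Hadoop version the equivalent setting may be spelled mapreduce.job.queuename, so treat that as an assumption to verify:

    -- Route a small ad-hoc query to the priority queue instead of the loaded default queue.
    SET mapred.job.queue.name=priority;

    SELECT COUNT(*)
    FROM webrequest
    WHERE webrequest_source = 'upload'
      AND year = 2016 AND month = 10 AND day = 13 AND hour = 8
      AND hostname = 'cp3045.esams.wmnet'
      AND dt = '-'
      AND user_agent = 'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)';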
[09:17:04] (CR) Hashar: ">> >Uncaught Error: Module name "lodash" has not been loaded yet for context" [analytics/dashiki] - https://gerrit.wikimedia.org/r/315659 (https://phabricator.wikimedia.org/T148019) (owner: Hashar) [10:15:58] Analytics-Kanban, Operations, Traffic, HTTPS, Patch-For-Review: Windows 10 & MacOS Sierra Cert errors - https://phabricator.wikimedia.org/T148045#2715846 (BBlack) [10:16:02] Analytics-Kanban, Operations, Traffic, HTTPS: GlobalSign intermediate updates for one-offs - https://phabricator.wikimedia.org/T148069#2715843 (BBlack) Open>Resolved a:BBlack Resolving for now, as we've covered what we can cover here in Ops. We'll need this ticket as a reference if w... [10:47:34] Analytics, Documentation, Easy: Mark documentation about limn as deprecated - https://phabricator.wikimedia.org/T148058#2715871 (Aklapper) [10:55:13] joal: registered for ApacheCon \o/ [10:57:05] Analytics-Kanban, Operations, Traffic, HTTPS, Patch-For-Review: Windows 10 & MacOS Sierra Certificate errors due to GlobalSign - https://phabricator.wikimedia.org/T148045#2715888 (Aklapper) [11:07:10] great elukey :) [11:30:52] joal: still there ? [11:30:58] yup [11:31:03] wasup? [11:31:14] do you have time to listen to my ramblings? [11:31:19] 10 mins top [11:31:21] I do :) [11:31:27] thanksss [11:32:16] so yesterday I created some instance of varnishlog checking for VSL timeouts [11:32:21] on upload caches [11:32:30] and up to now I only got weird results [11:32:47] namely logs that I haven't seen before like [11:33:06] Client Req IP [11:33:20] --> request XID [11:33:26] --> request XID2 [11:33:27] etc.. [11:33:45] basically varnishlog is telling me that one big client request was ending up in multiple ones [11:33:53] eventually hitting the timeout [11:33:56] (1500 seconds) [11:34:09] that is a lot [11:34:28] but I have a theory that might explain what is happening [11:34:54] hm - not sure I understand the full picture - How are you sure your request gets splitted? [11:35:36] well the only think that I know is that "subrequests" are tagged as Link [11:35:50] and from https://www.varnish-cache.org/docs/trunk/reference/vsl.html [11:36:00] Links this VXID to any child VXID it initiates. [11:36:22] VIXD is a number for a request basically [11:37:22] so what if clients like the facebook bot open one connection asking for keep alive, then make another one not hitting the timeout, then another one, etc... [11:37:56] up to the point in which you hit the overall VSL timeout and the last / ongoing request(s) get logged with dt:'-' ? 
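A rough sketch of the kind of varnishlog capture elukey describes above (the exact flags he used are not in the log, so this is an assumption):

    # Group records per TCP session and keep only sessions whose records contain a
    # "VSL timeout" entry; -g session is what makes the Link lines and the timeout
    # show up together under one "<< Session >>" record.
    sudo varnishlog -g session -q 'VSL ~ "timeout"'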
[11:37:56] elukey: I think I don't understand the sub-request notion correctly [11:38:31] I think it is only a way that Varnish has to link requests [11:38:40] usually you see it like [11:38:44] client request [11:38:49] ---> backend request [11:38:57] response [11:39:00] elukey: what I undestand is that there would be a parent requests handling many children requests - Cheildren are fast enough, but since parent is not closed before all children succeed, it end up in timeout [11:39:11] yes [11:39:32] Ok, but I dodn't even know it was possible to have a prent/children relation for requests [11:39:35] maybe the first request is not special (the parent) but just the first one opened with keep alive [11:39:46] well me too [11:39:49] :D [11:40:01] this is why I need your advice to know if I am crazy or not [11:40:07] elukey: huhuhu [11:40:19] mmmm maybe I can try on my vagrant machine [11:40:34] and see if I can see something similar with keep alive [11:40:34] elukey: I really don't know why face [11:41:20] end of previous / book would not close a regular request [11:44:37] well my theory is that it is how Varnish represents a long series of keep alives [11:45:31] going to do some tests , probably it is not the right one [11:45:36] thanksss [11:46:08] elukey: I have not the feeling I have helped ... [11:46:20] elukey: I think nuria might a better person than me to talk about that [11:46:37] elukey: She has done more then me in web oriented thigs [11:47:12] nono you gave me a lot of ideas [11:47:27] For example why Varnish links the requests in that way [11:47:41] I need to repro first then maybe I can progress my theory :) [11:50:15] ok joal I got it [11:50:30] :) [11:50:36] the initial "parent" request is called Session, namely the TCP connection [11:50:42] that links together a lot of HTTP requests [11:50:48] makes complete sense [11:54:20] this is an example from my local vagrant: [11:54:25] * << Session >> 33256 [11:54:26] - Begin sess 0 HTTP/1 [11:54:26] - SessOpen ::1 39233 :6081 ::1 6081 1476446038.802922 23 [11:54:26] - Link req 33257 rxreq [11:54:26] - VSL timeout [11:54:28] - End synth [11:54:42] the one that I am try to repro has many more "Link" tags [11:54:50] ending up in timeout [11:54:57] makes a lot of sense indeed !!!!! [11:55:36] For low-level network optimisation, TCP connection is recycled --> Botsa for instance !!! [12:00:16] thanks joal, probably this is not the final answer but you unblocked me :) [12:00:34] * elukey lunch! [12:01:30] elukey: usual rubberducking :) [12:33:00] mforns_: I'm deploying or attempting to deploy the weekly squish thing [12:36:57] hey milimetric, do we try to solve the sqoop thingy? [12:37:20] joal: yeah, I should do this deploy first, I've got crons disabled and stuff [12:37:37] joal: but my morning's all free so we can do it in a bit when this works [12:37:38] k [12:56:08] (PS2) Milimetric: Report session funnel weekly instead of daily [analytics/limn-edit-data] - https://gerrit.wikimedia.org/r/315829 (https://phabricator.wikimedia.org/T147492) [12:57:14] (CR) Milimetric: [C: 2 V: 2] "Merging with a lot of hesitation. This reduces the file size but loses some data. I will keep the original data in my home directory and" [analytics/limn-edit-data] - https://gerrit.wikimedia.org/r/315829 (https://phabricator.wikimedia.org/T147492) (owner: Milimetric) [13:00:44] grrrrrrr [13:03:13] elukey: would you be able to install path.py for python 2 on stat1003? 
[13:03:25] I'd normally ask ottomata [13:03:33] I can't believe it's not installed [13:04:02] milimetric, team, hi! [13:04:05] locally it's "pip install path.py" and I'm kind of stuck mid-deploy without it... I had no idea it wasn't built-into python [13:04:08] starting now [13:04:14] mforns_: hey [13:04:27] milimetric: checking if we have it on the apt repos [13:04:27] mforns_: do you know of good built-in path.py alternatives in python? [13:04:30] I always used path... [13:04:38] like, "from path import path" [13:04:47] milimetric, os.path ? [13:05:23] I didn't look at it a lot but it seemed very bare bones [13:05:26] I don't know if it does what you need though [13:05:34] ok, I'll look at it more closely [13:05:48] there is also os.dirlist to list the files in a dir [13:06:07] or os.walk, a bit more sofisticated [13:06:34] probably it is python-unipath on apt repo [13:06:41] python-unipath - object-oriented approach to file/pathname manipulations [13:06:47] uh... that doesn't sound right el [13:07:01] elukey: never mind, I'll slum it with this os.path thing [13:07:06] it's ...ok-ish :) [13:07:06] version Version: 0.2.1+dfsg-1 [13:07:18] elukey: yeah, not that [13:07:20] no worries [13:07:30] okok [13:16:22] (PS1) Milimetric: Use os.path instead of path.path [analytics/limn-edit-data] - https://gerrit.wikimedia.org/r/315946 [13:16:33] (CR) Milimetric: [C: 2 V: 2] Use os.path instead of path.path [analytics/limn-edit-data] - https://gerrit.wikimedia.org/r/315946 (owner: Milimetric) [13:18:19] mforns_: I realized there's a pretty big problem with that filter I wrote [13:18:24] the visualization first builds a tree [13:18:28] aha [13:18:35] and then crops actions smaller than 0.08% [13:18:50] so they have to be smaller than the other slices at the same level of the tree [13:18:59] not smaller than the total sum (which is what the python does) [13:19:03] aha [13:19:10] if we want to filter properly, we'd have to create the tree in python [13:19:20] mmmmm [13:19:20] and so I decided to ignore it and write an ugly hack comment :( [13:19:51] I'm about to execute it on the real output, but I saved the old stuff in my directory [13:19:53] what do you think? [13:20:15] what do you mean with ugly hack comment? [13:20:27] oh, like a TODO [13:20:40] https://github.com/wikimedia/analytics-limn-edit-data/blob/master/scripts/aggregate-and-filter-sessions.py#L44 [13:22:07] milimetric, I see, I think it's OK, but we can open the dashboard before the changes get rsync'ed [13:22:19] so that we are able to compare and see the differences in the chart, no? [13:22:33] and see the difference? It won't matter for the big date ranges, should be fine then, but yeah, let's do taht [13:22:42] (I'll do that) [13:23:43] milimetric, I see, cause I was not dimensioning how this small difference in the 0.08% would impact the chart [13:23:53] how much [13:24:06] * [13:24:10] yep [13:24:51] Analytics-Kanban, Operations, Traffic, HTTPS, Patch-For-Review: Windows 10 & MacOS Sierra Certificate errors due to GlobalSign - https://phabricator.wikimedia.org/T148045#2716594 (BBlack) Resolving this. The mitigation deployed yesterday (alternate intermediate->root chain) seems to have wor... 
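For the path.py question above, a minimal sketch of doing the same things with only the standard library (the directory names here are hypothetical, not the deploy script's real paths):

    import os

    base = '/srv/limn-edit-data'                # hypothetical directory
    sessions = os.path.join(base, 'sessions')   # joining, instead of path / 'sessions'
    if os.path.isdir(sessions):
        for name in os.listdir(sessions):       # list a directory's entries
            print(os.path.join(sessions, name))
    # os.walk(sessions) yields (dirpath, dirnames, filenames) recursively,
    # the "a bit more sophisticated" option mforns mentions.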
[13:28:21] Analytics-Kanban, Operations, Traffic, HTTPS, Patch-For-Review: Windows 10 & MacOS Sierra Certificate errors due to GlobalSign - https://phabricator.wikimedia.org/T148045#2716627 (faidon) Open>Resolved a:BBlack [13:32:21] well, it's super crazy faster, mforns, which makes sense [13:32:28] but it does show a bit of distortion in data [13:32:38] oh! [13:32:39] I'll upload a couple screenshots [13:32:49] * mforns looks curious [13:33:06] https://usercontent.irccloud-cdn.com/file/S8qeZCN8/after-squish.png [13:33:17] https://usercontent.irccloud-cdn.com/file/cXdIwnmq/before-squish.png [13:34:28] milimetric, it's almost the same! :D [13:34:42] I mean... that's one perspective :) [13:34:46] the other is that it's not [13:34:47] hehe [13:35:12] I'll write them an email and see what they think. Do they want to trade the speed for the loss of data. [13:35:21] going forward the data will be back to normal with no squishing [13:35:25] *no filter [13:35:29] aha [13:36:09] but man, it's practically the same, if it was a tier 1 service, for sure we'd invest more time, but... [13:36:57] milimetric, we could also do the tree in python, I don't think it will be very difficult [13:37:23] with defaultdicts no? [13:37:54] dunno, I think it's fine [13:39:03] mforns: probably yea, but then it probably wouldn't save much space :) [13:39:10] like, the mistake is useful? [13:39:13] milimetric, aha right [13:39:23] I donno, I agree, I sent the email, we'll see what they think [13:39:28] ok [13:39:39] fyi I have a backup of the old sessions in /home/milimetric/sessions.backup and /tmp/sessions.old [13:39:47] I'll remove them if they're fine with this [13:39:56] joal: ok, sqoop! ? [13:40:02] milimetric: Yay :) [13:40:31] yay :) [13:40:44] soon to be "$%^&*()_(*&^" :) [13:40:51] batcave? [13:40:58] milimetric: OMW! [14:58:07] (PS18) Joal: [WIP] Join and denormalize all histories into one [analytics/refinery/source] - https://gerrit.wikimedia.org/r/307903 (owner: Milimetric) [15:00:12] (CR) jenkins-bot: [V: -1] [WIP] Join and denormalize all histories into one [analytics/refinery/source] - https://gerrit.wikimedia.org/r/307903 (owner: Milimetric) [15:11:54] (PS19) Joal: [WIP] Join and denormalize all histories into one [analytics/refinery/source] - https://gerrit.wikimedia.org/r/307903 (owner: Milimetric) [15:18:55] mforns: http://imgur.com/gallery/5tLPv1x [15:19:04] milimetric, looking [15:20:38] milimetric, hehehe awesome [15:22:38] how many times did he shoot that schene? 
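On the idea of building the tree in python so the 0.08% crop happens per level (as the visualization does) rather than against the grand total, a hedged sketch, assuming a hypothetical node shape since the real session format isn't shown here:

    def prune(node, threshold=0.0008):
        # Assumed node shape: {'count': int, 'children': {action_name: node}}.
        # Drop children contributing less than 0.08% of their siblings' combined count,
        # i.e. the same per-level crop the visualization applies, instead of comparing
        # against the total like the current filter script does.
        children = node.get('children', {})
        level_total = sum(child['count'] for child in children.values()) or 1
        node['children'] = {
            name: prune(child, threshold)
            for name, child in children.items()
            if child['count'] / float(level_total) >= threshold
        }
        return node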
hehehe [15:46:44] Analytics-Kanban, Operations, Traffic, HTTPS, and 2 others: Windows 10 & MacOS Sierra Certificate errors due to GlobalSign - https://phabricator.wikimedia.org/T148045#2716979 (Aklapper) [15:54:34] (PS5) Mforns: build run karma test with just "npm test" [analytics/mediawiki-storage] - https://gerrit.wikimedia.org/r/315664 (https://phabricator.wikimedia.org/T148023) (owner: Hashar) [15:55:01] mforns: thanks :] [15:55:25] hashar, it was just 1 typo [15:55:35] mforns: nuria mentionned you are moving to "yarns" [15:55:39] yarn [15:55:40] actually in a comment, so, minor nit-pick [15:55:55] nuria, yes, we created that task yesterday in tasking meeting [15:56:05] so it is up to your team to land those patches or hold them till yarn is a thing [15:56:05] sorry, not nuria, hashar [15:56:21] sounds like yarn is more or less back compatible [15:56:24] so maybe it will just work [15:56:43] we will probably want to add yarn to the CI slaves as well [15:56:52] so you get a job that does something like yarn install && yarn test [15:56:56] hashar, sure, I think we can merge them now [15:57:05] aha [15:57:11] the one on dashiki fails due to some random require(loadash) issue [15:57:24] but that is not something I have the capacity to even start investigate :( [15:57:51] mmm aha, will look into it [16:00:05] (CR) Mforns: [C: 2 V: 2] build run karma test with just "npm test" [analytics/mediawiki-storage] - https://gerrit.wikimedia.org/r/315664 (https://phabricator.wikimedia.org/T148023) (owner: Hashar) [16:01:52] https://gerrit.wikimedia.org/r/315968 analytics/mediawiki-storage enable npm job [16:01:52] :D [16:02:25] hashar, when changing the node package to private, how can dashiki pull it with bower? [16:03:33] tis published via /bower.json ? :] [16:04:13] (CR) Hashar: "recheck" [analytics/mediawiki-storage] - https://gerrit.wikimedia.org/r/315664 (https://phabricator.wikimedia.org/T148023) (owner: Hashar) [16:04:22] mforns: but maybe I am entirely wrong :( [16:04:28] I have no idea how bower works [16:04:37] then the package.json version is outdated, so unlikely it is being used [16:04:56] hashar, oh, yes, the node is only for testing... [16:05:07] ok, makes sense [16:05:27] Analytics-Dashiki, Continuous-Integration-Config, Patch-For-Review: Add CI job for analytics/mediawiki-storage - https://phabricator.wikimedia.org/T148023#2717037 (hashar) Open>Resolved a:hashar With help of @mforns :] [16:05:47] might want to ask someone for review [16:05:57] the CI job is enabled [16:06:01] so npm will no run for all patches [16:06:10] if that is an issue, poke #wikimedia-releng about it :] [16:06:15] hashar, awesome, thanks! [16:06:22] ok [16:06:35] or maybe cherry pick https://gerrit.wikimedia.org/r/#/c/315664/ to tip of master branch [16:06:36] +2 it [16:06:50] and the other change that change package.json version + set private can wait for later [16:06:59] kids duty! I am off [16:07:13] hashar, thanks! 
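For reference, the commands behind the CI wiring hashar describes are roughly (the yarn variant is an assumption until yarn is actually added to the CI slaves):

    npm install && npm test      # roughly what the new npm job for analytics/mediawiki-storage runs today
    yarn install && yarn test    # the equivalent once the slaves have yarn, per hashar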
[16:09:07] (PS6) Mforns: build run karma test with just "npm test" [analytics/mediawiki-storage] - https://gerrit.wikimedia.org/r/315664 (https://phabricator.wikimedia.org/T148023) (owner: Hashar) [16:10:20] (CR) Mforns: [C: 2 V: 2] package.json: mark as private, drop version [analytics/mediawiki-storage] - https://gerrit.wikimedia.org/r/315660 (owner: Hashar) [16:10:59] (CR) Mforns: [C: 2 V: 2] build run karma test with just "npm test" [analytics/mediawiki-storage] - https://gerrit.wikimedia.org/r/315664 (https://phabricator.wikimedia.org/T148023) (owner: Hashar) [16:22:45] :] [16:22:56] mforns: thanks!!! I am off for real now [16:23:08] hashar, thank you@ [16:23:09] ! [16:24:25] (PS21) Milimetric: Script sqooping mediawiki tables into hdfs [analytics/refinery] - https://gerrit.wikimedia.org/r/306292 (https://phabricator.wikimedia.org/T141476) [16:28:15] milimetric, strainu's web is still not there for me.. maybe because of my location? [16:30:07] joal: sqoop is running, going in meeting will ping if done [16:34:23] have a good weekend people! [16:34:28] * elukey afk [16:49:00] Bye elukey ! [17:02:17] joal: it finished sqooping, /user/milimetric/wmf/data/raw/mediawiki/tables [17:02:26] it's got simplewiki and hawiktionary [17:02:29] Great mili ! [17:02:34] milimetric sorry [17:02:46] I'm going to try a job against those :) [17:02:55] milimetric: --^ [17:03:01] ottomata: Hi ? [17:03:13] great, lemme know if you have trouble otherwise i'll run it for good [17:03:25] joal: haiya [17:03:48] ottomata: I run into a known limit on spark : too many open files [17:03:55] ottomata: Could we bump up the ulimit? [17:04:59] (PS20) Joal: [WIP] Join and denormalize all histories into one [analytics/refinery/source] - https://gerrit.wikimedia.org/r/307903 (owner: Milimetric) [17:05:05] ja for sure [17:05:13] That'd be great [17:05:20] joal: not really working today ( :) ), so not today? [17:05:22] would take a bit [17:05:23] ottomata: I assume we'd have to restart processes [17:05:26] yeah [17:05:28] yeah [17:05:36] can you make a ticket? [17:05:38] no bother ottomata, maybe early next week? [17:05:38] that'll take a bit [17:05:40] yeah [17:05:41] Sure [17:05:45] Thanks mate [17:05:46] make a ticket, remind me on monday :) [17:05:50] :) [17:05:55] Have a good weekend ! 
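A sketch of what the fd-limit check and bump involve (the process match and the exact mechanism used on the Hadoop workers are assumptions; the real change is tracked in the ticket filed just below):

    # See the limit the running YARN NodeManager actually got:
    grep 'Max open files' /proc/$(pgrep -o -f NodeManager)/limits
    # Raising it typically means bumping the nofile limit for the service
    # (e.g. LimitNOFILE= in the systemd unit, or /etc/security/limits.conf)
    # and restarting the NodeManagers, which is why it "would take a bit".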
[17:09:22] Analytics, Analytics-Cluster: Bump up fd ulimit on hadoop workers - https://phabricator.wikimedia.org/T148206#2717332 (JAllemandou) [17:28:54] mforns: I totally forgot to invite you: https://hangouts.google.com/hangouts/_/wikimedia.org/labsdb [17:29:16] even though you said you were interested, sorry [17:29:25] that meeting's happening now [17:29:37] but I'm not sure who will make it [18:19:59] Analytics, Mobile-Content-Service, Pageviews-API, Wikipedia-Android-App-Backlog, Spike: [Feed] Establish criteria for blacklisting likely bot-inflated most-read articles - https://phabricator.wikimedia.org/T143990#2717545 (bearND) [18:24:59] milimetric: sorry I didn't make it earlier - I've stopped paying attention to my calendar since switching to one meeting a week mode [18:26:27] time to schedule that second weekly meeting madhuvishy :D: [18:26:32] ha ha [18:30:59] milimetric: I don't know anything about actual viability but I found this to be interesting [18:30:59] http://jroller.com/dschneller/entry/mysql_replication_using_blackhole_engine [18:31:05] http://dev.mysql.com/doc/refman/5.7/en/blackhole-storage-engine.html [18:31:32] :) madhuvishy no problem, we were just getting to the good part when you arrived [18:31:52] I anticipate most of the conversation will be on the task I'm updating now (https://phabricator.wikimedia.org/T146444) [18:32:18] chasemp: very cool, I'll check that out [18:32:35] milimetric: thanks :) [18:33:45] I forgot you guys have probably been in a thousand meetings together :) [18:35:02] chasemp: that seems accurate
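For the blackhole-engine links chasemp shares above, the core idea in one hedged line of SQL (the table is purely illustrative, not anything from the labsdb discussion): on a relay host the table uses the BLACKHOLE engine, so writes are discarded locally but still recorded in the binary log, and downstream replicas with a real engine receive the data while the relay itself stores nothing.

    CREATE TABLE relay_only_example (
      id INT PRIMARY KEY,
      payload VARCHAR(255)
    ) ENGINE=BLACKHOLE;  -- accepts writes, keeps no rows, still binlogs them for replicas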