[06:55:59] Hi a-team - early morning [06:57:58] elukey: I messed up deploying yesterday because of stat1002 free space - I had checked before, but look like 5.5G was not enoug [07:00:37] joal: morning! I saw your ping, fixing it [07:00:47] hopefully with stat1005 we'll not have this problem anymore [07:00:51] I'm sorry elukey for the not- nice morning [07:01:03] elukey: for once it's a real person, not oozie ;) [07:02:39] done! [07:02:59] Thanks mate, redeploying [07:03:34] !log Deploying refinery with scap (after yesterday's failure) [07:03:35] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [07:04:11] Arfff, forgot the deployment message :( [07:04:17] * joal facepalm [07:06:20] elukey: same problem as last time: deploy was successful on stat1002 since folder existed, but very much not complete [07:07:15] use the force joal [07:07:24] --force --limit stat1002 [07:07:44] the force is weak with me elukey, you know that :) [07:08:10] do you want me to do it? [07:08:19] man ... This deploy is not willing to happen ... failed again [07:08:28] trying [07:08:30] ok? [07:08:39] elukey: space issue again on another host [07:08:43] aahahahah [07:08:49] elukey: please do it, [07:08:55] which one? [07:09:00] elukey: I feel like it's against me [07:09:09] analytics1003 [07:09:14] I think we need a cronjob that clears old revisions [07:09:18] I agre [07:09:46] I don't get it: 27G fre on anaytics1003 [07:09:50] yeah [07:10:20] and 43G free on stat1004 [07:10:27] That's weird [07:10:39] deploying on stat1002 now [07:10:52] elukey: ok [07:11:01] ELukey, I wonder if it's not an archiva issue [07:13:57] very weird, stat1002 fails .. seems a git-fat error + rsync error [07:14:11] joal: where did you get the error for an1003? [07:14:42] elukey: last line of deploy process was: 07:06:36 2 of 2 default targets failed, exceeding limit [07:15:50] rsync: failed to connect to archiva.wikimedia.org (208.80.154.18): Connection timed out (110) [07:16:52] elukey: Yes I got the same - But the last line diverted me from the real error [07:18:09] I am retrying the deployment to stat1002 [07:20:07] joal: they don't want to let you go [07:20:10] :D [07:20:22] elukey: well, if that's a message, let's not do it then :) [07:21:17] I am checking on stat1002 now [07:21:31] I can try the hammer [07:22:33] ok let's do it [07:26:53] nope didn't work [07:27:52] hm - I have a feeling it could be that archiva would have been recycled (revooted, or whatever word I don't know about renewing a VM) [07:27:57] Could it be that? [07:29:04] ah did meitnerium.wikimedia.org got recycled? [07:29:15] I don't know [07:29:19] * elukey checks [07:30:15] 11:48 renumber dubnium fermium meitnerium ununpentium [07:30:18] yepp [07:30:32] in theory it should have preserved the IP [07:30:54] the first thing that comes to mind is the vlan firewall [07:31:41] elukey https://www.youtube.com/watch?v=t5CAQU6KsMI [07:32:00] the IP changed! [07:32:22] * elukey dances watching the video [07:32:57] -meitnerium 5M IN A 208.80.154.73 ; VM on the ganeti01.svc.eqiad.wmnet cluster +meitnerium 5M IN A 208.80.154.18 ; VM on the ganeti01.svc.eqiad.wmnet cluster [07:33:06] so it is the vlan [07:37:28] joal: since we are lucky there is a big network maintenance event in a bit (codfw) that will probably delay our fix [07:39:05] yay elukey - the universe is against this deploy ... 
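The failed deploys above came down to two separate problems: too little free space on the target, and then a git-fat/rsync timeout because archiva's IP had changed while the analytics VLAN firewall still allowed only the old address. A minimal sketch of the pre-checks and the forced, host-limited redeploy being discussed — the hostname and the --force --limit flags are the ones quoted in the conversation, and the exact scap invocation may differ:

```bash
# Check free space on the target before deploying (the first failure: ~5.5G free was not enough)
ssh stat1002 'df -h'

# Confirm the target can actually reach archiva; the second failure was
# "rsync: failed to connect to archiva.wikimedia.org ... Connection timed out"
# (port 443 is assumed here; any port archiva serves on works for this check)
ssh stat1002 'timeout 10 bash -c "</dev/tcp/archiva.wikimedia.org/443" && echo reachable || echo unreachable'

# Re-run the deploy only on the host that ended up with a partial checkout
scap deploy --force --limit stat1002
```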
[07:41:02] yep we need to wait [08:09:00] elukey: I have a question for you on hadoop-conf-on-puppet [08:09:26] sure, I am following the codfw network maintenance but will answer with a bit of lag [08:09:39] I have no problem with lag :) [08:09:56] elukey: I wonder where is code for fair-scheduler configuration [08:10:26] elukey: the template we have in cdh sub-module is almost empty, and I couldn't manage to put my hands on the real one [08:11:27] do you mean the balancer? [08:11:37] nope, I mean the fair-scheduler [08:11:42] :) [08:11:58] elukey: the portion of stuff that manages queues [08:12:26] and is it on puppet? [08:12:38] elukey: I assume so, it's hadoop conf [08:13:09] so you'd like to know where the hadoop config is? [08:13:18] elukey: I guess so [08:13:27] this isn't a regular job or cron or whatever right? (I am asking to know where to look for) [08:13:45] elukey: on any stat machine: cat /etc/hadoop/conf/fair-scheduler.xml [08:14:19] ah wrong dir! [08:14:22] elukey@analytics1001:/etc/hadoop/conf.analytics-hadoop$ ls [08:14:34] the conf one is empty [08:14:41] really ? [08:14:48] on stat1004, there is a simlink [08:15:18] I wasn't aware of it, I usually go directly straight to /etc/hadoop/conf.analytics-hadoop :) [08:15:25] :) [08:15:49] is that the config that you were looking for? [08:16:03] elukey: on puppet, I only found: modules/cdh/templates/hadoop/fair-schedulers.xml/erb [08:16:08] without typo [08:16:18] nwhich is very different [08:21:12] in class role::analytics_cluster::hadoop::client I can see [08:21:21] # Use fair-scheduler.xml.erb to define FairScheduler queues. [08:21:24] fair_scheduler_template => 'role/analytics_cluster/hadoop/fair-scheduler.xml.erb', [08:22:49] elukey: I must be dumb, but I couldn't find it :( [08:23:11] in puppet, role/ doesn't exist anymore [08:23:36] I assumed it was modules/role/analytics/cluster <-- this one exists [08:23:49] But it doesn't contain hadoop/fair-scheduler.xml.erb file [08:24:31] modules/role/templates/analytics_cluster/hadoop/fair-scheduler.xml.erb [08:24:34] :) [08:24:54] RAHHHH ! TOO MANY PLACEEEEEESS ! [08:25:05] Thanks elukey :) [08:25:23] puppet is so nice that it assumes the 'templates' part of the path [08:25:34] this is why you didn't find it [08:25:38] elukey: doesn't make it easier for humans though [08:25:47] puppet is not for humans :D [08:25:51] n:) [08:26:03] I got lost several time in puppet because of this [08:26:06] learned the hard way :D [08:26:09] true [08:26:19] template(some path erb) assumes 'templates' [08:26:28] meanwhile source() assumes 'files' [08:30:08] 10Analytics-Cluster, 10Analytics-Kanban: Hadoop: Add a lower priority queue: nice queue - https://phabricator.wikimedia.org/T156841#3389791 (10JAllemandou) a:03JAllemandou [08:37:19] 10Analytics: Default hive table creation to parquet - https://phabricator.wikimedia.org/T168554#3389803 (10JAllemandou) Parquet as default has been added only in version 2.3.0 (see https://stackoverflow.com/questions/44038151/hive-how-to-set-parquet-orc-as-default-output-format). 
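For anyone retracing the FairScheduler hunt above, there are two gotchas: the active Hadoop conf directory on the nodes can be a symlink (or the plain conf dir may even be empty), and in puppet template() silently prepends the module's templates/ directory, just as source() prepends files/. The paths below are the ones named in the conversation:

```bash
# Rendered queue config on a Hadoop node; on stat1004 /etc/hadoop/conf is a
# symlink, on other hosts go straight to the cluster-specific directory
cat /etc/hadoop/conf/fair-scheduler.xml
cat /etc/hadoop/conf.analytics-hadoop/fair-scheduler.xml

# Source of that file in the puppet repo: the role class declares
#   fair_scheduler_template => 'role/analytics_cluster/hadoop/fair-scheduler.xml.erb'
# and template() expands that short path to
less modules/role/templates/analytics_cluster/hadoop/fair-scheduler.xml.erb
```

This is also why the almost-empty template in the cdh submodule looked like the wrong place: the real queue definitions live in the role module's template.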
[08:37:34] 10Analytics: Default hive table creation to parquet - needs hive 2.3.0 - https://phabricator.wikimedia.org/T168554#3389804 (10JAllemandou) [08:59:32] 10Analytics-Kanban: Add normalized_host.project_family and deprecate and remove normalized_host.project_class - https://phabricator.wikimedia.org/T168874#3389854 (10JAllemandou) [08:59:45] 10Analytics-Kanban: Add normalized_host.project_family and deprecate and remove normalized_host.project_class - https://phabricator.wikimedia.org/T168874#3379221 (10JAllemandou) a:03JAllemandou [09:05:17] (03PS1) 10Joal: Rename project_class to project_family [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/362159 (https://phabricator.wikimedia.org/T168874) [09:16:14] Heya elukey - is archiva down at the moment? [09:16:31] shouldn't [09:16:32] Looks like I can't build refinery-source from stat1004 [09:16:44] failing to downlaod jars from archiva [09:16:46] yeah we still need the new ip whitelisted [09:16:55] right - waiting for that :) [09:17:04] sorry, didn't make the connection in my head [09:17:43] Like, archiva not available for deploy is equivalent to archiva not available for build ... Right [09:19:08] archiva kaput [09:19:11] :D [09:19:39] huhu :) [09:19:43] Jar censorship [09:20:13] (03PS1) 10Joal: Add project_family to webrequest normalized_host [analytics/refinery] - 10https://gerrit.wikimedia.org/r/362160 (https://phabricator.wikimedia.org/T168874) [09:23:34] 10Analytics: Pivot "MediaWiki history" data lake: Feature request for "Time" dimension to sp\lit by calendar month / quarter / year -- needs druid 0.10 - https://phabricator.wikimedia.org/T161186#3389901 (10JAllemandou) [09:51:41] * joal is in kill-task-mode today [10:15:11] joal the taskslayer [10:15:26] Hi fdans :) [10:15:35] ;) [10:15:35] fdans: it really depends on days :) [10:15:57] fdans: not true, joseph always work like 3/4 people [10:16:04] today he could replace the analytics team [10:16:12] not true ! [10:16:18] but he can't sustain this rythm for a long time [10:16:29] holidays for everyone today! [10:16:30] so he has to do it only when needed :D [10:16:36] :) [10:16:39] :D :D :D [10:22:21] helloooo [10:22:32] Hi mforns [10:22:44] +1 joseph task-desintegrator [10:23:16] just logging in quickly to check with elukey :] [10:23:50] mforns: o/ - saw your email but haven't had time to work on it, big network maintenance event in codfw + archiva issue [10:23:53] :( [10:23:54] will try after lunch [10:24:24] elukey, no problem at all, was just asking in case you wanted to work on that and needed explanation of my code [10:24:47] I have to do some groceries now, do you want to pair after your lunch? [10:25:51] sure! [10:27:28] elukey, k, will ping you when I'm back, cheers! [10:53:02] joal: can you try to build the refinery source on stat1004? I'll retry the deploy now [10:54:06] just done the deployment [10:54:11] finished correctly [10:55:04] re-deploying with --force [10:55:23] so we'll be sure that archiva is pulled [10:56:46] stat1002 completed, other ones are in progress [10:57:09] !log fixed archiva whitelist in the analytics VLAN (VM changed IP) [10:57:10] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [10:58:03] deployment done! 
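Because refinery-source resolves all of its jars through archiva, a build on a stat host fails the same way the deploy did while archiva was unreachable. A small fail-fast check before starting a long build — mvn clean package is assumed here as the usual refinery-source build command:

```bash
# Fail fast instead of waiting on maven's own download timeouts
if curl -fsI --max-time 10 https://archiva.wikimedia.org/ >/dev/null; then
    echo "archiva reachable, starting build"
    mvn -q clean package
else
    echo "archiva.wikimedia.org unreachable from $(hostname) - check DNS / VLAN firewall first" >&2
    exit 1
fi
```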
[10:58:59] !log deploy refinery to HDFS [10:59:00] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [11:02:56] joal: done deploying the refinery, will merge your patch for druid deep storage cleanup after standup [11:03:03] argh lunch [11:03:05] :P [11:10:31] * elukey off to lunch! [11:14:45] elukey: building refinery-source on stat1004 - success ! [11:14:48] thanks a lot mate [11:14:55] Ok, back to deploy refinery then :) [11:18:23] !log Update tables and restart mediawiki_history oozie jobs after deploy [11:18:24] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [11:34:43] !log Kill and restart druid webrequest sampled oozie jobs after deploy [11:34:44] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [11:39:34] !log Update tables and archived data and kill/start jobs for unique-devices per project-family [11:39:35] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [12:27:06] elukey, I'm back [12:36:36] just got back too, but haven't read the code review yet [12:37:03] Deployed everythin [12:37:27] If we don't have any oozie email tomorrow, it's weird :) [12:37:37] elukey, if you want we can go over it in da cave [12:37:46] mforns: that would be awesome thanks [12:37:49] :) [12:37:59] elukey, to the batcave! :] [12:39:05] mforns: gimme 5 mins, issues with wifi in the co-working [12:39:20] elukey, sure np, will be there [12:53:17] joal: if/when you have time: https://github.com/schana/recommendation-translation/pull/4 [12:54:27] Hi schana - I have seen your request - Let's take a moment to discuss, I prefer it than CR for code I don't know well [12:54:51] sure, when works for you? [12:56:35] Can do now :) [12:56:45] schana: --^ [12:56:56] schana: https://hangouts.google.com/hangouts/_/wikimedia.org/a-batcave-2 [13:23:46] Taking a break a-team [13:24:16] hey fdans [13:24:22] yoo [13:24:51] milimetric: currently moving loaddata stuff [13:25:20] fdans: ok, I'm still tinkering with this build [13:25:27] for one, I don't understand why it's 4MB [13:25:30] :) [13:34:24] spotty connection at the co-working people, might get me some time to answer sorry :( [13:58:37] fdans: man, this salesforce copyright notice is duplicated like 40 times, that's a ton of source right there, lol [13:58:50] ok, maybe 5 times [13:58:51] still [14:01:11] jajajaj milimetric are we at that stage of optimisation? 
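The "kill and restart ... oozie jobs after deploy" log entries above follow the usual pattern: kill the coordinator that is running against the old code, then re-submit it against the freshly deployed refinery on HDFS. A rough sketch of that pattern — the oozie URL, coordinator id, properties file and refinery path are placeholders, and the real restart commands carry more property overrides:

```bash
OOZIE_URL=http://localhost:11000/oozie   # placeholder; point at the real oozie server

# Kill the coordinator still running against the previously deployed code
oozie job -oozie "$OOZIE_URL" -kill 0001234-170629000000001-oozie-oozi-C   # placeholder id

# Re-submit it against the freshly deployed refinery in HDFS
# (property name and HDFS path below are illustrative, not the team's exact ones)
oozie job -oozie "$OOZIE_URL" -run \
    -config /path/to/coordinator.properties \
    -Drefinery_directory=hdfs:///wmf/refinery/current
```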
[14:01:41] :) just popped up while I was reading the minified source [14:01:54] but it's still failing to babelize/minify properly, I'm trying to figure out why [14:02:02] it's really hard because there are still errors in source [14:02:12] this was a mistake, I think, trying to do deploy before the code is cleaner [14:03:53] fdans: let's hang [14:04:32] Gimme 10min milimetric, transitioning from cafe to home [14:04:39] cool [14:12:13] joal: the traffic team is going to merge https://gerrit.wikimedia.org/r/#/c/361844, that will cause varnishkafka to create a new topic called "webrequest_canary" (only one host will produce to it for the moment, pinkunicorn.w.o aka cp1008) [14:12:33] it would be good to instruct camus to pull from that topic too [14:12:45] but not sure if it is worth it or not [14:20:25] milimetric: wololo batcave [14:37:05] 10Analytics, 10Operations, 10Traffic, 10Patch-For-Review: Implement Varnish-level rough ratelimiting - https://phabricator.wikimedia.org/T163233#3390943 (10debt) [14:38:29] 10Analytics, 10Operations, 10Traffic, 10Patch-For-Review: Implement Varnish-level rough ratelimiting - https://phabricator.wikimedia.org/T163233#3190763 (10debt) [14:52:30] mforns: there is a table with 1.4 BILLION rows [14:53:05] the alter targeting 500M rows had to be stopped because it was trashing db1047 [14:54:22] the top one is MobileWebUIClickTracking_10742159_15423246 [14:54:29] that I am pretty sure needs to be whitelisted [14:54:38] so no chance of dropping [15:00:35] ping mforns [15:00:46] ping joal [15:04:16] elukey: about webrequest canary, I wonder [15:10:26] 10Analytics, 10EventBus, 10Wikimedia-Stream, 10Services (watching): Expose revision-create in EventStreams - https://phabricator.wikimedia.org/T167670#3391159 (10Nuria) 05Resolved>03Open [15:10:29] 10Analytics, 10EventBus, 10Wikimedia-Stream, 10Services (watching), 10User-mobrovac: Bikeshed what events should be exposed in public EventStreams API - https://phabricator.wikimedia.org/T149736#3391161 (10Nuria) [15:10:39] 10Analytics, 10EventBus, 10Wikimedia-Stream, 10Services (watching): Expose revision-create in EventStreams - https://phabricator.wikimedia.org/T167670#3340424 (10Nuria) Reopening until documentation is in place for new stream [15:39:38] good luck with js build milimetric [16:17:31] joal: one question [16:17:44] sure nuria_ [16:18:06] joal: i was looking at the dropping of data and we just file a task for druid to delete the data via teh overlord [16:18:19] joal: the script exists once the task is sucessfully filed [16:18:44] hm, I don't get it nuria_ [16:18:48] joal: how do we now catch errors on tasks ? either in indexing or deleting? 
[16:21:21] nuria_: We catch errors as in killing tasks failed to lanch, but we have no more feedback from the task itself [16:23:14] joal: i see, so if an indexing job fails or a deletion job fails (say, no permits) [16:23:23] joal: there is no notification sent anywhere [16:23:35] nuria_: no permits hatsoever (yet) in druid [16:23:44] nuria_: but kill tasks might fail yes [16:23:55] nuria_: investigating how to get feedback [16:24:00] (03PS1) 10Nuria: Adding comments to data deletion script for druid storage [analytics/refinery] - 10https://gerrit.wikimedia.org/r/362233 [16:24:16] joal: i will do it no worries, I am going to fiel a task for that [16:24:47] joal: can you CR this one, it is just a comment, for myself mostly : https://gerrit.wikimedia.org/r/#/c/362233/ [16:25:08] sure [16:25:53] 10Analytics, 10Services, 10Documentation: Document revision-create event for EventStreams - https://phabricator.wikimedia.org/T169245#3391503 (10Halfak) [16:26:01] 10Analytics, 10EventBus, 10Wikimedia-Stream, 10Services (watching): Expose revision-create in EventStreams - https://phabricator.wikimedia.org/T167670#3340424 (10Halfak) [16:26:04] 10Analytics, 10Services, 10Documentation: Document revision-create event for EventStreams - https://phabricator.wikimedia.org/T169245#3391518 (10Halfak) [16:26:30] (03CR) 10Joal: "Inline" (031 comment) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/362233 (owner: 10Nuria) [16:26:50] 10Analytics, 10Services, 10Documentation: Document revision-create event for EventStreams - https://phabricator.wikimedia.org/T169245#3391503 (10Halfak) [16:26:55] 10Analytics, 10EventBus, 10Wikimedia-Stream, 10Services (watching): Expose revision-create in EventStreams - https://phabricator.wikimedia.org/T167670#3340424 (10Halfak) 05Open>03Resolved I created a separate task ({T169245}) as this is additional work so this one can be resolved. [16:26:58] 10Analytics, 10EventBus, 10Wikimedia-Stream, 10Services (watching), 10User-mobrovac: Bikeshed what events should be exposed in public EventStreams API - https://phabricator.wikimedia.org/T149736#3391526 (10Halfak) [16:27:59] nuria_: we can request an url to druid with the task id to get either RUNNING, SUCCESS or FAILURE [16:28:07] nuria_: Will provide a patch [16:28:20] joal: one sec [16:28:32] joal: I think is better to have an alarm raised no? [16:28:44] joal: cause data ingestion and deletion are subjected to teh same problem [16:29:10] joal: one approach would be to have the oozie task running until the druid one has completed and thus status is SUCCESS [16:29:35] nuria_: I suggest the scripts wait for task success/failure, and as in error case, fails if kill task fails - therefore cron should send an email (we should double check that with elukey _ [16:29:38] joal: the other would be to file task and have an alarm that raises when tasks in druid are failing [16:29:40] mforns: restarted the alters removing the super huge tables, will research tomorrow again [16:30:10] nuria_: deletion is uncorellated to indexation [16:30:24] the script I wrote is for deletion only [16:30:31] joal: right, right, those were the two (distinct) examples i can think of in which we will have this problem [16:30:33] joal: yes I think that an email in case of failure is enough [16:30:49] nuria_: if the scripts asks foir data deletion and no data exists (for given time frame for instance): success [16:31:00] if druid fails to erase data that exit: failure [16:31:14] at least as first step, we might review it when we'll add alarms etc.. 
for the other scripts [16:31:56] elukey: would a fail cron (as I wrote it) send an email if the return code of the job is not 0? [16:32:11] joal: if we have a MAILTO in teh crontab yes [16:32:22] --^ [16:32:27] nuria_: puppet manages that cron, so I can't realy say [16:34:07] elukey: do puppet has a default mailto for crons? [16:34:28] i think so [16:34:30] you can add yours if you want, but it will be applied from your cron entry onwards [16:34:33] https://www.irccloud.com/pastebin/aD7iglxv/ [16:34:47] puppet sends to root@wikimedia by default [16:34:51] (the famous cronspam) [16:35:01] Ahhhh, didn't know that [16:35:12] elukey: right, but we also have crons that MAILTO ourselves right? (per crontab above) [16:35:30] From looking at other cron jobs, we have ways to have cron send us emails: environment => 'MAILTO=analytics-alert@wikimedia.org', [16:35:35] yep, it can be configured.. it is tricky since it will be applied to ALL the cron entries after that [16:36:08] elukey: a bug-feature! [16:36:12] nuria_, elukey: Maybe it;s worth keeping that config independent per job, even if it means some dup code? [16:36:47] Meaning 1 line per cron job, but at least it's explicit - Or create a new analytics-cron class, that sends emails just to us [16:38:51] ??? [16:39:11] elukey: creante a cron wrapper for analytics [16:39:21] wait, all teh crons on https://gerrit.wikimedia.org/r/#/c/362148/2/modules/role/manifests/analytics_cluster/refinery/job/data_drop.pp [16:39:28] joal: need to send e-mail to us [16:39:42] nuria_: I'd say so yes, or at least most of them [16:41:06] joal: so then there is no need to have a cron wrapper, adding mailto there should work [16:41:53] nuria_: very feasible, just that there probably are other crons in other places that should send emails to us (like camus for instance, reportupdater maybe, and what not) [16:42:30] Having a wrapper could make sense - maybe not [16:42:32] joal: those are set in different puppet files many of which have MAILTO I think, see: [16:43:13] I think that we could open a task to review how we alarm on various crons (like webrequest-data-drop, etc..) [16:43:19] but for the moment MAILTO is fine [16:43:44] ok, so brut force on the drop file at least [16:44:34] joal: see https://github.com/wikimedia/puppet/blob/production/modules/role/manifests/analytics_cluster/refinery/job/data_check.pp [16:45:18] so if elukey agrees adding a mail_to to the drop scripts cron shoudl be sufficient [16:45:25] +1 [16:46:12] joal: if we do that we wil get notified for all deletions that might be failing and that will include druid ones if we add teh code to the script to wait until task has succeded [16:46:45] Another example is https://github.com/wikimedia/puppet/blob/production/modules/camus/manifests/job.pp - no mailto [16:46:57] correct nuria_ [16:47:57] joal: ya, we can add it to that one too but it does not have to be now [16:48:12] joal: i will put a CR for the camus one on [16:48:28] elukey, nuria_ : I'd really like to see a better way of handling those cron emailing [16:49:08] joal: as in one MAILTO Per cron, which seems pretty reasonable? [16:49:14] I'll go for the brut-force for now, but I think we should really invest into a wrapper (elukey, please tell me I'm wrong if it;s not the propper way) [16:49:42] joal: to be honest we need to invest time and efforts in refactoring and improving our puppet codebase, this is one example.. 
over the years it has grown bigger and bigger and now it needs sanitization :) [16:50:11] elukey: ok right - Thanks for not destroying my idea ;) [16:50:33] joal: jajajaja! besides the one mailto per cron what do you feel is missing? [16:50:34] no no we need to come up with a good solution but I am still not sure what is the best one :) [16:50:38] nuria_, elukey : I'll provide another patch with mailto for puppet (actually, I'll update the one for druid) [16:50:54] joal: sound good [16:51:26] elukey, nuria_ : Let's postpone the bigger discussion for when puppetmaster-otto is back [16:51:43] ok for you guys? [16:51:47] yep [16:52:09] cool - I'll provide CRs for druid-delete-script and cron tomorrow [16:53:46] Gone for diner, will be back for metrics as-team [16:54:42] 10Analytics-Kanban: Adding MAILTO to crontab for camus job - https://phabricator.wikimedia.org/T169248#3391644 (10Nuria) [16:54:42] nuria_: speaking of which, puppet refactoring to profiles could be an interesting ops goal to have for multiple quarters after Q1 [16:56:13] 10Analytics-Kanban, 10Patch-For-Review: Adding MAILTO to crontab for camus job - https://phabricator.wikimedia.org/T169248#3391644 (10Nuria) a:03Nuria [16:59:42] going afk team! [16:59:43] byeeee [17:57:59] joal: submitted patch for camus so we also have a mailto there [17:59:00] Thanks nuria_ [17:59:22] nuria_: can you add me as reviewer, to follow conventions on drop file? [17:59:23] (03CR) 10Nuria: [C: 031] Add project_family to webrequest normalized_host [analytics/refinery] - 10https://gerrit.wikimedia.org/r/362160 (https://phabricator.wikimedia.org/T168874) (owner: 10Joal) [17:59:35] joal: i thought i did wait [17:59:44] nuria_: didn't check emails [17:59:53] joal; done, sorry [17:59:55] nuria_: Got it sorry P [18:00:00] :S [18:00:34] (03CR) 10Nuria: [C: 031] "Please be so kind as to remind me how do we apply these changes?" [analytics/refinery] - 10https://gerrit.wikimedia.org/r/362160 (https://phabricator.wikimedia.org/T168874) (owner: 10Joal) [18:39:02] (03CR) 10Joal: "See related phab task for how to deploy: https://phabricator.wikimedia.org/T168874" [analytics/refinery] - 10https://gerrit.wikimedia.org/r/362160 (https://phabricator.wikimedia.org/T168874) (owner: 10Joal) [18:42:24] 10Analytics, 10Operations, 10ops-eqiad: Smartctl errors for one kafka1012 disk - https://phabricator.wikimedia.org/T168927#3381297 (10RobH) This system is out of warranty, and will require onsite spare disks to be used as replacement. [19:14:05] Gone for tonight - See you tomorrow a-team [19:14:11] bye joal ! [19:14:38] «Wikipedia narratives about national histories (i) are skewed towards more recent events (recency bias) and (ii) are distributed unevenly across the continents with sig- nificant focus on the history of European countries (Eurocentric bias). » orly https://datorium.gesis.org/xmlui/handle/10.7802/1411 [19:36:29] * MaxSem pokes nuria_ [19:36:40] MaxSem: yessir [19:37:10] Caould you remove me from "Monthly Sync Up Analytics & Discovery" - I'm not in Discovery anymore [19:37:45] MaxSem: ah yes, sorry, i knew that [19:37:55] thanks! [19:39:09] MaxSem: done now [19:39:16] :) [20:08:42] 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: Make non-nullable columns in EL database nullable - https://phabricator.wikimedia.org/T167162#3392365 (10mforns) @elukey @Marostegui Hey! I think I have good news. **tl;dr** At least the top 10 biggest EL tables in the log DB do not need to be altered... 
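The druid-delete-script CR mentioned above still needs the feedback piece discussed earlier: after submitting a kill task, poll the overlord's task status endpoint (which reports whether a task is running, succeeded or failed) and exit non-zero on failure, so the MAILTO'd cron entry actually produces an email. A minimal polling sketch with curl and jq — the overlord host is a placeholder, 8090 is the default overlord port:

```bash
OVERLORD="http://druid-overlord.example:8090"   # placeholder overlord host:port
TASK_ID="$1"

# Wait for the submitted kill task to finish; a non-zero exit here is what lets
# the wrapping cron (with MAILTO=analytics-alert@wikimedia.org) send an alert
while true; do
    status=$(curl -sf "$OVERLORD/druid/indexer/v1/task/$TASK_ID/status" | jq -r '.status.status')
    case "$status" in
        SUCCESS)     echo "task $TASK_ID succeeded"; exit 0 ;;
        FAILED)      echo "task $TASK_ID failed" >&2; exit 1 ;;
        RUNNING|"")  sleep 30 ;;
        *)           echo "unexpected status '$status' for task $TASK_ID" >&2; exit 2 ;;
    esac
done
```

Called from the drop script before it exits, this makes a failed deletion surface through the same cron-email path as the other jobs.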
[20:20:54] 10Analytics-Kanban, 10Page-Previews, 10Reading-Web-Backlog (Tracking): Update purging settings for Schema:Popups - https://phabricator.wikimedia.org/T167449#3392483 (10Jdlrobson) [20:21:45] 10Analytics-Kanban, 10Page-Previews: Update purging settings for Schema:Popups - https://phabricator.wikimedia.org/T167449#3333176 (10Jdlrobson) [20:25:10] bye team!!! [20:57:58] 10Analytics-Kanban, 10Page-Previews, 10Reading-Web-Backlog (Tracking): Update purging settings for Schema:Popups - https://phabricator.wikimedia.org/T167449#3392779 (10Jdlrobson) [21:37:27] 10Analytics, 10Analytics-Dashiki: Site for Wikimedia Analytics lacks clear license - https://phabricator.wikimedia.org/T169270#3393022 (10jeblad) [22:19:54] (03PS19) 10Nuria: UDF to tag requests [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/353287 (https://phabricator.wikimedia.org/T164021) [22:24:58] (03PS1) 10Nuria: Adding tag column to webrequest [analytics/refinery] - 10https://gerrit.wikimedia.org/r/362310 (https://phabricator.wikimedia.org/T164021) [22:25:54] (03PS2) 10Nuria: Adding "tags" column to webrequest [analytics/refinery] - 10https://gerrit.wikimedia.org/r/362310 (https://phabricator.wikimedia.org/T164021)