[08:59:46] 06Analytics-Kanban, 06Operations, 06WMDE-Analytics-Engineering, 13Patch-For-Review, 15User-Addshore: /a/mw-log/archive/api on stat1002 no longer being populated - https://phabricator.wikimedia.org/T160888#3113734 (10Addshore) >>! In T160888#3140077, @elukey wrote: > @Addshore: I am going to close this ta...
[09:26:37] elukey: Hello !
[09:27:41] o/
[09:28:28] I am wondering why some crons have not run on analytics1003 :(
[09:28:32] elukey: --^
[09:30:11] which ones? The emails to analytics-alerts?
[09:30:25] elukey: no, the sqoop and namespace ones
[09:31:29] elukey: no, the sqoop and namespace ones 2017-03-24
[09:33:01] joal: are they defined in puppet?
[09:33:54] elukey: I think they are (but I did not check - On an1003: sudo -u hdfs crontab -
[09:36:04] so refinery-sqoop-mediawiki etc.. ?
[09:36:25] yes
[09:38:25] ah because 0 0 2 * * and you are wondering why it didn't run
[09:38:32] okok now I am on the same page :)
[09:38:36] elukey: Indeed :)
[09:41:51] so, almost for sure not related, /srv/deployment/analytics/refinery/bin/download-project-namespace-map seems to fail from the last email on analytics-alerts
[09:43:19] elukey: I have not seen that email !
[09:46:29] joal: I can see
[09:46:30] /var/log/syslog.2.gz:Apr 2 00:00:01 analytics1003 CRON[30548]: (hdfs) CMD (export PYTHONPATH=${PYTHONPATH}:/srv/deployment/analytics/refinery/python && /usr/bin/python3 /srv/deployment/analytics/refinery/bin/sqoop-mediawiki-tables --job-name sqoop-mediawiki-monthly-$(/bin/date '+)
[09:47:50] hm
[09:52:27] joal: the weird thing is the --job-name sqoop-mediawiki-monthly-$(/bin/date '+) truncation
[09:52:41] the analytics-alert email might be related
[09:53:00] it says "bash: /bin/sh: 1: Syntax error: Unterminated quoted string"
[09:53:39] elukey: looks like there is a typo in that code :)
[09:54:05] anyway it's good elukey - I need to deploy refinery, and there is a patch for the sqoop-mediawiki-tables script
[09:55:21] elukey: Really I have not received the email about this job failing - Is that expected?
[09:58:50] I am not sure if it failed or not
[09:59:09] I'm pretty sure it failed
[10:01:29] ah!
[10:01:34] I found out the issue
[10:01:50] in the hdfs crontab there is no MAILTO, unlike stat1002
[10:01:55] elukey: It's a typo in command, right?
[10:02:04] elukey: Ah, emails !
[10:02:05] I received that email via ops cron notifications
[10:02:15] makes sense :)
[10:04:09] joal: maybe we can try to run manually the sqoop job and see why it fails
[10:04:29] elukey: $(/bin/date '+) <-- Typo in here, no ?
[10:05:11] elukey: As I said, I'd like to deploy refinery with potential changes in cron - Let me send a patch for this, and then we'll test :)
[10:05:38] elukey: ok ?
[10:05:40] ahhh sorry I thought you were talking about the other script
[10:05:44] sure sure
[10:05:58] elukey: yeah, too many things at the same time ;)
[10:06:36] Wooo - Just pulling production on puppet: 1035 commits behind !
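[Editor's note] The truncation puzzled over above can be reproduced outside cron. A minimal sketch (illustrative, not the production crontab): cron treats everything after the first unescaped `%` in a command as stdin data, so only the prefix before it reaches `/bin/sh`, leaving an unterminated quote — exactly the "Unterminated quoted string" error in the cron mail.

```shell
# Sketch: simulate cron's handling of an unescaped '%'. Cron cuts the
# command at the first unescaped '%' and feeds the rest to the job as
# stdin, so /bin/sh only ever sees the prefix.
entry="--job-name sqoop-mediawiki-monthly-\$(/bin/date '+%Y-%m')"
cron_visible="${entry%%\%*}"   # prefix before the first '%'
printf '%s\n' "$cron_visible"
# prints: --job-name sqoop-mediawiki-monthly-$(/bin/date '+
```

Note how the simulated output matches the truncated `CMD (...)` line captured in syslog above.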
[10:07:25] 0 0 2 * * export PYTHONPATH=${PYTHONPATH}:/srv/deployment/analytics/refinery/python && /usr/bin/python3 /srv/deployment/analytics/refinery/bin/sqoop-mediawiki-tables --job-name sqoop-mediawiki-monthly-$(/bin/date '+%Y-%m') --labs --jdbc-host labsdb-analytics.eqiad.wmnet --output-dir /wmf/data/raw/mediawiki/tables --wiki-file /mnt/hdfs/wmf/refinery/current/static_data/mediawiki/grouped_wikis/la
[10:07:32] bs_grouped_wikis.csv --user s53272 --password-file /user/hdfs/mysql-analytics-labsdb-client-pw.txt --timestamp $(/bin/date '+%Y%m01000000') --snapshot $(/bin/date '+%Y-%m') -k 3 >> /var/log/refinery/sqoop-mediawiki.log 2>&1
[10:07:39] this one is in the crontab
[10:07:53] Ah, so maybe no typo ... hm
[10:08:24] elukey: looks like I can't pull production !!!
[10:08:30] elukey: The following untracked working tree files would be overwritten by merge
[10:08:44] And I have no such working tree files in git status ???
[10:09:51] if you want a brutal clean, git reset --hard origin/production && git pull
[10:09:54] :P
[10:10:26] trying that elukey
[10:12:18] worked elukey, thanks
[10:12:34] * joal hates reset --hard
[10:12:43] * joal is afraid of it
[10:13:59] it is the nuclear option :P
[10:14:56] elukey: That's why I am so afraid of it
[10:16:54] (03PS1) 10Joal: Update mediawiki history to overwrite results [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/346121
[10:17:52] elukey: --^ if you have a minute
[10:18:25] (03PS2) 10Joal: Update mediawiki history to overwrite results [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/346121
[10:19:19] joal: wait a sec - what am I looking at? it is the job executed for sqoop-mediawiki-monthly?
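[Editor's note] The untracked-files failure above was worked around with `git reset --hard`, which joal rightly calls the nuclear option. A less destructive sketch (an assumed workflow, not what was run): stash the untracked files first so they stay recoverable, demonstrated in a throwaway repository.

```shell
# Sketch: recoverable alternative to 'git reset --hard' when untracked
# files block a pull. In the real case this would be
# 'git stash --include-untracked && git pull' in the puppet checkout;
# here we use a throwaway repo to show the mechanics.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.org
git config user.name demo
git commit -q --allow-empty -m init
echo blocker > untracked.txt
git stash --include-untracked --quiet   # untracked.txt moves into the stash
[ ! -e untracked.txt ] && echo stashed
git stash pop --quiet                   # ...and can be restored afterwards
[ -e untracked.txt ] && echo restored
```

Unlike `reset --hard`, nothing is lost: `git stash pop` (or `git stash drop`, once the pull succeeds) finishes the job.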
[10:19:35] huhuhu :)
[10:20:16] elukey: nope, this is a patch on mediawiki history job, that I'd like to merge before refinery deploy
[10:20:55] ahahha okok
[10:21:35] it looks good by inspection but I have no idea about that scala code :(
[10:21:51] joal: can I re-run the sqoop job and see why it fails?
[10:22:05] basically re-running the huge monster above
[10:23:31] elukey: you can
[10:23:48] elukey: I'll self merge ;)
[10:24:20] (03CR) 10Joal: [C: 032] "Self merging for deploy." [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/346121 (owner: 10Joal)
[10:30:57] (03Merged) 10jenkins-bot: Update mediawiki history to overwrite results [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/346121 (owner: 10Joal)
[10:32:14] (03PS1) 10Joal: Update changelog.md to v0.0.44 [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/346126
[10:32:24] elukey: next one for deploy --^
[10:33:12] (03CR) 10Elukey: [C: 031] Update changelog.md to v0.0.44 [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/346126 (owner: 10Joal)
[10:34:52] (03CR) 10Joal: [V: 032 C: 032] "Self-merging for deploy" [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/346126 (owner: 10Joal)
[10:35:12] elukey: Deploying refinery-source to archiva with your approval :)
[10:35:35] sure!
[10:35:41] k thanks
[10:35:42] (03PS1) 10Joal: Correct mediawiki oozie jobs [analytics/refinery] - 10https://gerrit.wikimedia.org/r/346127
[10:35:51] !log Deploying refinery-source to archiva
[10:35:52] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[10:38:22] elukey: I just double checked the change in sqoop script: no CLI change, we should be just fine rerunning the script after deploy
[10:40:08] okok
[10:40:26] in the meantime, the sqoop script seems to be running fine
[10:40:33] so no idea why it didn't run
[10:43:20] :(
[10:43:45] elukey: just checking, you run it as hdfs user, right?
[10:44:15] yep
[10:44:22] great, thanks :)
[10:44:31] elukey: This is weird though ...
[10:48:54] joal: you mentioned another job that didn't run right?
[10:49:10] correct: download-project-namespace
[10:49:17] The one after in the project list
[10:49:23] in the cron list sorry elukey
[10:50:59] yes this one needs to be fixed
[10:51:03] because of the bash error
[10:51:05] elukey: ?
[10:51:10] bash error?
[10:51:18] Arrf, just sent a CR
[10:51:44] elukey: can you tell me, I'll update in the same CR I just sent
[10:52:35] this is the one that I mentioned this morning
[10:52:36] Cron export PYTHONPATH=${PYTHONPATH}:/srv/deployment/analytics/refinery/python && /srv/deployment/analytics/refinery/bin/download-project-namespace-map -x /wmf/data/raw/mediawiki/project_namespace_map -s $(/bin/date '+
[10:52:52] k
[10:52:55] the cron mail says /bin/sh: 1: Syntax error: Unterminated quoted string
[10:53:13] elukey: I am a bit lost now :)
[10:53:43] So the sqoop one didn't run, but we don't know why - And the namespace one didn't run because of the bash error
[10:54:09] were those crons on an1027 before?
[10:54:15] or have they always been on an1003?
[10:54:17] correct elukey
[10:54:30] oh actually no, they started with an1003
[10:54:33] elukey: --^
[10:54:51] elukey: They are brand new, and not regular, so difficult to debug
[10:58:00] (03PS2) 10Joal: Correct mediawiki oozie jobs [analytics/refinery] - 10https://gerrit.wikimedia.org/r/346127
[10:58:07] joal: Percent (%) signs have a special meaning in crontabs. They are interpreted as newline characters.
[10:58:21] elukey: Woooaaaah !
[10:58:23] * elukey cries in a corner
[10:58:26] elukey: I'd never have found that :)
[10:58:54] elukey: What's the way to escape them?
[10:59:16] it seems a simple \
[10:59:23] elukey: more precisely: What is the way to escape them in puppet so that they're still escaped in cron ?
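[Editor's note] The crontab-level fix discussed above can be sketched as follows (illustrative, using a fragment of the real command): each `%` is written as `\%` in the crontab so cron no longer treats it as end-of-command; cron strips the backslash before handing the command to `/bin/sh`.

```shell
# Sketch of the '\%' fix: cron strips the backslash from '\%' before
# executing the command, so the shell still sees the intended date
# format. sed stands in here for cron's unescaping step.
crontab_form="--snapshot \$(/bin/date '+\%Y-\%m')"
shell_form=$(printf '%s\n' "$crontab_form" | sed 's/\\%/%/g')
printf '%s\n' "$shell_form"
# prints: --snapshot $(/bin/date '+%Y-%m')
```

With the escaping in place, the full command survives to `/bin/sh` instead of being truncated at the first `%`.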
[10:59:33] :)
[10:59:36] sqoop-mediawiki-tables has the same issue
[10:59:44] elukey: I'd have guessed so
[11:00:58] elukey: \ or \\ for puppet escaping?
[11:01:11] good question, I am checking the syntax
[11:01:18] :)
[11:03:27] I'd say \\%
[11:03:46] elukey: ok, submitting patch
[11:18:42] Thanks here as well :)
[11:19:04] elukey: if you have a minute, there is this last one: https://gerrit.wikimedia.org/r/346127
[11:19:13] elukey: And after that, DEPLOY !
[11:20:05] joal: crontab updated on an1003, all good
[11:20:10] elukey: awesome
[11:20:24] elukey: Can you start a manual namespace run please ?
[11:20:38] elukey: for beginning of month update :)
[11:21:27] (03CR) 10Elukey: [C: 031] Correct mediawiki oozie jobs [analytics/refinery] - 10https://gerrit.wikimedia.org/r/346127 (owner: 10Joal)
[11:21:42] Thanks elukey :)
[11:22:08] elukey: sorry it's a morning where I keep bothering you - I'll take a break in the early afternoon, you'll get some rest ;)
[11:22:59] (03CR) 10Joal: [V: 032 C: 032] "Self-merging for deploy" [analytics/refinery] - 10https://gerrit.wikimedia.org/r/346127 (owner: 10Joal)
[11:23:31] joal: ahhaahh please don't say that, some fixes were needed :)
[11:23:36] always glad to help
[11:23:46] running namespace
[11:23:53] Thanks :)
[11:25:29] done!
[11:25:37] elukey: with permission, deploying refinery?
[11:25:42] awesome :)
[11:25:43] sure
[11:25:55] !log Deploying refinery
[11:25:55] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[11:26:00] this afternoon I'd like to reimage 2/3 workers
[11:26:05] elukey: no problem
[11:26:07] any pressing job ?
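[Editor's note] The `\\%` answer above reflects two escaping layers. This is an assumption about how the manifest is written, not taken from the actual puppet code: a `\\` in the puppet string yields one literal backslash, so the generated crontab contains `\%`, and cron then strips that backslash before invoking the shell.

```shell
# Sketch of the assumed escaping layers behind '\\%': puppet source ->
# generated crontab -> what /bin/sh finally sees. sed stands in for each
# unescaping step.
puppet_source='\\%'                                                # as written in the manifest
crontab_entry=$(printf '%s\n' "$puppet_source" | sed 's/\\\\/\\/') # puppet: '\\' -> '\'
shell_sees=$(printf '%s\n' "$crontab_entry" | sed 's/\\%/%/')      # cron: '\%' -> '%'
printf '%s -> %s -> %s\n' "$puppet_source" "$crontab_entry" "$shell_sees"
```

If the layers are right, the chain prints `\\% -> \% -> %`, i.e. the shell ends up with the plain `%` that `date(1)` needs.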
[11:26:09] okok
[11:26:39] elukey: I actually found that I still have no real reason for why sometimes my spark jobs failed when workers restarted, and sometimes not
[11:26:55] elukey: on Friday evening, Andrew restarted some workers - Everything went fine
[11:27:07] * joal scratches his head in wonder
[11:27:20] joal: you know that Andrew is Andrew, his restarts don't count
[11:27:27] :D
[11:27:29] elukey: hahahahaha !
[11:27:32] :D
[11:27:51] ok, I'll think of that again :)
[11:28:40] wow but Andrew reimaged up to 1050 on Friday
[11:28:41] \o/
[11:28:53] elukey: yes, he did some of them
[11:29:19] elukey: if you reimage 3 per day and he does the same, the full change will be done soon !
[11:30:23] !log Deploying refinery to HDFS
[11:30:24] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[11:33:45] elukey: I'm sorry I broke deploy (again)
[11:33:54] stat1002?
[11:34:03] elukey: We forgot to remove an1027 from scap deploy for refinery ( and I didn't check)
[11:34:09] ahhhhh
[11:34:18] my bad then! sorry!
[11:34:34] elukey: I'll say 'no' for rollback (since everything went fine for other nodes)
[11:34:37] ok ?
[11:35:41] (03PS1) 10Elukey: Remove analytics1027 from the scap targets [analytics/refinery/scap] - 10https://gerrit.wikimedia.org/r/346132
[11:35:42] yep
[11:35:48] patch in --^
[11:36:33] hi analytics folks.
[11:36:36] (03CR) 10Elukey: [V: 032 C: 032] Remove analytics1027 from the scap targets [analytics/refinery/scap] - 10https://gerrit.wikimedia.org/r/346132 (owner: 10Elukey)
[11:36:53] joal: --^
[11:37:01] let me check stat1002's space before re-deploying
[11:37:05] elukey: sure
[11:37:20] elukey: I didn't rollback, so there is not even a need for re-deploy
[11:37:21] milimetric: will you be around in 30 min or so?
I want to ask you a few questions about the pointers for researchers to the work y'all have been doing: what data you have released since April 2016 that they should know about, and what's on the horizon that could be interesting for them to keep an eye on
[11:38:19] joal: cleared some space just in case
[11:38:28] leila: o/
[11:38:36] !log Restart corrected mediawiki-history oozie job
[11:38:37] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[11:38:39] ow hi elukey.
[11:38:55] Hi leila !
[11:39:00] leila: early morning !
[11:39:07] I'm in Perth, joal. :D
[11:39:13] quite jetlagged. ;)
[11:39:20] leila: Ahhh, then good evening :)
[11:39:49] thanks. :) you're doing ok joal? :)
[11:40:17] leila: Definitely! Spring brings longer days and greener surroundings
[11:40:21] * joal likes nature :)
[11:40:29] uhu
[11:46:01] 06Analytics-Kanban: Create purging script for mediawiki-history data - https://phabricator.wikimedia.org/T162034#3150343 (10JAllemandou)
[11:46:04] joal: forgot to ask, can I run sudo -u stats /a/refinery-source/guard/run_all_guards.sh --rebuild-jar ?
[11:46:18] I'd like to figure out what is failing in there
[11:46:53] elukey: I think you can - I actually don't know this process well
[11:48:02] elukey: Taking a break now, enjoy your rest from me ;)
[11:48:33] o/
[11:49:44] !log manual run of sudo -u stats /a/refinery-source/guard/run_all_guards.sh --rebuild-jar
[11:49:45] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[12:26:46] team I am stopping yarn and hdfs on an1029, an1030 and an1031 as a prep step for reimage
[12:26:51] this shouldn't kill running containers
[12:31:05] PROBLEM - Hadoop NodeManager on analytics1028 is CRITICAL: PROCS CRITICAL: 0 processes with command name java, args org.apache.hadoop.yarn.server.nodemanager.NodeManager
[12:31:30] I have silenced them!
[12:31:52] ah no this is 1028
[12:31:54] what the hell
[12:33:35] wrong command ran :/
[12:34:05] RECOVERY - Hadoop NodeManager on analytics1028 is OK: PROCS OK: 1 process with command name java, args org.apache.hadoop.yarn.server.nodemanager.NodeManager
[12:51:57] 06Analytics-Kanban, 06Operations, 15User-Elukey: Reimage all the Hadoop worker nodes to Debian Jessie - https://phabricator.wikimedia.org/T160333#3150500 (10ops-monitoring-bot) Script wmf_auto_reimage was launched by elukey on neodymium.eqiad.wmnet for hosts: ``` ['analytics1029.eqiad.wmnet', 'analytics1030....
[12:52:57] aaand I broke webrequest-load-wf-upload-2017-4-3-11
[12:53:00] !log restart webrequest-load-wf-upload-2017-4-3-11
[12:53:00] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[13:37:36] 06Analytics-Kanban, 06Operations, 06WMDE-Analytics-Engineering, 13Patch-For-Review, 15User-Addshore: /a/mw-log/archive/api on stat1002 no longer being populated - https://phabricator.wikimedia.org/T160888#3150579 (10Ottomata) > we might want to open another one to track down how to move away api-logs fro...
[13:44:13] mmm an1030 is stuck in console, I powercycled it but it is showing weird things
[13:44:54] hm
[13:49:03] hardreset not working either
[13:52:10] sorry, having internet trouble
[13:57:19] o/ ottomata
[14:06:54] ottomata1: A bus fatal error was detected on a component at bus 8 device 0 function 0.
[14:06:58] sigh
[14:07:09] an1030 took a vacation
[14:07:29] haha
[14:08:53] 06Analytics-Kanban, 06DC-Ops, 06Operations, 10ops-eqiad: analytics1030 stuck in console while booting - https://phabricator.wikimedia.org/T162046#3150761 (10elukey)
[14:10:00] 06Analytics-Kanban, 06Operations, 15User-Elukey: Reimage all the Hadoop worker nodes to Debian Jessie - https://phabricator.wikimedia.org/T160333#3150775 (10elukey) Analytics1030 is refusing to boot, opened a phab task: https://phabricator.wikimedia.org/T162046
[14:14:34] ottomata1: if you've not seen : https://stefie.github.io/vr-wikipedia-heatmap/#
[14:17:31] and now the reimage script is stuck..
[14:20:19] whoa crazy joal
[14:30:47] whoa, crazy globe
[15:50:39] 10Analytics, 10MediaWiki-extensions-WikimediaEvents, 10The-Wikipedia-Library, 10Wikimedia-General-or-Unknown, and 4 others: Implement Schema:ExternalLinksChange - https://phabricator.wikimedia.org/T115119#3151143 (10Milimetric) thanks @Samwalton9, I'll take a look and replicate in vagrant.
[15:53:25] http://www.syscrest.com/2013/10/oozie-bundle-monitoring-tapping-into-hadoop-counters/
[15:53:31] a-team: for data quality checks (and probably more !) --^
[15:59:33] we actually might not even need to do it at oozie level a-team, https://www.cloudera.com/documentation/enterprise/latest/topics/cm_mc_metrics_graphite.html
[16:05:08] 06Analytics-Kanban: Create purging script for mediawiki-history data - https://phabricator.wikimedia.org/T162034#3150343 (10Nuria) The reason to purge is space, we do not need to keep redundant information. We can create a new script and execute dropping code via puppet.
[16:05:15] 10Analytics, 10Analytics-EventLogging, 10MediaWiki-Vagrant, 06Services (watching): Vagrant git-update error for event logging - https://phabricator.wikimedia.org/T161935#3151198 (10Ottomata) a:03Ottomata
[16:08:37] 10Analytics, 10Analytics-EventLogging, 10MediaWiki-Vagrant, 06Services (watching): Vagrant git-update error for event logging - https://phabricator.wikimedia.org/T161935#3147676 (10Nuria) Is puppet on vagrant up to date? seems that some deps need to be added to vagrant-puppet
[16:09:46] 10Analytics, 10Analytics-EventLogging, 10MediaWiki-Vagrant, 06Services (watching): Vagrant git-update error for event logging - https://phabricator.wikimedia.org/T161935#3151211 (10Pchelolo) @Nuria Yes, I'm using the latest vagrant puppet.
[16:10:27] 10Analytics, 10Analytics-Dashiki: Refactor aqs api and usage for simplicity - https://phabricator.wikimedia.org/T161933#3151213 (10Nuria) p:05Triage>03Normal
[16:10:52] 10Analytics, 10Analytics-Dashiki: Refactor aqs api and usage for simplicity - https://phabricator.wikimedia.org/T161933#3147633 (10Nuria) a:05Nuria>03None
[16:11:50] 10Analytics: Add new interesting fields ( time_to_user_next_edit and time_to_page_next_edit.)
in Mediawiki Denormalized History - https://phabricator.wikimedia.org/T161896#3151223 (10Nuria)
[16:12:21] 10Analytics: Add time_to_user_next_edit and time_to_page_next_edit in Mediawiki Denormalized History - https://phabricator.wikimedia.org/T161896#3151225 (10Milimetric)
[16:12:30] 10Analytics, 10Analytics-EventLogging, 10MediaWiki-Vagrant, 06Services (watching): Vagrant git-update error for event logging - https://phabricator.wikimedia.org/T161935#3151227 (10Nuria) p:05Triage>03High
[16:12:54] 10Analytics: Add time_to_user_next_edit and time_to_page_next_edit in Mediawiki Denormalized History - https://phabricator.wikimedia.org/T161896#3151228 (10Nuria) p:05Triage>03Normal
[16:15:21] 10Analytics: Add zero carrier to pageview_hourly data on druid - https://phabricator.wikimedia.org/T161824#3151232 (10Nuria) @DFoy Is there an issue with this data being visible to users with wmf-nda permits? That means users that have either 1) signed an NDA or 2) are wmf employees
[16:15:29] 10Analytics: Add zero carrier to pageview_hourly data on druid - https://phabricator.wikimedia.org/T161824#3151234 (10Nuria) p:05Triage>03Normal
[16:16:03] 10Analytics-Cluster, 06Analytics-Kanban, 06Operations, 10ops-eqiad, 15User-Elukey: Analytics hosts showed high temperature alarms - https://phabricator.wikimedia.org/T132256#3151240 (10Nuria)
[16:22:24] 10Analytics: Update refinery sqoop script to explicitely fail in case a snapshot / destination folder already exists - https://phabricator.wikimedia.org/T161128#3151249 (10Nuria) 05Open>03Resolved
[16:23:02] 06Analytics-Kanban: Update refinery sqoop script to explicitely fail in case a snapshot / destination folder already exists - https://phabricator.wikimedia.org/T161128#3122505 (10Nuria) 05Resolved>03Open
[16:23:39] 06Analytics-Kanban: Update refinery sqoop script to explicitely fail in case a snapshot / destination folder already exists - https://phabricator.wikimedia.org/T161128#3122505 (10Nuria) a:03JAllemandou
[16:23:52]
06Analytics-Kanban: Update refinery sqoop script to explicitely fail in case a snapshot / destination folder already exists - https://phabricator.wikimedia.org/T161128#3122505 (10Nuria) 05Open>03Resolved
[16:25:43] 10Analytics, 03Interactive-Sprint: Report updater should support Graphite mapping plugins - https://phabricator.wikimedia.org/T152257#3151256 (10Nuria) p:05Triage>03Normal
[16:26:17] 10Analytics, 10Analytics-Cluster, 06Operations, 10hardware-requests: EQIAD: 6 Nodes for Kafka refresh/upgrade - https://phabricator.wikimedia.org/T161636#3151276 (10Nuria) p:05Triage>03Normal
[16:27:47] 10Analytics, 07Easy: Don't accept data from automated bots in Event Logging - https://phabricator.wikimedia.org/T67508#3151302 (10Nuria) p:05Low>03Normal
[16:37:16] 10Analytics-EventLogging, 06Analytics-Kanban, 13Patch-For-Review: Change userAgent field to user_agent_map in EventCapsule - https://phabricator.wikimedia.org/T153207#3151359 (10Nuria) 05Open>03Resolved
[16:37:29] 06Analytics-Kanban: Productionize Edit History Reconstruction and Extraction - https://phabricator.wikimedia.org/T152035#3151361 (10Nuria)
[16:37:31] 06Analytics-Kanban, 13Patch-For-Review: Productionise standard metrics from mediawiki denormalized history - https://phabricator.wikimedia.org/T160151#3151360 (10Nuria) 05Open>03Resolved
[16:37:51] 06Analytics-Kanban, 13Patch-For-Review: Populate aqs with legacy page-counts - https://phabricator.wikimedia.org/T156388#3151365 (10Nuria)
[16:37:53] 06Analytics-Kanban, 13Patch-For-Review: Create AQS endpoint to serve legacy pageviews - https://phabricator.wikimedia.org/T156391#3151364 (10Nuria) 05Open>03Resolved
[16:38:03] 06Analytics-Kanban, 10DBA, 13Patch-For-Review: Change length of userAgent column on EL tables - https://phabricator.wikimedia.org/T160454#3151366 (10Nuria) 05Open>03Resolved
[16:38:15] 10Analytics-EventLogging, 06Analytics-Kanban, 10DBA, 06Operations, 13Patch-For-Review: Improve eventlogging replication procedure -
https://phabricator.wikimedia.org/T124307#3151383 (10Nuria)
[16:38:17] 10Analytics-EventLogging, 06Analytics-Kanban, 13Patch-For-Review: Remove autoincrement id from tables [5 pts] - https://phabricator.wikimedia.org/T87661#3151384 (10Nuria)
[16:38:22] 10Analytics-EventLogging, 06Analytics-Kanban, 10DBA, 13Patch-For-Review: Add autoincrement id to EventLogging MySQL tables. {oryx} - https://phabricator.wikimedia.org/T125135#3151382 (10Nuria) 05Open>03Resolved
[16:38:29] 06Analytics-Kanban, 13Patch-For-Review: Create robots.txt policy for datasets - https://phabricator.wikimedia.org/T159189#3151385 (10Nuria) 05Open>03Resolved
[16:39:27] 06Analytics-Kanban, 10Analytics-Wikistats: Visual prototype for community feedback for Wikistats 2.0 iteration 1. - https://phabricator.wikimedia.org/T157827#3017940 (10Nuria) Putting back "inprogress" as we are going to add @Erik_Zachte 's suggestions
[16:47:46] ottomata: an1029 and an1031 are back to working, but an1030 is still on holidays
[16:48:05] if you want to try some magic via console feel free, I haven't managed to make it work
[16:50:08] an30, have you tried anything in there?
[16:51:01] yet?
[16:51:02] elukey: ?
[16:57:49] ottomata: racadm powercycle/hardreset/off-on
[16:57:54] nothing worked :(
[16:58:28] opened https://phabricator.wikimedia.org/T162046
[17:00:06] brb
[17:06:22] ottomata: Chris just told me that the issue with an1030 might require contacting Dell
[17:06:42] ok
[17:10:42] ottomata: Heya
[17:10:55] ottomata: do you have a minute to test metrics2 with me?
[17:12:33] * elukey afk! Talk with you tomorrow team :)
[17:12:43] Bye elukey
[17:18:06] joal: sure, but i'm about to drop some tables with nuria in 12 minutes :)
[17:18:08] batcave!
[17:30:08] nuria: hiii
[17:30:29] ottomata: will be late for our meeting talking to reading
[17:30:29] ottomata: Arf, missed my slot ;)
[17:30:42] joal: you didn't!
nuria will be late
[17:30:44] ottomata: I tested metrics2 on h-0
[17:30:44] i'm still in batcave
[17:30:48] joining
[18:09:36] unggh
[18:09:39] my modem is busted
[18:09:40] on phone internet
[18:09:45] nuria: let's do this thang
[18:10:41] nuria: yt?
[18:11:09] yessir
[18:11:17] batcave?
[18:12:16] ottomata: hola batcave?
[18:12:36] k
[18:15:49] !log dropping EL tables with really old data
[18:15:49] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:24:43] !log starting replication back on Eventlogging 1002/1047/1046
[18:24:44] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[19:32:24] 10Analytics: Add zero carrier to pageview_hourly data on druid - https://phabricator.wikimedia.org/T161824#3152021 (10DFoy) @Nuria No problem, those are the best qualifiers for who should be able to see this.
[20:13:58] (03PS1) 10Ottomata: Use hive query instead of parsing non existent sampled TSV files [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/346197
[20:55:10] (03CR) 10Nuria: Use hive query instead of parsing non existent sampled TSV files (031 comment) [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/346197 (owner: 10Ottomata)
[20:55:55] good morning from tomorrow :)
[20:56:31] (03CR) 10EBernhardson: "sorry to not get back to this in some time, this was initially something i was working on in my 10% time, but after proving out the genera" (034 comments) [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/327855 (https://phabricator.wikimedia.org/T149047) (owner: 10EBernhardson)
[21:03:11] fdans: it is like LOST!
[21:03:37] nuria: haha totally, my google calendar looks weeeeeird :D
[21:04:32] https://usercontent.irccloud-cdn.com/file/zrDqU3aB/Screen%20Shot%202017-04-04%20at%2006.03.53.png
[21:26:22] (03PS5) 10EBernhardson: UDF for extracting primary full text search request [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/327855 (https://phabricator.wikimedia.org/T162054)