[00:19:05] 10Analytics, 10Analytics-Kanban, 10Pageviews-API, 10Patch-For-Review: Add nyc.wikimedia to pageviews whitelist - https://phabricator.wikimedia.org/T194309 (10CommunityTechBot) [00:19:12] 10Analytics, 10Analytics-Kanban, 10Pageviews-API, 10Patch-For-Review: Add nyc.wikimedia to pageviews whitelist - https://phabricator.wikimedia.org/T194309 (10CommunityTechBot) p:05High>03Normal [03:08:22] 10Analytics, 10Operations, 10SRE-Access-Requests, 10Patch-For-Review, 10Performance-Team (Radar): Requesting access to analytics-privatedata-users for gilles - https://phabricator.wikimedia.org/T195837 (10CommunityTechBot) [03:08:25] 10Analytics, 10Operations, 10SRE-Access-Requests, 10Patch-For-Review, 10Performance-Team (Radar): Requesting access to analytics-privatedata-users for gilles - https://phabricator.wikimedia.org/T195837 (10CommunityTechBot) a:03MoritzMuehlenhoff [03:08:28] 10Analytics, 10Operations, 10SRE-Access-Requests, 10Patch-For-Review, 10Performance-Team (Radar): Requesting access to analytics-privatedata-users for gilles - https://phabricator.wikimedia.org/T195837 (10CommunityTechBot) p:05High>03Normal [03:08:31] 10Analytics, 10Operations, 10SRE-Access-Requests, 10Patch-For-Review, 10Performance-Team (Radar): Requesting access to analytics-privatedata-users for gilles - https://phabricator.wikimedia.org/T195837 (10CommunityTechBot) 05Open>03Resolved
[03:09:06] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: Pageviews-daily broken after move from Pivot to Turnilo - https://phabricator.wikimedia.org/T195819 (10CommunityTechBot) [03:09:08] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: Pageviews-daily broken after move from Pivot to Turnilo - https://phabricator.wikimedia.org/T195819 (10CommunityTechBot) 05Open>03Resolved [03:09:15] 10Analytics, 10Analytics-Kanban, 10CheckUser, 10Gamepress, and 10 others: k6caaaaaaa - https://phabricator.wikimedia.org/T194427 (10CommunityTechBot) [03:20:45] 10Analytics, 10Analytics-Kanban: Look into whitelisting wikimedia Apis with brave - https://phabricator.wikimedia.org/T198544 (10Nuria) [05:58:24] morning! [05:58:37] cleaning up some phab spam..
sigh [06:13:32] 10Analytics-Kanban, 10Patch-For-Review: Renamed geowiki to geoeditors - https://phabricator.wikimedia.org/T194207 (10CommunityTechBot) p:05High>03Normal [06:13:42] 10Analytics-Kanban, 10Patch-For-Review: Renamed geowiki to geoeditors - https://phabricator.wikimedia.org/T194207 (10CommunityTechBot) 05Open>03Resolved [06:16:23] also thanks to the CommunityTechBot most of them have been reverted [07:10:22] :( [07:10:25] Hi elukey [07:10:29] cassandra today? [07:10:34] hello :) [07:10:54] yep but I'd prefer to do it middle of afternoon, so in case of fire Eric might be around.. would it be ok? [07:11:23] No prob - I'm gone around 4pm for the kids, back for standup [07:12:38] ack! [07:12:46] !log Restart cassandra bundle [07:12:47] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [07:15:52] I didn't check the geowiki job failure yet ;( [07:26:56] elukey: doing now [07:34:16] joal: I am about to change the vk alarms for all the instances running on caching hosts with https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/443086/ [07:34:35] ack elukey - Anything I should look specifically after?
[07:35:39] nono lemme explain my ideas so you can see if they make sense :) [07:36:13] we alert on delivery error, namely when vk was surely not able to deliver a kafka event [07:36:30] I changed the dashboard during the past days to show the delivery errors per second [07:36:43] and checked prev issues in https://phabricator.wikimedia.org/T173492#4377254 [07:36:49] so my overall idea is: [07:37:04] 1) group all the alarms in 5, rather than having one for each cp instance [07:37:21] (so in case of multiple failures we'll have only a few alarms rather than a storm) [07:37:54] 2) use a very low threshold to alarm on, because from past events there were serious issues even with a few delivery errors per second [07:38:11] usually when those happen for more than 20/30 mins it means trouble [07:38:16] it is not a temporary spike anymore [07:38:24] and in that case, we are dropping data [07:38:27] so better safe than sorry [07:39:26] and 3) the new alarms will post errors in here (it is not the case now) [07:39:56] elukey: Summarizing with my own words - Alarm will use sum (lower threshold but more meaningful, + no storm), don't move threshold even with changing the alarm computation value, to make the thing more sensitive [07:40:21] Is my understanding correct? [07:40:28] elukey: -^ [07:40:52] yep! I only didn't get "don't move threshold even with changing the alarm computation value" [07:41:11] so far, we use a value of 5 per worker - right? [07:41:33] after the change, we'll use a value of 5 for the sum of all workers [07:42:06] ah no no atm we use a derivative with a very high threshold [07:42:15] Ah ok [07:42:43] I switched to per second metrics and then used a 5 events/s threshold in there [07:44:04] and we alarm only if it happens more than X times in a row? [07:44:07] so for every 20m, if there are delivery failures more than 5 per second then raise a critical (but do not send the notification).
Retry three times with 5m of wait, and then notify [07:44:14] this is the new --^ [07:44:43] now we alarm if 80% of the datapoints in 10m are over 5000 (but using a derivative of delivery errors per host) [07:46:03] I don't get the "retry 3 times" [07:46:42] it is an icinga feature, it doesn't notify you about a critical unless it stays in that state for a number of times that you set [07:46:53] get it :) [07:47:47] Makes sense elukey - Maybe not waiting 5m between each "retry" (I'd call those re-check or something), but rather 2m ? [07:48:02] If something is wrong, let's not wait too long? [07:48:44] I'd even wait 10m between retries, because otherwise it might be too sensitive and catch a temporary spike due to say a network slowdown (it happens from time to time) [07:49:10] what I'd like to get in here is a regression that stays there for a bit [07:49:22] ok :) [07:50:22] :) all right I'll merge! Hope that this alarm will be ok, it is really tricky to get the right compromise with icinga/graphite [07:52:51] Thanks for raising our level of alarming elukey :) [07:53:20] thanks for listening to my ramblings! [07:54:27] joal: not sure if you saw https://phabricator.wikimedia.org/T198070#4324105, nice failure scenario about vk that I didn't know [07:55:43] if kafka shuts down abruptly, the current librdkafka might not take it very well [07:56:26] elukey: From what I read in the ticket, bumping to librdkafka 0.11.4 should help? [07:58:55] maybe! [07:59:05] I think the issue is similar [07:59:48] from the failures reported though it seems that it happens sometimes [07:59:58] since the instances failing were a few (luckily!) [08:01:02] Thanks again elukey for keeping me on track [08:01:26] elukey: on the hive-failing side - We have a mystery (as of now) [08:01:58] elukey: Data seems clean since I have managed to repair a tmp table over it (currently querying, looks good) [08:02:10] joal: geowiki right ?
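The alerting behaviour discussed above (a low per-second threshold, but only notify after the value stays critical across several consecutive re-checks, so a temporary spike doesn't page) can be sketched as a toy model. The threshold (5 events/s) and retry count (3) come from the conversation; everything else here is illustrative, not the actual icinga/graphite configuration.

```python
# Toy model of the alerting logic: raise a notification only when the
# per-second delivery-error rate (summed over all workers) has exceeded
# the threshold for `retries` consecutive checks, i.e. the regression is
# sustained rather than a one-off spike.

THRESHOLD = 5  # delivery errors per second, summed over all workers
RETRIES = 3    # consecutive failing checks required before notifying

def should_notify(samples, threshold=THRESHOLD, retries=RETRIES):
    """Return True if the last `retries` samples all exceed the threshold."""
    if len(samples) < retries:
        return False
    return all(s > threshold for s in samples[-retries:])
```

A short spike like `[12, 0, 0]` would stay silent, while a sustained regression like `[8, 9, 7]` would notify.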
[08:02:33] Seems however that hive doesn't want to repair the wmf_raw.mediawiki_private_cu_changes with that data :( [08:02:55] yes - cu_changes table (base for geoeditors) [08:04:54] ah so you were able to repair a tmp table based on wmf_raw.mediawiki_private_cu_changes, but not it [08:06:16] elukey: I managed to repair a new (tmp) table on new data only (and query) [08:06:49] elukey: However repairing the existing table on new data fails (but succeeds when I move it away) [08:06:52] :( [08:07:01] I'm gonna try to move it back in place [08:07:07] and repair again [08:08:37] :( [08:13:15] (03PS2) 10Jonas Kress (WMDE): Track number of editors from Wikipedia who also edit on Wikidata over time [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/443069 (https://phabricator.wikimedia.org/T193641) [08:15:11] (03CR) 10Jonas Kress (WMDE): Track number of editors from Wikipedia who also edit on Wikidata over time (031 comment) [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/443069 (https://phabricator.wikimedia.org/T193641) (owner: 10Jonas Kress (WMDE)) [08:16:50] going afk for 10/15m! [08:17:03] ack! [08:23:53] 10Analytics, 10Analytics-EventLogging, 10Page-Previews, 10Readers-Web-Backlog, 10Readers-Web-Kanbanana-Board: Some VirtualPageView are too long and fail EventLogging processing - https://phabricator.wikimedia.org/T196904 (10CommunityTechBot) a:03ABorbaWMF [09:01:57] addshore: o/ [09:02:22] while reviewing the current analytics firewall rules I found the ones that you added in https://phabricator.wikimedia.org/T120010 [09:02:27] are those still needed?
[09:02:34] because the ips in there are very stale [09:02:57] and I suppose that you guys are not using them anymore (otherwise you'd have noticed the breakage) [09:11:43] ok elukey - Found an interesting breaking pattern in hive [09:12:48] !log Rerun mediawiki-geoeditors-load-wf-2018-06 after having fixed the wmf_raw.mediawiki_private_cu_changes table issue [09:12:49] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [09:13:17] what was that? [09:13:43] Seems that hive doesn't like repairing a table with an incomplete folder structure [09:14:25] The cleaning job for cu_changes drops the data, but keeps the month= partition level [09:14:59] ah! [09:15:16] And hive tells you: I found 2 different types of partition levels, one with 1 level (month=1, empty), one with 2 levels (month=2, wiki_db=XX, full) [09:15:41] So, the cleaning jobs should delete the month partition folder for our stuff not to fail [09:15:50] This is different than with webrequest [09:16:16] Geo-editors load job succeeded :) [09:21:52] \o/ [09:23:35] elukey: any chance you can ping me an email about it so I don't forget and I'll check later today? :) [09:25:05] addshore: already done! [10:23:29] 10Quarry: Define in a single place the pseudoname of unnamed queries - https://phabricator.wikimedia.org/T197029 (10zhuyifei1999) How about a `@property` in the [[https://github.com/wikimedia/analytics-quarry-web/blob/master/quarry/web/models/query.py|Query]] object? [10:38:17] * elukey lunch + errand!
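The failure pattern joal describes above (the cleaning job left an empty `month=1/` directory next to fully populated `month=2/wiki_db=XX/` partitions, so the partition directories no longer all have the same depth and the repair bails out) can be illustrated with a small sketch. This is a hypothetical check over relative partition paths, not what hive runs internally; the paths are illustrative.

```python
# Sketch of the mixed-partition-depth problem: each partition path is made
# of key=value levels, and a repair over paths with differing numbers of
# levels is the failure mode described in the conversation.

def partition_depths(partition_dirs):
    """Given partition directory paths relative to the table location,
    return the set of distinct partition depths (key=value levels)."""
    return {
        len([part for part in path.split("/") if "=" in part])
        for path in partition_dirs
    }

def repair_would_fail(partition_dirs):
    """Mixed partition depths are the situation the repair rejects."""
    return len(partition_depths(partition_dirs)) > 1
```

With `["month=1", "month=2/wiki_db=enwiki"]` the depths are `{1, 2}` and the repair fails; deleting the leftover `month=1` directory, as suggested in the conversation, brings the tree back to a single depth.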
brb in ~2h [11:55:12] (03PS1) 10Jonas Kress (WMDE): [WIP] Introduce oozie job to schedule generating metrics for Wikidata co-editors [analytics/refinery] - 10https://gerrit.wikimedia.org/r/443409 (https://phabricator.wikimedia.org/T193641) [12:30:05] helloooo team :] [12:37:47] elukey: looks like it is still used https://github.com/wikimedia/analytics-wmde-scripts/blob/e42e607c6d42d16ee98e33c38ba3427e17db2d6e/src/wikidata/sparql/instanceof.php#L69 [12:38:37] addshore: ahh there you go, you guys use the only host that is still working [12:39:07] hahaa [12:39:11] what are the chance s:P [12:39:13] *chances [12:39:42] elukey: I did actually file this one as a follow up https://phabricator.wikimedia.org/T176875 [12:39:52] so that I wouldn't have to specify a host & have it break when it disappears [12:42:02] addshore: the task is meant for the analytics vlan firewall right? [12:42:31] I guess :P [12:42:39] * addshore has no idea how any of this stuff works ;) [12:42:46] * addshore doesn't know where to look etc [12:43:21] * addshore wants stat1005 to be able to talk to wdqs.svc.eqiad.wmnet on port 8888 ;) [12:44:16] addshore: so basically we have a firewall on our network routers that filters traffic from the hosts belonging to the analytics vlan (stat1005 is one of them, including hadoop etc..) to communicate with the rest of production [12:44:23] so traffic from the analytics hosts towards production [12:45:13] https://gerrit.wikimedia.org/r/#/c/analytics/wmde/scripts/+/380974/1/src/wikidata/sparql/instanceof.php seems to be used only on stat boxes right? 
yup [12:46:09] It could in theory run anywhere within the prod network [12:46:34] but when we created all of this stuff one of the analytics boxes made the most sense at the time [12:46:40] yep yep [12:46:46] I've never had anyone tell me a better / more sensible place to run it :) [12:47:16] some of the scripts require the analytics db replicas, some require other rsynced data etc, this one just happens to need to make a wdqs query [12:52:20] addshore: got it, will work with Arzhel to fix the VLAN firewall and I'll add comments in there so we'll remember about this use case [12:52:40] (will add you to the subscribers' list) [13:10:51] elukey: thanks!!! [13:19:36] (03PS2) 10Jonas Kress (WMDE): [WIP] Introduce oozie job to schedule generating metrics for Wikidata co-editors [analytics/refinery] - 10https://gerrit.wikimedia.org/r/443409 (https://phabricator.wikimedia.org/T193641) [13:25:38] joal: o/ [13:25:48] if you have a min I'd prep for the aqs reimage [13:27:09] the best thing in my opinion would be to avoid loading data while we are reimaging [13:27:57] !log suspend cassandra bundle via Hue to ease the reimage of aqs1004 [13:27:58] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [13:31:54] +1 elukey :) [13:32:47] aqs1004 depooled [13:34:49] all right I think we are good to go :) [13:42:25] 10Analytics, 10Patch-For-Review, 10User-Elukey: Upgrade AQS to Debian Stretch - https://phabricator.wikimedia.org/T196138 (10ops-monitoring-bot) Script wmf-auto-reimage was launched by elukey on neodymium.eqiad.wmnet for hosts: ``` ['aqs1004.eqiad.wmnet'] ``` The log can be found in `/var/log/wmf-auto-reimag...
[13:45:46] elukey: let's check cqlsh before reenabling everything etc ;) [13:49:41] sure sure [13:49:53] I am currently trying to convince the debian installer to keep the partitions :D [13:50:08] seems rather important :) [13:53:43] joal: ok I hope I got it right, let's see how it goes in 10/15 mins [13:53:53] k - I'll probably be gone [13:53:58] :( [13:54:18] super fine, no rush! [13:57:04] * elukey waves to ottomata [13:57:12] o/ [14:12:19] Linux aqs1004 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 [14:12:22] Debian GNU/Linux 9.4 (stretch) [14:12:25] \o/ [14:12:34] partitions look good [14:12:40] cassandra stuff was kept [14:14:52] joal: [14:14:53] Connected to Analytics Query Service Storage at aqs1004-a.eqiad.wmnet:9042. [14:14:56] [cqlsh 5.0.1 | Cassandra 2.2.6 | CQL spec 3.3.1 | Native protocol v4] [14:17:03] (03CR) 10Mforns: [V: 032 C: 031] "Nuria, yea, there's this problem, but it's not related to pagecounts. Rather a Wikistats2 bug that the pagecounts data uncovered given tha" [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/442257 (https://phabricator.wikimedia.org/T189619) (owner: 10Sahil505) [14:32:02] aqs1004 reimaged! [14:32:05] * elukey dances [14:32:15] :D [14:41:19] nice!
[14:41:27] wow i'm reading the overall annual plan that katherine sent out [14:41:31] this one is cool: [14:41:33] " • Citations and links to external sources cited from Wikipedia articles are immediately and comprehensively archived and remain available to readers in perpetuity [14:41:33] " [14:42:55] (03PS11) 10Sahil505: Added pagecounts metric [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/442257 (https://phabricator.wikimedia.org/T189619) [14:44:33] 10Analytics, 10Analytics-Kanban, 10Analytics-Wikistats, 10Patch-For-Review: Shortcut icon is not showing - https://phabricator.wikimedia.org/T197482 (10Ankry) p:05High>03Normal a:03sahil505 [14:51:28] ottomata: just added you to yet another task about firewall rules :P - https://phabricator.wikimedia.org/T198623 [14:51:52] whenever you have time can you triple check that what I wrote makes sense? [14:53:56] elukey: +1, only q would be about wdqs, might want to check with stas on that one [14:54:14] yep yep Gehel is going to do it soon :) [14:54:37] it is amazing how the current known use case is configured with the only host still not decommed [14:55:21] elukey: Super great for cassandra :) [14:56:34] !log resume cassandra bundle via hue [14:56:36] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [14:56:47] joal: so just repooled aqs for aqs1004, logs look good! [14:56:54] nodetool {a,b} looks good as well [15:04:24] ping ottomata [15:14:04] 10Analytics, 10Analytics-Kanban, 10Product-Analytics, 10Patch-For-Review: Partially purge MobileWikiAppiOSUserHistory eventlogging schema - https://phabricator.wikimedia.org/T195269 (10elukey) p:05High>03Normal [15:14:32] 10Analytics, 10Analytics-Kanban, 10Product-Analytics, 10Patch-For-Review: Partially purge MobileWikiAppiOSUserHistory eventlogging schema - https://phabricator.wikimedia.org/T195269 (10elukey) a:03mforns [15:30:30] ottomata: yes! we have very nice AC, come over anytime.
Of course, you'd probably evaporate before you got here if you tried to bike [15:33:36] (03CR) 10Nuria: "Were you able to run & test code?" (031 comment) [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/443069 (https://phabricator.wikimedia.org/T193641) (owner: 10Jonas Kress (WMDE)) [15:33:41] man i was bike poloing ALL DAY yesterday [15:34:05] i could make it! i'd just have to sit in front of the AC blower for 10 minutes before i could be a human [15:34:57] milimetric: I'm strategizing the way to pack my suitcase so that it's compatible with cape town, where the streets are flooded right now, and texas [15:35:04] it is not going well [15:35:50] hmmm, you need a raft? [17:17:33] 10Analytics, 10EventBus, 10MediaWiki-JobQueue, 10Services (done): Enable EventBus on all wikis - https://phabricator.wikimedia.org/T185170 (10mobrovac) 05Open>03Resolved p:05Triage>03Normal Yup, since wikitech will likely never be part of our normal production environment, so not much we do there. [17:25:06] a-team, is wed july 4 a wmf holiday? [17:25:34] dunno [17:25:38] yup i think it is [17:31:52] joal check out [17:31:52] http://ammonite.io/ [17:39:17] I can't remember if I asked about this before, but is the ApiAction hadoop data planned to land in turnilo at all? [17:39:41] addshore: no current plan for that afaik [17:39:45] ack, thanks [17:39:59] we need to do some work around ApiAction and CirrusSearch stuff, probably as part of Modern Event Platform proposal [17:40:13] the php kafka client sucks, and is blocking the decom of the old analytics kafka cluster [17:40:42] but, it'll be a while... [17:41:11] sorry :( [17:41:27] heheh, its ok not your fault! we encouraged it long ago iirc [17:41:38] * elukey off!
ebernhardson: btw, I have a working prototype for kafka connect and jsonschema [17:41:54] i've been able to run it with Confluent's HdfsConnector with hive support [17:42:04] and get automatically created and imported parquet tables from json [17:42:08] ottomata: cool! that was the kafka->hdfs side right? [17:42:17] the jsonschema part is generic [17:42:24] HdfsConnector is, ya [17:43:05] gotta just run kafka connect with proper configs, and you get your json data (with a jsonschema reachable via URI in eventbus /v1/schemas), and it works really well [17:43:13] even auto evolves the tables if it can, for things like field additions [17:43:21] https://github.com/ottomata/kafka-connect-jsonschema [17:43:34] excellent [17:48:23] 10Analytics: Count the number of video plays - https://phabricator.wikimedia.org/T198628 (10MusikAnimal) https://tools.wmflabs.org/mediaviews/ is an alternative tool using @Harej's mediaviews API. I think the commons-video-clicks tool probably just isn't using the new API endpoint, which was changed only a week... [18:16:23] ottomata: ammonite looks super fun :) [18:16:29] Will probably try it out :) [18:17:44] hmmmmmm [18:17:50] I guess im doing something slightly wrong? [18:17:50] AND cast(params["maxlag"] as int) > 5 [18:18:17] also, hue is awesome :D [18:18:46] addshore: have you tried jupyter swap with sql_magic? [18:18:59] https://wikitech.wikimedia.org/wiki/SWAP#sql_magic [18:18:59] ottomata: not yet :P [18:19:13] you can get hive results loaded directly into a pandas dataframe [18:19:43] i have no idea what a pandas data frame is ;) but I will learn ;) [18:20:00] oh, then maybe you won't care [18:20:01] python stuff [18:20:10] :D [18:20:15] any idea how to fix my hive query? [18:20:29] I get "contains some special characters: >" [18:20:30] addshore: what are you trying to do? [18:20:34] oh [18:20:44] full query paste? [18:20:52] https://www.irccloud.com/pastebin/loPgIkQX/ [18:22:21] addshore: you are getting that in hue?
yup [18:22:27] i just ran it on the CLI and it seems to be running [18:22:30] oh. [18:22:37] must be some issue with hue! :9 [18:22:55] oh well, I'll run it on CLI [18:26:02] is there a place to file hue bugs? :P I guess that's an upstream thing [18:27:11] addshore: it probably is? if you filed a phab task that's what we would do... [18:28:23] I'll file a phab task now [18:28:44] wahhaa, hue won't let me save the query either because of the error :P [18:30:10] https://phabricator.wikimedia.org/T198645 [18:30:11] 10Analytics: Hue complains about queries that include a > char - https://phabricator.wikimedia.org/T198645 (10Addshore) [19:01:12] 10Analytics, 10Cleanup, 10Operations, 10User-Elukey: Archive operations/puppet/jmxtrans repository - https://phabricator.wikimedia.org/T198097 (10hashar) 05Open>03Resolved [19:01:15] 10Analytics, 10Analytics-Kanban, 10Operations, 10Patch-For-Review, 10User-Elukey: Import some Analytics git puppet submodules to operations/puppet - https://phabricator.wikimedia.org/T188377 (10hashar) [19:01:20] 10Analytics, 10Cleanup, 10Operations: Archive operations/puppet/varnishkafka repository - https://phabricator.wikimedia.org/T197503 (10hashar) 05Open>03Resolved [19:43:06] 10Analytics-Kanban: Rename column user_name to user_text in user_history for naming coherence - https://phabricator.wikimedia.org/T197926 (10JAllemandou) a:03JAllemandou [19:43:54] (03PS1) 10Joal: Update user-history job from username to userText [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/443483 (https://phabricator.wikimedia.org/T197926) [20:00:27] (03PS1) 10Joal: Update mw-user-history username to user_text [analytics/refinery] - 10https://gerrit.wikimedia.org/r/443487 (https://phabricator.wikimedia.org/T197926) [20:11:37]
addshore: for kicks ... did you try running your query with > rather than ">" [20:48:16] (03PS1) 10Mforns: Fix case insensibility for MapMaskNodes in WhitelistSanitization [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/443508 (https://phabricator.wikimedia.org/T193176) [21:18:23] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: Update piwik to latest stable - https://phabricator.wikimedia.org/T192298 (10Nuria) The only error i see on ui is: "curl_exec: Failed to connect to plugins.matomo.org port 443: Connection timed out. Hostname requested was: plugins.matomo.org" [21:28:06] 10Analytics: Measure traffic for new wikimedia foundation site - https://phabricator.wikimedia.org/T188419 (10Nuria)
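For reference, the predicate in the query addshore pasted earlier — `cast(params["maxlag"] as int) > 5` — amounts to the following in plain Python terms. This is a rough illustrative equivalent, not the actual query: the records are made up, and a value that cannot be cast is skipped, much as a failed Hive cast yields NULL and fails the comparison.

```python
# Rough Python equivalent of the Hive predicate
#   cast(params["maxlag"] as int) > 5
# over ApiAction-style records with a params map; records are made up.

def maxlag_over(records, limit=5):
    """Yield records whose params contain a maxlag value greater than limit."""
    for rec in records:
        raw = rec.get("params", {}).get("maxlag")
        try:
            value = int(raw)
        except (TypeError, ValueError):
            continue  # missing or non-numeric maxlag, like a failed Hive cast
        if value > limit:
            yield rec
```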