[08:39:40] morning!!
[09:42:56] joal_: o/
[09:43:21] when you are around I'd like to have your opinion about https://gerrit.wikimedia.org/r/#/c/389429
[09:47:31] ahhh we don't cache for historical
[09:47:37] but only for the broker
[09:50:04] now the weird thing is that io.druid.client.cache.CacheMonitor is not enabled for the broker
[09:50:08] but I can see cache metrics for it
[09:50:39] no sorry I am dumb
[09:50:40] elukey@druid1004:/etc/druid/historical$ cat runtime.properties
[09:50:40] # NOTE: This file is managed by Puppet.
[09:50:40] druid.historical.cache.populateCache=true
[09:50:40] druid.historical.cache.useCache=true
[09:50:53] so historical is indeed using cache
[09:52:06] /var/log/druid/historical-metrics.log:1:Event [{"feed":"metrics","timestamp":"2017-11-06T03:44:50.923Z","service":"druid/historical","host":"druid1004.eqiad.wmnet:8083","metric":"query/cache/total/hits","value":0}]
[10:02:49] mmmm so in services/src/main/java/io/druid/cli/CliBroker.java and services/src/main/java/io/druid/cli/CliHistorical it seems that the CacheMonitor is added regardless
[10:03:07] (of the io.druid.client.cache.CacheMonitor emitter)
[10:45:46] ah!
[10:45:46] /var/log/druid/historical.log:70:2017-10-12T12:26:41,159 INFO io.druid.server.metrics.MetricsModule: Adding monitor[io.druid.client.cache.CacheMonitor@45564f9]
[10:45:50] /var/log/druid/broker.log:70:2017-10-12T12:26:42,120 INFO io.druid.server.metrics.MetricsModule: Adding monitor[io.druid.client.cache.CacheMonitor@4bb55f97]
[10:51:14] ok I'll avoid it
[11:00:41] Analytics-Kanban, Analytics-Wikistats: Wikistats Bug – In tabular data view, format displayed values - https://phabricator.wikimedia.org/T179441#3724893 (fdans) a:fdans
[11:22:41] (PS1) Fdans: Fixes to table UI [analytics/wikistats2] - https://gerrit.wikimedia.org/r/389460
[11:23:12] (PS2) Fdans: Fixes to table UI [analytics/wikistats2] - https://gerrit.wikimedia.org/r/389460 (https://phabricator.wikimedia.org/T179441)
[11:42:16] * elukey lunch!
[12:21:11] Analytics-Tech-community-metrics, Developer-Relations (Oct-Dec 2017): Make Qgil a fallback for Bitergia access (lock-in) - https://phabricator.wikimedia.org/T178381#3737106 (Aklapper) Bitergia has added https://gitlab.com/qgil to the Gitlab group, so @qgil should be able to post the hash in https://gitla...
[12:41:27] Heya elukey - Sorry for not answering sooner, did a late start after some work this weekend :)
[12:42:12] Analytics-Tech-community-metrics, Developer-Relations (Oct-Dec 2017): Advertise wikimedia.biterg.io more widely in the Wikimedia community - https://phabricator.wikimedia.org/T179820#3737130 (Aklapper)
[12:44:18] joal_: o.
[12:44:19] o/
[12:44:25] \o as well :)
[12:44:56] * joal feels better without underscore
[12:48:28] yesss
[12:48:38] so joal if you are ok I'd merge https://gerrit.wikimedia.org/r/#/c/389429
[12:48:45] and roll restart the historicals
[12:49:06] elukey: from backscrolling the chan, it looks you solved everything by yourself :)
[12:49:14] So yes, please, let's go :)
[12:49:17] we'll get metrics about segments held by historical afaics
[12:52:51] Analytics, Analytics-EventLogging, Operations, Ops-Access-Requests, Patch-For-Review: Requesting Sharvani Haran to be added to researchers group - https://phabricator.wikimedia.org/T179611#3737173 (herron) Open>Resolved a:herron Hi @Sharvaniharan, you have been added to group `res...
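The `Event [...]` lines in historical-metrics.log quoted above are just JSON arrays of metric objects appended to the log. Purely as an illustrative sketch (the helper name and regex are made up here, not part of Druid or the production tooling), such lines can be pulled apart like this:

```python
import json
import re

# Druid appends metric events to *-metrics.log as lines ending in
# "Event [ ...json array... ]"; this pulls the events back out.
EVENT_RE = re.compile(r'Event \[(.*)\]\s*$')

def parse_metric_line(line):
    """Return a list of (metric, value, host) tuples from one metrics-log line."""
    match = EVENT_RE.search(line)
    if not match:
        return []
    events = json.loads('[' + match.group(1) + ']')
    return [(e.get('metric'), e.get('value'), e.get('host')) for e in events]

# The historical cache-hits line pasted above:
line = ('Event [{"feed":"metrics","timestamp":"2017-11-06T03:44:50.923Z",'
        '"service":"druid/historical","host":"druid1004.eqiad.wmnet:8083",'
        '"metric":"query/cache/total/hits","value":0}]')
print(parse_metric_line(line))
# [('query/cache/total/hits', 0, 'druid1004.eqiad.wmnet:8083')]
```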
[12:53:15] elukey: I need to take a closer look at those metrics, and try to grab a bit of how druid works with them :)
[12:54:21] hopefully you'll do it soon in prometheus :)
[12:55:32] Sounds like a plan elukey :)
[12:56:04] still wip but https://gerrit.wikimedia.org/r/#/c/389475/1 is finally coming up
[13:00:52] [{"feed":"metrics","timestamp":"2017-11-06T12:59:38.126Z","service":"druid/historical","host":"druid1001.eqiad.wmnet:8083","metric":"segment/used","value":221454106087,"dataSource":"mediawiki-history-beta","priority":"0","tier":"_default_tier"}]
[13:01:00] \o/
[13:01:26] host":"druid1001.eqiad.wmnet:8083","metric":"segment/count","value":94,"dataSource":"banner_activity_minutely"
[13:08:27] I also learned how to restart historicals properly
[13:08:41] namely tailing logs and waiting for all segments to be loaded before proceeding
[13:09:39] Analytics-Kanban, Analytics-Wikistats: Wikistats Bug: Menu to select projects doesn't work (sometimes?) - https://phabricator.wikimedia.org/T179530#3727714 (fdans) Hm, so far I've been able to reproduce on Mac Chrome. Any chance at all you get an error in the console @jmatazzoni ? I'm currently giving so...
[13:10:37] Analytics-Kanban, Analytics-Wikistats: Wikistats Bug: Menu to select projects doesn't work (sometimes?) - https://phabricator.wikimedia.org/T179530#3737195 (fdans) a:fdans
[13:20:49] aaaand I can see the metrics only on druid1001
[13:20:49] sigh
[13:23:45] ah no it seems working, nice
[13:24:01] I just ran a query via cumin and I finally found the metrics
[13:57:21] the coordinator offers some useful metrics too
[13:57:50] elukey --verbose
[13:59:03] ahahah
[13:59:09] I am reading http://druid.io/docs/0.9.2/operations/metrics.html
[13:59:25] for example segment/unavailable/count
[13:59:32] Number of segments (not including replicas) left to load until segments that should be loaded in the cluster are available for queries.
[13:59:58] Number of segments (not including replicas) left to load until segments that should be loaded in the cluster are available for queries.
[13:59:58] elukey: indeed, this is super interesting !
[14:00:01] argh sorry
[14:00:18] segment/loadQueue/failed Number of segments that failed to load.
[14:00:20] etc..
[14:00:35] I am trying to not add the ones with too many dimensions
[14:00:38] * elukey coffee
[14:12:15] joal: do you know what a tier is in druid?
[14:23:37] * elukey auto-rtfms himself with http://druid.io/docs/0.9.2/querying/multitenancy.html
[14:28:13] elukey: Arf, I'm too slow
[14:28:58] I don't think that it is worth to add it as prometheus label (the tier info)
[14:29:11] elukey: We only use default tier as of now
[14:29:34] elukey: and chances we'll use multi-tier is low
[14:30:07] in case we can modify the agent
[14:30:38] so I can simplify the code and store a simple counter
[14:40:04] 10Analytics: Reading Common Crawl data from hadoop / webproxy performance - https://phabricator.wikimedia.org/T179748#3737444 (Ottomata) Hmm, maybe but not that I know of. Do you need to do this just once, or regularly? If just once...just do it! It's already been +1.5 days plus since you submitted this tas...
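The WIP Gerrit change linked above (389475) is the actual Druid-to-Prometheus work. Only as an illustrative sketch of the shape being discussed here — per-datasource gauges, and no tier label since only _default_tier is in use — something like the following could sit behind it; all metric and function names are invented, not taken from that change:

```python
from prometheus_client import Gauge, start_http_server

# Illustrative only -- not the actual exporter from the Gerrit change above.
# One gauge per Druid metric of interest, labelled by datasource but (per the
# discussion) not by tier, since only _default_tier is in use.
SEGMENT_COUNT = Gauge('druid_segment_count',
                      'Segments served, per datasource', ['datasource'])
SEGMENT_USED_BYTES = Gauge('druid_segment_used_bytes',
                           'Bytes of segments served, per datasource', ['datasource'])

def handle_event(event):
    """Update gauges from one parsed Druid metric event (a dict like the ones above)."""
    metric = event.get('metric')
    datasource = event.get('dataSource', 'unknown')
    if metric == 'segment/count':
        SEGMENT_COUNT.labels(datasource=datasource).set(event['value'])
    elif metric == 'segment/used':
        SEGMENT_USED_BYTES.labels(datasource=datasource).set(event['value'])

if __name__ == '__main__':
    start_http_server(9100)  # hypothetical port
    # ...then tail the *-metrics.log files (or receive HTTP-emitted events)
    # and call handle_event() for each parsed event.
```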
[14:54:59] Hey milimetric and mforns
[15:02:32] (PS1) Joal: [WIP] Update mediawiki-history reduced [analytics/refinery] - https://gerrit.wikimedia.org/r/389496
[15:09:28] Analytics, Analytics-EventLogging: Timestamp format in Hive-refined EventLogging tables is incompatible with MySQL version - https://phabricator.wikimedia.org/T179540#3737504 (Ottomata) > No, these are not in the utc-millisec format Ya that's true. The [[ https://github.com/wikimedia/eventlogging/blob/m...
[15:09:57] heloooo
[15:26:19] hi joal
[15:26:24] 10Analytics: Reading Common Crawl data from hadoop / webproxy performance - https://phabricator.wikimedia.org/T179748#3737524 (EBernhardson) I did run it, took just under 48 hours maxing out both proxies. I was mostly curious as if this initial experiment works out would want to run it monthly with the new com...
[15:31:40] 10Analytics: Reading Common Crawl data from hadoop / webproxy performance - https://phabricator.wikimedia.org/T179748#3737529 (Ottomata) Aye ok. 48 hours sounds not too bad for a monthly job. If you want faster, try asking ops folks. I don't think there's much we can do to help you here.
[15:33:59] (PS2) Milimetric: Update hostnames to analytics-store [analytics/geowiki] - https://gerrit.wikimedia.org/r/265213
[15:34:16] (CR) jerkins-bot: [V: -1] Update hostnames to analytics-store [analytics/geowiki] - https://gerrit.wikimedia.org/r/265213 (owner: Milimetric)
[15:36:58] (CR) Milimetric: [V: 2 C: 2] "Gotta merge this, s2 no longer points to the same thing this code was expecting. Jaime, this code is fairly light-weight, it doesn't hit " [analytics/geowiki] - https://gerrit.wikimedia.org/r/265213 (owner: Milimetric)
[15:37:49] !log found geowiki was hitting the wrong databases, updated it to always hit analytics-store
[15:37:50] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[15:39:04] Analytics, ChangeProp, EventBus, MediaWiki-JobQueue, and 5 others: Select candidate jobs for transferring to the new infrastucture - https://phabricator.wikimedia.org/T175210#3737557 (mobrovac)
[15:42:17] Analytics, Analytics-EventLogging: Timestamp format in Hive-refined EventLogging tables is incompatible with MySQL version - https://phabricator.wikimedia.org/T179540#3737560 (Ottomata) > changing it back would again make this documentation confusing and misleading for the users of this data, and also re...
[15:45:55] milimetric, do you have 5 mins for druid ingestion woes?
[15:46:05] omw mforns
[15:46:08] k
[15:47:49] Analytics-EventLogging, Analytics-Kanban, Patch-For-Review: Resolve EventCapsule / MySQL / Hive schema discrepancies - https://phabricator.wikimedia.org/T179625#3737566 (Ottomata)
[16:00:35] ping fdans ottomata
[16:52:54] 10Analytics: Remove EL capsule from meta and add it to codebase - https://phabricator.wikimedia.org/T179836#3737740 (Nuria)
[16:56:23] ping milimetric
[16:56:56] Analytics-Kanban: Remove EL capsule from meta and add it to codebase - https://phabricator.wikimedia.org/T179836#3737760 (fdans)
[17:00:06] Analytics, MediaWiki-Authentication-and-authorization, MediaWiki-Platform-Team: Clear site data on MediaWiki log out - https://phabricator.wikimedia.org/T179752#3737761 (fdans) a:Nuria
[17:08:59] 10Analytics: Provide unqiues estimate/offset breakdowns in AQS - https://phabricator.wikimedia.org/T164593#3239203 (fdans) We need to change the AQS schema, update jobs and reload data since the beginning of time. We should end up with three numbers: estimate (total), underestimate and offset.
[17:10:08] 10Analytics: Provide unqiues estimate/offset breakdowns in AQS - https://phabricator.wikimedia.org/T164593#3737778 (fdans)
[17:11:18] Analytics-Kanban: Provide unqiues estimate/offset breakdowns in AQS - https://phabricator.wikimedia.org/T164593#3239203 (fdans)
[17:12:08] Analytics-EventLogging, Analytics-Kanban: Timestamp format in Hive-refined EventLogging tables is incompatible with MySQL version - https://phabricator.wikimedia.org/T179540#3737780 (fdans)
[17:12:29] Analytics-EventLogging, Analytics-Kanban: Timestamp format in Hive-refined EventLogging tables is incompatible with MySQL version - https://phabricator.wikimedia.org/T179540#3728047 (fdans) a:Ottomata
[17:35:04] 10Analytics: Create new table for 'referer' aggregated data - https://phabricator.wikimedia.org/T112284#1630965 (fdans) The referer table will contain: - Referer (normalised hostname) - Request counts for that referer - Country - Wikimedia project (hostname) - Agent type (user/bot) - Origin (internal/external)...
[17:35:40] 10Analytics: Create new table for 'referer' aggregated data - https://phabricator.wikimedia.org/T112284#3737912 (fdans)
[18:17:16] * elukey off!
[18:28:27] Analytics, MediaWiki-Authentication-and-authorization, MediaWiki-Platform-Team: Clear site data on MediaWiki log out - https://phabricator.wikimedia.org/T179752#3738181 (Nuria) >Right now when a user logs out of MediaWiki, a significant amount of state can stay behind spanning both the logged-in and...
[18:28:40] Analytics, MediaWiki-Authentication-and-authorization, MediaWiki-Platform-Team: Clear site data on MediaWiki log out - https://phabricator.wikimedia.org/T179752#3738183 (Nuria) a:Nuria>None
[18:37:24] bye team, cya tomorrow
[19:06:01] Heya milimetric - Would you have some time now?
[19:07:36] joal: I would but I had a lunch mishap so now I’m starving, should eat otherwise I might be angry at the algorithm for no reason
[19:07:48] :D
[19:07:49] I’ll ping you after I get back
[19:07:55] No prob :)
[19:37:58] 10Analytics: Create new table for 'referer' aggregated data - https://phabricator.wikimedia.org/T112284#3738441 (Nuria)
[19:41:10] Analytics-Kanban: Remove EL capsule from meta and add it to codebase - https://phabricator.wikimedia.org/T179836#3737740 (Tbayer) I don't quite understand the "cannot evolve on its own" argument; isn't that the case for any and all schema pages on Meta? (They are all tied to code, whether generic or instrume...
[19:41:28] Analytics-EventLogging, Analytics-Kanban: Remove EL capsule from meta and add it to codebase - https://phabricator.wikimedia.org/T179836#3738468 (Tbayer)
[19:45:03] ottomata: ok, finally looking at this: https://gerrit.wikimedia.org/r/#/c/388255/
[19:52:29] Analytics-EventLogging, Analytics-Kanban: Remove EL capsule from meta and add it to codebase - https://phabricator.wikimedia.org/T179836#3737740 (Ottomata) > Users of EventLogging data do need an up to date documentation of these fields We don't even have this now, as recently we experienced. The eventl...
[19:52:43] nuria_: new patch coming in a sec
[19:52:44] but ya?
[19:52:50] ottomata: ok
[19:53:05] ottomata: i think is fine my comments are more about overall
[19:54:23] ya k ready!
[19:54:24] tell me!
[19:54:24] :)
[19:58:50] nuria_: ?
[19:59:01] ottomata: one sec
[20:01:33] fdans: what versions of node and npm do you have?
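Referring back to the referer table sketched in T112284 at 17:35 above, one way those columns could be expressed as a Spark schema is sketched below; the field names and types are assumptions for illustration only, not a finalized design:

```python
from pyspark.sql.types import LongType, StringType, StructField, StructType

# Hypothetical shape only -- column names and types are assumptions,
# not a finalized schema for T112284.
referer_daily_schema = StructType([
    StructField('referer_host', StringType()),   # normalised referer hostname
    StructField('country', StringType()),
    StructField('project', StringType()),        # Wikimedia project hostname
    StructField('agent_type', StringType()),     # user / bot
    StructField('origin', StringType()),         # internal / external
    StructField('request_count', LongType()),
])
```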
[20:01:42] ottomata: ok, looked at everything
[20:02:40] ottomata: did not know about globals
[20:03:01] ottomata: i think code is better that way , no question
[20:04:02] ottomata: i prefer not having to deploy config + code changes together so i would deploy new code
[20:04:11] ottomata: and later deploy config changes
[20:04:27] ottomata: after new code has baked 9w/o it being used)
[20:04:51] config changes==?
[20:04:57] oh on puppet and capsule revision?
[20:05:14] ya makes sense, + we have to manage the popups schema crap in hadoop
[20:06:02] ottomata: right, our config is on puppet and teh code that takes advantage of it on eventlogging python server
[20:06:34] ok joal, if you're still around
[20:06:47] aye, ya, so as is no it should be a no op in prod
[20:06:52] hmmm
[20:06:55] yeah
[20:06:55] Heya milimetric - Just back from dinner - I might not be as fast as before :)
[20:07:02] ottomata: ok, other comments are about naming
[20:07:03] timestamp is handled by the field name now, rather than the format
[20:07:06] so that'll be fine
[20:07:10] in jrm.py
[20:07:10] joal: we can do it tomorrow morning
[20:07:21] dt won't exist until we chnage puppet
[20:07:25] Let's go for it now, It'll allow me to try to code tomorrow :)
[20:07:30] if ok for you milimetric
[20:07:31] oh but userAgent will
[20:07:32] hm..
[20:08:42] hm
[20:08:46] not sure what to do about userAgent
[20:09:00] ottomata: in mysql?
[20:09:05] ottomata: or hadoop?
[20:10:13] both.
[20:10:27] the code as is leaves it as a nested object when processor parses it
[20:10:33] so it'll be in object in kafka
[20:11:05] ya, i see, cause this changes are not backwards compatible w/o a stop-the-world deployment
[20:11:21] ottomata: no wait
[20:11:30] ottomata: on mysql end
[20:11:42] ottomata: 1st deploy (w/o chnages taking effect) will work
[20:12:02] ottomata: as the UA will be made into a string via existinf=g code
[20:12:06] *existing code
[20:12:13] in my change
[20:12:21] the userAgent will be an object in Kafka
[20:12:39] we need to tell the mysqlconsumer process to use map://...function=wmf_mysql_mapper to convert it to a string
[20:13:37] ottomata: cause changes (w/o config) are not backwards compatible
[20:13:53] ya
[20:14:04] same problem in existing refined hive schemas
[20:14:12] refine will fail since they are currently making userAgent a string
[20:14:31] maybe i can make another format specifier in parser
[20:14:41] that says if the userAgent should be converted to string or not
[20:14:45] lemme see...
[20:14:52] ottomata: ok, on my opinion we should always aim to make changes backwards compatible, in hive end is less problematic cause other than popups nobody is consuming those
[20:14:57] then we can make the change over happen in config instead of during code deploy
[20:15:30] ottomata: ok, ya, so the switch happens in config
[20:15:49] ping me whne you are back fdans
[20:15:58] oh yeah, nuria_ oh yeah, this will be way better anyway
[20:16:17] on it...
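On the userAgent point above (the capsule's userAgent becomes a parsed object in Kafka while the MySQL consumer still expects a string), a map function along these lines could do the stringifying; this is only an illustrative sketch, not the actual eventlogging code nor the real wmf_mysql_mapper mentioned at 20:12:

```python
import json

def stringify_user_agent(event):
    """Hypothetical map function: if the capsule's userAgent arrives as a parsed
    dict, serialize it back to a JSON string so the MySQL consumer keeps writing
    the same column type as before."""
    ua = event.get('userAgent')
    if isinstance(ua, dict):
        event['userAgent'] = json.dumps(ua)
    return event

# Conceptually this would be wired up in config on the consumer URI
# (something along the lines of map://mysql://...?function=...), not in the
# consumer code itself, which is what keeps the code deploy a no-op.
```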
[20:16:18] :)
[20:18:17] ottomata: ok, no stop the world deploys good
[20:18:20] thank you
[20:18:34] it also makes the parse() function way simpler
[20:18:42] and removes userAgent specific logic :)
[20:18:45] nuria_: node 6.3.1 npm 3.10.3
[20:19:02] fdans: ok, lemme check caus emy build is notr working
[20:19:04] *not
[20:19:13] * cause my build is not working
[20:24:32] Analytics-EventLogging, Analytics-Kanban: Remove EL capsule from meta and add it to codebase - https://phabricator.wikimedia.org/T179836#3738549 (Ottomata) For example, in order to deploy EventCapsule schema changes in beta for testing, we have to create a new revision of EventCapsule. blegh! :)
[20:29:25] off for today a-team
[20:29:28] see you tomorrow
[20:29:38] latersss
[20:29:44] Analytics-Kanban: Use native timestamp types in Data Lake edit data - https://phabricator.wikimedia.org/T161150#3738571 (Neil_P._Quinn_WMF) Resolved>stalled p:High>Low >>! In T161150#3508429, @Neil_P._Quinn_WMF wrote: > Thank you, good to know! Would it make sense to keep this open and stalle...
[20:30:09] fdans: so i try npm run dev and it basically doesn't go beyond
[20:30:14] Analytics-Kanban, Contributors-Analysis: Use native timestamp types in Data Lake edit data - https://phabricator.wikimedia.org/T161150#3738586 (Neil_P._Quinn_WMF)
[20:30:16] fdans:
[20:30:19] https://www.irccloud.com/pastebin/ks7SU6Dd/
[20:30:59] node is 6.1
[20:31:06] so i might try to upgrade that
[20:31:51] if upgrading doesn't do it we can troubleshoot at the cave nuria_
[20:32:09] webpack is always pretty bleeding edge so it requires latest node/npm
[20:33:29] fdans: latest node is 9 though
[20:33:49] fdans: you have 6.3 right?
[20:34:41] ottomata: on this: https://gerrit.wikimedia.org/r/#/c/388255/6/eventlogging/handlers.py
[20:34:44] yep
[20:35:30] ottomata: how do you define the "function" arg as a function on the config ? Or was your comment meaning "in general" this function could also take a function?
[20:35:38] nuria_: that pastebin, does the command end there?
[20:35:44] fdans: ya
[20:35:57] then the bundle should be generated!
[20:36:21] fdans: isn't supposed to continue though or not?
[20:36:41] it should watch for changes in the files
[20:37:06] fdans: ah i see, then it is just it does not instatiate the server but that shoudl be ok
[20:37:15] if you run a server in the dist-dev, is it working?
[20:37:30] the dist-dev folder* nuria_
[20:37:58] fdans: ya, it is , sorry forgot about dist-dev
[20:38:04] fdans: ok, i think is all good
[20:38:08] np!
[20:39:06] nuria_: in general, the map_writer function arg can either be a python callable function
[20:39:07] or
[20:39:09] a string function name
[20:39:10] Analytics-Kanban, User-Elukey, cloud-services-team (Kanban): Remove logging from labs for schema https://meta.wikimedia.org/wiki/Schema:CommandInvocation - https://phabricator.wikimedia.org/T166712#3738625 (Nuria) Open>Resolved
[20:39:22] in our usual case, it will be a string function name from the URI query param
[20:39:23] e.g.
[20:39:30] map://stdout://?function=inject_timestamp
[20:40:10] Analytics-Kanban, Analytics-Wikistats, Patch-For-Review: Adapt components for new editing metrics - https://phabricator.wikimedia.org/T178461#3738630 (Nuria) Open>Resolved
[20:40:30] Analytics-Kanban, Contributors-Analysis: Use native timestamp types in Data Lake edit data - https://phabricator.wikimedia.org/T161150#3738631 (Nuria) stalled>Resolved
[20:41:07] fdans: no chnageset on thi sticket bu i think is done correct? https://phabricator.wikimedia.org/T178797
[20:41:30] ottomata: right, that is why i though function_name was more appropiate
[20:41:41] ottomata: but what you are saying is that map can take either
[20:42:14] nuria_: yes that's done, was included with the edit metrics fixes
[20:42:27] yes
[20:43:37] Analytics-Kanban, Analytics-Wikistats: Handle negative values in charts - https://phabricator.wikimedia.org/T178797#3738650 (Nuria) Open>Resolved
[20:43:43] Analytics-Kanban, Analytics-Wikistats, Patch-For-Review: Stub new mediawiki history-based metrics - https://phabricator.wikimedia.org/T175268#3738651 (Nuria) Open>Resolved
[20:44:02] Analytics-Cluster, Analytics-Kanban: Provision new Kafka cluster(s) with security features - https://phabricator.wikimedia.org/T152015#3738665 (Nuria)
[20:44:04] Analytics-Cluster, Analytics-Kanban, Patch-For-Review: Mirror topics from main Kafka clusters (from main-eqiad) into jumbo-eqiad - https://phabricator.wikimedia.org/T177216#3738664 (Nuria) Open>Resolved
[20:46:00] (Draft2) Reedy: Add hifwiktionary [analytics/refinery] - https://gerrit.wikimedia.org/r/389556 (https://phabricator.wikimedia.org/T173643)
[20:46:29] haha nuria_ it looks like neil just posted he wanted to keep that open, and then you closed it!
[20:46:46] ottomata: argh, totally missed it
[20:47:07] Analytics-Kanban, Contributors-Analysis: Use native timestamp types in Data Lake edit data - https://phabricator.wikimedia.org/T161150#3738674 (Nuria) Resolved>Open
[20:47:18] ottomata: corrected now
[20:49:05] (CR) Nuria: [V: 2 C: 2] Add hifwiktionary [analytics/refinery] - https://gerrit.wikimedia.org/r/389556 (https://phabricator.wikimedia.org/T173643) (owner: Reedy)
[20:49:36] nuria_: this is way better
[20:49:36] https://gerrit.wikimedia.org/r/#/c/388255/9/eventlogging/parse.py
[20:49:39] %u
[20:50:00] joal: FYI , since you have ops week, 1 new value for wiki: https://gerrit.wikimedia.org/r/#/c/389556/
[20:50:05] ottomata: looking
[20:51:39] ottomata: ah yes, boy ori tought of everything in this class
[20:52:33] Analytics-Kanban, Analytics-Wikistats: Fonts from fonts.googleapis.com on wikistats - https://phabricator.wikimedia.org/T178317#3688129 (Nuria)
[20:52:39] Analytics-Kanban, Analytics-Wikistats: Fonts from fonts.googleapis.com on wikistats - https://phabricator.wikimedia.org/T178317#3688129 (Nuria) a:Nuria
[20:53:13] ottomata: spark task is done right? https://phabricator.wikimedia.org/T158334
[20:53:51] nuria_: hmmm, there are some more TODOs to clean up maybe
[20:53:59] adding to comment
[20:54:04] ottomata: k thanks
[20:54:34] fdans: this one is in CR but has no code? https://phabricator.wikimedia.org/T179441
[20:55:06] Analytics-Cluster, Analytics-Kanban, Patch-For-Review: Make Spark 2.1 easily available on new CDH5.10 cluster - https://phabricator.wikimedia.org/T158334#3738709 (Ottomata) Remaining TODOs: - remove spark2-beeline(?) - spark-sql logging is too verbose with provided log4j.properties - Make spark2 use...
[20:59:45] fdans: and do we use any formatters on sublime?
[21:03:22] nuria_: there's a vue linter/code highlighting package, it's the one I have installed
[21:04:29] fdans: on sublime?
[21:05:19] nuria_: yep, if you search for vue in package control
[21:05:22] wow scap in beta never works!
[21:05:25] not even with the hammer!
[21:06:21] more like scrap riiight
[21:06:24] sorry
[21:07:01] ottomata: boy
[21:07:05] ottomata: i so agree
[21:07:05] AGH, its a git version bug
[21:07:15] looks like scap wont' work on trusty
[21:07:17] filing bug.
[21:44:57] ok nuria beta is running https://gerrit.wikimedia.org/r/#/c/388255/ with no config changes
[22:38:36] Analytics, DBA, Data-Services, Research, cloud-services-team (Kanban): Implement technical details and process for "datasets_p" on wikireplica hosts - https://phabricator.wikimedia.org/T173511#3739078 (bd808)
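Circling back to the map writer discussion at 20:39–20:42 above (the function argument can be either a Python callable or a string name taken from the URI's function= query param), a minimal sketch of that dispatch pattern follows; all names are invented for illustration and this is not the actual eventlogging handlers code:

```python
from urllib.parse import parse_qs, urlparse

# Toy map function and registry; the real eventlogging code resolves these
# differently -- names here are illustrative only.
def inject_timestamp(event):
    import time
    event.setdefault('timestamp', time.time())
    return event

MAP_FUNCTIONS = {'inject_timestamp': inject_timestamp}

def resolve_map_function(function):
    """Accept either a Python callable or a string function name."""
    if callable(function):
        return function
    return MAP_FUNCTIONS[function]

def map_function_from_uri(uri):
    """Pull function= off a URI like map://stdout://?function=inject_timestamp."""
    query = parse_qs(urlparse(uri).query)
    return resolve_map_function(query['function'][0])

fn = map_function_from_uri('map://stdout://?function=inject_timestamp')
print(fn({'schema': 'TestSchema'}))
```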