[02:07:49] Analytics, Analytics-Kanban, Event-Platform, Product-Infrastructure-Data, Patch-For-Review: Streams with empty configs should be rendered as {} in the JSON returned by StreamConfig API - https://phabricator.wikimedia.org/T259917 (Mholloway) p:Unbreak!→High Chatted with @mpopov and thi...
[05:01:48] (PS2) Nuria: [WIP] chopping timeseries for noise detection [analytics/refinery/source] - https://gerrit.wikimedia.org/r/612454 (https://phabricator.wikimedia.org/T257691)
[05:03:28] (CR) Nuria: [WIP] chopping timeseries for noise detection (1 comment) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/612454 (https://phabricator.wikimedia.org/T257691) (owner: Nuria)
[06:30:17] good morning!
[06:30:26] Hi elukey :)
[06:30:49] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Upgrade schema[12]00[12] to Debian Buster - https://phabricator.wikimedia.org/T255026 (elukey) All done!
[06:31:02] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Upgrade schema[12]00[12] to Debian Buster - https://phabricator.wikimedia.org/T255026 (elukey)
[06:51:47] (CR) Joal: "I like the approach - less flexible but also more robust (less parameterization potential error). Some code comments inline" (6 comments) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/612454 (https://phabricator.wikimedia.org/T257691) (owner: Nuria)
[06:52:10] elukey: would you have some time for me to talk about hadoop fsimages ?
[06:53:02] elukey: with a coffee - of course :)
[07:01:11] joal: ok in ~30 mins?
[07:01:19] when you want elukey :)
[07:01:25] ack :)
[07:20:40] (PS1) Joal: Add ja.wikivoyage to pageview allow-list [analytics/refinery] - https://gerrit.wikimedia.org/r/622926
[07:21:02] !log Manually add ja.wikivoyage to pageview allowlist to prevent alerts
[07:21:04] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[09:16:46] joal: sorryyy too many things :)
[09:16:52] I am good now if you are
[09:16:58] ok for me elukey :)
[09:17:36] elukey: cave?
[09:17:44] sure
[09:41:15] Analytics-Clusters: Establish what data must be backed up before the HDFS upgrade - https://phabricator.wikimedia.org/T260409 (JAllemandou) Ping @mforns - Could you please review reportupdater data and tell us which should be backed up (if not already done somewhere else) ? Other things not to forget: * `ev...
[10:10:51] Analytics, Analytics-Kanban, Operations, netops: Add more dimensions in the netflow/pmacct/Druid pipeline - https://phabricator.wikimedia.org/T254332 (fdans) @CDanis that makes sense. In that case what we propose is adding an intermediate data augmentation step to add these dimensions about 6-7 h...
[10:35:14] can we reduce the 6/7 hours of delay for data augmentation to say 2/3 --^ ?
[10:35:18] fdans: --^
[10:35:31] it would surely sound more appealing for the SRE team
[10:37:27] * elukey lunch!
[10:37:32] elukey: joal 6-7 hours is how long it takes for an hour of netflow to be refined, right?
[10:39:43] I hope not, it seems a ton of time.. I thought it was more due to scheduling/waiting + refine time
[10:39:57] (will read later)
[11:11:36] correct elukey
[11:11:49] fdans: scheduling says refinement of event happens h+6
[11:19:27] joal: but if we did it like virtualpageview hourly we could get to druid ingestion faster no?
[11:19:42] correct fdans
[11:21:02] fdans: I can't recall why we wait so long
[11:21:04] for events
[11:21:52] I think it's related to late-events/out-of-order events
[11:23:02] dfright
[12:28:38] Analytics, Analytics-Kanban, Operations, netops: Add more dimensions in the netflow/pmacct/Druid pipeline - https://phabricator.wikimedia.org/T254332 (CDanis) Yes, it would. There's two use cases here: * DoS attack analysis, for which real-time is essential. Here, the augmented data would be hel...
[13:10:53] Analytics-Clusters: Upgrade Kafka Brokers to Debian Buster - https://phabricator.wikimedia.org/T255123 (ops-monitoring-bot) Script wmf-auto-reimage was launched by elukey on cumin1001.eqiad.wmnet for hosts: ` ['kafka-jumbo1006.eqiad.wmnet'] ` The log can be found in `/var/log/wmf-auto-reimage/202008281310_el...
[13:28:40] Analytics, Analytics-Kanban, Operations, netops: Add more dimensions in the netflow/pmacct/Druid pipeline - https://phabricator.wikimedia.org/T254332 (fdans) Thanks for clarifying. A correction from my end: the extra dimensions would actually take significantly less than 6 hours since they would...
[13:34:42] fdans: re: latest update -- sounds great! :)
[13:36:32] cdanis: awesome! thanks for the quick helpful feedback
[13:36:59] 👍
[13:46:57] Analytics-Clusters: Upgrade Kafka Brokers to Debian Buster - https://phabricator.wikimedia.org/T255123 (ops-monitoring-bot) Completed auto-reimage of hosts: ` ['kafka-jumbo1006.eqiad.wmnet'] ` and were **ALL** successful.
[14:02:40] just reimaged jumbo1006 to buster, all good!
[14:02:49] doing the others should be easy
[14:18:34] !log run kafka preferred-replica-election on jumbo after the reimage of jumbo1006
[14:18:35] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[14:21:03] (CR) Mforns: [V: +2 C: +2] "LGTM!" [analytics/refinery] - https://gerrit.wikimedia.org/r/622926 (owner: Joal)
[14:21:34] fdans: hiyaa FYI, i don't know exactly why, but I don't get emails if/when you update github PRs
[14:21:43] so if you do work on jsonschema-tools ping meeeee ty
[14:22:09] ottomata: hellooo I only responded to your last comment this morning :)
[14:22:21] i.e. fran was rebellious
[14:22:44] thanks joal for pushing the fix to the pageview whitelist
[14:22:50] see it
[14:22:51] hm
[14:22:56] sorry for letting the alerts fire overnight
[14:23:09] hmm
[14:23:10] heya teammm
[14:23:41] i'm not sure fdans, your point is good, but I'm worried about unexpected behavior. If a user is even adding a min/max, they are probably doing so for some reason
[14:23:52] if they are doing it outside of those bounds, they are also probably doing it for some reason
[14:24:22] much of what jsonschema-tools does is to aid generation of schema files with shortcuts and tools and defaults
[14:24:39] fdans: perhaps
[14:24:43] we should add tests
[14:24:45] for this too
[14:25:01] so, never overwrite if min/max are provided
[14:25:01] but
[14:25:03] in tests
[14:25:15] if enforcedBounds AND min/max are set somewhere outside of those, then fail.
[14:25:26] i'm trying to avoid magic modifications
[14:25:31] i'm ok with defaults
[14:25:43] !log deployed pageview whitelist with new wiki: ja.wikivoyage
[14:25:44] but overriding what is in the source file is sneakier
[14:25:45] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[14:27:46] hmmmmmm yea adding tests that fail when that happens sounds good!
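A rough sketch of the test idea above: never overwrite an explicit min/max, only default them, and fail when an explicit value falls outside the enforced bounds. jsonschema-tools itself is a Node.js project, so this Python snippet only mirrors the logic under discussion; the enforcedBounds name comes from the conversation, and the function and field layout are hypothetical.

```python
# Hypothetical sketch of the bounds handling discussed above; not jsonschema-tools code.
def apply_enforced_bounds(field_schema: dict, enforced_min: int, enforced_max: int) -> dict:
    """Default minimum/maximum to the enforced bounds, never overwrite explicit
    values, and fail loudly if an explicit value falls outside the bounds."""
    declared_min = field_schema.get('minimum')
    declared_max = field_schema.get('maximum')
    if declared_min is not None and declared_min < enforced_min:
        raise ValueError(f'minimum {declared_min} is below the enforced bound {enforced_min}')
    if declared_max is not None and declared_max > enforced_max:
        raise ValueError(f'maximum {declared_max} is above the enforced bound {enforced_max}')
    # Only fill in what the schema author did not provide ("ok with defaults",
    # no "magic modifications" of explicit values).
    field_schema.setdefault('minimum', enforced_min)
    field_schema.setdefault('maximum', enforced_max)
    return field_schema
```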
[14:28:08] I'll modify ottomata
[14:28:12] joal: is it ok if we use the task description to list the things that need backup? If you agree, I will do it. https://phabricator.wikimedia.org/T260409
[14:29:06] ottomata: kafka-jumbo1006 is on buster, all /srv data preserved!
[14:29:35] NIIIIICE!
[14:29:39] ok fdans ty
[14:36:59] (PS1) Mforns: Pull back start time of pingback reports [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/623002 (https://phabricator.wikimedia.org/T246154)
[14:43:59] (CR) Mforns: [V: +2 C: +2] "Self-merging to backfill prod." [analytics/reportupdater-queries] - https://gerrit.wikimedia.org/r/623002 (https://phabricator.wikimedia.org/T246154) (owner: Mforns)
[14:47:50] btw ottomata do you think you'll have a chance to look at https://github.com/wikimedia/eventgate/pull/10 today?
[14:48:23] AH thanks for reminder
[14:48:23] yes
[14:48:39] I think we'll also have to enable --cors on the logging-external eventgate
[14:48:49] I didn't see it returning those headers when I tried curl -X OPTIONS
[14:48:55] can you add a comment there explaining why application/reports+json ?
[14:49:00] will do!
[14:49:01] like I did for text/plain ?
[14:49:02] ty
[14:50:08] Analytics, Analytics-Wikistats: add link to translate wiki in wikistats footer - https://phabricator.wikimedia.org/T261502 (Nuria)
[14:50:14] hm ya it looks like there is no cors-specific config, but i think it is easy to add with service runner stuff
[14:50:23] the default is
[14:50:23] cors: false
[14:50:27] in eventgate chart
[14:51:02] https://github.com/wikimedia/service-template-node/blob/master/config.prod.yaml#L39
[14:51:22] ottomata: done
[14:55:33] mforns: sounds good, let's list in the task :)
[14:55:43] joal: ok, will do
[14:55:48] thanks mforns :)
[14:55:56] and as we find more we edit it
[15:01:11] brb
[15:08:41] (CR) Mforns: "Agree with all comments from Joseph." (2 comments) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/612454 (https://phabricator.wikimedia.org/T257691) (owner: Nuria)
[15:14:19] Analytics, Analytics-Kanban, Patch-For-Review, Platform Team Workboards (Initiatives): reportupdater Pingback reports are broken and need to be refactored - https://phabricator.wikimedia.org/T246154 (mforns) @CCicalese_WMF The back-filling from 2018-09-02 is running. It will take about 1 more da...
[15:16:10] a-team, didn't manage to deploy AQS yesterday (for geoeditors), and next week I'm in vacation. Do you think I can deploy today?
[15:16:59] mforns: nah I’ll do it next week no worries, I’m familiar enough
[15:30:42] joal: general question that i've wondered about (and now is relevant because i'm considering rewriting some code in Scala to PySpark with which I'm much more familiar). if i'm running a relatively simple query on data that is accessible via Hive, will I see any major difference in computation time or memory usage if I write it in Pyspark vs. Hive vs. Scala? Or are they all just translating what I write to the same back-end process?
[15:32:39] milimetric: thanks! :] If something is broken (I tested thoroughly but you know...!), I can fix the week after.
[15:35:28] isaacj: just in case he’s not around, I think there shouldn’t be any difference, it’s all happening on spark’s side, after the python client makes the call and sends the spark sql query. Jo did say there were differences but I looked it up and they’re more obscure than what I run into with daily coding. My impression was always that Jo just values the static type analysis
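To make the point above concrete: in PySpark the Python process only builds and submits the SQL string, and Spark plans and runs it on the cluster the same way it would for the equivalent Scala code. A minimal sketch, with purely illustrative table and column names.

```python
# Minimal PySpark sketch: the query below is planned and run by Spark itself,
# so submitting it from Python or Scala should make little difference.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName('simple-hive-query')
         .enableHiveSupport()  # lets spark.sql() see tables in the Hive metastore
         .getOrCreate())

df = spark.sql("""
    SELECT project, SUM(view_count) AS views
    FROM wmf.pageview_hourly              -- illustrative table/columns
    WHERE year = 2020 AND month = 8 AND day = 28
    GROUP BY project
""")
df.show(10)
```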
[15:37:00] milimetric: thanks! that makes a lot of sense. I'm definitely not taking advantage of some of the more advanced features so sounds like i'm good to go with what i know best
[15:38:16] Hey team, I'm trying to test out my ability to ssh, and I'm getting
[15:38:16] $ ssh stat1008.eqiad.wmnet
[15:38:17] ssh: Could not resolve hostname stat1008.eqiad.wmnet: nodename nor servname provided, or not known
[15:38:46] while joal probably still has a ways to go to get me to be a scala convert, he did win some understanding from me this week of the usefulness of static types because i was working with the wikidata entity table in Hive and realized how valuable a set schema was for accessing / filtering based on deeply nested JSON data
[15:39:06] hey razzi, I think that your ssh config is probably not right, there are some SRE docs about it
[15:39:36] Oh right https://wikitech.wikimedia.org/wiki/Production_access#Setting_up_your_SSH_config
[15:39:49] yes exactly, I was about to link it :)
[15:40:42] as for the bastion, bast4002.wikimedia.org is probably the best for you
[15:40:50] (it is in San Francisco)
[15:42:30] razzi: o/ seems you are finding some answers, am avail to help if you need too
[15:43:01] 👍
[15:52:00] milimetric: I did not yet add the changes to https://github.com/wikimedia/analytics-aqs-deploy/blob/master/scripts/insert_monitoring_fake_data.cql for aqs deployment
[15:52:09] just learned that today
[15:52:28] oh ok, mforns, I'm adding that to the etherpad then
[15:52:37] ok
[15:52:39] thanks!
[15:56:54] heya isaacj - about your question of performance: sql query from pyspark or scala should perform similar - variation may happen depending on how functions are implemented (some pandas vectorized functions make python faster, most of the time scala wins, but the diff should be small)
[15:57:31] woohoo! definitely the response I was hoping for :)
[15:57:35] joal: thanks
[15:57:38] isaacj: For more complex stuff like using custom UDFs, or using datasets with functions, scala is supposedly faster
[15:58:08] however isaacj, it's different in hive - hive will almost always be slower
[16:00:15] joal: okay, that makes sense re UDFs. I have a few examples in Scala so I'll turn to those if I'm seeing really poor performance around any I write in PySpark. and you helped me move beyond Hive when I realized that I couldn't write those nice nested `with table as (SELECT...)` queries in Hive :)
[16:00:54] isaacj: you can write CTEs in hive!
[16:01:13] oh no, don't tell me that! i'll revert back to Hive ;)
[16:01:14] isaacj: but moving away from it is a good move whatever the reason :)
[16:01:36] :-P
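On the CTE point just above: the `with table as (SELECT...)` form isaacj mentions is accepted by both HiveQL and Spark SQL. A short sketch in the same PySpark style as before, again with illustrative table and column names.

```python
# CTE (WITH ... AS ...) example; both Hive and Spark SQL accept this syntax.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

df = spark.sql("""
    WITH daily AS (
        SELECT project, SUM(view_count) AS views
        FROM wmf.pageview_hourly          -- illustrative table/columns
        WHERE year = 2020 AND month = 8 AND day = 28
        GROUP BY project
    )
    SELECT project, views
    FROM daily
    WHERE views > 1000000
    ORDER BY views DESC
""")
df.show(10)
```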
[16:01:43] (CR) Nuria: [C: +2] Add ja.wikivoyage to pageview allow-list [analytics/refinery] - https://gerrit.wikimedia.org/r/622926 (owner: Joal)
[16:03:12] (CR) Joal: [WIP] chopping timeseries for noise detection (1 comment) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/612454 (https://phabricator.wikimedia.org/T257691) (owner: Nuria)
[16:09:13] a-team: on tue the SRE team will do the switchover to codfw, let's keep it in mind
[16:10:05] ack elukey
[16:10:05] yep, they said it only took *minutes* in some tests. That’d be amazing
[16:10:56] yes last time it took less than half an hour IIRC
[16:11:12] and with all the new etcd stuff in place, should be way quicker
[16:12:53] Analytics-Clusters: Upgrade Kafka Brokers to Debian Buster - https://phabricator.wikimedia.org/T255123 (elukey) Kafka jumbo 1006 is now on Buster, and data under /srv was preserved. 100[1-5] are still to be done!
[16:13:41] razzi: if you have questions about the switchover to codfw let us know
[16:14:40] Can I watch it happen? :)
[16:15:38] (I found the email about it, and they say to follow #wikimedia-operations)
[16:21:36] Analytics: Fix TLS certificate location and expire for Hadoop/Presto/etc.. and add alarms on TLS cert expiry - https://phabricator.wikimedia.org/T253957 (elukey) @MoritzMuehlenhoff @jbond if you have time, I have an idea to discuss (I see that there is something moving for PKI so this could be relevant/not-n...
[16:21:48] yes yes I'd also suggest to join #wikimedia-sre
[16:23:19] we are currently running mediawiki + dbs + services + etc.. in two DCs, eqiad (virginia) and codfw (dallas) but only one of them is "active"
[16:23:38] this is due to a variety of reasons, the most important ones are databases :)
[16:23:57] there are some services now working in active/active mode, namely serving traffic from both dcs
[16:24:10] and eventually we might have even mediawiki in active/active mode
[16:24:19] but work is still in progress and very complex :)
[16:25:09] the rest of the DCs, esams (Amsterdam) eqsin (Singapore) ulsfo (san francisco) are only running Varnish/ATS/DNS plus other services
[16:25:29] (so basically acting as our CDN)
[16:25:38] caches are also present in codfw/eqiad of course
[16:25:58] now in our use case, it is interesting since all the analytics stuff is only in eqiad
[16:26:30] so when we switchover to codfw, we are left "alone" in eqiad :)
[16:26:58] it is a good opportunity for SRE to do maintenance that requires downtime (especially network), so we need to be extra vigilant to watch for troubles
[16:27:12] razzi: --%
[16:27:14] --^
[16:29:11] Cool. I take it since we'll be "alone" in eqiad, our servers might go offline for maintenance, unless operations is cued into what we're using
[16:29:49] nono there are some tasks with the list of services affected for each maintenance ops etc..
[16:30:02] so we are aware of most of the potential troubles
[16:30:12] buuut there might be some unexpected ones :D
[16:30:42] lemme get an example
[16:31:09] razzi: https://phabricator.wikimedia.org/T256112
[16:31:25] I suppose we could even concurrently migrate our stuff from eqiad to codfw, but that would be a ton of work and we could do that anytime without having to make wikis read-only like they're doing
[16:31:49] and https://phabricator.wikimedia.org/T196487
[16:32:07] razzi: yes and we'd need a ton of hardware in codfw too
[16:32:16] It does make me wonder what would happen to our data if eqiad simply shut off... bad things :P
[16:32:53] yeah that is a disaster scenario :)
[16:33:16] so in the first task of the above, they are going to recable the network switches that form "Row D"
[16:33:52] in the DCs we have servers in racks (each of them equipped with a switch), and groups of racks form a "row"
[16:34:01] we have 4 in eqiad and codfw (A B C D)
[16:34:15] and all the rows connect to two main routers
[16:34:27] nothing super fancy, very simple
[16:35:02] in theory, the recabling should be without impact.. in practice, if it fails, there is the potential for a network outage of row D
[16:35:09] in which we have part of the hadoop nodes, etc..
[16:35:15] but we can survive if it happens
[16:35:39] the second task is about a specific Rack inside a row, D4 (so row D cabinet 4)
[16:35:39] D4: iscap 'project level' commands - https://phabricator.wikimedia.org/D4
[16:36:16] elukey: ping me monday RE pki
[16:36:26] jbond42: <3
[16:36:31] im switching off for the weekend now, enjoy :)
[16:36:54] of course yes sorry for the late ping, didn't mean to get an answer today :)
[16:36:57] have a good weekend!
[16:37:10] razzi: https://netbox.wikimedia.org/dcim/devices/?q=&rack_id=38&status=active&role=server (you should be able to access with your uid + password for wikitech)
[16:37:15] no not late at all, if anything im knocking off a bit early for me :)
[16:37:19] this is the list of servers impacted for D4
[16:37:22] anyway enjoy the weekend :)
[16:37:38] thanks!
[16:38:54] OH !
[16:38:55] elukey: !
[16:39:04] we need to create razzi's kerberos creds
[16:39:26] i can just do that ya?
[16:39:31] yep
[16:40:00] IIRC his entry in admin's yaml was already having the krb: present flag
[16:40:06] so no need for puppet changes
[16:40:34] cool
[16:40:39] > The protocol was named after the character Kerberos (or Cerberus) from Greek mythology, the ferocious three-headed guard dog of Hades.
[16:40:39] Now I understand that elukey shirt
[16:40:39] ok running manage_principals.py create
[16:40:53] ok razzi you should get an email with instructions
[16:41:02] you can also check out
[16:41:02] https://wikitech.wikimedia.org/wiki/Analytics/Systems/Kerberos/UserGuide
[16:41:32] we can do some sessions over meet to explain various things
[16:41:39] like hadoop, kerberos, kafka, etc..
[16:41:55] maybe not 2h every time, something lighter :)
[16:43:01] Got email, reset password, opened hive!
[16:43:06] nice!
[16:43:12] razzi: btw, not sure what OS you're running, but if it's Debian or Ubuntu, you might be interested in https://wikitech.wikimedia.org/wiki/Wmf-sre-laptop
[16:43:34] cdanis: that is cool!
[16:43:41] i might use that in mw-vagrant!
[16:43:52] my pwstore is always borked
[16:44:13] I'm on mac for now, but that might still be useful, and I hope to someday have a debian machine
[16:46:37] I am going offline folks, have a nice weekend!
[16:48:08] cheers elukey :)
[16:48:24] ottomata: are the schemas bundled into eventgate's docker image by the deployment pipeline at build time?
[16:48:30] from both primary and secondary?
[16:49:46] cdanis:
[16:49:49] depends on the instance
[16:49:49] https://wikitech.wikimedia.org/wiki/Event_Platform/EventGate#EventGate_clusters
[16:50:24] i think in all cases they have both primary and secondary bundled, but only some are configured to reach out dynamically if they don't have a schema locally
[16:50:39] for eventgate-logging-external, it's all local
[16:50:41] ok!
[16:50:42] no remote schema lookup
[16:50:51] yeah, and that one doesn't have secondary configured yet either
[16:50:59] (which is where I'm thinking the NEL schema belongs)
[16:51:07] do patches to those repos trigger a rebuild of eventgate?
[16:51:37] it doesn't have secondary? hm. yes. you can just add it to blubber.yaml and it will rebuild image
[16:51:39] oh
[16:51:41] to schema repos?
[16:51:41] no
[16:51:55] you have to update the eventgate-wikimedia image and update the sha at which the git repo is coloned
[16:51:56] cloned
[16:51:58] in blubber.yaml
[16:52:02] ack
[16:52:30] ok I'll clean up the schema patch, get that out for review to you, and also start poking at the blubber and helmfile patches
[16:52:42] COOO
[16:52:44] :)
[16:52:47] do you agree that secondary seems like the right place?
[16:52:52] yes
[16:53:00] 👍 thanks!
[16:53:32] hm actually it probably doesn't matter? i could be convinced either way
[16:53:33] hmmm
[16:53:41] actually, maybe primary is better
[16:53:49] the only reason there are two repos is to manage merge rights really
[16:54:01] we've got the client error logging schema in primary
[16:54:04] let's put this one in there too?
[17:09:26] ah okay
[17:09:32] the readme said that primary had to do with user-visible features in MW
[17:09:35] but, sure!
[17:09:39] happy to put it in either spot
[17:19:19] ottomata: another question for you -- how precise does the schema need to be? e.g. if I allow additionalProperties: true (which I think I want to, as this is an external standard that is still evolving), will any additional fields still be sent to / indexed by logstash?
[17:26:08] https://wikitech.wikimedia.org/wiki/Event_Platform/Schemas/Guidelines
[17:26:12] needs to be precise!
[17:28:06] ah
[17:28:12] it worked with the dev server :)
[17:28:18] but I guess I did specify all the fields that occurred
[17:28:20] hmm
[17:28:28] optional properties are okay, right?
[17:28:45] yes okay
[17:28:50] I think this will be workable
[17:37:49] so, hm they are ok, but they won't be in hive
[17:37:53] they'll be ignored
[17:37:57] we try to avoid them
[17:38:23] we'd kinda like to make additionalProperties: false the default
[17:38:34] optional properties yes
[17:38:37] those are fine
[17:38:37] sorry
[17:38:43] additionalProperties gets tricky
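A small generic illustration of the distinction drawn above, using the Python jsonschema library rather than any Event Platform tooling (the schema fragment is made up, not the actual NEL schema): an optional property may simply be absent and still validate, while `additionalProperties: false` rejects any field the schema does not declare outright. With `additionalProperties: true` such fields would validate but, per the discussion above, would not make it into Hive.

```python
# Generic JSON Schema behaviour; hypothetical schema, not an Event Platform one.
from jsonschema import validate, ValidationError

schema = {
    'type': 'object',
    'additionalProperties': False,
    'required': ['type'],
    'properties': {
        'type': {'type': 'string'},
        'elapsed_time': {'type': 'integer'},  # optional: validation passes if absent
    },
}

validate({'type': 'network-error'}, schema)  # ok: optional property omitted

try:
    validate({'type': 'network-error', 'surprise': 1}, schema)
except ValidationError as err:
    print('rejected undeclared field:', err.message)  # additionalProperties: false
```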
[17:52:51] Analytics, Analytics-Wikistats, I18n: Add link to translatewiki.net in wikistats footer - https://phabricator.wikimedia.org/T261502 (Nemo_bis)
[18:04:03] ottomata: can you help me with some ssh problemssss?
[18:09:55] hi mgerlach - A spark job of yours has used more than half of the cluster ram... Can you please reduce it next time?
[19:54:22] mforns: still need help?
[19:56:32] hi cdanis! yes, can you help me?
[19:56:48] can you put ssh -v output on a pastebin somewhere?
[19:57:01] I *think* I got my ssh config right, looked it up in the docs.
[19:57:13] and, did it used to work? :)
[19:57:17] But when I ssh -v login.tools.wmflabs.org
[19:57:39] it seems to accept my key, but then it closes connection abruptly
[19:58:06] cdanis: I'd say yes, but it's been a very long time since I ssh into labs
[19:58:13] probably a couple years
[19:58:51] actually, I probably never tried to ssh into login though
[20:00:21] mforns: is this your public key you're using for labs? https://phabricator.wikimedia.org/P12416
[20:01:19] cdanis: yes
[20:02:45] mforns: so, I don't see you listed on horizon.wikimedia.org for the tools project -- so I think what is happening is, the labs SSH bastion accepts your key, and then login.tools turns you away
[20:02:58] I see, makes sense
[20:03:46] I think you want to apply here: https://toolsadmin.wikimedia.org/tools/membership/apply
[20:03:54] (I'm just looking at https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Quickstart )
[20:05:15] ok cdanis thanks a lot, will look into that!
[20:11:21] a-team, signing off for vacation! see you on Sep 7th :] have a nice week
[20:28:03] * razzi waves bye
[20:28:17] * razzi waves bye (to Marcel, I'm still around :) )
[20:32:14] mforns: ciao ciao
[20:33:44] byeee :D
[20:50:47] (PS1) Paul Kernfeld: reader.py: Get report granularity from defaults [analytics/reportupdater] - https://gerrit.wikimedia.org/r/623060 (https://phabricator.wikimedia.org/T193171)
[20:50:49] (CR) Welcome, new contributor!: "Thank you for making your first contribution to Wikimedia! :) To learn how to get your code changes reviewed faster and more likely to get" [analytics/reportupdater] - https://gerrit.wikimedia.org/r/623060 (https://phabricator.wikimedia.org/T193171) (owner: Paul Kernfeld)
[20:57:18] (CR) Paul Kernfeld: "Hi! I see that you created T193167 so I thought you might be a good reviewer for this." [analytics/reportupdater] - https://gerrit.wikimedia.org/r/623060 (https://phabricator.wikimedia.org/T193171) (owner: Paul Kernfeld)
[20:59:14] Analytics, Patch-For-Review, good first task: [reportupdater] Allow defaults for all config parameters - https://phabricator.wikimedia.org/T193171 (paulkernfeld) a:paulkernfeld I saw this was tagged with "good first task" so I put together a partial implementation of this in Gerrit.
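For context on that last patch and T193171: the idea is to let per-report settings fall back to a shared defaults block when a report omits them. A purely hypothetical sketch of that fallback; this is not reportupdater's actual reader.py, and the key names are only illustrative.

```python
# Hypothetical sketch of the "defaults for all config parameters" idea (T193171);
# not reportupdater's real code, key names are illustrative only.
DEFAULTS = {'granularity': 'days', 'lag': 0}

def get_setting(report_config: dict, key: str, defaults: dict = DEFAULTS):
    """Prefer the per-report value, otherwise fall back to the defaults block."""
    if key in report_config:
        return report_config[key]
    if key in defaults:
        return defaults[key]
    raise KeyError(f'{key!r} is set neither on the report nor in the defaults')

# A report that omits granularity inherits it from the defaults block.
print(get_setting({'lag': 3600}, 'granularity'))  # -> 'days'
```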