[07:25:49] 10Analytics, 10Analytics-EventLogging, 10Analytics-Kanban, 10EventBus, and 2 others: RFC: Modern Event Platform: Schema Registry - https://phabricator.wikimedia.org/T201643 (10kchapman) TechCom has approved this.
[07:26:30] 10Analytics, 10Analytics-EventLogging, 10Analytics-Kanban, 10EventBus, and 2 others: RFC: Modern Event Platform: Stream Intake Service - https://phabricator.wikimedia.org/T201963 (10kchapman) TechCom has approved this.
[07:26:38] 10Analytics, 10Analytics-EventLogging, 10Analytics-Kanban, 10EventBus, and 2 others: RFC: Modern Event Platform: Schema Registry - https://phabricator.wikimedia.org/T201643 (10kchapman)
[07:26:48] 10Analytics, 10Analytics-EventLogging, 10Analytics-Kanban, 10EventBus, and 2 others: RFC: Modern Event Platform: Stream Intake Service - https://phabricator.wikimedia.org/T201963 (10kchapman)
[11:23:45] (03PS5) 10Michael Große: Update metric's items and properties automatically [analytics/wmde/toolkit-analyzer] - 10https://gerrit.wikimedia.org/r/475807 (https://phabricator.wikimedia.org/T209399)
[11:31:04] 10Analytics-EventLogging, 10Analytics-Kanban, 10EventBus, 10Core Platform Team Backlog (Watching / External), 10Services (watching): Modern Event Platform: Stream Intake Service: Migrate Mediawiki Eventbus events to EventGate - https://phabricator.wikimedia.org/T211248 (10mobrovac)
[11:38:49] * elukey lunch!
[12:36:50] addshore: Hi - I'm sorry we completely missed each other these past days
[12:37:16] addshore: I'm assuming the need you are expressing is for monitoring duplicates in items?
[12:37:26] and actually maybe in/cross items addshore?
[12:38:32] If so, I'm assuming we could easily get a first idea by getting strings duplicated globally, and then check by duplicated-string
[12:39:50] addshore: o/
[12:40:04] hi elukey :)
[12:40:31] addshore: after you have finished with joal, can I know if you still need logs for https://phabricator.wikimedia.org/T118739 ? We are reviewing rsync usage and this use case is kinda problematic now
[12:40:47] elukey: going for minimal duty while Lino sleeps
[12:41:31] joal: :)
[12:41:36] I am adding worker nodes
[12:41:44] and trying to fix puppet weird things
[12:41:59] elukey: I see that - I'm gonna make a screenshot when we reach 4Tb RAM :0
[12:42:02] :)
[12:42:25] available space 2.5PB
[12:42:27] ahahhaha
[13:17:00] elukey: how many moar nodes?
[13:17:44] 91->95
[13:18:39] joal: o/
[13:18:44] heya
[13:18:45] not actually for monitoring duplicated items
[13:18:57] just seeing how many duplicated strings are used as terms (labels, aliases, descriptions)
[13:19:10] elukey: yes we still use that
[13:19:17] Right addshore - duplicated strings defining items (desc/alias/label)
[13:19:26] James_F: yup
[13:19:30] opps >.>
[13:19:31] joal: yes
[13:19:40] addshore: so you're interested in those strings whose count is >1
[13:19:46] joal: yes
[13:19:57] great - will try to find that :)
[13:19:57] elukey: https://github.com/wikimedia/analytics-wmde-scripts/blob/master/src/wikidata/dumpDownloads.php#L12
[13:20:33] elukey: https://github.com/wikimedia/puppet/blob/b347052863d4d2e87b37d6c2d9f44f833cfd9dc2/modules/statistics/templates/wmde/config.erb#L5
[13:21:01] * addshore is not sure if dump requests end up in hadoop yet? but if so it is possibly something we could look at rewriting
[13:21:10] ahhh okok
[13:22:24] elukey: Will wait for the last 4 nodes to get in and screenshot :)
[13:22:48] joal: well the actual question is how many bytes of duplication is there across all of those strings
[13:23:04] addshore: ok, got it
[13:23:25] so if all of those strings were "foo", "foo", "bar", "baz", we would have 3 duplicated bytes
[13:23:27] addshore: will first get the strings, then the bytes (or chars) should be okish
[13:23:32] joal: ack
[13:23:47] and in fact, i guess not only duplicated bytes, but we need to know how many bytes in total the thing is
[13:24:12] so how many bytes are all distinct term strings
[13:24:25] and then also how many bytes of duplication lie within all of the strings
[13:24:34] addshore: a bit of context - we got rid of explicit rsync push to stat100X boxes, and we didn't know that labsdb regularly pushes dumps access logs
[13:24:42] joal: and yes, this week has been crazy, it always is when i'm actually in the office :)
[13:24:45] addshore: ah - this is a bit different :)
[13:24:51] :)
[13:24:55] elukey: gotcha
[13:26:03] elukey: I added a comment to that old ticket with the 2 links to the things I just sent you too, in case you ever need to find them again
[13:26:42] super, Ariel created https://phabricator.wikimedia.org/T211330
[13:26:48] addshore: so first thing, I'm gonna try to compute: sum([size_in_bytes(s) for s in (label + desc + alias)] foreach items)
[13:26:59] 10Analytics, 10Datasets-General-or-Unknown: cron job rsyncing dumps webserver logs to stat1005 is broken - https://phabricator.wikimedia.org/T211330 (10elukey)
[13:27:54] Then: sum([size_in_bytes(s)*(nb_occurrences_s - 1) for s in (label + desc + alias)] foreach items)
[13:28:07] joal: that sounds good!
[13:28:28] in theory we care about properties too, but there are so few of them it really doesn't matter
[13:28:31] ok addshore, will try to find some time
[13:29:32] addshore: Ah, when I said items I actually meant elements
[13:29:38] So props+items
[13:29:42] :D amazing!
[13:35:04] addshore: Last confirmation - Should I leave the data-values of string-types out
[13:35:07] ?
[13:40:48] joal: yes please leave those out
[13:41:00] only labels, descriptions and aliases please :)
[13:42:29] ack addshore
[13:49:08] joal: all worker nodes up
[13:49:19] \o/ elukey - Watching
[13:50:34] joal: my idea is to leave the cluster as it is for the weekend, then on Monday schedule a bit of time to move the journal nodes to the new hosts
[13:50:46] and then hopefully start the decom process
[13:51:10] elukey: 4.88TB for 2141 cores and 2.8PB storage :)
[13:51:20] :D
[13:51:22] * joal plans for some evilness to do in the weekend :)
[13:51:48] going afk for ~30 mins!
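
The two sums joal sketches at [13:26:48] and [13:27:54] boil down to counting term strings and their occurrences. A minimal Python sketch of the same arithmetic, using the toy "foo"/"foo"/"bar"/"baz" example from the conversation rather than real Wikidata label/description/alias data (the input list is purely illustrative, not how the actual job read the dump):

    from collections import Counter

    # toy stand-in for the full set of label/description/alias strings
    terms = ["foo", "foo", "bar", "baz"]
    counts = Counter(terms)

    # total bytes across all occurrences of all term strings
    total_bytes = sum(len(s.encode("utf-8")) * n for s, n in counts.items())
    # bytes needed if every distinct string were stored only once
    distinct_bytes = sum(len(s.encode("utf-8")) for s in counts)
    # bytes spent on repeats: size * (occurrences - 1), per string
    duplicate_bytes = sum(len(s.encode("utf-8")) * (n - 1) for s, n in counts.items())

    print(total_bytes, distinct_bytes, duplicate_bytes)  # 12 9 3
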
[13:57:13] joal: I also found out that in theory the wikibase JSON in the XML dump is already standardized before being exported for the dump, which means you could just have a job that loads current wikibase revisions from the all-revisions table i guess
[13:57:18] i need to double check that though
[14:37:15] 10Analytics, 10Operations, 10cloud-services-team, 10ops-eqiad: Degraded RAID on cloudvirtan1001 - https://phabricator.wikimedia.org/T211235 (10jijiki) p:05Triage>03High
[14:37:51] 10Analytics, 10Operations, 10cloud-services-team, 10ops-eqiad: Degraded RAID on cloudvirtan1001 - https://phabricator.wikimedia.org/T211235 (10jijiki) The alert from icinga is gone, close this if you believe everything is ok :)
[15:01:10] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: Move turnilo to nodejs 10 - https://phabricator.wikimedia.org/T210705 (10elukey) Turnilo is now running on nodejs 10!
[15:08:53] !log turnilo migrated to nodejs 10
[15:08:55] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[16:06:15] 10Analytics, 10Analytics-Kanban, 10Datasets-General-or-Unknown, 10Patch-For-Review: cron job rsyncing dumps webserver logs to stat1005 is broken - https://phabricator.wikimedia.org/T211330 (10elukey)
[16:48:13] going afk for a sec people, might be 5 mins late to standup sorry :(
[16:57:19] 10Analytics, 10Analytics-Kanban, 10DBA, 10Data-Services, and 3 others: Create materialized views on Wiki Replica hosts for better query performance - https://phabricator.wikimedia.org/T210693 (10Banyek) 150 Gb seems acceptable to me, but - we still have some time factor to think about, because those table...
[16:58:11] 10Analytics, 10Analytics-Kanban, 10DBA, 10Data-Services, and 3 others: Create materialized views on Wiki Replica hosts for better query performance - https://phabricator.wikimedia.org/T210693 (10Banyek) @Bstorm if you prepare a depool patch for me for tomorrow I can start create the mat. view on the other...
[17:12:35] 10Analytics, 10DBA, 10Data-Services, 10User-Banyek, 10User-Elukey: Hardware for cloud db replicas for analytics usage - https://phabricator.wikimedia.org/T210749 (10Banyek)
[17:15:36] 10Analytics, 10DBA, 10Data-Services, 10User-Banyek, 10User-Elukey: Hardware for cloud db replicas for analytics usage - https://phabricator.wikimedia.org/T210749 (10Banyek) >>! In T210749#4795368, @Milimetric wrote: > Ok, done and agreed. But instead of trying to find hardware that will keep up with rep...
[17:31:28] randomly curious, did the hadoop cluster get larger or is it in the middle of replacing aged-out servers?
[17:32:14] ebernhardson: the second one, we have 67 nodes now :)
[17:32:24] next week I'll start the decom of OOW hw :(
[17:33:04] ebernhardson: did you see anything weird happening?
[17:33:18] elukey: no, i was just surprised at the numbers at the top of yarn.wikimedia.org
[17:33:27] just randomly curious :)
[17:33:37] super, feel better :D
[17:34:20] Joseph checked and now we have 4.88TB of RAM, 2141 cores and ~2.8PB of space
[17:40:30] 10Analytics, 10Research: POC More efficient Bot filtering on pageview data - https://phabricator.wikimedia.org/T211359 (10Nuria) p:05Triage>03Normal
[17:41:42] 10Analytics, 10Analytics-Kanban: Failure while refining webrequest upload 2018-12-01-14 - https://phabricator.wikimedia.org/T211000 (10elukey) a:05elukey>03JAllemandou
[17:42:14] 10Analytics, 10Analytics-Kanban, 10Research: Create labeled dataset for bot identification - https://phabricator.wikimedia.org/T206267 (10Nuria)
[17:42:17] 10Analytics, 10Research: POC More efficient Bot filtering on pageview data - https://phabricator.wikimedia.org/T211359 (10Nuria)
[17:42:55] 10Analytics, 10Analytics-Kanban, 10Research: Create labeled dataset for bot identification - https://phabricator.wikimedia.org/T206267 (10Nuria)
[17:42:59] 10Analytics, 10Research: [Open question] Improve bot identification at scale - https://phabricator.wikimedia.org/T138207 (10Nuria)
[17:44:34] 10Analytics, 10Analytics-Kanban, 10Research: POC More efficient Bot filtering on pageview data - https://phabricator.wikimedia.org/T211359 (10Nuria)
[17:45:05] 10Analytics, 10Analytics-Kanban, 10Research: POC More efficient Bot filtering on pageview data - https://phabricator.wikimedia.org/T211359 (10Nuria)
[17:56:36] ping elukey groskin?
[18:07:11] 10Analytics, 10Research, 10WMDE-Analytics-Engineering, 10User-Addshore, 10User-Elukey: Phase out and replace analytics-store (multisource) - https://phabricator.wikimedia.org/T172410 (10elukey) To follow up what I wrote (after a chat with the data persistence team): * the proposal in T210478#4794536 wou...
[18:24:49] 10Analytics, 10Research, 10Wikidata: Copy Wikidata dumps to HDFs - https://phabricator.wikimedia.org/T209655 (10Ottomata) @Nuria we should fit this in somewhere! Maybe a Q3 goal? :D
[18:29:06] 10Analytics, 10Research, 10Wikidata: Copy Wikidata dumps to HDFs - https://phabricator.wikimedia.org/T209655 (10Nuria) Having missed most of goals this quarter due to our mw woes i think this might need to be moved to next quarter (q4?)
[18:54:00] Hi addshore - I haz numbers for you :)
[18:54:16] addshore: if there is a task somewhere I can document
[19:07:11] joal: there is no task yet
[19:07:16] But woo, numbers!!!!!
[19:07:21] I can make a task next week!
[19:07:26] * joal loves to solve tasks before they exist :)
[19:10:16] addshore: TL;DR - total-bytes is ~45G over ~100M unique strings. Useful-bytes (minus duplication) is 1 order of magnitude smaller: 4G, with ~25M duplicates - The idea of creating an indirection table could be very valuable :)
[19:33:10] 10Analytics, 10Operations, 10cloud-services-team, 10ops-eqiad: Degraded RAID on cloudvirtan1001 - https://phabricator.wikimedia.org/T211235 (10Ottomata) 05Open>03Resolved a:03Ottomata Assuming this was caused by @andrewbogott reformatting the hosts. Closing.
[19:42:17] milimetric: sqoop has been successful - However we have experienced an error on denormalize - checking that now
[19:43:11] hopefully nothing bad... oof that would suck
[19:44:33] milimetric: java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Long
[19:45:15] of course!
[19:45:20] we should've known from our other tests
[19:45:31] nothing changed on our side, so the old sqoop was going to have the same problem
[19:45:32] milimetric: I was thinking the same thing :(
[19:45:41] ok, cave?
[19:45:51] milimetric: Will devise a patch to cast on spark side
[19:46:03] oh, ok, that works
[19:46:11] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Add new wikis to analytics - https://phabricator.wikimedia.org/T209822 (10Nuria)
[19:46:17] milimetric: listening to activity metrics - We can cave after
[19:46:27] I was going to suggest moving that table from the snapshot and selecting into this snapshot with a cast
[19:46:47] it would take longer to execute but it's easier to write
[19:47:01] that's ok, I'm just finishing lunch
[19:47:26] * elukey off!
[19:56:21] (03PS1) 10Joal: Fix mediawiki-history job cast issue in logging [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/478046
[19:56:26] milimetric: --^
[19:57:02] milimetric: If you validate, I'll start a manual sqoop using that patch
[19:57:11] * joal is gone for a quick dinner
[19:59:34] (03CR) 10Milimetric: [V: 032 C: 032] Fix mediawiki-history job cast issue in logging [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/478046 (owner: 10Joal)
[20:00:04] (03CR) 10Nuria: [C: 032] Fix mediawiki-history job cast issue in logging [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/478046 (owner: 10Joal)
[20:02:03] (03CR) 10Milimetric: [C: 032] "hm, hang on, need to take a second here" [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/478046 (owner: 10Joal)
[20:08:34] (03Merged) 10jenkins-bot: Fix mediawiki-history job cast issue in logging [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/478046 (owner: 10Joal)
[20:10:42] (03CR) 10Milimetric: [C: 032] "grrr, I was too late, should've removed the +2." [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/478046 (owner: 10Joal)
[20:12:46] hm, I might be wrong but I tested just "select log_user from wmf_raw.mediawiki_logging where snapshot=
[20:12:49] milimetric: i have run into that avro problem, wait a sec
[20:12:54] '2018-10' limit 10
[20:13:07] and it failed with org.apache.avro.AvroTypeException: Found long, expecting union
[20:14:47] milimetric: where are the avro schemas from the table? something like hdfs://analytics-hadoop/tmp/PageContentSaveCompleteAutoGeneratedSchema.avsc
[20:15:16] milimetric: maybe in this case we do not use those at all
[20:15:35] I think the metadata is in the files themselves, one sec
[20:17:12] milimetric: i think we might not be using the avro schema here
[20:17:52] nuria: try doing "head /mnt/hdfs/wmf/data/raw/mediawiki/tables/logging/snapshot=2018-11/wiki_db=etwiki/part-m-00000.avro"
[20:18:28] this is the problem: {"name":"log_user","type":["null","string"],"default":null,"columnName":"log_user","sqlType":"3"}
[20:19:03] we ran into this while testing the new sqoop, something subtle changed on the db servers so that it messes with sqoop and it detects this and a couple of other fields incorrectly
[20:19:33] we couldn't figure out what changed though, because some fields with the same exact type don't have this behavior, they're recognized as Long
[20:21:46] milimetric: i had this SAME problem when sqooping eventlogging data from dbstore1002
[20:22:38] interesting
[20:22:39] milimetric: in some instances i was able to bypass it modifying the avro schema directly, in others i just had to cast when sqooping
[20:22:46] like: cast(event_isAPI as Integer)
[20:22:51] like from one day to the next, it changed, with no changes in the sqoop code itself
[20:23:10] it doesn't work to cast in the sqoop select, you have to set the --map-java-types or whatever
[20:23:12] milimetric: but that would mean we need to re-do sqooping
[20:23:20] because it's a problem in inferring, not in the select
[20:23:53] yeah, I'm looking around for an alternative - worst case we can just manually edit the stupid Avro schema... wouldn't be the worst thing, there's only 800 of them
[20:24:13] milimetric: we can do this:
[20:24:27] milimetric: TBLPROPERTIES ('avro.schema.url'='hdfs://analytics-hadoop/tmp/PageContentSaveCompleteAutoGeneratedSchema.avsc')
[20:24:50] nuria: that would work for the Hive table when using Hive, but the denorm job is spark
[20:24:59] so I think it uses the metadata directly
[20:25:01] milimetric: argh
[20:25:04] yeah, why wouldn't this work:
[20:25:05] nuria:
[20:25:15] mv part000.avro -> somewhere else
[20:25:19] edit the log_user type
[20:25:22] mv back
[20:25:27] test to see if it works
[20:25:36] milimetric: cause you cannot modify data in hive
[20:25:38] it'll take a minute to verify
[20:25:41] not Hive
[20:25:46] I'm talking about the underlying file
[20:25:50] it's just a file on HDFS
[20:25:54] that I can move and edit
[20:25:58] milimetric: ah true
[20:26:05] ok, gonna try it, first making a test case
[20:26:41] we want to turn this:
[20:26:51] {"name":"log_user","type":["null","string"],"default":null,"columnName":"log_user","sqlType":"3"}
[20:26:54] into this:
[20:26:54] {"name":"log_id","type":["null","long"],"default":null,"columnName":"log_id","sqlType":"-5"}
[20:28:32] ok, test case fails:
[20:28:33] hive (wmf_raw)> select log_user from mediawiki_logging where snapshot='2018-11' and wiki_db='etwiki' limit 10;
[20:31:37] milimetric: wouldn't you need to rebuild the hive table?
[20:31:56] no, that just tells Hive where to look on HDFS
[20:37:58] hm, that didn't seem to work, it just gives me a new exception: Failed with exception java.io.IOException:org.apache.avro.AvroRuntimeException: Malformed data. Length is negative: -49
[20:38:03] ottomata: any idea about this issue
[20:38:43] can we repair a bunch of files in avro format, if the metadata says one of the fields is "string" but it's actually "long"?
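
For context on the metadata being discussed at [20:18:28]: the schema Sqoop inferred travels in the header of every .avro part file, which is why pointing the Hive table at a corrected .avsc via avro.schema.url can help Hive but not a job reading the files directly. A minimal sketch for inspecting that embedded writer schema, assuming the fastavro Python library and a locally copied part file (the file name here is just an example):

    from fastavro import reader

    with open("part-m-00000.avro", "rb") as fo:
        # the schema Sqoop wrote lives in the file header, not in the Hive metastore
        writer_schema = reader(fo).writer_schema
        for field in writer_schema["fields"]:
            if field["name"] == "log_user":
                # e.g. {'name': 'log_user', 'type': ['null', 'string'], ...}
                print(field)
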
[20:39:16] milimetric: i don't actually know that much about avro [20:39:20] but i can brain bounce if you need [20:40:49] ottomata: ok, might help [20:41:35] bc? [20:44:45] milimetric, nuria: Shall I restart a manual job that includes the cast? [20:44:57] joal: come to the cave! [20:45:10] I don't think the cast thing works, but I'm confused [20:45:31] milimetric: I can't imagine why it wouldn't [20:58:24] hello all! It looks like kafka (kafka-jumbo1002 / kafka1003) are sending tons of logs to logstash [20:58:50] it looks like they are sending NOTICE level messages to logstash, which is probably more verbose than what we want [20:59:18] anyone knows something about it? [20:59:49] ottomata: ping --^ [21:00:04] milimetric: going to cave, give me 2 mins [21:00:22] gehel: we enable sending of logs to logstatsh from kafka just recently [21:00:31] ottomata: there is some chat about that in -operations [21:00:46] this might or might not be related to dropped UDP packets on logstash servers [21:01:07] in any case, we might want to dial back verbosity just a notch [21:01:12] ping me if you need help! [21:11:54] nuria: you wanted to talk in the cave? [21:12:10] milimetric: i need a sec to figure this logstash stuff [21:12:42] nuria: everything's ok with the history jobs, joseph explained and it's moving forward, so don't worry about it [21:12:49] I can explain later or tomorrow [21:15:06] 10Analytics, 10Analytics-EventLogging, 10Patch-For-Review: Resurrect eventlogging_EventError logging to in logstash - https://phabricator.wikimedia.org/T205437 (10Nuria) Reopening this change was reverted due to issues with too much verbose logging going into logstash: https://gerrit.wikimedia.org/r/#/c/oper... [21:15:08] 10Analytics, 10Analytics-EventLogging, 10Patch-For-Review: Resurrect eventlogging_EventError logging to in logstash - https://phabricator.wikimedia.org/T205437 (10Nuria) 05Resolved>03Open [21:15:17] 10Analytics, 10Analytics-EventLogging, 10Patch-For-Review: Resurrect eventlogging_EventError logging to in logstash - https://phabricator.wikimedia.org/T205437 (10Nuria) a:05phuedx>03Ottomata [21:18:46] 10Analytics, 10Analytics-EventLogging, 10Patch-For-Review: Resurrect eventlogging_EventError logging to in logstash - https://phabricator.wikimedia.org/T205437 (10Nuria) Messages like these were being logged as many as 500K every 15 mins: `[2018-07-11 19:44:04,942] INFO [ReplicaFetcher replicaId=1001, lead... [21:19:25] milimetric: ok, back, how did you solved the cast issue? [21:20:38] cc joal [21:20:41] nuria: I was querying through spark-sql, thinking it would be a good test for the change [21:20:50] milimetric: and? [21:20:54] nuria: but I was wrong, spark-sql looks at the Hive table definition [21:21:03] or our wrapper does [21:21:21] if you run a spark job, and issue a query with spark.sql('select ...'), then that's going against the actual metadata [21:21:30] milimetric: ah i see, and our job queries using spark directly [21:21:31] so jo's fix was totally fine [21:21:34] yea [21:21:42] milimetric: so MANY things [21:21:46] milimetric: you can avoid that [21:21:49] he's killed the old job and restarted it with his custom jar [21:21:51] if you use the files directly [21:22:04] yeah, we are in the job, using them directly [21:22:06] so we're ok [21:22:08] ok [21:22:09] milimetric: there is another error with the user data: org.apache.avro.AvroRuntimeException: java.io.EOFException [21:22:18] nuria: --^ [21:22:20] :( [21:22:22] joal: hm [21:23:06] joal: any indication of what file exactly? 
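
A rough PySpark sketch of the distinction milimetric describes above (not the actual refinery code; it assumes an Avro-capable Spark build such as one with the spark-avro package, and reuses the table and path mentioned earlier in the discussion): querying through the Hive metastore picks up the table definition, while loading the part files directly exposes the embedded writer schema, where log_user comes back as a string and needs the kind of explicit cast joal's patch adds:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # goes through the Hive metastore, so it sees the Hive table definition
    via_hive = spark.sql(
        "SELECT log_user FROM wmf_raw.mediawiki_logging "
        "WHERE snapshot = '2018-11' AND wiki_db = 'etwiki'")

    # reads the part files directly, so it sees the writer schema embedded in
    # the files (log_user as string) and needs an explicit cast
    via_files = (
        spark.read.format("avro")
        .load("hdfs:///wmf/data/raw/mediawiki/tables/logging/snapshot=2018-11/wiki_db=etwiki")
        .withColumn("log_user", col("log_user").cast("long")))
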
[21:22:09] milimetric: there is another error with the user data: org.apache.avro.AvroRuntimeException: java.io.EOFException
[21:22:18] nuria: --^
[21:22:20] :(
[21:22:22] joal: hm
[21:23:06] joal: any indication of what file exactly?
[21:23:12] nope :(
[21:23:19] this is where it'll get difficult
[21:23:20] I'll restore the etwiki file from backup, maybe I messed it up
[21:23:23] I can try to pinpoint
[21:24:12] joal, milimetric we can go to cave if you want
[21:25:02] milimetric: if you've manually changed a file, that could very well be it :)
[21:25:31] joal: yes, I see the tails are different, I'll restore, sorry about that!
[21:26:03] ok, ya, that would make sense
[21:26:08] milimetric: no problem, I prefer that to another new unknown issue :)
[21:26:49] joal: done
[21:27:04] yeah, it looks like vim added a newline to the end of the file, probably not expected by Avro
[21:27:17] note to self: don't edit binary formats if you're not 1337
[21:28:24] (I queried the file after I was done editing, and Hive didn't seem to mind, so I thought it was fine)
[21:28:48] Thanks milimetric - Will launch anew
[21:31:26] 10Analytics, 10Analytics-EventLogging, 10Patch-For-Review: Resurrect eventlogging_EventError logging to in logstash - https://phabricator.wikimedia.org/T205437 (10Ottomata) Nuria I think you might have the wrong ticket. That change has nothing to do with EventLogging
[21:31:45] milimetric: fixed - I breathe again :)
[21:32:12] joal: you were holding your breath!?! Don't do that, you're not David Blaine!!! Wait... ARE YOU DAVID BLAINE?!
[21:32:34] milimetric: Please don't blaine me :)
[21:32:46] hahha
[21:33:51] Little message for Luca for tomorrow - Since we have a temporarily bigger cluster and a manual launch of MWH had to be done, I have started with a bit more resources - Just to see if it makes it run faster :) Thanks Luca :)
[21:38:41] 10Analytics, 10Analytics-EventLogging, 10Patch-For-Review: Resurrect eventlogging_EventError logging to in logstash - https://phabricator.wikimedia.org/T205437 (10Nuria) Totally right, I thought it might be related to us sending more logs to kafka due to this EL change but I see it is not related as it happ...
[23:02:38] (03PS4) 10Mforns: Allow for custom transforms in DataFrameToDruid [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/477295 (https://phabricator.wikimedia.org/T210099)
[23:05:23] (03CR) 10jerkins-bot: [V: 04-1] Allow for custom transforms in DataFrameToDruid [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/477295 (https://phabricator.wikimedia.org/T210099) (owner: 10Mforns)
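
A footnote on the corrupted etwiki file above ([21:27:04], the stray newline appended by vim): one way to pinpoint a damaged Avro part file, as milimetric offers to do at [21:23:23], is simply to force a full decode of each file and see which one fails. A small sketch assuming the fastavro Python library and the /mnt/hdfs mount used earlier; the glob pattern is only illustrative:

    import glob
    from fastavro import reader

    pattern = ("/mnt/hdfs/wmf/data/raw/mediawiki/tables/logging/"
               "snapshot=2018-11/wiki_db=etwiki/part-m-*.avro")

    for path in sorted(glob.glob(pattern)):
        try:
            with open(path, "rb") as fo:
                sum(1 for _ in reader(fo))  # decode every record; raises on corruption
        except Exception as exc:
            print(path, "failed to decode:", exc)
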