[09:30:23] (CR) Nuria: Fix user name display in CSV files (3 comments) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/129025 (owner: Milimetric)
[11:23:55] (CR) Milimetric: [C: 2] Update MobileWebUploads schema tables [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/129319 (owner: JGonera)
[11:31:44] (CR) Milimetric: Fix user name display in CSV files (3 comments) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/129025 (owner: Milimetric)
[11:32:08] nuria: will you point me to where I'd have to change puppet?
[14:26:20] (CR) Nuria: [V: 2] "Looks good, less clutter." [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/129024 (owner: Milimetric)
[14:26:30] (CR) Nuria: [C: 2] "Looks good, less clutter." [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/129024 (owner: Milimetric)
[17:45:23] milimetric: hey, do you know if notices were sent out re: eventlogging migration?
[17:45:50] not that I saw ori
[17:46:14] why is it happening?
[17:46:27] sorry: why, comma, is it happening now?
[17:46:45] it would if we could
[17:47:29] we may be able to do the non-disruptive part now
[17:48:43] ori: ping me if you end up doing anything, I'll help if I can
[17:49:29] hi springle
[17:49:33] hey
[17:49:51] so, i'm not sure if this changes anything, but eventlogging's sql writer has the following properties:
[17:50:38] it can compute the table schema from the event schema, so its main loop is: attempt to insert row, if that fails because the table doesn't exist, create the table and insert the row
[17:51:22] so we could spin up a consumer earlier.. but that's probably complicating things
[17:51:38] another thing is that we can drop the zz_* tables
[17:52:26] if we spin the consumer up earlier, you're thinking to load the db1047 dump concurrently?
[17:52:55] we'd presumably need the schema complete first for the dump, even if the consumer wouldn't need it
[17:53:05] yeah, wouldn't we need to sync the data from the new db with the legacy data still?
[17:53:09] springle: only if the dump loader has the same property (won't fail if a CREATE TABLE results in a 'table already exists')
[17:53:12] and wouldn't a new consumer make that harder?
[17:53:27] that's probably right
[17:53:35] okay, let's not complicate it.
[17:53:57] we could dump schema separately with mysqldump --no-data
[17:54:28] start consumer, then mysqldump --no-create-info
[17:54:49] that sounds good
[17:54:49] but. hmm, yeah. untested :) and m2 master is doing other things i don't want to endanger
[17:55:28] what would be the danger?
[17:56:30] how's this for a plan:
[17:56:49] possibly nothing. the two dump stages are safe. i only say it due to unfamiliarity with the consumer
[17:57:51] can the data import stage do a ON DUPLICATE KEY IGNORE?
[17:58:22] the eventlogging stream can have an arbitrary number of consumers, so i can spin up a consumer to write to m2 independently of the status of the db1047 consumer
[17:59:16] almost. the dump can use INSERT IGNORE
[17:59:32] not ON DUPLICATE KEY UPDATE
[17:59:50] okay, so i really think that that would work. but let's assume the worst and send out a note telling people to expect some gap in the data
[18:00:05] there may not be one in the end but i'd rather surprise people with good news than bad
[18:00:27] milimetric: would you like to send it out, or should i?
[18:00:48] ok. will save the dump just in case
[18:00:54] I can send it ori, but maybe there's a way to avoid it?
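For readers unfamiliar with the consumer ori describes at 17:50:38, here is a minimal sketch of that insert-or-create loop. It is not the actual EventLogging SQL writer (which derives DDL from the event's JSON schema and targets MySQL); this toy version uses the standard sqlite3 module, and the table name and columns are invented for illustration.

```python
# Illustrative sketch only -- not the real EventLogging SQL writer.
# It shows the loop described above: try to insert an event row, and if the
# insert fails because the table does not exist yet, create the table and
# retry the insert.
import sqlite3

def table_ddl(schema_name, event):
    # Hypothetical schema-to-DDL step: the real writer computes this from the
    # event's JSON schema, not from a sample event.
    cols = ", ".join("%s TEXT" % k for k in sorted(event))
    return "CREATE TABLE IF NOT EXISTS %s (%s)" % (schema_name, cols)

def insert_event(conn, schema_name, event):
    cols = sorted(event)
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (
        schema_name, ", ".join(cols), ", ".join("?" for _ in cols))
    values = [event[c] for c in cols]
    try:
        conn.execute(sql, values)
    except sqlite3.OperationalError:
        # Table missing: create it, then retry the insert once.
        conn.execute(table_ddl(schema_name, event))
        conn.execute(sql, values)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    insert_event(conn, "MobileWebUploads_1234567",   # made-up table name
                 {"wiki": "enwiki", "action": "upload", "timestamp": "20140423"})
    print(conn.execute("SELECT * FROM MobileWebUploads_1234567").fetchall())
```

The point of the pattern is that the consumer needs no pre-provisioned schema, which is why spinning one up against an empty database on the new master is safe.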
[18:01:32] sorry, i'll leave the travel info meeting :)
[18:01:36] ok, full attention
[18:02:05] maybe, but i want to be respectful of sean's time and mindful of risks
[18:02:16] so let's say up to 12 hours
[18:02:30] question: if the sql consumer is stopped, i was under the impression the data would simply queue on vanadium? if we stopped them, would there actually be gaps?
[18:03:20] (my answer would be yes there would be gaps and only the consumer buffers, but I don't know anything yet)
[18:03:24] springle: the producer has a buffer quota per subscriber, and it's deliberately quite small
[18:03:30] ah ok
[18:03:37] springle: so it can smooth over small gaps, but not long outages
[18:03:51] springle: but the consumers that write to an archive file on disk will continue to work and there will not be gaps there
[18:04:35] so plan is: load up the bare schema on m2, spin up consumer to write to m2, load up the data from db1047 to m2, stop old consumer, wipe db1047 and start replicating to it from m2?
[18:05:01] we would stop the old consumer before loading up the data from db1047
[18:05:24] right
[18:05:29] if a consumer is already writing to m2, then there's no point in having a consumer feed data into db1047 that we know will consist only of dupes
[18:05:30] yes
[18:05:46] ori: writing the email announcing potential outage - to the analytics list and the eventlogging list?
[18:06:15] milimetric: + engineering
[18:06:15] and we're doing this... now?
[18:06:26] i'm beginning the schema dump
[18:06:33] springle: some of the zz_ tables are rather large
[18:06:39] oh right
[18:06:40] springle: so if there's a way to exclude them that'd be good
[18:08:24] milimetric: say that the migration will take up to 12 hours and may involve gaps in the event data in the database
[18:08:31] yep
[18:08:46] milimetric: and that we will follow up with another e-mail after we're done to give details about when/what/etc
[18:10:29] springle: what's the address of the target database? i can have a puppet change for the 2nd db consumer ready
[18:10:39] db1029.eqiad.wmnet
[18:10:46] * ori nods
[18:10:52] thanks very much for your help with this by the way
[18:10:52] "log" again?
[18:11:08] yeah, not the best of names but probably best to stick with it
[18:11:15] ok
[18:13:59] i like log :)
[18:14:59] ori: email sent
[18:15:05] milimetric: many thanks!
[18:15:27] psh, we have until 23:15 your time to get this done
[18:15:35] or as you call it - early evening
[18:16:25] ori: actually, do we consider this miscellaneous or a production mediawiki extension?
[18:16:42] springle: misc
[18:16:59] in that case, db1048.eqiad.wmnet. sorry.
[18:17:02] not db1029
[18:17:25] no problem, i'll amend
[18:18:02] springle: actually, let me make sure we're on the same page
[18:18:16] ori: go ahead
[18:18:31] * springle pauses
[18:18:32] eventlogging *is* in part a production mediawiki extension, but it's not one that uses the db apis. it uses UDP to send bits of json to vanadium
[18:18:48] and the flow of data is unidirectional
[18:18:59] in other words, if the database exploded, it would not affect mediawiki in any way
[18:19:23] the extension doesn't even know the db exists; it's much further downstream from it
[18:19:30] so: misc, right?
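A toy model of the buffering behaviour ori describes at 18:03:24 and 18:03:37 (a small, fixed buffer quota per subscriber, so short consumer stalls are absorbed but a long outage loses events). This is not the real EventLogging producer code; the quota value and class names are invented for the example.

```python
# Toy model of a producer with a small fixed buffer per subscriber; not the
# real EventLogging forwarder. When a consumer stops reading, events queue up
# to the quota and deliveries beyond that are lost.
from collections import deque

BUFFER_QUOTA = 1000  # hypothetical per-subscriber quota; deliberately small

class Subscriber:
    def __init__(self):
        self.buffer = deque(maxlen=BUFFER_QUOTA)
        self.dropped = 0

    def offer(self, event):
        if len(self.buffer) == self.buffer.maxlen:
            self.dropped += 1  # buffer already full: one delivery is lost
        self.buffer.append(event)

    def drain(self):
        while self.buffer:
            yield self.buffer.popleft()

if __name__ == "__main__":
    sub = Subscriber()
    # Simulate a consumer outage long enough to overflow the quota.
    for i in range(5000):
        sub.offer({"seq": i})
    print("buffered:", len(sub.buffer), "dropped:", sub.dropped)
    # -> buffered: 1000 dropped: 4000; a short stall would have shown 0 drops
```

This is why the plan above allows for a possible gap in the database while the writers are switched, even though the file-based consumers keep a complete archive.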
[18:19:37] yes, i'd say misc
[18:19:39] i'd agree - misc
[18:19:54] * ori nods
[18:20:04] i simply read "extension" on the wikitech page and thought about the x1 shard (where things like echo and flow live)
[18:20:29] but m1 it is, therefore db1048.eqiad.wmnet as master
[18:21:03] yep
[18:21:33] ori: i'll update the thread? Because I said m2
[18:22:05] milimetric: nah, i don't think anyone cares. let's wait until we're done.
[18:22:09] k
[18:24:47] ori: log schema and eventlog user are present on db1048
[18:24:48] ok, it's ready. i can +2 / puppet-merge / puppetd -tv on vanadium on your signal
[18:24:57] ah cool. so going ahead, then?
[18:25:01] yep
[18:25:03] ori: shouldn't that puppet change stop the old consumer?
[18:25:30] no, i can stop it manually and then submit another patch to remove it altogether
[18:25:42] let's make sure the new one works first
[18:25:46] k, i'll get that other patch ready
[18:27:56] springle: consumer mysql-db1048 start/running 28637
[18:29:12] i don't see it on db1048 yet
[18:29:25] yeah, me neither. hm.
[18:30:06] springle: vanadium is on the analytics vlan; does it need an iptables change to dial out to db1048?
[18:30:22] From ae2-1021.cr2-eqiad.wikimedia.org (10.64.21.3) icmp_seq=1 Packet filtered
[18:31:00] yeah looks like
[18:31:32] sorry, i should have thought of that
[18:31:42] i'm not sure what to change, tho
[18:31:46] ori: not something i can do. can you, or grab paravoid?
[18:31:52] will do
[18:34:00] springle: if he's not around, is there anyone else that can do it?
[18:34:11] ottomata maybe?
[18:34:21] no harm asking
[18:34:29] wassup?
[18:34:50] oof, yes probably does
[18:34:52] we're migrating the eventlogging database from db1047 to db1048, but because vanadium is on the analytics vlan it needs to have an iptables change to allow it to dial out to db1048
[18:34:52] and I can't really help
[18:34:56] you need mark
[18:35:01] it's a network switch ACL
[18:35:13] it's not iptables
[18:41:21] ori: call a halt until paravoid appears?
[18:41:43] springle: yes, sorry
[18:41:53] we made small progress. got a schema :)
[18:43:21] i'll update the thread and say that the migration is paused because we're waiting on a network switch ACL change
[18:43:26] no no
[18:43:30] and that currently there's no outage...
[18:43:41] oh but won't paravoid only be available tomorrow? that would put us past our deadline
[18:43:56] paravoid appears randomly at all hours
[18:44:00] :)
[18:44:05] did not know
[18:44:09] i guess i should expect that
[18:47:52] headed to cafe
[18:47:54] back in a bit
[18:50:12] ori: https://gerrit.wikimedia.org/r/129490 resubmitted
[18:50:21] (to remove old consumer when we're ready)
[18:50:40] yep. i think we're about to have the networking issue resolved
[18:51:44] there we go
[18:52:57] yep, data's going in
[18:54:30] springle: ok so going to stop the db1047 writer, yeah?
[18:54:35] yep
[19:00:46] eventlog gone from db1047. ok to dump?
[19:00:59] springle: yep
[19:05:16] now we wait
[19:05:52] springle: is --skip-add-locks unnecessary?
[19:06:31] it's innodb so I'm using --single-transaction. that pretty much handles the locking
[19:06:49] ah makes sense
[19:06:51] there will be purge lag build-up, but that won't affect anything
[19:09:41] while we're waiting, another question: what else is on db1048? if there is an influx of incoming events, what would be affected?
[19:10:12] keep in mind that the consumer is single-threaded and synchronous, so there is only 1 insert at a time
[19:10:47] otrs, reviewdb (gerrit), scholarships (some web app)
[19:11:05] i watched eventlog load on db1047; don't think it will be a problem
[19:11:35] yeah, it's minuscule. but it did happen once that a developer enabled data logging on a large subset of anonymous page views on mobile by mistake
[19:11:50] which pushed the rate of events from 20-30/sec to about 150/sec
[19:11:51] plus we could switch log tables to tokudb in a pinch, for compression and write throughput
[19:12:35] db1047 kept up, iirc
[19:12:58] it kept up even while hammered :) which was most days
[19:13:09] just replication suffered
[19:13:33] yep
[19:14:17] ah, and lastly, what did we decide re: replication? will db1047 be a slave of 1048?
[19:15:36] yes, i think so. alternatively we could use federated tables, or the new CONNECT engine, but those are slow and untrialed respectively
[19:17:37] * ori reads. it looks pretty cool.
[19:17:38] I may use federated tables if we go past the 12h window, then do a switcheroo with real tables later
[19:18:25] CONNECT uses engine condition pushdown, so it should actually be faster than FEDERATED ever was
[19:19:19] also this is possibly fun: https://mariadb.com/kb/en/spider-storage-engine-overview/
[19:19:49] springle: how long do you think it'll take before the export finishes? i ask because once we confirm that the import and live consumer run well in parallel then we can send a followup note; it's okay for the import to take arbitrarily long
[19:20:17] we're up to MobileBetaWatchlist_5281061, alpha order
[19:20:34] oh that's pretty fast
[19:21:17] we're pretty close to getting this data into hadoop too, it's streaming into kafka now already
[19:22:34] nice
[19:22:37] i always feel compelled to explain that i understand entirely that an rdbms is not always the right solution for log data, but basically analysts' familiarity with sql trumps practically everything else
[19:24:04] but i think some of the scheduled aggregation queries that are used to build graphs could be ported to hadoop
[19:26:15] yeah some stuff isn't fun on an rdbms
[19:26:58] tokudb is supposed to be quite good at time-series data
[19:27:22] eventlogging might be the ideal test case
[19:27:41] do we use HiveQL at all?
[19:27:43] yea
[19:28:01] people that used it (Ironholds) like it ... mostly :)
[19:28:45] I think it's pretty great for what it gives you access to but pretty crappy compared to ANSI
[19:28:45] springle: we could spin up an additional consumer that feeds into another logical database on db1048 that uses tokudb
[19:30:11] it would double the write load on db1048 but twice the current load is probably still negligible. and we'd presumably get rid of the innodb one if tokudb worked well
[19:30:38] or just switch a slave to tokudb
[19:30:51] same single-threaded writes
[19:31:05] right, yeah
[19:31:07] if it keeps up +1. if it queries faster +2
[19:32:10] would it be alright with you guys if i disappeared for 30-40 mins?
[19:32:32] np
[19:32:52] cool, bbiab then. and thanks again for your help with this!
[19:32:56] milimetric: you too!
[19:33:17] np
[19:48:19] yo
[19:53:04] hi tnegrin, did you mean to "yo" anyone in particular?
[19:53:11] or you just kind of putting that out there, seeing who bites?
[19:53:17] in the latter case - damn
[19:53:53] :-)
[19:54:03] tnegris walked away from his desk!
[19:54:12] er tnegrin
[19:56:56] heh -- another fumble -- I was trying to reach ottomata
[19:57:09] I shouldn't be allowed to use computers
[20:00:02] yo!
[20:00:11] hello, you have reached ottomata
[20:00:27] you may begin typing at the 'beep'
[20:00:29] *BEEP*
[20:00:50] ottomata, sorry, I had ignore.case set to FALSE
[20:00:56] I don't recognise BEEP and beep as equivalent.
[20:00:58] Try again ;p
[20:01:25] GEEZ
[20:01:28] no imagination round here
[20:01:34] oh tnegrin is gone!
[20:01:51] I imagine him still being around.
[20:02:16] * qchris slaps himself with a large trout.
[20:02:39] he's still here in our hearts
[20:03:30] he has been notified the good ole fashioned way by waving the hand as he was walking past.
[20:03:55] he's still not at his desk tho
[20:05:28] back, in a mtng but pingable / interruptable
[20:15:53] (PS6) Milimetric: Fix user name display in CSV files [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/129025
[20:17:01] qchris: I'm thinking about trying to run camus every 15 minutes
[20:17:48] i'm wondering if it can consume 15 minutes worth of log data in less than 15 minutes
[20:17:49] ori: springle: I might have to go away in an hour for a couple of hours. I'll keep IRC on so just ping me if we're all set to send the note or if you need me at all for the migration.
[20:17:56] ok
[20:18:00] ottomata: there's only one way to find out :)
[20:18:12] if it can, I think we could keep the previous 15 minutes of data in pagecache with the amount of ram we have
[20:18:29] which would eliminate regular read disk seeks
[20:18:33] I think this is the time to run these experiments because later if they fail people will be sad
[20:18:36] yeah
[20:18:44] well, i mean, this is relatively harmless
[20:18:46] if it breaks
[20:18:48] then we stop doing it!
[20:18:52] kafka will still have the data
[20:18:56] as long as we fix within a week :)
[20:19:00] right, :)
[20:19:02] ottomata: Sounds like a great idea.
[20:19:09] i'm also doing this now because i'm doing lots of capacity planning and learning a bunch about disk io stuff
[20:19:33] qchris, just running these numbers by you, to double check if they make sense
[20:19:40] well yeah, if we can avoid it by running from memory, that's phantastic
[20:19:47] we've got about 45 G of page cache to work with
[20:19:51] right now (we can ask for more if we need)
[20:20:06] we're producing peak about 19 MB/s
[20:20:20] if we add upload, we'll be closer to 37G/second
[20:20:28] sorry
[20:20:28] 1024 bytes or 1000 bytes?
[20:20:29] MB
[20:20:29] not G
[20:20:32] yo ottomata
[20:20:41] pretty sure these are MiB
[20:20:43] so 1024
[20:20:43] sorry -- you guys busy?
[20:20:45] yoyo tnegrin
[20:20:47] (PS1) MarkTraceur: Restore accidentally deleted map graphs [analytics/multimedia/config] - https://gerrit.wikimedia.org/r/129559
[20:20:48] not really, just brain bouncing
[20:20:56] Ooh fancy.
[20:21:01] so anyway, ja, so rounding that up to 40 MB/sec
[20:21:06] I want to find out if Andrew's gonna buy us some gear
[20:21:17] ottomata: You're writing than I can think.
[20:21:29] tnegrin: If in doubt: Yes. More gear.
[20:21:31] haha, ok
[20:21:36] i'll make this easy
[20:21:37] throw hardware at it!
[20:21:43] assume we have 40 MB / sec coming in
[20:21:45] s/writing than/writing faster than/
[20:21:51] and we have 45 GB of pagecache
[20:22:01] how often do we need to read recent data
[20:22:09] to ensure that we read from cache?
[20:22:18] ottomata: do you need anything from me or are we good with the capes?
[20:22:42] or, in other words: how LONG is our page cache time period?
[20:22:46] 17 minutes
[20:22:47] tnegrin: i'm cool
[20:22:47] :)
[20:22:47] if the number that mark gave you is solid then we can figure out the details later
[20:22:49] you were looking for me, ja?
[20:22:51] (PS1) MarkTraceur: Reorder metrics [analytics/multimedia/config] - https://gerrit.wikimedia.org/r/129560
[20:23:05] yeah -- but just to make sure we were in the budget
[20:23:09] if you're good, I'm good
[20:23:31] The 19MB/s, where is that coming from? 200K msg/second would leave 95 bytes per message. That looks low. So we're talking about 19MB/s Snappy compressed?
[20:23:37] tnegrin: i'm still planning, but i am in touch with both mark and faidon, so we should be ok
[20:23:47] nono
[20:23:53] 19 MB/s is what we are doing now
[20:23:57] ok -- I thought tomorrow was the deadline
[20:24:07] oh, they have not mentioned that to me
[20:24:07] that's 150k without upload right?
[20:24:11] yeah...
[20:24:19] now == these days, or now == at this time of the day?
[20:24:20] i'm rounding up to 40 MB/sec for the full stream
[20:24:22] maybe it was soft
[20:24:25] I'll follow up with them
[20:24:27] qchris, now these days
[20:24:32] i'm talking about peak
[20:24:45] Peak should be ~200K messages / second. Right?
[20:24:46] so, 205,000 msgs/sec ~= 37 MB/sec
[20:25:06] Ha! 37 MB/s, not 18MB/s
[20:25:10] That's a better number.
[20:25:39] (Sorry, I misread above)
[20:25:44] ok anyway, yeah so i'm just rounding that up to 40 MB/sec
[20:25:50] Yes.
[20:25:54] and yeah, milimetric, that's what I got to, around 18 minutes of cache
[20:26:05] soooo, if we can consume every 15 minutes or more often
[20:26:13] maybe everything will be read from cache!
[20:26:37] particularly:
[20:26:39] (PS1) MarkTraceur: Only show the data coming from cloudbees [analytics/multimedia] - https://gerrit.wikimedia.org/r/129562
[20:26:42] it is kind of up against the limit
[20:26:46] i'm wondering if we run camus every 15 minutes
[20:26:46] maybe do every 10 minutes?
[20:26:49] or more often!?
[20:26:53] yeah sure, milimetric, worth a try
[20:26:55] if this will go away:
[20:26:55] http://ganglia.wikimedia.org/latest/graph.php?r=day&z=xlarge&hreg[]=analytics102%5B12%5D.%2A&mreg[]=diskstat_%28sdc%7Csdd%7Csde%7Csdf%7Csdg%7Csdh%7Csdi%7Csdj%7Csdk%7Csdl%29_read_bytes_per_sec&gtype=stack&title=diskstat_%28sdc%7Csdd%7Csde%7Csdf%7Csdg%7Csdh%7Csdi%7Csdj%7Csdk%7Csdl%29_read_bytes_per_sec&aggregate=1
[20:27:53] we will see! I think i'll try that tomorrow morning
[20:27:56] so I can be around to watch it
[20:28:18] sounds fun :)
[20:29:28] ottomata: Go for the shorter interval. Sounds good.
[20:29:45] (PS1) MarkTraceur: Only show the data coming from cloudbees [analytics/multimedia] - https://gerrit.wikimedia.org/r/129565
[20:29:47] (PS1) MarkTraceur: Add more robust specific user agent string [analytics/multimedia] - https://gerrit.wikimedia.org/r/129566
[20:29:52] ...hm
[20:29:55] (But the numbers do not fully seem correct, when comparing to average sizes from https://wikitech.wikimedia.org/wiki/Cache_log_format )
[20:30:46] (Abandoned) MarkTraceur: Only show the data coming from cloudbees [analytics/multimedia] - https://gerrit.wikimedia.org/r/129562 (owner: MarkTraceur)
[20:35:04] yeah ok, lemme see where I got those
[20:37:11] qchris
[20:37:15] those are snappy compressed numbers, btw
[20:37:21] Ah. Ok.
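A quick check of the arithmetic behind the cache-window figures above, assuming the 45 GiB of page cache from 20:19:47 and the rounded-up 40 MiB/s peak rate from 20:25:44; the 17-18 minute figures quoted in the channel are the same ballpark.

```python
# Back-of-the-envelope page-cache window from the figures discussed above.
page_cache_gib = 45      # available page cache on the brokers (20:19:47)
peak_rate_mib_s = 40     # ~37 MiB/s peak with upload, rounded up (20:25:44)

window_minutes = page_cache_gib * 1024 / peak_rate_mib_s / 60
print("cache holds roughly %.1f minutes of data" % window_minutes)
# ~19 minutes with these rounded inputs; the 17-18 minute estimates in the
# channel are close enough, and either way a 10-15 minute camus interval
# should keep the reads in page cache.
```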
[20:37:22] so i think that is right
[20:37:29] Then I believe you :-)
[20:37:40] snappy compresses at about 1/4 of original size
[20:37:44] Snappy was 1:5 for us, right?
[20:37:48] Oh. Ok.
[20:39:43] currently, we do
[20:39:54] 90K msgs/sec = 16 MB / sec
[20:39:54] ottomata: Briefly looked over it. Looks good to me.
[20:40:56] 90K is bits+text+mobile. right? (No upload)
[20:41:11] 16 MB * 4 compression / 90000 msgs = ~711 bytes per message
[20:41:12] ja?
[20:41:56] Looks good.
[20:42:05] (PS7) Milimetric: Fix user name display in CSV files [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/129025
[20:43:44] TSVs have ~400 bytes / request.
[20:43:52] But they have no column labels.
[20:43:59] So that roughly agrees.
[20:44:12] springle: how's the export?
[20:45:27] PersonalBar_7829128
[20:46:03] not bad, there were a lot of big 'M' tables
[20:46:49] correct, 90K is bits+text+mobile
[20:46:57] thanks qchris, I am glad that you are here to check my numbahs!
[20:47:43] ottomata: I am glad you are doing the real work :-D
[20:47:53] I am just sitting and watching.
[20:55:27] qchris, you have not learnt the art of management-fu
[20:55:40] you are not sitting and watching. You are thinking very hard to predict and avoid future problems
[20:56:00] as such, you should be paid more and given less work so you can spend more time thinking hard :D
[20:59:04] Ironholds: ?
[20:59:16] But yes ... keep the money flowing :-D
[21:02:06] heh
[21:19:12] ori: dump is loading into log_test on db1048. if it appears sane afterward, i'll merge it with log using pt-table-sync
[21:19:35] just being paranoid, in case something dies halfway
[21:20:11] springle: kk, anything else i can do to help?
[21:20:27] ori: nope, we can let it be for a while
[21:20:36] how long do you figure the import would take?
[21:20:54] there's no rush or anything, just trying to get an order-of-magnitude sense
[21:21:01] single-digit hours?
[21:21:05] hard to say
[21:21:15] that's really great
[21:21:33] am off for a while. bbl
[21:21:44] springle: thanks again!
[21:22:29] np
[21:33:19] ori: should I wait for the import, or do like you said before and send the update?
[21:33:39] oh, duh, nvm
[21:33:47] why nevermind?
[21:33:48] the dump is not yet in 1048
[21:34:08] i think we should probably send an update
[21:34:29] but 1047 is now paused
[21:34:32] it's a public holiday in australia iirc so we shouldn't assume sean will be around exactly when it finishes
[21:34:45] and won't be getting new data until after sean finishes
[21:35:03] right
[21:35:28] i think we can also say that we're cautiously optimistic about not dropping any data
[21:35:42] k
[21:36:35] so: old data still available for querying on db1047; new data streaming into db1048; old data will be backfilled on db1048 over the course of the next day or so
[21:37:05] at which point we'll probably want to give db1047 a break so that sean can set up replication from db1048
[21:41:25] k, sent, now i'm gonna go work on my chicken coop
[21:41:27] ping me if you need me
[21:41:38] milimetric: looks great, thanks very much
[22:10:00] hey milimetric, yt?
[23:02:01] (CR) MarkTraceur: [C: 2] "Boy, hacky patch is hacky, but if it's the best we can do..." [analytics/multimedia] - https://gerrit.wikimedia.org/r/129565 (owner: MarkTraceur)
[23:02:19] (CR) MarkTraceur: [V: 2] "Oh right, Jenkins cannae run here." [analytics/multimedia] - https://gerrit.wikimedia.org/r/129565 (owner: MarkTraceur)
[23:04:43] hi DarTar
[23:04:46] what's up
[23:07:44] problem solved
[23:07:58] I had some questions on the EL data migration
[23:08:05] got it, i just saw the list
[23:08:13] I did reply to maryana that it would slow things down
[23:08:21] ha, ok
[23:08:31] as we're mysqldump-ing the whole database right now
[23:08:37] but that should be done soon-ish
[23:08:42] kk makes sense
[23:17:17] (CR) MarkTraceur: [C: 2 V: 2] "Hm, this may not get set anywhere but I'm OK with that. Worst thing that can happen is we lose data for a while. It'll be back." [analytics/multimedia] - https://gerrit.wikimedia.org/r/129566 (owner: MarkTraceur)
[23:45:10] (CR) MarkTraceur: [C: 2 V: 2] Restore accidentally deleted map graphs [analytics/multimedia/config] - https://gerrit.wikimedia.org/r/129559 (owner: MarkTraceur)
[23:45:44] (CR) MarkTraceur: [C: 2 V: 2] Reorder metrics [analytics/multimedia/config] - https://gerrit.wikimedia.org/r/129560 (owner: MarkTraceur)