[08:44:36] 07HTTPS, 10Traffic, 07Browser-Support-Internet-Explorer: Internet Explorer 6 can not reach https://*.wikipedia.org - https://phabricator.wikimedia.org/T143539#2571125 (10Florian)
[10:35:58] 10Traffic, 10MediaWiki-extensions-UniversalLanguageSelector, 06Operations, 13Patch-For-Review: ULS GeoIP should not use meta.wm.o/geoiplookup - https://phabricator.wikimedia.org/T143270#2571251 (10Nikerabbit)
[11:51:35] 10Traffic, 10MediaWiki-extensions-UniversalLanguageSelector, 06Operations, 13Patch-For-Review: ULS GeoIP should not use meta.wm.o/geoiplookup - https://phabricator.wikimedia.org/T143270#2571316 (10BBlack) Ah, yes, I see now for country_code it does via https://github.com/wikimedia/mediawiki-extensions-Univ...
[12:03:20] bblack: godog asked to allow PATCH requests for grafana, tentative patch here https://gerrit.wikimedia.org/r/#/c/305990/
[12:05:05] ema: I'd say just add it to allowed_methods and not restrict it to grafana. On a generic cluster like misc, I'm sure someone will ask for it again in the future, etc...
[12:05:21] we're not currently restricting websockets to those backends that use it either (although maybe we should, but that's different than just a method)
[12:06:01] ok!
[12:06:20] I'll be back in a couple hours (first day of school for kids!), but +1 :)
[12:06:33] good luck to the kids!
[12:29:18] I'm taking a look at successful vs. failed TFO connections by source IP on cp3043
[12:29:47] 641 successful, 869 failed so far
[12:30:11] however, 680 failures came from the same IP
[12:31:27] it might be some type of middlebox doing funky stuff
[14:30:54] ema: you have any ongoing things you need puppet on cp* for? I want to disable puppet on them and then merge the geoip C changes and puppetize just a few at first
[14:31:14] bblack: nope, please go ahead
[14:31:17] ok thanks
[15:11:30] 10Traffic, 10MediaWiki-extensions-CentralNotice, 06Operations, 13Patch-For-Review: CN: Stop using the geoiplookup HTTPS service (always use the Cookie) - https://phabricator.wikimedia.org/T143271#2571686 (10BBlack)
[15:11:34] 10Traffic, 10MediaWiki-extensions-UniversalLanguageSelector, 06Operations, 13Patch-For-Review: ULS GeoIP should not use meta.wm.o/geoiplookup - https://phabricator.wikimedia.org/T143270#2571687 (10BBlack)
[15:11:43] 10Traffic, 06MediaWiki-Stakeholders-Group, 06Operations, 07Developer-notice, and 2 others: Get rid of geoiplookup service - https://phabricator.wikimedia.org/T100902#2571690 (10BBlack)
[15:11:47] 10Traffic, 10Fundraising-Backlog, 06Operations, 13Patch-For-Review: Switch Varnish's GeoIP code to libmaxminddb/GeoIP2 - https://phabricator.wikimedia.org/T99226#2571684 (10BBlack) 05Open>03Resolved a:03BBlack
[15:28:48] 07HTTPS, 10Traffic, 10DBA, 06Operations, and 2 others: dbtree loads third party resources (from jquery.com and google.com) - https://phabricator.wikimedia.org/T96499#2571733 (10jcrespo) Grafana should substitute the graphing library {F4385044}. The only thing left is substituting code to generate a tree (d...
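(For context on the 12:05 allowed_methods point: a minimal, hypothetical sketch of the kind of request-method whitelist being discussed, in Varnish 3-style VCL. The real check is templated in operations/puppet and its names and method list may differ.)

    sub vcl_recv {
        # Reject anything outside the allowed set; adding PATCH here enables it
        # cluster-wide rather than special-casing grafana's backend.
        if (req.request !~ "^(GET|HEAD|POST|PUT|DELETE|OPTIONS|PATCH)$") {
            error 405 "Method Not Allowed";
        }
    }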
[15:39:12] 10Traffic, 06Operations: High number of failed inbound TFO connections in esams Mon-Fri - https://phabricator.wikimedia.org/T143562#2571799 (10ema)
[15:40:11] bblack: I "fixed" https://grafana.wikimedia.org/dashboard/db/tcp-fast-open thanks to godog's help
[15:40:44] keepLastValue got the job done
[15:41:37] awesome
[15:45:19] 10Traffic, 06Operations: High number of failed inbound TFO connections in esams Mon-Fri - https://phabricator.wikimedia.org/T143562#2571842 (10ema) p:05Triage>03Normal
[15:47:17] 10Traffic, 06Operations: High number of failed inbound TFO connections in esams Mon-Fri - https://phabricator.wikimedia.org/T143562#2571799 (10BBlack) Perhaps this is a mobile carrier doing CGNAT that constantly flips source IPs for TCP traffic from the same phones, thus constantly breaking otherwise-valid rece...
[15:48:21] ema: so with the TFO failures, it's 2x IPs from that AS, mapping to 2x caches?
[15:48:50] maybe the CGNAT has 2x outbound IPs it rotates between randomly per-connection (no device stickiness) for a large count of devices
[15:49:09] it's hard to imagine a whole carrier having only two outbound IPs *total*, so it could even be a misconfig/breakage on their end they're unaware of.
[15:54:01] bblack: yes, 2x IPs mapping to 2x caches
[15:55:54] bblack: but why Mon-Fri? :)
[15:56:44] I was thinking of broken appliances installed at some big institutions, thus the failure spikes during the work week
[15:56:56] maybe
[15:57:12] I mean they're not even spikes
[16:32:50] 10Traffic, 06Operations: High number of failed inbound TFO connections in esams Mon-Fri - https://phabricator.wikimedia.org/T143562#2572034 (10ema) From https://www1.icsi.berkeley.edu/~barath/papers/tfo-conext11.pdf section 4.3: > some carrier-grade NAT configurations use different public IP addresses for new...
[16:35:48] 10Traffic, 06Operations: Stop using persistent storage in our backend varnish layers. - https://phabricator.wikimedia.org/T142848#2572068 (10BBlack) I should add another overall point here about TTLs: One key thing that will become unblocked post-Varnish4 (so, early CY2017) is reducing our normal (other than...
[16:47:54] paravoid: is that about using pybal in a no-IPVS mode, just to advertise public IPs from single-hosts that can move from public to private subnet?
[16:48:26] no :)
[16:49:10] let's sync after the meeting
[16:49:52] 10Traffic, 10netops, 06DC-Ops, 06Operations, and 2 others: rack/setup new eqiad lvs machines - https://phabricator.wikimedia.org/T104458#2572147 (10BBlack)
[16:49:55] 10netops, 06DC-Ops, 06Operations, 10ops-eqiad: asw-d-eqiad SNMP failures - https://phabricator.wikimedia.org/T112781#2572145 (10BBlack) 05Open>03stalled According to etherpad today, this is "Pending 10G migration with new hardware" (for Row D in eqiad, I think).
[16:59:05] bblack: so the TL;DR is
[16:59:18] we "maintain" (in quotes) a list of LVS service IPs in the routers
[16:59:27] for which we provide fallback static routes to lvs1001 etc.
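(As an aside on the dashboard fix mentioned at 15:40: keepLastValue is the Graphite function that carries the last known datapoint forward across gaps, so sparsely-updated counters render as a continuous line instead of breaking up. An illustrative target only; the metric path here is made up, not the dashboard's real one:)

    keepLastValue(cp3043.tcp.tfo.passive_fail, 10)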
[16:59:35] ok
[16:59:36] in case pybal dies in both servers of each pair
[16:59:44] due to a software bug or something
[16:59:59] or someone failing to be mindful with pybal restart-maintenance :)
[17:00:07] yeah heh
[17:00:38] so before I took the cr1-eqiad offline for the upgrade, I was checking to see if everything was working okay
[17:00:52] and I found a routing bug with those static IPs
[17:01:05] that took a while to debug and wasted a bunch of our time in the window
[17:01:14] while looking a little deeper into that though
[17:01:17] yeah now that I think about it....
[17:01:20] I realized the list hasn't been maintained for a while
[17:01:25] we've made multiple changes to the set lately heh
[17:01:28] it had mobile-lb and stuff like that
[17:01:35] so at minimum we should fix that
[17:01:39] but that's probably not enough
[17:01:42] it will happen again
[17:02:02] isn't there some way to configure the routers to just hold the last-best-route they got from BGP for several minutes if no advertisers, for those subnets?
[17:02:15] it's also not very dynamic, which sucks while we move towards an even more dynamic setup for pybal (etcd, etc)
[17:02:51] so I wanted to discuss all this a little bit
[17:02:54] I don't have any good ideas
[17:03:12] well there's the above if it's possible, it's not as strong a fallback though, and would need icinga alerting too
[17:03:24] I don't think the above is possible, no
[17:03:27] smart though
[17:03:37] also, we could potentially re-arrange IPs or LVS sets, etc, so that we can define a subnet per LVS set
[17:03:44] hehe
[17:03:47] I thought of that too
[17:03:50] e.g. lvs1001 (static) + 1003 has n.n.n.n/29
[17:03:53] that's pretty smart too :)
[17:04:07] it's already kinda that way, almost
[17:04:17] it's a little restricting of course
[17:04:42] I think at the highest level, each public LVS subnet is already split to text/mobile (now just text) and upload/misc
[17:04:59] high-traffic1 is all just text, it's already a separate subnet
[17:05:23] and upload/misc in theory are split that way too, into two smaller subnets for high-traffic2 + low-traffic, basically? I'm not sure, I'd have to dig into the details
[17:05:33] (relatedly: do we do static fallback for the various low traffic 10/8 IPs?)
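(For reference, the fallback being described is just a static route per LVS service IP pointing at the designated primary LVS host, de-preferenced so it only takes effect when the BGP-learned route from pybal disappears. A hedged, Junos-style sketch with placeholder addresses rather than the real service IPs:)

    routing-options {
        static {
            /* hypothetical service IP, normally advertised by pybal over BGP */
            route 198.51.100.224/32 {
                next-hop 198.51.100.10;    /* e.g. lvs1001 */
                preference 250;            /* worse than BGP (170): fallback only */
            }
        }
    }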
[17:06:36] inter-related with all of this is our long-term view on the current high-traffic[12] + low-traffic split in general
[17:06:48] yup
[17:07:00] I had that on my itemized list for the task I was meaning to open on friday :P
[17:07:08] right now high-traffic1 == text, high-traffic2 is upload as well as some misc-web services and maps and such, low-traffic is mostly (maybe all) internal 10/8 stuff
[17:08:02] so it may already be naturally split into subnets for per-subnet static routes
[17:08:16] hmm
[17:08:29] I'd have to check and verify
[17:08:34] the other thing we could do of course (with pybal config changes) would be to configure every service IP on every server
[17:08:36] in the cache-only DCs it almost certainly is
[17:09:00] so yeah that's the other thing I was going to bring up, too :)
[17:09:13] config and code changes really
[17:09:14] in the long run, we have some design opinion debates to have around that
[17:09:32] reasons for the current split are mostly failure isolation, not raw traffic load-handling
[17:09:35] we could even run them active/active too, but that's going to be problematic due to a couple of different reasons
[17:09:44] to separate DDoS on text/upload from each other, and both from the LVS that's handling internal services
[17:10:23] but in terms of load, low-traffic could merge into either of the high-traffics, and really the high-traffic could be combined too, even with only a single active LVS
[17:10:30] it used to be raw traffic load balancing
[17:10:42] and we have the possible desire to (maybe, eventually, for at least some of the traffic) http/2 coalesce text+upload
[17:10:58] esams regularly went above the 1G before the 10G upgrade on one of the two pairs
[17:11:07] I remember having to deal with that at some point, years ago
[17:11:12] I should double-check that esams peaks are still <10G text+upload by a healthy margin, but I think so
[17:11:49] so: let's talk about active/active a bit too
[17:12:03] sure
[17:12:11] if the routers could persist that at L3, it would maybe work the bulk of the time
[17:12:23] well, wait
[17:12:27] persist what?
[17:12:47] the router can do equal cost multipathing with l2/l3/l4 hashing
[17:12:51] persist mapping a given L3 connection to one of the active/active set for the life of the connection (host:port<->host:port)
[17:13:00] but, pybal doesn't do two BGP sessions
[17:13:07] each pybal connects to one of the routers
[17:13:12] ok but we could fix that
[17:13:15] yeah
[17:13:21] we can run it active-active now
[17:13:26] and I have done so previously
[17:13:38] there was a corner case where it was breaking
[17:13:39] and if it's L4 hashing (which I guess in router parlance, L3 is ip-hashing, L4 is ip+port hashing?)
[17:13:41] that's hard to solve :)
[17:13:45] (yes)
[17:13:54] then yes, we could do active-active, which would be nice
[17:14:09] the corner case where it was breaking
[17:14:10] was
[17:14:13] random ISP on the internet
[17:14:15] even without it, it would be nice to turn on multicast ipvs sync. with l4 hashing, it's even more valuable of course
[17:14:34] used two ISPs and did equal cost multipathing between two different transits
[17:14:45] those two different transits had different paths to us
[17:14:52] right
[17:14:54] one used, say, Telia, the other one used, say, NTT
[17:15:03] so half of the packets of the same flow arrived on cr1, the other half on cr2
[17:15:06] but so long as cr1+cr2 make the same L4 hashing decision, and pybal is BGP to both, shouldn't matter?
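(The property being leaned on in that last question is that ECMP hashing is a pure function of the flow tuple plus any shared seed, so two routers with identical config map every packet of a given flow to the same next hop no matter which router the packet enters on. A toy Python illustration of that idea, not how Juniper's ASICs actually hash:)

    import hashlib

    LVS_NEXTHOPS = ["lvs1001", "lvs1002"]  # hypothetical active/active pair
    SEED = b"shared-seed"                  # identical on cr1 and cr2

    def ecmp_pick(src_ip, src_port, dst_ip, dst_port):
        """Deterministic L4 hash: same flow tuple -> same LVS on every router."""
        key = SEED + ("%s:%d->%s:%d" % (src_ip, src_port, dst_ip, dst_port)).encode()
        return LVS_NEXTHOPS[hashlib.sha1(key).digest()[0] % len(LVS_NEXTHOPS)]

    # A flow whose packets are split across cr1 and cr2 still lands on one LVS,
    # because both routers compute the same function over the same tuple.
    assert ecmp_pick("203.0.113.7", 54321, "198.51.100.224", 443) == \
           ecmp_pick("203.0.113.7", 54321, "198.51.100.224", 443)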
[17:15:31] yes
[17:15:33] either that
[17:15:44] or we make sure that the two lvs make the same hashing decision
[17:15:45] 10Varnish: Sort query parameters on urls - https://phabricator.wikimedia.org/T143574#2572282 (10Jhernandez)
[17:15:56] which at the time wasn't possible (wrr), but with sh, it might
[17:15:58] I don't think that's enough
[17:16:07] 10Traffic, 06Operations: Sort query parameters on urls - https://phabricator.wikimedia.org/T143574#2572294 (10Jhernandez)
[17:16:16] but this means that temporary flukes that result in ejecting e.g. a cp* or mw*
[17:16:28] unless multicast connection state sync was perfect, you could get packet 1 -> lvs1, packet 2 -> lvs2 (which doesn't know about conn state from packet 1)
[17:16:39] for the connection tracking in LVS itself
[17:16:51] does it matter?
[17:16:56] I think it does
[17:17:02] would lvs2 drop the packet? I'm not sure
[17:17:05] LVS knows about TCP connection states
[17:17:16] it's not just hashing ip+port and ignoring the rest
[17:17:33] (or else we wouldn't need multicast sync to avoid dropouts on active/passive failover, either)
[17:18:07] yeah, I'm not sure why it needs to know about TCP states though
[17:18:24] for sh, that is
[17:18:28] in theory, for what we're trying to do here, it may not need to, but I think it does
[17:18:42] as in, it does a secondary job of validating traffic
[17:18:53] (no random tcp data that didn't see a matching syn before, like conntrack)
[17:18:58] I remember some patches from facebook for l4 loadbalancing
[17:19:06] let me find them
[17:19:14] but why would we care if lvs validates the traffic
[17:19:23] the conntrack can't be very advanced though, since it only sees one side of the traffic :)
[17:19:44] yeah exactly
[17:19:56] but it's advanced enough that it separates them into Active and Inactive states at least, which implies noticing the final fin timeouts or whatever
[17:20:21] with sh, in theory it doesn't have to be
[17:20:49] historically, I always considered sh to be a temporary solution for us, until we fixed TLS tickets, and because it had the possible downside with large NAT
[17:21:10] but in practice it's now been the norm forever, we make several assumptions based on it now for various scenarios, and the NAT problem hasn't been real in practice.
[17:21:27] 10Traffic, 06Operations: Sort query parameters on urls - https://phabricator.wikimedia.org/T143574#2572317 (10Jhernandez) I'm out of my depths here, so sorry if it is a stupid question. I'm interested to understand why we're not doing this already. The why is that there's an interest in sending more traffic t...
[17:21:53] so, yeah, I could see us moving to a pure L4 w/ SH at the ipvs level, and seeing if we can configure/patch LVS to act as a pure L4-hasher that doesn't need multicast sync, too.
[17:22:26] the NAT problem can be fixed with L4 hashing
[17:22:30] LVS supports that
[17:22:36] facebook patch too I think
[17:22:48] oh wait, let's rewind a bit...
[17:23:10] the question of course is whether it's a consistent hash :)
[17:23:13] The L4 bits were about router->lvs, on the assumption that moving a connection between LVS is lossy.
[17:23:32] for LVS->cache, we do L3 hashing intentionally
[17:23:36] (ignore port)
[17:23:38] yes
[17:23:48] for TLS/TFO
[17:23:48] for TLS sessions and TFO, two good reasons
[17:23:56] yup
[17:24:14] so....
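(The "L3 hashing, ignore port" behaviour above is IPVS's sh (source hashing) scheduler: the realserver is chosen from a hash of the client IP alone, so TLS session resumption and TFO cookies keep landing on the same cache across connections. A hedged sketch of what configuring it by hand looks like; in practice pybal drives ipvsadm, and the addresses here are placeholders:)

    # source-hash a service on a placeholder VIP; the client IP alone picks the cache
    ipvsadm -A -t 198.51.100.224:443 -s sh
    ipvsadm -a -t 198.51.100.224:443 -r 10.64.0.11:443 -g   # cp* realserver, direct routing
    ipvsadm -a -t 198.51.100.224:443 -r 10.64.0.12:443 -g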
[17:24:22] I'm saying that even if we fixed TLS/TFO and thus the requirement for sh, we could return to L4 hashing, not wrr
[17:24:30] s/return/switch/
[17:24:31] we could make this work if we had all the following things fixed/reconfigured/verified:
[17:24:40] 1. pybal BGP to both routers
[17:24:51] 2. routers L3-hashing to active/active LVSes
[17:25:06] 3. LVS L3-hashing to caches without bothering with TCP statefulness
[17:25:39] well
[17:25:43] and then yeah s/L3/L4/ is a future option for both, if TLS/TFO was fixed and we were worried about NATs
[17:25:56] yes, although for (2) L3-hashing isn't required at that point
[17:26:14] oh, right
[17:26:27] if TCP is stateless in LVS, routers don't have to hash
[17:26:30] not strictly required, it would still help for the case where the two LVSes have a different view of realservers
[17:26:43] yeah
[17:27:00] or to put it another way: if it /was/ required, it would mean that we wouldn't be able to restart pybals that easily anymore
[17:27:00] and of course at the LVS level, regardless, it would be nicer if the hash was consistent
[17:27:44] because the chash would also help cover the case of backend list mismatch between LVSes if the routers are routing randomly
[17:27:55] (hopefully very briefly)
[17:27:59] yeah
[17:28:08] I donno
[17:28:27] etcd->pybal->LVS for 2+ LVSes servicing the same traffic is always going to be slightly async
[17:28:52] even with chashing, if the routers are randomizing traffic, connections to the caches are going to fail during the asynchronicity
[17:29:09] (some percentage of them, anyways)
[17:29:30] so I think it's still beneficial to have the router L3 (or L4) hashing to LVS, too.
[17:30:31] is "sh" deterministic?
[17:30:35] The patch also adds a flag to make SH include the source port (TCP, UDP,
[17:30:38] SCTP) in the hash as well as the source address. This basically allows
[17:30:41] for deterministic round-robin load balancing (i.e., where any director
[17:30:44] in a cluster of directors with identical config will send the same
[17:30:47] packet the same way).
[17:30:50] that's from the FB patch that I mentioned above
[17:30:52] commit eba3b5a78799d21dea05118b294524958f0ab592
[17:31:01] assuming only 2x LVS for a given service IP, it doesn't matter if the router hashing is consistent. But it might be nice to go to a model of 3-4x LVS in a primary DC that all share active/active on all the IPs, at which point the routers doing consistent-hashing would help too
[17:32:00] paravoid: IIRC our sh is deterministic, but not consistent, and only looks at IP currently.
[17:32:12] but in practice, it's "kinda-consistent" because of how they lay out the hash buckets, last time I looked
[17:32:34] right
[17:33:02] looking at the L4 port would be a bad idea for us, because it would break TFO/TLS for us today
[17:33:09] yes, I know :)
[17:33:58] funny enough the TFO and TLS-ticket problems share common solution infrastructure too (making up a random in-memory key, distributing/rotating securely to all caches, maintaining a window of past/future valid keys, etc)
[17:34:21] but it's complexity we really don't need or want if L3 hashing works fine, and so far it does.
[17:34:58] however, when TLSv1.3 becomes a reality, we may have to re-think that (we'll probably get forced to do tickets at that time? but it's still a grey area how that will work)
[17:35:36] I'm not sure if you can get the routers to do deterministic L3-hashing
[17:35:45] last I read the evolving standard on that, it sounded like e.g. nginx would have the option in its code to effectively implement something that functions like today's session cache, but looks more like tickets to the client.
[17:35:48] i.e. that both routers will make the same decision
[17:36:15] deterministic is easier. i guess they inject a random number or a hash of the router name or something, intentionally?
[17:37:02] (a random number from boot time, I mean)
[17:37:02] http://www.juniper.net/techpubs/en_US/junos15.1/topics/reference/configuration-statement/per-prefix-edit-forwarding-options.html
[17:37:13] per-prefix accepts a hash-seed, per-flow doesn't
[17:37:18] neither document what hash-seed does :P
[17:37:23] nice
[17:37:28] hash-seed—Configure the hash value. Junos OS automatically chooses a value for the hashing algorithm used. You cannot configure a specific hash value for per-flow load balancing.
[17:37:40] hash-seed—Per-prefix load-balancing hash function.
[17:37:40] number—Hash value.
[17:37:41] Range: 0 through 65,534
[17:37:43] Default: 0
[17:37:45] gee thanks :P
[17:38:05] well, until we solve TFO/TLS problems, we want per-prefix anyways right?
[17:38:16] pick same seed on both -> win
[17:39:10] or does per-prefix not mean L3-hashing, but something else more limited like a static list of prefix mappings?
[17:39:29] no, I think it means L3 hashing
[17:39:37] http://www.juniper.net/techpubs/en_US/junos15.1/topics/usage-guidelines/policy-configuring-per-prefix-load-balancing.html
[17:39:58] By default, Junos OS uses a hashing method based only on the destination address to elect a forwarding next hop when multiple equal-cost paths are available. As a result, when multiple routers or switches share the same set of forwarding next hops for a given destination, they can elect the same forwarding next hop.
[17:40:09] sounds good to me
[17:40:14] You can enable router-specific or switch-specific load balancing by including a per-prefix hash value
[17:40:20] (which we don't want)
[17:41:06] well we don't want destination, we want source, I assume that's configurable
[17:41:13] hmm
[17:41:19] yeah, I skipped over that part, silly me
[17:41:31] that makes me wonder now, if maybe some of our TFO failures are due to that, too
[17:41:39] that probably won't be configurable :/
[17:41:54] (if LVS is hashing two different service IPs from the same LVS -> same caches, differently, thus causing TFO fail)
[17:42:44] yeah, per-source prefix doesn't exist
[17:42:52] and makes sense, ASICs deal with destination routes usually
[17:42:56] ok
[17:43:03] per-flow isn't in the ASIC?
[17:43:10] it is, but that's different :P
[17:43:12] ok
[17:43:14] it's a newer feature
[17:43:18] newer ASICs only etc.
[17:43:31] is per-flow configurable in terms of ip mask length and whether to ignore port?
[17:43:35] nope
[17:43:39] nice
[17:43:59] so we've basically only got the L4 option at the routers, for source-based hashing
[17:44:27] I think so
[17:46:23] messy eh
[17:48:27] ok so
[17:49:09] if we do L4 deterministically at the routers, and LVS does L3 deterministically to the caches, pybal can go active/active and things work.
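(If the LVS-to-cache hashing ever does move from L3 to L4 — i.e. once the TFO/TLS-ticket key-distribution work removes the need for pure source-IP stickiness — the knob would be the sh scheduler flag described in the FB patch quoted above, which mixes the source port into the hash so large NATs spread across caches. A hedged sketch, assuming a kernel and ipvsadm new enough to expose the upstream sched-flags; as of this discussion the port is deliberately ignored:)

    # L4-style source hashing: include the source port (sh-port) and fall back
    # to another realserver if the hashed one is unavailable (sh-fallback)
    ipvsadm -A -t 198.51.100.224:443 -s sh -b sh-port,sh-fallback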
[17:49:17] yeah
[17:49:21] (and if pybal can do BGP to both)
[17:49:28] the only thing that breaks a little bit, is that for the short window 2x LVS are out of etcd sync on a backend list
[17:49:39] TFO/TLS-session resume will break, but active connections won't
[17:49:48] oh and that's all assuming LVS is TCP-stateless
[17:49:52] yeah
[17:50:08] for the original problem (not the text+upload coalescing), there is another solution too btw
[17:50:20] sorry, this is going to get chaotic :P
[17:50:40] we could configure IPs on every LVS box
[17:50:43] we plug our transits into new cards on the LVSes and....
[17:50:45] :P
[17:50:58] and instead of picking the active/backup via a routing policy
[17:51:09] we code per-route MED support into pybal
[17:51:23] that needs some heavy pybal changes I think, mark was looking into that at some point
[17:51:28] yeah
[17:51:42] so the idea would be lvs1001 has a better MED for text, lvs1002 has a better MED for upload, etc?
[17:51:47] yeah
[17:51:47] but they can all do all in worst-case
[17:51:49] yes
[17:51:59] and MED is configurable on the pybal config even
[17:52:10] so routers don't have to be reconfigured for traffic to flow differently
[17:52:26] basically at this point, I tend to think the anti-DDoS parts and pub/priv LVS separation are "meh"
[17:53:02] if inbound traffic total is <10G per site with a healthy margin (I think that's true), why not have a 3-way active/active/active LVS cluster with a setup like discussed above, covering all IPs equally?
[17:53:31] maybe not completely "meh", but I think relatively-unimportant in the grand scheme of things
[17:53:56] the only reason is that it needs more work and is a little more delicate
[17:54:11] but I don't disagree
[17:54:15] well
[17:54:21] there is a step (0) of course
[17:54:29] put new eqiad lvs servers online
[17:54:33] even the delicate parts, the worst-case fallout is that on some bad event all the TCP conns reset, which happens now on LVS failover too.
[17:59:38] I now really wonder how much TCP state LVS really looks at
[17:59:53] assuming consistent backend lists, do we really even RST TCP on lvs failover with sh?
[18:00:33] that's something I could test of course, opening a persistent connection myself and failing over right after
[18:00:43] might be easier than digging in LVS docs and code :)
[18:01:02] I think the point of what little it knows about TCP state and the related multicast sync is for wrr
[18:04:11] yeah
[18:04:28] ok, I need to go for dinner now
[18:04:57] I'll file a task later or tomorrow about the static IP problem and we can fork from there into active/active pybal endeavours :P
[18:05:39] yeah
[18:05:59] I'll check up on whether we actually already have traffic-class (LVS-pair) subnets accidentally
[18:07:49] this is all something to keep in mind with any future discussion about splitting upload traffic, too
[18:08:49] (splitting simple images/thumbs from multimedia and other large files, in terms of URI hostname, thus allowing eventual coalesce of images+text without coalescing (or forcing zero-rating) multimedia.
[18:08:57] )
[18:09:05] good thing I'm not a LISP programmer :P
[18:09:27] hahahaha
[18:10:24] anyway, ttyl :)
[18:20:40] yeah
[18:21:07] I'm taking a short break, will be back and probably looking first at geoiplookup meeting/scheduling and writing the chapoly blog post
[19:44:53] 10Traffic, 06Operations: Sort query parameters on urls - https://phabricator.wikimedia.org/T143574#2573043 (10BBlack)
[19:44:57] 10Traffic, 10MediaWiki-General-or-Unknown, 06Operations, 06Services: Investigate query parameter normalization for MW/services - https://phabricator.wikimedia.org/T138093#2573041 (10BBlack)
[19:50:09] 10Traffic, 10MediaWiki-General-or-Unknown, 06Operations, 06Services: Investigate query parameter normalization for MW/services - https://phabricator.wikimedia.org/T138093#2573059 (10BBlack) The default-parameter problem probably deserves its own separate ticket and solution. Except for egregious legacy c...
[22:19:44] 10Traffic, 06Operations, 13Patch-For-Review: Decom bits.wikimedia.org hostname - https://phabricator.wikimedia.org/T107430#2573813 (10BBlack) >>! In T107430#2534974, @Krinkle wrote: >>>! In T107430#2520799, @Krinkle wrote: >> The Commons app for Android (previously by Wikimedia, now community-maintained) als...
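(On the query-parameter sorting/normalization tasks above: the aim is for semantically identical URLs to share one cache key, so ?a=1&b=2 and ?b=2&a=1 don't become two separate objects. A minimal Python illustration of the transform; in production this would live in VCL or in MediaWiki/services, not in a standalone script:)

    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    def sort_query(url):
        """Return url with its query parameters sorted by key (stable for duplicate keys)."""
        parts = urlsplit(url)
        params = sorted(parse_qsl(parts.query, keep_blank_values=True))
        return urlunsplit(parts._replace(query=urlencode(params)))

    assert sort_query("https://en.wikipedia.org/w/api.php?format=json&action=query") == \
           sort_query("https://en.wikipedia.org/w/api.php?action=query&format=json")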