[02:22:20] 10netops, 10Operations, 10ops-codfw: Rename of wasat to mwmaint2001 (switch labels et al) - https://phabricator.wikimedia.org/T199530 (10Dzahn)
[06:45:32] 10netops, 10Operations, 10ops-codfw: Rename of wasat to mwmaint2001 (switch labels et al) - https://phabricator.wikimedia.org/T199530 (10Smalyshev)
[09:32:19] 10Traffic, 10Operations, 10Patch-For-Review: Renew unified certificates 2017 - https://phabricator.wikimedia.org/T178173 (10Krenair) @bblack looks like this one should be closed?
[10:14:23] 10HTTPS, 10Traffic, 10Operations, 10fundraising-tech-ops: Re-evaluate use of EV certificates for payments.wm.o? - https://phabricator.wikimedia.org/T204931 (10Krenair)
[10:14:52] 10HTTPS, 10Traffic, 10Operations, 10fundraising-tech-ops: Re-evaluate use of EV certificates for payments.wm.o? - https://phabricator.wikimedia.org/T204931 (10Krenair)
[10:22:49] 10HTTPS, 10Traffic, 10Operations, 10fundraising-tech-ops: Re-evaluate use of EV certificates for payments.wm.o? - https://phabricator.wikimedia.org/T204931 (10Krenair) Just to emphasise, it's not doing anything special on Chrome on my Android phone, and the article linked above shows similar things on some...
[10:36:34] 10HTTPS, 10Traffic, 10Operations, 10fundraising-tech-ops: Re-evaluate use of EV certificates for payments.wm.o? - https://phabricator.wikimedia.org/T204931 (10Krenair)
[11:54:23] 10Wikimedia-Apache-configuration, 10Wikidata, 10Wikimedia-Site-requests, 10wikidata-tech-focus, and 4 others: wikidata.org/entity/Q12345 should do content negotiation immediately, instead of redirecting to wikidata.org/wiki/Special:EntityData/Q36661 first - https://phabricator.wikimedia.org/T119536 (10Addsh...
[11:55:58] 10Wikimedia-Apache-configuration, 10Wikidata, 10Wikimedia-Site-requests, 10wikidata-tech-focus, and 4 others: wikidata.org/entity/Q12345 should do content negotiation immediately, instead of redirecting to wikidata.org/wiki/Special:EntityData/Q36661 first - https://phabricator.wikimedia.org/T119536 (10Addsh...
[14:25:26] gdnsd upgrade 2.99.9 -> 2.99.42: code replacement with zero lost requests and no downtime for the systemd unit: https://phabricator.wikimedia.org/P7574
[14:25:35] (log messages about this stuff cleaned up since last time around)
[15:07:29] bblack, I've been wondering about this
[15:07:34] isn't the majority of DNS traffic over UDP?
[15:12:47] nice
[15:14:49] Krenair: it is
[15:15:49] bblack, so some packet loss is expected. I guess we're doing this for the TCP queries?
[15:16:09] some packet loss is always possible, but the goal is to minimize it
[15:16:31] if you lose a UDP DNS request to your authserver, some remote cache somewhere has to time that out and re-send the packet to the same or another authserver, wasting everyone's time.
[15:17:40] with a simple stop->start of one of our authdns servers, the process takes ~3s, and in that time we lose ~3.4K UDP requests, and then there's a small spike of excess requests appearing around the same time at the other two; that's the normal pattern.
[15:19:07] in the pre-systemd world, we had gdnsd's old overlapped restart, which would avoid the major ~2-3s outage of the service, but used SO_REUSEPORT to overlap the sockets, and thus still would lose some UDP requests that were stuck in the buffers of the old sockets.
[15:19:41] systemd broke that mechanism. the new overlapped-replacement mechanism (a) works with systemd and (b) uses SCM_RIGHTS to handoff instead of the SO_REUSEPORT method, so there's no buffer loss either.
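The handoff bblack describes is implemented inside gdnsd itself (in C); the following is only a minimal Python sketch of the general SCM_RIGHTS mechanism, with a hypothetical rendezvous path. The old process passes its already-bound socket fd to its replacement over a Unix-domain socket, so requests sitting in the kernel buffers are never dropped:

```python
# Illustration (not gdnsd's actual code) of an SCM_RIGHTS listener handoff.
# Requires Python 3.9+ for socket.send_fds/recv_fds. Error handling and
# stale-socket cleanup are omitted.
import socket

HANDOFF_PATH = "/run/dns-handoff.sock"  # hypothetical rendezvous path

def send_listener(dns_sock: socket.socket) -> None:
    """Old process: hand the bound DNS socket to the new process."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as ctrl:
        ctrl.connect(HANDOFF_PATH)
        # SCM_RIGHTS: the kernel duplicates the fd into the peer process.
        socket.send_fds(ctrl, [b"dns-udp"], [dns_sock.fileno()])

def recv_listener() -> socket.socket:
    """New process: adopt the already-bound socket and keep serving."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(HANDOFF_PATH)
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            _, fds, _, _ = socket.recv_fds(conn, 1024, 1)
    # Rewrap the inherited fd: it refers to the same kernel socket object,
    # buffers and all, so no in-flight request is lost in the swap.
    return socket.socket(fileno=fds[0])
```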
[15:19:52] :/
[15:19:59] so with this we just see an increase in latency briefly while the handover happens?
[15:20:08] no increase in latency
[15:20:13] it's like it didn't happen
[15:20:27] interesting
[15:22:05] all the same applies to TCP too. The old model (even pre-systemd) just immediately closed open/idle/in-progress TCP conns, and lost any new-connection SYNs in the old buffers.
[15:22:28] ew
[15:22:35] the new stuff finishes in-progress TCP transactions and closes them at their next idle-point between requests, and doesn't lose any pending SYNs.
[15:22:39] yeah
[15:22:47] this is obviously superior
[15:22:52] TCP is rarely used, certainly not for browser traffic anyways
[15:23:00] I guess I just wondered if the amount of work is worth the gain
[15:23:42] eventually though, I think we'll see more TCP DNS usage. Standards are moving in that direction even for the cache->auth leg, for busy authservers and recursors.
[15:24:05] (e.g. google dns recursor nodes -> wikimedia authservers, might have long-idle connections that get reused a lot)
[15:24:30] right
[15:24:35] yeah
[15:24:51] but for now TCP DNS usage is statistically tiny, and most connections are single-request connections.
[15:25:59] (~0.03% of reqs to our authdns happen over TCP)
[15:27:10] the TCP usage we do see is mostly occasional automated debug/test traffic, mail-related things looking at MX + SPF + DMARC, and mail-related or other clients doing ANY-queries (all ANY-queries are forced over to TCP, to avoid amplification)
[15:28:26] but anyways, our TCP DNS is now really robust, whenever the world gets around to using it. It may form the basis of our future authserver support for DNS-over-TLS too, whenever that's standardized for authservers.
[15:29:09] (and semi-relatedly: the ACME spec recommends CAs query the dns-01 challenge over TCP too, to reduce spoofing, since it's not perf-critical to waste the RTTs setting up the connection)
[15:30:25] I'm surprised it doesn't require it
[15:31:57] well I think they didn't want to prescribe specifically how to make requests secure, since it might clash with future DNS RFCs
[15:33:37] [they currently recommend: DNSSEC if possible, querying from multiple global vantage points, using TCP, and/or using the 0x20 hack to add more entropy to the 16-bit query ID]
[15:33:50] I suppose
[15:33:57] which is the first time I think I've seen anyone mention the 0x20 hack since the draft for it expired long ago without action.
[15:34:44] (which is, in a nutshell: dns names are supposed to be case-insensitive, so use pseudo-random case-bits in the query name and match them in the response, to add more bits of QID entropy vs the standard stupid 16-bit field)
[15:34:53] I'm not familiar with the 0x20 hack?
[15:34:59] ah
[15:35:03] (which works with many/most servers, but there are some dumb ones out there that strip the case-bits, so it never made standard)
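A minimal sketch of the 0x20 trick just described, with illustrative helper names (this is the general idea, not any particular resolver's implementation):

```python
# Sketch of "dns0x20": encode extra entropy in the case bits of the query
# name, then require the response to echo exactly the same casing.
import secrets

def randomize_case(qname: str) -> str:
    """Flip each letter to a pseudo-random case (0x20 is the ASCII case bit)."""
    return "".join(
        c.upper() if c.isalpha() and secrets.randbits(1) else c.lower()
        for c in qname
    )

def response_matches(sent_qname: str, echoed_qname: str) -> bool:
    """A forged response that guesses the 16-bit QID still has to guess one
    case bit per letter; on mismatch the answer is discarded."""
    return sent_qname == echoed_qname  # exact, case-sensitive comparison

# e.g. randomize_case("en.wikipedia.org") -> "eN.WikIPedIa.oRg" or another
# of the 2^14 variants (14 letters), on top of the 16-bit query ID.
```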
[15:35:51] presumably somewhere there are regulations that publicly trusted CAs have to follow to secure challenge verification?
[15:36:01] eventually, hopefully, DoTLS will be on that list. If DPRIVE gets around to standardizing it, and if the standard doesn't obviate much of its own benefit by relying on DNSSEC itself.
[15:36:38] Krenair: no idea. I don't think CA/B mandates specific technology about things like that though
[15:36:54] :S
[15:37:42] (come to think of it, they probably don't have any standards about ACME specifically, only about DV in general, which are known to be pretty weak)
[15:37:50] strikes me as a weak point in the DV process
[15:37:53] yeah
[15:38:07] but CAA fixes a lot of that too
[15:38:50] do we have any monitoring of CT logs for our names?
[15:38:55] we do
[15:39:13] faidon made a nice script a while back, it emails us when there are new ones
[15:39:26] cool
[15:39:37] I think it's disabled :/
[15:40:45] oh?
[15:41:17] oh yeah there's no mail from it since like April, wth
[15:41:26] yeah it was noisy at some point and got disabled or something
[15:41:40] I guess we should make a task about cleaning it up
[15:41:52] is that because of all the LE stuff renewing?
[15:42:03] no, I think older version of certspotter
[15:42:08] no, I think a lot of its spam was just CT log server failures being reported as cronspam
[15:42:17] certspotter made the somewhat odd decision of hardcoding the CT log servers into the source
[15:42:26] so every time one goes offline for whatever reason
[15:42:32] cron kept emailing stuff like: /usr/bin/certspotter: ctlog.wosign.com: 2018/04/14 07:21:34 Error retrieving STH from log: Get https://ctlog.wosign.com/ct/v1/get-sth: dial tcp 36.110.213.36:443: i/o timeout
[15:42:32] you need a new version that disables that
[15:42:48] yeah that's https://github.com/SSLMate/certspotter/commit/418ef7fd9709eebcc885908d601a9a6e426b2a94
[15:44:17] certspotter 0.9 is in sid, I haven't built a stretch-backports version yet though
[15:44:27] slipped my mind really
[15:45:05] certspotter has support for running a hook too, could that be integrated with certcentral?
[15:45:19] a hook to check if the new cert was legitimately requested by us?
[15:45:48] well
[15:45:49] no, instead of mailing that data, it can just invoke whatever executable you want with that information in the environment
[15:45:52] couple of problems around that
[15:46:12] so you could feed that in to certcentral and yeah, see if it was requested by us or something?
[15:46:16] I assume the idea would be to ignore certs issued by certcentral itself?
[15:46:19] I don't know, I know very little about certcentral :)
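A sketch of what such a hook could look like. certspotter invokes the configured script with details about the discovered cert in its environment; the variable name and the check_known_cert() helper below are illustrative assumptions, not certspotter's documented interface:

```python
#!/usr/bin/env python3
# Sketch of a certspotter hook: instead of mailing every log entry, ask our
# own infrastructure whether the cert is one we issued, and only alert on
# the ones nobody recognizes.
import os
import sys

def check_known_cert(pem_path: str) -> bool:
    """Hypothetical: compare the cert against certcentral/puppet-known certs."""
    raise NotImplementedError  # see the decision-logic sketch further down

def main() -> int:
    cert_path = os.environ.get("CERT_FILENAME")  # assumed variable name
    if cert_path and check_known_cert(cert_path):
        return 0  # legitimately ours: stay silent
    # Unknown cert: fall through to alerting (mail, icinga, whatever).
    print(f"unrecognized certificate observed in CT log: {cert_path}",
          file=sys.stderr)
    return 1

if __name__ == "__main__":
    sys.exit(main())
```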
[15:47:50] the email reporting model didn't work very well with LE
[15:47:56] one problem would be that one DC's certcentral wouldn't recognise a cert issued by the other DC's certcentral
[15:48:05] another problem would be that frack and corp do their own thing
[15:48:30] we could have a blacklist for which it sends an email or ignores or whatever
[15:48:41] frack has very few certificates, corp has just one I think
[15:48:47] but yeah could also blacklist *.corp
[15:48:55] well, we could check with both certcentrals
[15:49:22] so
[15:49:24] one issue is that probably certcentral doesn't keep a log of everything it issued, only current certs?
[15:49:33] it would see a new cert be issued
[15:49:47] so if we had a quick re-issue over some error in the SAN list, by the time it shows up in certspotter the short-term wrong one might look fake to us
[15:49:48] could also have a combined model where if it's something certcentral knows it silences it, if not it emails us
[15:50:01] find out from all certcentral instances what certs they've got (live or not)
[15:50:11] the problem with certspotter as it was set up was that it was showing all kinds of 3-month renewals for LE certs and it was hard to keep track of that
[15:50:12] if the new cert is on the list it's ignored?
[15:50:15] right, we'd have to keep any replaced certs for some amount of time
[15:50:28] (maybe in some separate archive directory just for checking against)
[15:50:37] also the other certs we have in ops/puppet, so we could deploy them all and check against those as well
[15:50:50] bblack, I think you're right that it's likely just overwriting current certs
[15:51:15] so the script could check if the reported cert is not in the set deployed by puppet, and if it's not in certcentral-eqiad and if it's not in certcentral-codfw, and is not under *.corp and if so, alert :)
[15:51:17] (and we do have to match the signatures, or else we'd miss someone doing an illegitimate LE issue of a domain we also LE issue)
[15:52:08] "the set deployed by puppet" would be tricky, it would be simpler to just directly check the 2x certcentral's info
[15:52:21] I mean digicert
[15:52:27] and globalsign and whatever
[15:52:31] oh, right
[15:52:42] just the unified ones?
[15:52:49] well, there should be very few of the manual ones eventually
[15:52:50] well everything that we have in there, does it matter :)
[15:53:01] if it's in there, we issued it
[15:53:08] but those will always be tricky, because the current process has them showing up in CT logs before they get deployed to puppet usually
[15:53:27] oh because CAs would publish them as soon as they issue them, right
[15:53:34] yeah :/
[15:53:39] good point
[15:53:48] (last time I actually waited for that to happen, because I didn't want it going out before CT showed up, and I wanted to wait out reasonable client clock skew too)
[15:53:57] I think the latter part is a longer wait than the former
[15:54:18] in puppet != deployed though, that we could manage
[15:54:24] and something we haven't really considered on the LE side of things yet, because those certs are less-widely relied on as critically, for as many clients with awful clocks
[15:54:28] but yeah, there's time between issuance and a puppet commit
[15:55:09] [we should delay deployment of new replacement LE certs by some time window to avoid client clock issues. for the big unified we wait ~24h+. maybe not that long, but still]
[15:55:09] anyway, it could still email us about those
[15:55:19] they're so few and so rare that it doesn't matter much I guess
[15:55:29] but the "email me for every log entry" model doesn't play well with LE
[15:55:35] right
[15:55:39] so would this still be using certspotter?
[15:55:43] N amount of certs, each on a 3-month schedule, it's impossible to keep track of that manually
[15:55:56] wait till we get certs for all the junk domain wildcards :)
[15:56:03] yeah heh
[15:56:52] just taking the output of it and checking it against the known good stuff?
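The decision logic proposed at 15:51 could be sketched roughly like this. The source sets and ignore-list are illustrative, and per the point above, matching is by fingerprint rather than by name:

```python
# Sketch of the "is this cert ours?" check. The three source sets
# (puppet-deployed, certcentral-eqiad, certcentral-codfw) are assumed to be
# collected elsewhere; names here are illustrative.
from typing import Iterable, Set

IGNORED_SUFFIXES = (".corp.wikimedia.org",)  # corp/frack issue their own

def should_alert(fingerprint: str,
                 san_names: Iterable[str],
                 known_fps: Set[str]) -> bool:
    """True if a CT-logged cert is neither ours nor explicitly ignored."""
    if all(name.endswith(IGNORED_SUFFIXES) for name in san_names):
        return False  # corp's own issuance, tracked separately
    # Match by fingerprint rather than name: an illegitimate LE issuance
    # for a name we also LE-issue would otherwise look like ours.
    return fingerprint not in known_fps

# known_fps would be the union of the puppet-deployed set plus both
# certcentrals' live + archived certs, e.g.:
#   known_fps = puppet_fps | certcentral_fps("eqiad") | certcentral_fps("codfw")
```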
[15:56:56] this all reminds me, we still have a ticket about Expect-CT outstanding
[15:56:59] https://phabricator.wikimedia.org/T193521
[15:57:41] we need to audit that all our current LE certs (issued via the old system) have in fact renewed since LE started embedding SCT, and that they have it (maybe we're missing an attribute to ask for it)
[15:58:02] then maybe we can flip on Expect-CT at the caches, and/or via ssl_ciphersuite() for the one-offs, or something
[15:58:37] isn't CT expected for every cert nowadays?
[15:58:50] kinda, there's a brief explanation in the task
[15:58:59] "As of april 30, google chrome is enforcing certificate transparency on all new certs, but the header is needed to ensure an adversary doesn't backdate the cert to have an issue date prior to April 30."
[15:59:03] oh
[15:59:03] heh
[15:59:09] well
[15:59:17] LE validity is 90 days, right?
[15:59:32] yeah but that doesn't matter in this case
[15:59:35] and we're past 30 Apr + 90 days
[15:59:50] the adversary's backdated fake would probably come from some illegitimate CA
[15:59:55] no I mean
[16:00:04] we can expect all of our certs to be in CT now
[16:00:11] yes, but LE isn't required by any standard to embed SCT, only to log CT
[16:00:22] I haven't ever confirmed if they embed SCT by default
[16:00:29] or we need to request it with some attribute
[16:00:34] is Expect-CT expecting SCT? :)
[16:01:06] not technically, no
[16:01:06] or IOW, why does SCT matter?
[16:01:24] SCT solves the same problem for CT that OCSP Stapling solves for CRL lists
[16:01:47] oh so you mean it's a perf issue if LE doesn't do SCT
[16:01:50] a performant browser can't be secure and fast if it has to check a central 3rd-party resource every time you open a new page to a new domain or whatever.
[16:01:58] (and we set Expect-CT)
[16:01:59] got it :)
[16:02:52] it's a perf issue and technically a privacy leak issue too, much like CRL/OCSP without OCSP Stapling
[16:02:56] nod
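For the audit bblack mentions at 15:57, a sketch using the pyca/cryptography library (assuming a reasonably recent version is available) to test whether a PEM cert carries embedded SCTs:

```python
# Sketch: does this cert embed SCTs (the precertificate SCT list extension)?
import sys
from cryptography import x509

def has_embedded_scts(pem_path: str) -> bool:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    try:
        ext = cert.extensions.get_extension_for_class(
            x509.PrecertificateSignedCertificateTimestamps)
    except x509.ExtensionNotFound:
        return False
    return len(ext.value) > 0

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, "SCTs embedded" if has_embedded_scts(path) else "NO SCTs")
```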
[16:03:16] Certs issued by public CAs have a max validity time right? I presume once that amount of time has passed since the requirement coming into force, browsers can just remove the historical exemption and kill the backdating loophole? Do all other supported browsers require CT?
[16:03:34] which brings us around to another related topic: whether we should start adding OCSP Must-Staple to certificates too (for the LE case and/or the unified case)
[16:03:48] [yes, the Expect-CT hack is temporary, eventually at some future date it will be unnecessary]
[16:04:04] CA/B mandates max 2 year cert lifetimes, so it's like Apr 30 2020.
[16:04:14] https://cabforum.org/2017/03/17/ballot-193-825-day-certificate-lifetimes/ says "825 days"
[16:04:15] until 2020-05-01?
[16:04:23] don't ask me why
[16:04:28] couldn't make it simple could they
[16:04:44] heh, so 2y + 95d
[16:04:55] http://www.wolframalpha.com/input/?i=825+days+from+2018-04-30 says 2020-08-02
[16:05:06] well
[16:05:09] Subscriber Certificates issued after March 1, 2018 MUST have a Validity Period no greater than 825 days.
[16:05:14] anyways, OCSP Must-Staple is a nice-to-have property, but it also means conforming browsers will fail requests if OCSP stapling fails.
[16:05:28] that's fine in this case I think
[16:05:36] Subscriber Certificates issued after 1 July 2016 but prior to 1 March 2018 MUST have a Validity Period no greater than thirty-nine (39) months.
[16:05:42] I think for the big cache clusters and the unified, we're at or near the point where we can risk that, but for LE certs to one-off public service instances we don't even have standardized stapling config.
[16:06:11] paravoid: that's like 1190 days heh
[16:06:22] so, Expect-CT may be around a while
[16:06:55] we can disregard the old 39 month requirement for this purpose though right?
[16:07:19] if the CT requirement came into force after it was brought down to 825 days
[16:07:24] I don't think so
[16:08:01] the CT requirement exempts certs issued before Apr 30 2018 from checking CT, which would include those certs issued just before Mar 1 2018 with a 39 month life.
[16:08:26] so Mar 1 2018 + 39 months -> Expect-CT finally becomes useless
[16:08:27] oh right
[16:09:19] http://www.wolframalpha.com/input/?i=39+months+from+2018-03-01 says 2021-06-01
[16:10:04] at least, it becomes useless if browsers actually kill the backdating loophole on that day?
[16:11:18] yeah I guess, which they should, since CA/B says at that point no legitimate cert with a start date prior to Mar 1 could have been issued
[16:11:49] but they might rely on a code update to do it and people might use old browsers and bah
[16:11:53] of course at least some browser vendors will do that with a software update instead of embedding the timer for that in current builds, so we might have to wait another couple of years for users to upgrade :/
[16:13:13] safest option is probably to chuck it on and leave it forever, but it is just more stuff to send with every response
[16:13:48] we already send plenty of pointless headers, it will be a drop in the bucket! :)
[16:13:53] :/
[16:14:44] 1217 bytes of headers emitted in response to an anonymous curl request for https://en.wikipedia.org/ (which is just a 301 to Main_page)
[16:17:23] so let's use it for 4-5 years and re-evaluate
[16:20:48] bblack, we've covered a ton of different things, should turn this discussion into a bunch of tasks I think
[16:32:27] yeah, probably!
[16:33:52] off the top of my head: there's already a task for Expect-CT, there might be a task (or not!) about Must-Staple. It should probably have a subtask about puppetizing stapling for all the other one-off apache/nginx servers. certspotter should be fixed up (at least updated to spam less and re-enabled, later integrated to certcentral, etc?)
[16:34:37] there should probably be a (future, not this Q) task about using certcentral to delay deployment of new LE certs to wait out skewed client clocks.
[16:35:37] and about *certcentral* either logging hashes of everything it issued somewhere, or keeping certs it issued then deleted/replaced in an archive dir for a little while, for some future certspotter integration to check
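Relatedly, for the Must-Staple task idea above: Must-Staple is the RFC 7633 TLS Feature extension carrying status_request, and checking whether an existing cert already has it can be sketched the same way, again with pyca/cryptography:

```python
# Sketch: does this cert carry OCSP Must-Staple (TLS Feature: status_request)?
from cryptography import x509

def has_must_staple(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        ext = cert.extensions.get_extension_for_class(x509.TLSFeature)
    except x509.ExtensionNotFound:
        return False
    return x509.TLSFeatureType.status_request in ext.value
```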
[16:37:37] s/new LE certs/renewed LE certs/, I guess it doesn't make sense to delay an initial deploy
[17:14:33] 10netops, 10Operations, 10ops-eqiad: Rack/setup cr2-eqord - https://phabricator.wikimedia.org/T204170 (10ayounsi)
[17:52:48] 10Traffic, 10Operations: Consider adding Must-Staple header to enforce revocation checking - https://phabricator.wikimedia.org/T204987 (10Krenair)
[17:53:00] 10Traffic, 10Operations: Consider adding Must-Staple header to enforce revocation checking - https://phabricator.wikimedia.org/T204987 (10Krenair) https://scotthelme.co.uk/designing-a-new-security-header-expect-staple/
[17:53:21] 10HTTPS, 10Traffic, 10Operations: Consider adding Must-Staple header to enforce revocation checking - https://phabricator.wikimedia.org/T204987 (10Krenair)
[17:53:47] 10Traffic, 10Operations: Consider adding Must-Staple header to enforce revocation checking - https://phabricator.wikimedia.org/T204987 (10Krenair)
[17:54:55] 10Traffic, 10Operations: Puppetise OCSP stapling for all one-off HTTPS servers - https://phabricator.wikimedia.org/T204992 (10Krenair)
[17:58:53] 10Traffic, 10Operations: Update certspotter - https://phabricator.wikimedia.org/T204993 (10Krenair)
[18:01:36] 10Traffic, 10Operations: Integrate certspotter with certcentral to avoid certspotter notifying us on legitimate certs generated by our certcentral boxes - https://phabricator.wikimedia.org/T204994 (10Krenair)
[18:03:10] 10Traffic, 10Operations: Update certspotter - https://phabricator.wikimedia.org/T204993 (10Krenair)
[18:03:13] 10Traffic, 10Operations: Integrate certspotter with certcentral to avoid certspotter notifying us on legitimate certs generated by our certcentral boxes - https://phabricator.wikimedia.org/T204994 (10Krenair)
[18:03:16] 10Traffic, 10Operations, 10Goal, 10Patch-For-Review: Deploy a scalable service for ACME (LetsEncrypt) certificate management - https://phabricator.wikimedia.org/T199711 (10Krenair)
[18:05:58] 10Traffic, 10Operations: Integrate certspotter with certcentral to avoid certspotter notifying us on legitimate certs generated by our certcentral boxes - https://phabricator.wikimedia.org/T204994 (10Krenair) We'd still get stuff being issued from *.corp.wikimedia.org and frack but these are all manual AFAIK (...
[18:09:07] 10Traffic, 10Operations: Consider adding expect-CT: header to enforce certificate transparency - https://phabricator.wikimedia.org/T193521 (10Krenair) This was discussed in #wikimedia-traffic today. Even though theoretically the header would be useless past 2021-06-01 (when the last publicly trusted certs issu...
[18:11:57] 10Traffic, 10Operations: certcentral: delay deployment of renewed certs to wait out skewed client clocks - https://phabricator.wikimedia.org/T204997 (10Krenair)
[18:12:13] 10Traffic, 10Operations: certcentral: delay deployment of renewed certs to wait out skewed client clocks - https://phabricator.wikimedia.org/T204997 (10Krenair)
[18:12:16] 10Traffic, 10Operations, 10Goal, 10Patch-For-Review: Deploy a scalable service for ACME (LetsEncrypt) certificate management - https://phabricator.wikimedia.org/T199711 (10Krenair)
[18:12:55] bblack, ^ done most
[18:12:59] question about the last one
[18:13:20] if we do go down the route of logging hashes from certcentral only
[18:13:32] don't we still want an archive of previously used certs regardless?
[18:13:50] I don't know, good question
[18:14:13] I think we can revoke from the account key, so we don't necessarily need any of it for revocation
[18:14:23] but it might be helpful to know what we're revoking
[18:15:10] maybe when an existing cert is replaced by a new one, it should just be moved off to some archival directory on the certcentral host, and something should clean those up N days after they actually expire
[18:15:20] that would serve any weird case where we wanted to manually roll back or something, too
[18:15:51] and then the checker can scan the active set + archive, to confirm a new CT log entry
[18:17:02] are there any cases where we might want to revoke everything previously issued?
[18:17:11] even if we don't happen to have a copy somewhere?
[18:18:01] I don't know. Maybe?
[18:19:02] we might find that hostX was compromised and we want to revoke any certs+keys issued to it, but we notice the timeline is: 1 - Compromise, 2 - new LE cert issued automatically, 3 - Notice + Revoke. In which case we have both a current+past set to revoke.
[18:20:33] so we wouldn't ever need anything other than the last live one
[18:24:18] yeah, I guess
[18:24:45] so long as they're expired we definitely don't need them, it's a simpler cutoff that doesn't take any operational scenario-thinking into account.
[18:27:22] bblack, other thing that can happen is a new cert being issued because we wanted to add a new SAN or remove an old SAN
[18:27:51] in such a case we may want to keep more than just the last live one :/
[18:28:08] might be easier to just write them all out to disk in an 'archive' directory with a timestamp bblack :)
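A sketch of that timestamped-archive idea, combined with the expiry-based cleanup from 18:15 (the paths and grace period are illustrative, not certcentral's actual layout):

```python
# Sketch: on replacement, move the old cert into an archive directory with a
# timestamp prefix; periodically purge archived certs expired > GRACE ago.
import shutil
import time
from datetime import datetime, timedelta
from pathlib import Path
from cryptography import x509

ARCHIVE = Path("/var/lib/certcentral/archive")  # hypothetical location
GRACE = timedelta(days=30)

def archive_cert(live_path: Path) -> Path:
    """Called just before a renewed cert overwrites the old one."""
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    dest = ARCHIVE / f"{int(time.time())}-{live_path.name}"
    shutil.move(str(live_path), dest)
    return dest

def purge_expired() -> None:
    # The expiry date is inside the cert itself, so no extra metadata needed.
    for path in ARCHIVE.glob("*.pem"):
        cert = x509.load_pem_x509_certificate(path.read_bytes())
        if datetime.utcnow() - cert.not_valid_after > GRACE:
            path.unlink()
```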
[18:30:23] the expiry date is inside the cert anyways, a periodic scanner can kill ones that are actually-expired without any additional metadata
[18:31:52] true
[18:32:09] dunno if we really need that
[18:33:16] bblack, so https://phabricator.wikimedia.org/T204994 can be about the hashes for the purposes of certspotter, and I'll make another for having certcentral copy old certs into an archive?
[18:34:33] well, having some kind of log of the hashes for certspotter is for the case where we don't have the certs themselves
[18:34:49] if we end up archiving them, the certspotter-checking integration can just scan the archive + live sets
[18:35:16] (and in case we're restarting log-scanning and looking at old entries, ignore CT log entries for expired certs)
[18:37:01] bblack, so we'd just stick the archive somewhere accessible via the local (to the certcentral machine) webserver?
[18:38:05] maybe we have certcentral dump everything into the archive and then an API route that can be used to get all known hashes
[18:44:25] yeah, I have no idea what the integration will look like
[18:45:08] but a simplistic-yet-inefficient approach would just be to have the certspotter notification hook ssh over to the 2x certcentral machines and run "scan-certs-for.py <hash>" that just looks at the local archive+live directories on each
[18:45:43] or yeah, we could export a list of hashes periodically to speed it up, or even offer an API to scan the archive+live set for a hash, or whatever
[18:49:12] yeah I prefer having an API for it
[18:49:52] no need to introduce any additional SSH access into the certcentral machine
[18:50:33] can just add code to certcentral to expose the hashes, and give the certspotter machine the URLs
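A sketch of such an API route. Flask here is purely a stand-in for whatever HTTP layer certcentral actually ends up with, and the paths and route name are illustrative:

```python
# Sketch: a hypothetical certcentral endpoint returning the SHA-256
# fingerprints of every cert it knows about, live or archived, for the
# certspotter hook on the other machine to query.
from pathlib import Path
from flask import Flask, jsonify
from cryptography import x509
from cryptography.hazmat.primitives import hashes

app = Flask(__name__)
CERT_DIRS = [Path("/var/lib/certcentral/live"),
             Path("/var/lib/certcentral/archive")]  # hypothetical paths

def all_fingerprints() -> list:
    fps = []
    for directory in CERT_DIRS:
        for path in directory.glob("*.pem"):
            cert = x509.load_pem_x509_certificate(path.read_bytes())
            fps.append(cert.fingerprint(hashes.SHA256()).hex())
    return fps

@app.route("/known-cert-hashes")
def known_cert_hashes():
    # The checker fetches this from both DCs and alerts only on
    # fingerprints neither certcentral recognizes.
    return jsonify(all_fingerprints())
```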
[18:58:39] bblack, paravoid: think we're missing anything that was discussed?
[18:59:06] he's probably gone for the day
[18:59:16] I think if we did miss anything, this is still plenty
[18:59:34] defining a bunch of tasks for things we haven't gotten around to doing is half the battle, but we already have lots of them :)
[18:59:44] 10Traffic, 10Operations: Consider adding expect-CT: header to enforce certificate transparency - https://phabricator.wikimedia.org/T193521 (10Krenair) we need to audit that all our current LE certs (issued via the old system) have in fact renewed since LE started embedding SCT, and that they have it (...
[21:55:01] 10Traffic, 10Community-Tech, 10MediaWiki-Parser, 10Operations: Show SVGs in wiki language if available - https://phabricator.wikimedia.org/T205040 (10MaxSem)
[22:00:44] bblack, btw have you seen https://phabricator.wikimedia.org/T204931 ?
[22:00:56] I was wondering what you made of EV certs