bob1029 3 days ago

What's the end game here? I agree with the dissent. Why not make it 30 seconds?

Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours? I am willing to bet money this threshold will never be crossed.

This feels like much more of an ideological mission than a practical one, unless I've missed some monetary/power advantage to forcing everyone to play musical chairs with their entire infra once a month...

  • mcpherrinm 3 days ago

    I'm on the team at Let's Encrypt that runs our CA, and would say I've spent a lot of time thinking about the tradeoffs here.

    Let's Encrypt has always self-imposed a 90-day limit, though of course with this ballot passing we will have to reduce that to 47 days or less in the future.

    Shorter lifetimes have several advantages:

    1. Reduced pressure on the revocation system. For example, if a domain changes hands, then any previous certificates spend less time in the revoked state. That makes CRLs smaller, a win for everyone involved.

    2. Reduced risk from certificates which aren't revoked but should have been, perhaps because a domain holder didn't know that a previous holder of that domain still had a valid certificate for it, or because an attack of some sort led to a certificate being issued that wasn't desired.

    3. For fully short-lived certs (under 7 days), many user-agents don't do revocation checks at all, because that's a similar timeline to our existing revocation technology taking effect. This is a performance win for websites/user-agents. While we advocate for full certificate automation, I recognize there are cases where that's not so easy, and doing a monthly renewal may be much more tractable.

    Going to shorter than a few days is a reliability and scale risk. One of the biggest issues with scale today is that Certificate Transparency logs, while providing great visibility into what certs exist (see points 1 and 2), will have to scale up significantly as lifetimes are cut.

    Why is this happening now, though? I can't speak for everyone, and this is only my own opinion on what I'm observing, but: One big industry problem that's been going on for the last year or two is that CAs have found themselves in situations where they need to revoke certificates because of issues with those certificates, but customers aren't able to respond on an appropriate timeline. So the big motivation for a lot of the parties here is to get these timelines down and really drive a push towards automation.
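
    Below is a minimal sketch of the kind of expiry check that renewal automation or monitoring boils down to, using only Python's standard library. The hostname and the 14-day threshold are placeholders, not anything Let's Encrypt prescribes.

      import datetime, socket, ssl

      def days_remaining(host: str, port: int = 443) -> int:
          # Connect, fetch the served certificate, and parse its notAfter date.
          ctx = ssl.create_default_context()
          with socket.create_connection((host, port), timeout=10) as sock:
              with ctx.wrap_socket(sock, server_hostname=host) as tls:
                  cert = tls.getpeercert()
          expires = datetime.datetime.utcfromtimestamp(
              ssl.cert_time_to_seconds(cert["notAfter"]))
          return (expires - datetime.datetime.utcnow()).days

      if days_remaining("example.com") < 14:
          print("renew now (or find out why the automation hasn't)")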

    • 0xbadcafebee 3 days ago

      So it's being pushed because it'll be easier for a few big players in industry. Everybody else suffers.

      • da_chicken 3 days ago

        It's a decision by Certificate Authorities, the ones that sell TLS certificate services, and web browser vendors. One benefits from increased demand on their product, while the other benefits by increasing the overhead on the management of their software, which increases the minimum threshold to be competitive.

        There are security benefits, yes. But as someone that works in infrastructure management, including on 25 or 30 year old systems in some cases, it's very difficult to not find this frustrating. I need tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.

        Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.

        I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.

        • bigstrat2003 2 days ago

          It is continuously frustrating to me to see the arrogant dismissiveness which people in charge of such technical groups display towards the real world usage of their systems. It's some classic ivory tower "we know better than you" stuff, and it needs to stop. In the real world, things are messy and don't conform to the tidy ideas that the Chrome team at Google has. But there's nothing forcing them to wake up and face reality, so they keep making things harder and harder for the rest of us in their pursuit of dogmatic goals.

          • cm2187 2 days ago

            An example of that was the committee's dismissal of privacy concerns about embedding MAC addresses in IPv6 addresses.

            • dcow 2 days ago

              Aren’t temporary addresses a thing now?

        • kcb 3 days ago

          It astounds me that there's no non-invasive local solution for going to my router's or any other appliance's web page without my browser throwing warnings and calling it evil. Truly a fuck up (purposeful or not) by all involved in creating the standards. We need local TLS without the hoops.

          • 0xbadcafebee 2 days ago

            Simplest possible, least invasive, most secure thing I can think of: QR code on the router with the CA cert of the router. Open cert manager app on laptop/phone, scan QR code, import CA cert. Comms are now secure (assuming nobody replaced the sticker).

            The crazy thing? There are already two WiFi QR code standards, but they do not include the CA cert. There's a "Wi-Fi Easy Connect" standard that is intended to secure the network for an enterprise, and there's a random Java QR code library that made its own standard for just encoding an access point and WPA shared key (and Android and iOS both adopted it, so now it's a de-facto standard).

            End-user security wasn't a consideration for either of them. With the former they only cared about protecting the enterprise network, and with the latter they just wanted to make it easier to get onto a non-Enterprise network. The user still has to fend for themselves once they're on the network.
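
            To make that concrete, a rough sketch (not an existing standard): put the router's CA certificate itself into a QR code so a laptop or phone can import it without touching the network first. This assumes the third-party Python "qrcode" package; the file names are placeholders.

              import qrcode

              # A PEM-encoded cert is roughly 1-2 KB, which still fits in a large QR
              # code; encoding only its SHA-256 fingerprint would scan more easily.
              with open("router-ca.pem") as f:
                  ca_pem = f.read()

              qrcode.make(ca_pem).save("router-ca-qr.png")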

            • varjag 2 days ago

              This is a terrible solution. Now you require an Internet connection and a (non-abandoned) third party service to configure a LAN device. Not to mention countless industrial devices where operators would typically have no chance to see a QR code.

              • 0xbadcafebee 2 days ago

                The solution I just mentioned specifically avoids an internet connection or third parties. It's a self-signed cert you add to your computer's CA registry. 100% offline and independent of anything but your own computer and the router. The QR code doesn't require an internet connection. And the first standard I mentioned was designed for industrial devices.

                • xorcist 2 days ago

                  Not only would that set a questionable precedent if users learn to casually add new trust roots, it would also need support for new certificate extensions to limit validity to that device only. It's far from obvious that would be a net gain for Internet security in general.

                  It might be easier to extend the URL format with support for certificate fingerprints. It would only require support in web browsers, which are updated much faster than operating systems. It could also be made in a backwards compatible way, for example by extending the username syntax. That way old browsers would continue to show the warning and new browsers would accept the self signed URL format in a secure way.

            • lazide 2 days ago

              Have you seen the state of typical consumer router firmwares? Security hasn’t been a serious concern for a decade plus.

              They only stopped using global default passwords because people were being visibly compromised on the scale of millions at a time.

              • cpach 2 days ago

                Good point. There are exceptions though. Eero, for example.

            • dcow 2 days ago

              Your router should use ACME with a your-slug.network.home domain name (a communal suffix would be nice, but more realistically some vendor-specific domain suffix that you could CNAME), and then you should access it via that name, locally. Ideally your router would run split-brain DNS for your network. If you want, you can check a box and make everything available globally via DNS-SD.

            • abtinf 2 days ago

              Wouldn't that allow the router to MITM all encrypted data that goes through it?

              • 0xbadcafebee 21 hours ago

                If it were a CA cert, yes. It could instead be a self-signed server (non-CA) cert that couldn't be used for requests to anything else.

          • GabeIsko 3 days ago

            All my personal and professional feelings aside (they are mixed) it would be fascinating to consider a subnet based TLS scheme. Usually I have to bang on doors to manage certs at the load balancer level anyway.

          • hamburglar 2 days ago

            I’ve actually put a decent amount of thought into this. I envision a raspberry pi sized device, with a simple front panel ui. This serves as your home CA. It bootstraps itself with a generated key and root cert and presents on the network using a self-issued cert signed by the bootstrapped CA. It also shows the root key fingerprint on the front panel. On your computer, you go to its web UI and accept the risk, but you also verify the fingerprint of the cert issuer against what’s displayed on the front panel. Once you do that, you can download and install your newly trusted root. Do this on all your machines that want to trust the CA. There’s your root of trust.

            Now for issuing certs to devices like your router, there’s a registration process where the device generates a key and requests a cert from the CA, presenting its public key. It requests a cert with a local name like “router.local”. No cert is issued but the CA displays a message on its front panel asking if you want to associate router.local with the displayed pubkey fingerprint. Once you confirm, the device can obtain and auto renew the cert indefinitely using that same public key.

            Now on your computer, you can hit local https endpoints by name and get TLS with no warnings. In an ideal world you’d get devices to adopt a little friendly UX for choosing their network name and showing the pubkey to the user, as well as discovering the CA (maybe integrate with dhcp), but to start off you’d definitely have to do some weird hacks.
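
            For anyone curious what those two steps amount to in code, here's a minimal sketch (self-signed home root CA, then a "router.local" cert issued against the router's public key) using the Python "cryptography" package. Names, lifetimes, and key types are placeholders, and the key storage and front-panel confirmation are left out.

              import datetime
              from cryptography import x509
              from cryptography.x509.oid import NameOID
              from cryptography.hazmat.primitives import hashes
              from cryptography.hazmat.primitives.asymmetric import ec

              now = datetime.datetime.utcnow()

              # 1. Bootstrap the home CA: key pair plus self-signed root cert.
              ca_key = ec.generate_private_key(ec.SECP256R1())
              ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home Root CA")])
              ca_cert = (
                  x509.CertificateBuilder()
                  .subject_name(ca_name).issuer_name(ca_name)
                  .public_key(ca_key.public_key())
                  .serial_number(x509.random_serial_number())
                  .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=3650))
                  .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
                  .sign(ca_key, hashes.SHA256())
              )

              # 2. After the user confirms the pairing on the front panel, issue a cert
              #    against the public key the router presented (stand-in key pair here).
              router_key = ec.generate_private_key(ec.SECP256R1())
              router_cert = (
                  x509.CertificateBuilder()
                  .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "router.local")]))
                  .issuer_name(ca_name)
                  .public_key(router_key.public_key())
                  .serial_number(x509.random_serial_number())
                  .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=30))
                  .add_extension(x509.SubjectAlternativeName([x509.DNSName("router.local")]), critical=False)
                  .sign(ca_key, hashes.SHA256())
              )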

            • varjag 2 days ago

              This is an incredibly convoluted scenario for a use case with near-zero chance of a MITM attack. Security ops is cancer.

            • meindnoch 2 days ago

              Please tell me this is satire.

              • hamburglar a day ago

                What can I say, I am a pki nerd and I think the state of local networking is significantly harmed by consumer devices needing to speak http (due to modern browsers making it very difficult to use). This is less about increasing security and more about increasing usability without also destroying security by coaching people to bypass cert checks. And as home networks inevitably become more and more crowded with devices, I think it will be beneficial to be able to strongly identify those devices from the network side without resorting to keeping some kind of inventory database, which nobody is going to do.

                It also helps that I know exactly how easy it is to build this type of infrastructure because I have built it professionally twice.

          • UltraSane 2 days ago

            Why should your browser trust the router's self-signed certificate? After you verify that it is the correct cert you can configure Firefox or your OS to trust it.

            • lxgr 2 days ago

              Because local routers by definition control the (proposed?) .internal TLD, while nobody controls the .local mDNS/Zeroconf one, so the router or any local network device should arguably be trusted at the TLS level automatically.

              Training users to click the scary “trust this self-signed certificate once/always” button won’t end well.

              • da_chicken 2 days ago

                Honestly, I'd just like web browsers to not complain when you're connecting to an IP on the same subnet by entering https://10.0.0.1/ or similar.

                Yes, it's possible that the system is compromised and it's redirecting all traffic to a local proxy and that it's also malicious.

                It's still absurd to think that the web browser needs to make the user jump through the same hoops because of that exceptional case, while having the same user experience as if you just connected to https://bankofamerica.com/ and the TLS cert isn't trusted. The program should be smarter than that, even if it's a "local network only" mode.
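
                To illustrate the distinction a "local network only" mode could key off: private and link-local ranges are already well defined, so classifying the target address is cheap. A sketch of the heuristic with Python's standard library, not how any browser actually behaves:

                  import ipaddress

                  for addr in ["10.0.0.1", "192.168.1.1", "fe80::1", "8.8.8.8"]:
                      ip = ipaddress.ip_address(addr)
                      local = ip.is_private or ip.is_link_local
                      print(addr, "->", "local-ish" if local else "public")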

                • UltraSane 2 days ago

                  Certificates protect against man-in-the-middle attacks, and those are a thing on local networks.

          • fiddlerwoaroof 3 days ago

            I wonder what this would look like: for things like routers, you could display a private root in something like a QR code in the documentation and then have some kind of protocol for only trusting that root when connecting to the router and have the router continuously rotate the keys it presents.

            • da_chicken 3 days ago

              Yeah, what they'll do is put a QR code on the bottom, and it'll direct you to the app store where they want you to pay them $5 so they can permanently connect to your router and gather data from it. Oh, and they'll let you set up your WiFi password, I guess.

              That's their "solution".

        • nine_k 2 days ago

          I wonder if a separate CA would be useful for non-public-internet TLS certificates. Imagine a certificate that won't expire for 25 years issued by it.

          Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/16, or on link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in the usual invalid certificate warning if served off a public internet address.

          In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.

          Of course it would only make sense if a major browser trusted this special CA by default. That is, Google is in a position to introduce it. I wonder if they have any incentive though. (To say nothing of Apple.)

          • jabiko 2 days ago

            But what would be the value of such a certificate over a self-signed one? For example, if the ACME Router Corp uses this special CA to issue a certificate for acmerouter.local and then preloads it on all of its routers, it will sooner or later be extracted by someone.

            So in a way, a certificate the device generates and self-signs would actually be better, since at least the private key stays on the device and isn’t shared.

            • nine_k 2 days ago

              The value: you open such a URL with a bog-standard, just-installed browser, and the browser does not complain about the certificate being suspicious.

              The private key of course stays within the device, or anywhere the certificate is generated. The idea is that the CA from which the certificate is derived is already trusted by the browser, in a special way.

              • procaryote 2 days ago

                Compromise one device, extract the private key, have a "trusted for a very long time" cert that identifies like devices of that type, sneak it into a target network for man in the middle shenanigans.

                • dcow 2 days ago

                  If someone does that you’ve already been pwned. In reality you limit the CA to be domain scoped. I don’t know why domain-scoped CAs aren’t a thing.

                  • jabiko 2 days ago

                    > If someone does that you’ve already been pwned

                    Then why use encryption at all when your threat model for encrypted communication can't handle a malicious actor on the network?

                    • mjmas 2 days ago

                      Because there are various things in HTML and JS that require https.

                      (Though getting the browser to just assume http to local domains is secure like it already does for http://localhost would solve that)

        • JackSlateur 2 days ago

          Yes

          Old cruft dying there for decades

          That's the reality and that's an issue unrelated to TLS

          Running unmanaged compute at home (or elsewhere ..) is the issue here.

        • benlivengood 2 days ago

          Frankly, unless 25- and 30-year-old systems are being continually updated to adhere to newer TLS standards, they are not getting many benefits from TLS.

          Practically, the solution is virtual machines with the compatible software you'll need to manage those older devices 10 years in the future, or run a secure proxy for them.

          Internet routers are definitely one of the worst offenders because originating a root of trust between disparate devices is actually a hard problem, especially over a public channel like wifi. Generally, I'd say the correct answer to this is that wifi router manufacturers need to maintain secure infrastructure for enrolling their devices. If manufacturers can't be bothered to maintain this kind of infrastructure then they almost certainly won't be providing security updates in firmware either, so they're a poor choice for an Internet router.

        • tptacek 2 days ago

          It is reasonable for the WebPKI of 2025 to assume that the Internet encompasses the entire scope of its problem.

        • ryao 2 days ago

          If the web browsers would adopt DANE, we could bypass CAs and still have TLS.

          • xorcist 2 days ago

            A domain validated secure key exchange would indeed be a massive step up in security, compared to the mess that is the web PKI. But it wouldn't help with the issue at hand here: home router bootstrap. It's hard to give these devices a valid domain name out of the box. Most obvious ways have problems either with security or user-friendliness.

      • tptacek 3 days ago

        Or, equivalently, it's being pushed because customers of "big players", of which there are a great many, are exposed to security risk by the status quo that the change mitigates.

      • mcpherrinm 3 days ago

        It makes the system more reliable and more secure for everyone.

        I think that's a big win.

        The root reason is that revocation is broken, and we need to do better to get the security properties we demand of the Web PKI.

        • zmmmmm 2 days ago

          > It makes the system more reliable

          It might in theory but I suspect it's going to make things very very unreliable for quite a while before it (hopefully) gets better. I think probably already a double digit fraction of our infrastructure outages are due to expired certificates.

          And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.

        • fiddlerwoaroof 3 days ago

          It makes systems more reliable and secure for system runners that can leverage automation for whatever reason. For the same reason, it adds a lot of barriers to things like embedded devices, learners, etc. who might not be able to automate TLS checks.

          • thayne 2 days ago

            Putting a manually generated cert on an embedded device is inherently insecure, unless you have complete physical control over the device.

            And as mentioned in other comments, the revocation system doesn't really work, and reducing the validity time of certs reduces the risks there.

            Unfortunately, there isn't really a good solution for many embedded and local-network cases. Ideally there would be an easy way to add a CA that is trusted only for a specific domain or local IP address; then the device could generate its own certs from a local CA. And/or a way to add trust for a self-signed cert with a longer lifetime.
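
            X.509 name constraints get reasonably close to the "CA trusted only for one domain" idea, although client support varies. A sketch with the Python "cryptography" package; the domain and lifetime are made up:

              import datetime
              from cryptography import x509
              from cryptography.x509.oid import NameOID
              from cryptography.hazmat.primitives import hashes
              from cryptography.hazmat.primitives.asymmetric import ec

              now = datetime.datetime.utcnow()
              key = ec.generate_private_key(ec.SECP256R1())
              name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "home.example CA")])
              ca = (
                  x509.CertificateBuilder()
                  .subject_name(name).issuer_name(name)
                  .public_key(key.public_key())
                  .serial_number(x509.random_serial_number())
                  .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=3650))
                  .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
                  # Only names under home.example may chain through this CA.
                  .add_extension(
                      x509.NameConstraints(
                          permitted_subtrees=[x509.DNSName("home.example")],
                          excluded_subtrees=None,
                      ),
                      critical=True,
                  )
                  .sign(key, hashes.SHA256())
              )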

            • fiddlerwoaroof 2 days ago

              This is a bad definition of security, I think. But you could come up with variations here that would be good enough for most home network use cases. IMO, being able to control the certificate on the device is a crucial consumer right

            • throwaway2037 2 days ago

              Real question: What is the correct way to handle certs on embedded devices? I never thought about it before I read this comment.

              • steve_gh 2 days ago

                There are many embedded devices for which TLS is simply not feasible. For remote sensing, when you are relying on battery power and need to maximise device battery life, then the power budget is critical. Telemetry is the biggest drain on the power budget, so anything that means spending more time with the RF system powered up should be avoided. TLS falls into this category.

                • dcow 2 days ago

                  Yes, but the question is about devices that can reasonably run TLS.

                  The answer is local acme with your router issuing certs for your ULA prefix or “home zone” domain.

                  • thayne 2 days ago

                    > The answer is local acme with your router issuing certs for your ULA prefix or “home zone” domain.

                    That would be nice. But most people don't have a router running an ACME server.

                    • dcow 2 days ago

                      It should become a thing

        • dd82 2 days ago

          And rather than fix the issues with revocation, it's being shuffled off to the users.

          Good example of enshittification

      • yellowapple 2 days ago

        "Suffer" is a strong word for those of us who've been using things like Let's Encrypt for years now without issue.

      • ignoramous 3 days ago

        Unless I misunderstood, GP mentions that the problem stems from WebPKI's central role in server identity management. Think of these cert lifetimes as forcefully being signed out after 47 days of being signed in.

        > easier for a few big players in industry

        Not necessarily. OP mentions that more certs would mean bigger CT logs. More frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately for you & me, as customers of cert authorities, 47 days is where the agreed cut-off now is (not 42).

    • klaas- 2 days ago

      I think a very short-lived cert (like 7 days) could be a problem with renewal errors/failures that don't self-correct but need manual intervention.

      What will Let's Encrypt be like with 7-day certs? Will it renew them every day (6-day reaction time), or every 3 days (4-day reaction time)? Not every org is staffed 24/7; some people go on holidays, some public holidays extend into long weekends, etc. :). I would argue that it would be a good idea to give people a full week to react to renewal problems. That seems impossible for short-lived certs.

      • iqandjoke 2 days ago

        It's like the Apple case: Apple already asks developers to re-sign their apps every 7 days. It should not be a problem.

        • kassner 2 days ago

          That’s only a thing if you are not publishing on the App Store, no?

          • dcow 2 days ago

            Correct. Or if you’re not using an enterprise distribution cert.

    • grey-area 3 days ago

      Since you’ve thought about it a lot, in an ideal world, should CAs exist at all?

      • mcpherrinm 3 days ago

        There's no such thing as an ideal world, just the one we have.

        Let's Encrypt was founded with a goal of rapidly (within a few years) helping get the web to as close to 100% encrypted as we could. And we've succeeded.

        I don't think we could have achieved that goal any way other than being a CA.

        • grey-area 2 days ago

          Sorry was not trying to be snarky, was interested in your answer as to what a better system would look like. The current one seems pretty broken but hard to fix.

      • Ajedi32 3 days ago

        In an ideal world where we rebuilt the whole stack from scratch, the DNS system would securely distribute key material alongside IP addresses and CAs wouldn't be needed. Most modern DNS alternatives (Handshake, Namecoin, etc) do exactly this, but it's very unlikely any of them will be usurping DNS anytime soon, and DNS's attempts to implement similar features have been thus far unsuccessful.
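
        DANE/TLSA is the closest existing mechanism to "keys in DNS". As a sketch, here's how the payload of a "3 1 1" TLSA record (DANE-EE, SPKI, SHA-256) is derived from a server certificate, using the Python "cryptography" package; the file path and record name are placeholders, and the record only means something if the zone is DNSSEC-signed.

          import hashlib
          from cryptography import x509
          from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

          with open("server.pem", "rb") as f:
              cert = x509.load_pem_x509_certificate(f.read())

          # Hash the DER-encoded SubjectPublicKeyInfo, i.e. pin the key, not the cert.
          spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
          digest = hashlib.sha256(spki).hexdigest()

          print(f"_443._tcp.example.com. IN TLSA 3 1 1 {digest}")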

        • tptacek 3 days ago

          People who idealize this kind of solution should remember that by overloading core Internet infrastructure (which is what name resolution is) with a PKI, they're dooming any realistic mechanism that could revoke trust in the infrastructure operators. You can't "distrust" .com. But the browsers could distrust Verisign, because Verisign had competitors, and customers could switch transparently. Browser root programs also used this leverage to establish transparency logs (though: some hypothetical blockchain name thingy could give you that automatically, I guess; forget about it with the real DNS though).

          • ysleepy 3 days ago

            .com can issue arbitrary certificates right now, they control what DNS info is given to the CAs. So I don't quite see the change apart from people not talking about that threat vector atm.

            • tptacek 2 days ago

              Get one to issue a Google.com certificate and see what happens.

              • xorcist 2 days ago

                This is a bad faith argument. Whatever measures Google takes to prevent this (certificate logs and key pinning) could just as well be utilized if registrars delegated cryptographic trust as they delegate domains.

                It is also true that these contemporary prevention methods only help the largest companies which can afford to do things like distributing key material with end user software. It does not help you and me (unless you have outsourced your security to Google already, in which case there is the obvious second hand benefit). Registrars could absolutely help a much wider use of these preventions.

                There is no technical reason we don't have this, but this is one area where the interest of largest companies with huge influence over standards and security companies with important agencies as customers all align, so the status quo is very slow to change. If you squint you can see traces of this discussion all the way from IPng to TLS extensions, but right now there is no momentum for change.

                • tptacek 2 days ago

                  It's easy to tell stories about shadowy corporate actors retarding security on the Internet, but the truth is just that a lot of the ideas people have about doing security at global Internet scale just don't pan out. You can look across this thread to see all the "common sense" stuff people think should replace the WebPKI, most of which we know won't work.

                  Unfortunately, when you're working at global scale, you generally need to be well-capitalized, so it's big companies that get all the experience with what does and doesn't work. And then it's opinionated message board nerds like us that provide the narratives.

              • throwaway2037 2 days ago

                This is a great point. For all of the "technically correct" arguments going on here, this one is the most practical counterpoint. Yes, in theory, Verisign (now Symantec) could issue some insane wildcard Google.com cert and send the public-private key pair to you personally. In practice, this would never happen, because it is a corporation with rules and security policies that forbid it.

                Thinking deeper about it: Verisign (now Symantec) must have some insanely good security, because every black hat nation state actor would love to break into their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks against major email providers. (I'm pretty sure this already happened in the Netherlands.)

                • codethief 2 days ago

                  > every black hat nation state actor would love to break into on their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks

                  I might be misremembering but I thought one insight from the Snowden documents was that a certain three-letter agency had already accomplished that?

                  • 9Ljdg6p8ZSzejt 2 days ago

                    This was DigiNotar. The breach generated around 50 certificates, including certificates for Google, Microsoft, MI6, the CIA, TOR, Mossad, Skype, Twitter, Facebook, Thawte, VeriSign, and Comodo.

                    Here is a nice writeup for that breach: https://www.securityweek.com/hacker-had-total-control-over-d...

                    • 9Ljdg6p8ZSzejt 2 days ago

                      Edits: I believe this is what you were referring to. It was around 500, not 50. Dropped a 0.

                      • codethief 2 days ago

                        I do remember that breach but that was before Snowden. I'm relatively sure Snowden published some document about the NSA trying to undermine CAs, too.

                • Ajedi32 2 days ago

                  This isn't about the cert issuance servers, but DNS servers. If you compromise DNS then just about any CA in the world will happily issue you a cert for the compromised domain, and nobody would even be able to blame them for that because they'd just be following the DNS validation process prescribed in the BRs.

                • tptacek 2 days ago

                  Verisign (Symantec) can't do anything, because the browsers distrusted them.

              • ysleepy 2 days ago

                That only works for high-profile domains. A CA can just issue a cert, log it to CT, and if asked claim they got some DNS response from the authoritative server. Then it's a he-said-she-said problem.

                Or is DNSSEC required for DV issuance? If it is, then we already rely on a trustworthy TLD.

                I'm not saying there isn't some benefit in the implicit key mgmt oversight of CAs, but as an alternative to DV certs, just putting a pubkey in dnssec seems like a low effort win.

                It's been a long time since I've done much of this though, so take my gut feeling with a grain of salt.

                • tptacek 2 days ago

                  DNSSEC isn't required by anything, because almost nobody uses it.

              • Ajedi32 2 days ago

                Right. 'You can't "distrust" .com.' is probably not true in that situation. (If it were actually true, then "what happens" would be absolutely nothing.) I think you're undermining your own point.

          • transfire 3 days ago

            Who cares? What does a certificate tell me other than that someone paid for a certificate?

            And what do certificate buyers gain? The ability for their site to be revoked or expired and thus no longer work.

            I’d like to be corrected.

            • brazzy 3 days ago

              Are you seriously not aware of Let's Encrypt? https://letsencrypt.org/

              Nobody has really had to pay for certificates for quite a number of years.

              What certificates get you, as both a website owner and user, is security against man-in-the-middle attacks, which would otherwise be quite trivial, and which would completely defeat the purpose of using encryption.

            • squiggleblaz 2 days ago

              A certificate authority is an organisation that pays good money to make sure that their internet connection is not being subjected to MITMs. They put vastly more resources into that than you can.

              A certificate is evidence that the server you're connected to has a secret that was also possessed by the server that the certificate authority connected to. This means that whether or not you're subject to MITMs, at least you don't seem to be getting MITMed right now.

              The importance of certificates is quite clear if you were around on the web in the last days before universal HTTPS became a thing. You would connect to the internet, and you would somehow notice that the ISP you're connected to had modified the website you're accessing.

              • Ajedi32 2 days ago

                > pays good money to make sure that their internet connection is not being subjected to MITMs

                Is that actually true? I mean, obviously CAs aren't validating DNS challenges over coffee shop Wi-Fi so it's probably less likely to be MITMd than your laptop, but I don't think the BRs require any special precautions to assure that the CA's ISP isn't being MITMd, do they?

      • JackSlateur 2 days ago

        No they should not

        DANE is the way (https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...)

        But no browser has support for it, so .. :/

        • talideon 2 days ago

          Much as I like the idea of DANE, it solves nothing by itself: you need to protect the zone from tampering by signing it. Right now, the dominant way to do that is DNSSEC, though DNSCurve is a possible alternative, even if it doesn't solve the exact same problem. For DANE to be useful, you'd first need to get that set up on the domain in question, and the effort to get that working is far, far from trivial; even then, the process is so error-prone and brittle that you can easily end up making a whole zone unusable.

          Further, all you've done is replace one authority (the CA authority) with another one (the zone authority, and thus your domain registrar and the domain registry).

          • JackSlateur 2 days ago

            The zone authority already supersedes the CA authority in all ways

            When I manage a DNS zone, I'm free to generate all certificates I want

      • throwaway2037 2 days ago

        This is a great question. If we don't have CAs, how do we know whether it is OK to trust a cert?

        Are there any reasonable alternatives to CAs in a modern world? I have never heard any good proposals.

        • kbolino 2 days ago

          There are some alternatives.

          Certificate pinning is probably the most widely known way to get a certificate out there without relying on live PKI. However, certificate pinning just shifts the burden of trust from runtime to install time, and puts an expiration date on every build of the program. It also doesn't work for any software that is meant to access more than a small handful of pre-determined sites.
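
          As a sketch of what pinning boils down to in practice: compare a hash of the peer's certificate (or its public key) against a value baked into the client at build/install time. Python standard library only; the host and pin are placeholders.

            import hashlib, socket, ssl

            PINNED_SHA256 = "expected-hex-digest-baked-into-the-app"

            def connect_pinned(host: str, port: int = 443):
                ctx = ssl.create_default_context()
                with socket.create_connection((host, port), timeout=10) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host) as tls:
                        der = tls.getpeercert(binary_form=True)
                        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
                            raise ssl.SSLError("certificate pin mismatch")
                        # ...carry on over the pinned connection...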

          Web-of-trust is a theoretical possibility, and is used for PGP-signed e-mail, but it's also a total mess that doesn't scale. Heck, the best way to check the PGP keys for a lot of signed mail is to go to an HTTPS website and thus rely on the CAs.

          DNSSEC could be the basis for a CA-free world, but it hasn't achieved wide use. Also, if used in this way, it would just shift the burden of trust from CAs to DNS operators, and I'm not sure people really like those much better.

      • thayne 2 days ago

        In an ideal world we could just trust people not to be malicious, and there wouldn't be any need to encrypt traffic at all.

      • WJW 3 days ago

        How relevant is that since we don't live in such a world? Unless you have a way to get to such a world, of course, but even then CAs would need to keep existing until you've managed to bring the ideal world about. It would be a mistake to abolish them first and only then start on idealizing the world.

      • klysm 3 days ago

        CAs exist on the intersection of reality (far from ideal) and cryptography.

      • Stefan-H 3 days ago

        What alternatives come to mind when asking that question? Not being in the PKI world directly, web of trust is what comes to mind, but I'm curious what your question hints at.

        • grey-area 2 days ago

          I honestly don’t know enough about it to have an opinion. I have vague thoughts that DNS is the weak point for identity anyway, so couldn't certs just live there instead? But I’m sure there are reasons (historical and practical).

    • anakaine 2 days ago

      I love the push that LE puts on industry to get better.

      I work in a very large organisation and I just don't see them being able to go to automated TLS certificates for their self-managed subdomains, inspection certificates, or anything else for that matter. It will be interesting to see how the short-lived certs are adopted in the future.

    • ryao 2 days ago

      Could you explain why Let's Encrypt is dropping OCSP stapling support, instead of keeping it only for must-staple certificates and letting those of us who want must-staple deal with the headaches? I believe that resolving the privacy concerns raised about OCSP did not require eliminating must-staple.

    • cm2187 2 days ago

      All of that in case the previous owner of the domain attempts a MITM attack against a client of the new owner, which is such a remote scenario. In fact, has it happened even once?

    • hsbauauvhabzb 3 days ago

      How viable are TLS attacks? Assuming a signed cert's private key is compromised, you still need network position or other means to redirect traffic, no?

      So for a bank, a private key compromise is bad; for a regular low-traffic website, probably not so much?

    • ocdtrekkie 2 days ago

      Are you aware of a single real world not theoretical security breach caused by an unrevoked certificate that lived too long?

      • woodruffw 2 days ago

        A real-world example of this would be Heartbleed, where users rotated without revoking their previously compromised certificates[1].

        [1]: https://en.wikipedia.org/wiki/Heartbleed#Certificate_renewal...

        • ocdtrekkie a day ago

          Was a single certificate actually compromised and/or used maliciously? I am looking for an actual breach, not a theoretical scenario.

          • ferngodfather 20 hours ago

            Based on that Wikipedia article, no. This is just more of the same friendless PKI geeks making the world unnecessarily more complicated. The only other people that benefit are the certificate management companies that sell more software to manage these insane changes.

            • woodruffw 19 hours ago

              Did you read it? There are multiple examples of claimed exploitation right below the section I linked.

              • ferngodfather 19 hours ago

                Which bit says about stealing a certificate/keys and MITMing traffic with the stolen keys - with real world ramifications?

          • woodruffw 19 hours ago

            There are multiple examples of service compromise in the linked Wikipedia page.

    • delfinom 3 days ago

      Realistically, how often are domains traded and suddenly put in legitimate use (that isn't some domain parking scam) that (1) and (2) are actual arguments? Lol

      • zamadatix 3 days ago

        Domain trading (regardless if the previous use was legitimate or not) is only one example, not the sole driving argument for why the revocation system is in place or isn't perfectly handled.

    • Lammy 3 days ago

      > but customers aren't able to respond on an appropriate timeline

      Sounds like your concept of the customer/provider relationship is inverted here.

      • crote 3 days ago

        No. The customer is violating their contract.

        The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.

        • luckylion 3 days ago

          How would a CA not being able to contact some tiny customer (surely the big ones all can and do respond in less than 90 days?) compromise the safety of the entire internet?

          And if the safety of the entire internet is at risk, why is 47 days an acceptable duration for this extreme risk, but 90 days is not?

          • detaro 3 days ago

            > surely the big ones all can and do respond in less than 90 days?

            LOL. old-fashioned enterprises are the worst at "oh, no, can't do that, need months of warning to change something!", while also handling critical data. A major event in the CA space last year was a health-care company getting a court order against a CA to not revoke a cert that according to the rules for CAs the CA had to revoke (in the end they got a few days extension, everyone grumbled and the CA got told to please write their customer contracts more clearly, but the idea is out there and nobody likes CAs doing things they are not supposed to, even if through external force).

            One way to nip that in the bud is making sure that even if you get your court order preventing the CA from doing the right thing, your certificate will expire soon anyway, so "we are too important to have working IT processes" doesn't work anymore.

          • zamadatix 2 days ago

            I have a feeling it'll eventually get down even lower. In 2010 you could pretty easily get a cert for 10 years. Then 5 years. Then 3 years. Then 2 years. Then 1 year. Then 3 months. Now less than 2 months.

  • woodruffw 3 days ago

    The "end game" is mentioned explicitly in the article:

    > Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.

    Shorter-lived certificates make OCSP and other revocation mechanisms less of a load-bearing component within the Web PKI. This is a good thing, since neither CAs nor browsers have managed to make timely revocation methods scale well.

    (I don't think there's any monetary or power advantage to doing this. The reason to do it is because shorter lifetimes make it harder for server operators to normalize deviant certificate operation practices. The reasoning there is the same as with backups or any other periodic operational task: critical processes must be continually tested and evaluated for correctness.)

    • sitkack 3 days ago

      Don't lower cert times also get people to trust certs that were created just for their session to MITM them?

      That is the next step in nation state tapping of the internet.

      • woodruffw 3 days ago

        I don't see why it would; the same basic requirements around CT apply regardless of certificate longevity. Any CA caught enabling this kind of MITM would be subject to expedient removal from browser root programs, but with the added benefit that their malfeasance would be self-healing over a much shorter period than was traditionally allowed.

      • ezfe 3 days ago

        lol no? Certs with shorter lifetimes still chain up to the root certificates that are already trusted. It is not a noticeable thing when browsing the web as a user.

        A MITM cert would need to be manually trusted, which is a completely different thing.

        • Lammy 3 days ago

          I think their point is that a hypothetical connection-specific cert would make it difficult/impossible to compare your cert with anybody else to be able to find out that it happened. A CA could be backdoored but only “tapped” for some high-value target to diminish the chance of burning the access.

          • woodruffw 3 days ago

            > I think their point is that a hypothetical connection-specific cert would make it difficult/impossible to compare your cert with anybody else to be able to find out that it happened.

            This is already the case; CT doesn't rely on your specific served cert being comparable with others, but all certs for a domain being monitorable and auditable.

            (This does, however, point to a current problem: more companies should be monitoring CT than are currently.)

          • roblabla 3 days ago

            Well, the cert can still be compared to what's in the CT Log for this purpose.

          • sitkack 3 days ago

            Yes, precisely.

  • notatoad 3 days ago

    >unless I've missed some monetary/power advantage

    the power dynamic here is that the CAs have a "too big to fail" inertia, where they can do bad things without consequence because revoking their trust causes too much inconvenience for too many people. shortening expiry timeframes to the point where all their certificates are always going to expire soon anyways reduces the harm that any one CA can do by offering bad certs.

    it might be inconvenient for you to switch your systems to accommodate shorter expiries, but it's better to confront that inconvenience up front than for it to be in response to a security incident.

  • michaelt 3 days ago

    > Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours?

    Well you see, they also want to be able to break your automation.

    For example, maybe your automation generates a 1024-bit RSA certificate, and they've decided that 2048-bit certificates are the new minimum. That means your automation stops working until you fix it.

    Doing this with 2-day expiry would be unpopular as the weekend is 2 days long and a lot of people in tech only work 5 days a week.
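
    For a sense of what "your automation generates a certificate" means in practice, here's a sketch of the key-plus-CSR step with the Python "cryptography" package; key_size is exactly the kind of knob a new baseline requirement silently invalidates. The domain name is a placeholder.

      from cryptography import x509
      from cryptography.x509.oid import NameOID
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa

      # If policy moves to a 2048-bit minimum while this still says 1024,
      # every renewal starts failing until someone notices.
      key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      csr = (
          x509.CertificateSigningRequestBuilder()
          .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
          .sign(key, hashes.SHA256())
      )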

  • timewizard 3 days ago

    If the service becomes unavailable for 48 straight hours then every certificate expires and nothing works. You probably want a little more room for catastrophic infrastructure problems.

  • fs111 3 days ago

    Load on the underlying infrastructure is a concern. The signing keys are all in HSMs and don't scale infinitely.

    • bob1029 3 days ago

      How does cycling out certificates more frequently reduce the load on HSMs?

      • timmytokyo 3 days ago

        It's all relative. A 47-day cycle increases the load, but a 48-hour cycle would increase it substantially more.

      • woodruffw 2 days ago

        Much of the HSM load within a CA is OCSP signing, not subscriber cert issuance.

  • karlgkk 2 days ago

    > Why not make it 30 seconds?

    This is a ridiculous straw man.

    > 48 hours. I am willing to bet money this threshold will never be crossed.

    That's because it won't be crossed and nobody serious thinks it should.

    Short certs are better, but there are trade-offs. For example, if cert infra goes down over the weekend, it would really suck. TBH, from a security perspective, something in the range of a couple of minutes would be ideal, but that runs up against practical constraints:

    - cert transparency logs and other logging would need to be substantially scaled up

    - for the sake of everyone on-call, you really don't want anything shorter than a reasonable amount of time for a human to respond

    - this would cause issues with some HTTP3 performance enhancing features

    - thousands of servers hitting a CA creates load that outweighs the benefit of ultra short certs (which have diminishing returns once you're under a few days, anyways)

    > This feels like much more of an ideological mission than a practical one

    There are numerous practical reasons, as mentioned here by many other people.

    Resisting this without good cause, like you have, is more ideological at this point.

pixl97 4 days ago

Heh, working with a number of large companies I've seen most of them moving to internally signed certs on everything because of ever-shortening expiration times. They'll have public certs on edge devices/load balancers, but internal services will have internal-CA-signed certs with long expiry times because of the number of crappy apps that make using certs a pain in the ass.

  • plorkyeran 4 days ago

    This is a desired outcome. The WebPKI ecosystem would really like it if everyone stopped depending on them for internal things because it's actually a pretty different set of requirements. Long-lived certs with an internal CA makes a lot of sense and is often more secure than using a public CA.

    • tetha 3 days ago

      Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.

      It's been a huge pain, as we have encountered a ton of bugs and missing features in the libraries and applications that have to reload certs like this. And we have some really ugly workarounds in place, because some applications place "reload a consul client" on the same level as "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client that throws a few parameters at a standard HTTP client. But oh well.

      But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.
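
      For what it's worth, the reload itself doesn't have to be complicated. A sketch of the pattern for a long-running Python service: watch the cert file and load it into the running SSLContext again so that new handshakes pick up the rotated cert without a restart. Paths and the polling interval are placeholders; real services would typically hook this into SIGHUP or an inotify watcher rather than a sleep loop, and whether in-place reloading is safe for concurrent handshakes is something to verify for your particular stack.

        import os, ssl, threading, time

        CERT, KEY = "/etc/app/tls/server.crt", "/etc/app/tls/server.key"

        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(CERT, KEY)

        def reload_on_change(interval: int = 60) -> None:
            last = os.stat(CERT).st_mtime
            while True:
                time.sleep(interval)
                mtime = os.stat(CERT).st_mtime
                if mtime != last:
                    # Subsequent handshakes use the newly loaded cert/key.
                    ctx.load_cert_chain(CERT, KEY)
                    last = mtime

        threading.Thread(target=reload_on_change, daemon=True).start()
        # ...hand `ctx` to whatever server wraps its sockets with it...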

      • donnachangstein 3 days ago

        > Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.

        Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?

        • kam 3 days ago

          At least that sounds like it would be a more interesting story than the one where the person who quit a year ago didn't document all the places they manually installed the 2-year certificate.

        • tetha 3 days ago

          I will. We've been betting Postgres connectivity for a few hundred applications on this over the past three years. If this fucks up, it'll be known without me.

          • donnachangstein 3 days ago

            I'm curious what requirement drove you to such an arbitrarily small TTL, other than "because we can" dick-measuring geekery.

            I applaud you for sticking to your guns though.

            • tetha 2 days ago

              At the end of the day, we were worried about exactly these issues - if an application has to reload certs once every 2 years, it will always end up a mess.

              And the conventional wisdom for application management and deployments is - if it's painful, do it more. Like this, applications in the container infrastructure are forced to get certificate deployment and reloading right on day 1.

              And yes, some older applications that were migrated to the infrastructure went ahead and loaded their credentials and certificates for other dependencies into their database or something like that, and then ended up confused when this didn't work at all. Now it's fixed.

        • wbl 3 days ago

          Why would the cert renewal be manual?

          • alexchamberlain 3 days ago

            That's how it used to be done. Buy a certificate with a 2 year expiry and manually install it on your server (you only had 1; it was fine).

            • progmetaldev 2 days ago

              I can tell you that there are still quite a few of us out here that are doing the once a year manual renewal. I have suggested a plan to use Let's Encrypt with automated renewal, but for some companies, they are using old technology and/or old processes that "seniors" are comfortable with since they understand them and suggesting a better process isn't always looked favorably upon (especially if your job relies on the manual renewal process as one of those cryptic things only IT can do).

      • OptionOfT 3 days ago

        This has been our issue too. We've had mandates for rotating OAuth secrets (client ID & client secret).

        Except there are no APIs to rotate those. The infrastructure doesn't exist yet.

        And refreshing those automatically does not validate ownership, unlike certificates where you can do a DNS check or an HTTP check.

        Microsoft has some technology where, alongside these tokens, there is also a per-machine certificate that is used to sign requests, and those certificates can't leave the machine.

        • parliament32 3 days ago

          We've also felt the pain for OAuth secrets. Current mandates for us are 6 months.

          Because we run on Azure / AKS, switching to federated credentials ("workload identities") with the app registrations made most of the pain go away because MS manages all the rotations (3 months) etc. If you're on managed AKS the OIDC issuer side is also automagic. And it's free. I think GCP offers something similar.

          https://learn.microsoft.com/en-us/entra/workload-id/workload...

    • rlpb 3 days ago

      Browsers aren't designed for internal use, though. They insist on HTTPS for various things that are intranet-only, such as some browser APIs, PWAs, etc.

      • akerl_ 3 days ago

        As is already described by the comment thread we're replying in, "internal use" and "HTTPS" are very compatible. Corporations can run an internal CA, sign whatever internal certs they want, and trust that CA on their devices.

        • franga2000 3 days ago

          You use the terms "internal use" and "corporations" like they're interchangeable, but that's definitely not the case. Lots of small businesses, other organizations, or even individuals want to have some internal services, and having to "set up" a CA and add the certs to all client devices just to access some app on the local network is absurd!

          • akerl_ 2 days ago

            The average small business in 2025 is not running custom on-premise infrastructure to solve their problems. Small businesses are paying vendors to provide services, sometimes in the form of on-premise appliances but more often in the form of SaaS offerings. And I'm happy to have the CAB push those vendors to improve their TLS support via efforts like this.

            Individuals are in the same boat: if you're running your own custom services at your house, you've self-identified as being in the amazingly small fraction of the population with both the technical literacy and desire to do so. Either set up LetsEncrypt or run your own ACME service; the CAB is making clear here and in prior changes that they're not letting the 1% hold back the security bar for everybody else.

          • JimBlackwood 3 days ago

            I don't think it's absurd; personally it feels easier to set up an internal CA than some of the alternatives.

            In the hackiest of setups, it's a few commands to generate a CA and issue a wildcard certificate for everything. Then a single line in the bootstrap script or documentation for new devices to trust the CA and you're done.

            Going a few steps further, setting up something like HashiCorp Vault is not hard, and regardless of org size you need to do secret distribution somehow.

            • lucb1e 3 days ago

              > it's a few commands to generate a CA

              My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader

              Myself, I'm employed at a small business and we're all as tech savvy as it gets. It took me several days to set it up on secure hardware (smartcard, figuring out compatibility and broken documentation), making sure I understand what all the options do and that it's secure for years to come and whatnot, working out what the procedure for issuing should be, etc. Eventually got it done, handed it over to the higher-up who gets to issue certs, distribute the CA cert to everyone... it's never used. We have a wiki page with TLS and SSH fingerprints

              • JimBlackwood 3 days ago

                > My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader

                This is fair. I assumed all small businesses would be tech startups, haha.

              • Retric 3 days ago

                The vast majority of companies operate just fine without understanding anything about building codes or vehicle repair etc.

                Paying experts (Ed: to set up internal infrastructure) is a perfectly viable option, so the only real question is the amount of effort involved, not whether random people know how to do something.

                • lucb1e 3 days ago

                  Paying an expert to come set up a local CA seems rather silly when you'd normally outsource operating one to the people who professionally run a CA

                  • Retric 3 days ago

                    You’d only need internal certificates if someone had set up internal infrastructure. Expecting that person to do a good job means having working certificates, be they internal or external.

                • nilslindemann 3 days ago

                  > Paying experts is a perfectly viable option

                  Congrats for securing your job by selling the free internet and your soul.

                  • Retric 3 days ago

                    I’m not going to be doing this, but I care about knowledge being free not labor or infrastructure.

                    If someone doesn’t want to learn then nobody needs to help them for free.

            • disiplus 3 days ago

              We have this, and it's not trivial for a small team: you have to deal with stuff like conda environments coming with their own set of certs, so you have to take care of that. It's better than the alternative of fighting with browsers, but it's still not without extra complexity.

              • JimBlackwood 3 days ago

                For sure, nothing is without extra complexity. But, to me, it feels like additional complexity for whoever does DevOps (where I think it should be) and takes away complexity from all other users.

            • msie 3 days ago

              Wow, amazing how out of touch this is.

              • JimBlackwood 3 days ago

                Can you explain? I don't see why

                • Henchman21 3 days ago

                  You seem to think every business is a tech startup and is staffed with competent engineers.

                  Perhaps spend some time outside your bubble? I’ve read many of your comments and you just do seem to be caught in your own little world. “Out of touch” is apt and you should probably reflect on that at length.

                  • JimBlackwood 3 days ago

                    > You seem to think every business is a tech startup and is staffed with competent engineers.

                    If we’re talking about businesses hosting services on some intranet and concerned about TLS, then yes, I assume it’s either a tech company or they have at least one competent engineer to host these things. Why else would the question be relevant?

                    > “Out of touch” is apt and you should probably reflect on that at length.

                    That’s a very weird personal comment based on a few comments on a website that’s inside a tech savvy bubble. Most people here work in IT, so I talk as if most people here work in IT. If you’re a mechanic at a garage or a lawyer at a law firm, I wouldn’t tell you rolling your own CA is easy and just a few commands.

                    • Henchman21 2 days ago

                      You know, your perspective is valuable; I often operate as if the context is “all people everywhere”, which is rarely true and is definitely not true here. So I will take the error as mine and thank you for pointing it out :)

          • acedTrex 3 days ago

            Sounds like there is a market for a browser that is intranet-only and doesn't do various checks

            • jillyboel 3 days ago

              Good luck getting that distributed everywhere including the iOS app store and random samsung TVs that stopped receiving updates a decade ago.

              Not to mention the massive undertaking that even just maintaining a multi-platform chromium fork is.

            • JimBlackwood 3 days ago

              Why would you want this? Then on production, you'll run into issues you did not encounter on staging because you skipped various checks.

        • jillyboel 3 days ago

          Getting my parents to add a CA to their android, iphone, windows laptop and macbook just so they can use my self hosted nextcloud sounds like an absolute nightmare.

          The nightmare only intensifies for small businesses that allow their users to bring their own devices (yes, yes, sacrilege but that is how small businesses operate).

          Not everything is a massive enterprise with an army of IT support personnel.

          • crote 3 days ago

            Rolling out LetsEncrypt for a self-hosted Nextcloud instance is absolutely trivial. There are many reasons corporations might want to roll their own internal CA, but simple homelab scenarios like these couldn't be further from them.

            • jillyboel 3 days ago

              Sure, which is what I do. But the point is that this is very much internal use and rolling my own CA for it is a nightmare.

            • GabeIsko 3 days ago

              Would you suggest something? I do this, but I'm not sure I would call maintaining my setup trivial. I got in trouble recently because my domain registrar deprecated an API call, and it turned out that was the straw that broke the camel's back in my automation setup. Or at least it did 90 days later.

              • andrewmackrodt 2 days ago

                I'm not a Nextcloud user, but I have a homelab and use traefik for my reverse proxy, which is configured to use Let's Encrypt DNS challenges to issue wildcard certificates. I use Cloudflare's free plan to manage DNS for my domains, although the registrar is different. This has been a set-it-and-forget-it solution for the last several years.
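
                The relevant bit looks roughly like this (a sketch, not my literal config; traefik's Cloudflare provider reads an API token from the environment):

                    # hypothetical fragment of traefik.yml
                    cat >> traefik.yml <<'EOF'
                    certificatesResolvers:
                      letsencrypt:
                        acme:
                          email: admin@example.com          # placeholder contact address
                          storage: /letsencrypt/acme.json
                          dnsChallenge:
                            provider: cloudflare            # expects CF_DNS_API_TOKEN in the environment
                    EOF

                Routers then reference the resolver (and the wildcard domain) via labels or the dynamic config.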

                • GabeIsko 2 days ago

                  Let's Encrypt cert renewal comes out of the box on traefik? I haven't kept up with it. I'm on a similar set and forget schedule with configured nginx and some crowdsec stuff, but the API change ended up killing off an afternoon of my time.

          • mysteria 3 days ago

            I actually do this for my homelab setup. Everyone basically gets the local CA installed for internal services as well as a client cert for RADIUS EAP-TLS and VPN authentication. Different devices are automatically routed to the correct VLAN and the initial onboarding doesn't take that long if you're used to the setup. Guests are issued a MSCHAP username and password for simplicity's sake.

            For internal web services I could use just Let's Encrypt but I need to deploy the client certs anyways for network access and I might as well just use my internal cert for everything.

            • jillyboel 3 days ago

              Personally, I'd absolutely refuse to install your CA as a guest. That would give you far too much power to mint certificates for sites you have no business snooping on.

              • mysteria 3 days ago

                Guests don't install my CA as they don't need to access my internal services. If I wanted to set up an internal web server that's accessible to both guests and family members I'd use Let's Encrypt for that.

          • richardwhiuk 3 days ago

            Why are your parents on a corporations internal network?

            • jillyboel 3 days ago

              What corporation are you talking about? Have you never heard of someone self hosting software for their family and friends? You know, an intranet.

              • smw 3 days ago

                Just buy a domain and use dns verification to get real certs for whatever internal addresses you want to serve? Caddy will trivially go get certs for you with one line of config

                Or cheat and use tailscale to do the whole thing.
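
                For the Caddy route, assuming a build that includes the Cloudflare DNS module (names are placeholders), it's roughly:

                    cat > Caddyfile <<'EOF'
                    nextcloud.internal.example.com {
                            # DNS-01, so the box never needs to be reachable from the internet
                            tls {
                                    dns cloudflare {env.CF_API_TOKEN}
                            }
                            reverse_proxy localhost:8080
                    }
                    EOF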

              • DiggyJohnson 3 days ago

                Self-hosting doesn't usually imply connecting over a private network.

        • stefan_ 3 days ago

          Do I add the root CA of my router manufacturer so I can visit its web interface on my internal network without having half the page functionality broken because of overbearing browser manufacturers who operate the "web PKI" as a cartel? This nowadays includes things such as basic file downloads.

        • ClumsyPilot 3 days ago

          > Corporations can run an internal CA

          Having just implemented an internal CA, I can assure you, most corporations can’t just run an internal CA. Some struggle to update containers and tie their shoe laces.

        • lxgr 2 days ago

          Yeah, but essentially every home user can only do so after jumping through extremely onerous hoops (many of which also decrease their security when browsing the public web).

          I’ve done it in the past, and it was so painful, I just bit the bullet and started accessing everything under public hostnames so that I can get auto-issued Letsencrypt certificates.

        • rlpb 3 days ago

          Indeed they are compatible. However, HTTPS is often unnecessary, particularly in a smaller organisation, but browsers mandate significant unnecessary complexity there. In that sense, browsers are not suited to this use in those scenarios.

          • freeopinion 3 days ago

            If only browsers could understand something besides HTTPS. Somebody should invent something called HTTP that is like HTTPS without certificates.

            • noselasd 3 days ago

              There’s enough APIs limited to secure contexts that many internal apps become unfeasible.

            • SoftTalker 3 days ago

              Modern browsers default to trying https first.

          • tedivm 3 days ago

            I really don't see many scenarios where HTTPS isn't needed for at least some internal services.

            • donnachangstein 3 days ago

              Then, I'm afraid, you work in a bubble.

              A static page that hosts documentation on an internal network does not need encryption.

              The added overhead of certificate maintenance (and investigating when it does and will break) is simply not worth the added cost.

              Of course the workaround most shops do nowadays is just hide the HTTP servers behind a load balancer doing SSL termination with a wildcard cert. An added layer of complexity (and now single point of failure) just to appease the WebPKI crybabies.

              • progmetaldev 2 days ago

                Unfortunately, for a small business, there are many software packages that can cause all sorts of havoc on an internal network, and are simple to install. Even just ARP cache poisoning on an internal network can force everyone offline, while even a reboot of all equipment can not immediately fix the problem. A small company that can't handle setting up a CA won't ever be able to handle exploits like this (and I'm not saying that a small company should be able to setup their own CA, just commenting on how defenseless even modern networks are to employees that like to play around or cause havoc).

                Of course, then there are the employees who could just intercept HTTP requests and modify them to include a payload to root an employee's machine. There is so much software out there that can destroy trust in a network, and it's literally download and install, then point and click, with no knowledge required. Seems like there is a market for simple and cheap solutions for internal networks for small business. I could see myself making quite a bit off it, which I did in the mid-2000s, but I can't stand doing sales any more in my life, and dealing with support is a whole issue on its own even with an automated solution.

              • imroot 3 days ago

                What overhead?

                Just about every web server these days supports ACME -- some natively, some via scripts, and you can set up your own internal CA using something like step-ca that speaks ACME if you don't want your certs going out to the transparency log.

                The last few companies I've worked at had no http behind the scenes -- everything, including service-to-service communications was handled via https. It's a hard requirement for just about everything financial, healthcare, and sensitive these days.
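
                As a sketch of the step-ca route (hostname, port, and provisioner name are made up, and clients have to trust the internal root first):

                    step ca init                              # interactive: creates root/intermediate + config
                    step ca provisioner add acme --type ACME  # enable the ACME provisioner
                    step-ca $(step path)/config/ca.json &

                    # any ACME client can then point at the internal directory, e.g.
                    certbot certonly --standalone \
                      --server https://ca.internal.example.com:8443/acme/acme/directory \
                      -d app.internal.example.com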

                • donnachangstein 3 days ago

                  > What overhead?

                  [proceeds to describe a bunch of new infrastructure and automation you need to setup and monitor]

                  So when ACME breaks - which it will, because it's not foolproof - the server securely hosting the cafeteria menus is now inaccessible, instead of being susceptible to interception or modification in transit. Because the guy that has owned your core switches is most concerned that everyone will be eating taco salad every day.

              • brendoelfrendo 3 days ago

                Sure it does! You may not need confidentiality, but what about integrity?

                • donnachangstein 3 days ago

                  It's a very myopic take.

                  Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.

                  Just because something is possible in theory doesn't make it likely or worth the time invested.

                  You can put 8 locks on the door to your house but most people suffice with just one.

                  Someone could remove a piece of mail from your unlocked rural mailbox, modify it and put it back. Do you trust the mail carrier as much as the security of your internal network?

                  But it's not really a concern worth investing resources into for most.

                  • growse 3 days ago

                    > Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.

                    Ah, the "both me and my attackers agree on what's important" fallacy.

                    What if they modify the man page response to include drive-by malware?

              • tedivm 3 days ago

                I'm afraid you didn't read my response. I explicitly said I can't see a case where it isn't needed for at least some services. I never said it was required for every service. Once you've got it set up for one thing, it's pretty easy to set it up everywhere (unless you're deploying manually, which is an obvious problem).

          • therealpygon 3 days ago

            And it is even more trivial in a small organization to install a Trusted Root for internally signed certificates on their handful of machines. Laziness isn’t a browser issue.

            • rlpb 3 days ago

              How is that supposed to work for an IoT device that wants to work out of the box using one of these HTTPS-only browser APIs?

              • metanonsense 3 days ago

                I am not saying I'd do this, but in theory you could deploy a single reverse proxy in front of your HTTP-only devices and restrict traffic accordingly.

    • Spooky23 3 days ago

      Desired by who?

      There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value for doing this. Building out or expanding my own PKI for my company or setting up the infrastructure to integrate with Digicert or whomever gets me zero security and business value, just cost and toil.

      Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.

      • crote 3 days ago

        CAs fucking up every once in a while is inevitable. It is impossible to write guaranteed bug-free software or train guaranteed flawless humans.

        The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.

        • Spooky23 3 days ago

          Silly me, I’m just a customer, incapable of making my own risk assessments or prioritizing my business processes.

          You’re portraying people suing CAs to get injunctions to avoid outages as clueless or irresponsible. The fact is Digicert’s actions, dictated by this CA/Browser forum were draconian and over the top responses to a minor risk. This industry trade group is out of control.

          End of the day, we’re just pushing risk around. Running a quality internal PKI is difficult.

    • christina97 3 days ago

      What do you mean “WebPKI … would like”. The browser vendors want one thing (secure, ubiquitous, etc), the CAs want a very different thing (expensive, confusing, etc)…

    • ozim 3 days ago

      Problem is browsers will most likely follow the enforcement of short certificates so internal sites will be affected as well.

      Non-browser things usually don't care even if the cert is expired or untrusted.

      So I expect people still to use WebPKI for internal sites.

      • akerl_ 3 days ago

        The browser policies are set by the same entities doing the CAB voting, and basically every prior change around WebPKI has only been enforced by browsers for CAs in the browser root trust stores. Which is exactly what's defined in this CAB vote as well.

        Why would browsers "most likely" enforce this change for internal CAs as well?

      • ryao 3 days ago

        Why would they? The old certificates will expire and the new ones will have short lifespans. Web browsers do not need to do anything.

        That said, it would be really nice if they supported DANE so that websites do not need CAs.

      • nickf 3 days ago

        'Most likely' - with the exception of Apple enforcing 825-day maximum for private/internal CAs, this change isn't going to affect those internal certificates.

  • jiggawatts 3 days ago

    I just got a flashback to trying to automate the certificate issuance process for some ESRI ArcGIS product that used an RPC configuration API over HTTPS to change the certificate.

    So yes, you had to first ignore the invalid self-signed certificate while using HTTPS with a client tool that really, really didn't want to ignore the validity issue, then upload a valid certificate, restart the service... which would terminate the HTTPS connection with an error breaking your script in a different not-fun way, and then reconnect... at some unspecified time later to continue the configuration.

    Fun times...

  • rsstack 4 days ago

    > I've seen most of them moving to internally signed certs

    Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.

    • pavon 4 days ago

      Yes, but it is a lot more work to run an internal CA and distribute that CA cert to all the corporate clients. In the past getting a public wildcard cert was the path of least resistance for internal sites - no network access needed, and you aren't leaking much info into the public log. That is changing now, and like you said it is probably a change for the better.

      • pkaye 4 days ago

        What about something like step-ca? I got the free version working easily on my home network.

        https://smallstep.com/docs/step-ca/

        • simiones 3 days ago

          Not everything that's easy to do on a home network is easy to do on a corporate network. The biggest problem with corporate CAs is how to issue new certificates for a new device in a secure way, a problem which simply doesn't exist on a home network where you have one or at most a handful of people needing new certs.

      • bravetraveler 4 days ago

        > A lot more work

        'ipa-client-install' for those so motivated. Certificates are literally one among many things part of your domain services.

        If you're at the scale past what IPA/your domain can manage, well, c'est la vie.

        • Spivak 3 days ago

          I think you're being generous if you think the average "cloud native" company is joining their servers to a domain at all. They've certainly fallen out of fashion in favor of the servers being dumb and user access being mediated by an outside system.

          • bravetraveler 3 days ago

            Why not? The actual clouds do.

            I think folks are being facetious wanting more for 'free'. The solutions have been available for literal decades, I was deliberate in my choice.

            Not the average, certainly the majority where I've worked. There are at least two well-known Clouds that enroll their hypervisors to a domain. I'll let you guess which.

            My point is, the difficulty is chosen... and 'No choice is a choice'. I don't care which, that's not my concern. The domain is one of those external things you can choose. Not just some VC toy. I won't stop you.

            The devices are already managed; you've deployed them to your fleet.

            No need to be so generous to their feigned incompetence. Want an internal CA? Managing that's the price. Good news: they buy!

            Don't complain to me about 'your' choices. Self-selected problem if I've heard one.

            Aside from all of this, if your org is being hung up on enrollment... I'm not sure you're ready for key management. Or the other work being a CA actually requires.

            Yes, it's more work. Such is life and adding requirements. Trends - again, for decades - show organizations are generally able to manage with something.

            Literal Clouds do this, why can't 'you'?

            • Spivak 3 days ago

              Adding machines to a domain is far far more common on bare-metal deployments which is why I said "cloud native." Adding a bunch of cloud VMs to a domain is not very common in my experience because they're designed to be ephemeral and thrown away and IPA being stateful isn't about that.

              You're managing your machine deployments with something, so of course you just use that to include your cert, which isn't particularly hard, but there's a long tail of annoying work when dealing with containers and VMs you aren't building yourself, like k8s node pools. It can be done, but it's usually less effort to just get public certs for everything.

              • bravetraveler 3 days ago

                To be honest, with "cloud-init" and the ability for SSSD to send record updates, I could make a worthwhile cloudy deployment

                To your point, people don't, but it's a perfectly viable path.

                Containers/kubernetes, that's pipeline city, baby!

  • maccard 3 days ago

    I’ve unfortunately seen the opposite - internal apps are now back to being deployed over VPN and HTTP

  • tomjen3 2 days ago

    I would love to do that for my homelab, but not all docker containers trust root certs from the system so getting it right would have been a bigger challenge than dns hacking to get a valid certificate for something that can’t be accessed from outside the network.

    I am not willing to give credentials to alter my dns to a program. A security issue there would be too much risk.

  • xienze 4 days ago

    > but internal services with have internal CA signed certs with long expire times because of the number of crappy apps that make using certs a pain in the ass.

    Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.
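
    A minimal sketch of that (hostname and upstream port are placeholders):

        cat > Caddyfile <<'EOF'
        app.example.com {
                # Caddy terminates TLS and keeps the cert renewed via ACME;
                # the app behind it only ever speaks plain HTTP on localhost
                reverse_proxy localhost:8080
        }
        EOF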

    • pixl97 4 days ago

      Unless they are web/tech companies they aren't doing that. Banks, finance, large manufacturing are all terminating at F5's and AVI's. I'm pretty sure those update certs just fine, but it's not really what I do these days so I don't have a direct answer.

      • xienze 4 days ago

        Sure. The point is, don't bother letting the apps themselves do TLS termination. Too much work that's better handled by something else.

        • hedora 4 days ago

          Also, moving termination off the endpoint server makes it much easier for three letter agencies to intercept + log.

          • qmarchi 3 days ago

            Most responsible orgs do TLS termination on the public side of a connection, but will still make a backend connection protected by TLS, just with an internal CA.

      • tikkabhuna 3 days ago

        F5s don't support ACME, which has been a pain for us.

        • xorcist 2 days ago

          F5 sells expensive boxes intended for larger installations where you can afford not to do ACME in the external facing systems.

          Giving the TLS endpoint itself the authority to manage certificates kind of weakens the usefulness of rotating certificates in the first place. You probably don't let your external facing authoritative DNS servers near zone key material, so there's no reason to let the external load balancers rotate certificates.

          Where I have used F5 there was never any problem letting the backend configuration system do the rotation and upload of certificates together with every other piece of configuration that is needed for day to day operations.

        • cpach 3 days ago

          It might be possible to run an ACME client on another host in your environment. (IMHO, the DNS-01 challenge is very useful for this.) Then you can (probably) transfer the cert+key to BIG IP, and activate it, via the REST API.

          I haven’t used BIG IP in a long while, so take this with a grain of salt, but it seems to me that it might not be impossible to get something going – despite the fact that BIG IP itself doesn’t have native support for ACME.

          Two pointers that might be of interest:

          https://community.f5.com/discussions/technicalforum/upload-l...

          https://clouddocs.f5.com/api/icontrol-rest/APIRef_tm_sys_cry...
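
          The issuance half is the easy part; a sketch with acme.sh and its Cloudflare hook (the upload script is hypothetical and would wrap the iControl REST calls linked above):

              export CF_Token="..."                     # DNS provider credential for the DNS-01 hook
              acme.sh --issue --dns dns_cf -d lb.example.com

              acme.sh --install-cert -d lb.example.com \
                --key-file       /etc/acme/lb.example.com.key \
                --fullchain-file /etc/acme/lb.example.com.pem \
                --reloadcmd      "/usr/local/bin/push-cert-to-bigip.sh"   # hypothetical upload/activate script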

          • dijit 3 days ago

            Sounds suspiciously similar to a rube goldberg machine.

            Those tend to be quite brittle in reality. What’s the old adage about engineering vs architecture again?

            Something like this I think: https://www.reddit.com/r/PeterExplainsTheJoke/comments/16141...

            • cpach 2 days ago

              Obviously it would be much better if BIG IP had native support for ACME. And F5 might implement it some day, but I wouldn’t hold my breath.

              For some companies, it might be worth it to throw away a $100000 device and buy something better. For others it might not be worth it.

        • EvanAnderson 3 days ago

          Exactly. According to posters here you should just throw them away and buy hardware from a vendor who does. >sigh<

          Don't expect firmware / software updates to enable ACME-type functionality for tons of gear. At best it'll be treated as an excuse by vendors to make Customers forklift and replace otherwise working gear.

          Corporate hardware lifecycles are longer than the proposed timeline for these changes. This feels like an ill-thought-out initiative by bureaucrats working in companies who build their own infrastructure (in their ivory towers). Meanwhile, we plebs who work in less-than-Fortune 500 companies stuck with off-the-shelf solutions will be forced to suffer.

    • cryptonym 4 days ago

      You now have to build and self-host a complete CA/PKI.

      Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.

      • stackskipton 4 days ago

        You could always ask for a wildcard for an internal subdomain and use that instead, so you will leak the internal subdomain but not individual hostnames.

        • pixl97 4 days ago

          I'm pretty sure every bank will auto fail wildcard certs these days, at least the ones I've worked with.

          Key loss on one of those is like a takeover of an entire chunk of hostnames. Really opens you up.

      • JoshTriplett 3 days ago

        > Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.

        That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal domain names would not actually leak any information of value.

  • lokar 3 days ago

    I’ve always felt a major benefit of an internal CA is making it easy to have very short TTLs

    • SoftTalker 3 days ago

      Or very long ones. I often generate 10 year certs because then I don't have to worry about renewing them for the lifetime of the hardware.

      • lokar 3 days ago

        In a production environment with customer data?

    • formerly_proven 3 days ago

      I'm surprised there is no authorization-certificate-based challenge type for ACME yet. That would make ACME practical to use in microsegmented networks.

      The closest thing is maybe described (but not shown) in these posts: https://blog.daknob.net/workload-mtls-with-acme/ https://blog.daknob.net/acme-end-user-client-certificates/

      • benburkert 3 days ago

        It's 100% possible today to get certs in segmented networks without a new ACME challenge type: https://anchor.dev/docs/public-certs/acme-relay

        (disclamer: i'm a founder at anchor.dev)

        • webprofusion 2 days ago

          Does your hosted service know the private keys or are they all on the client?

          • benburkert 2 days ago

            No, they stay on the client, our service only has access to the CSR. From our docs:

            > The CSR relayed through Anchor does not contain secret information. Anchor never sees the private key material for your certificates.

      • bigp3t3 3 days ago

        I'd set that up the second it becomes available if it were a standard protocol. Just went through setting up internal certs on my switches -- it was a chore to say the least! With a Cert Template on our internal CA (windows), at least we can automate things well enough!

        • formerly_proven 3 days ago

          Yeah it's almost weird it doesn't seem to exist, at least publicly. My megacorp created their own protocol for this purpose (though it might actually predate ACME, I'm not sure), and a bunch of in-house people and suppliers created the necessary middlewares to integrate it into stuff like cert-manager and such (basically everything that needs a TLS certificate and is deployed more than thrice). I imagine many larger companies have very similar things, with the only material difference being different organizational OIDs for the proprietary extension fields (I found it quite cute when I learned that the corp created a very neat subtree beneath its organization OID).

  • Pxtl 3 days ago

    At this point I wish we could just get all our clients to say "self-signed is fine if you're connecting to a .LOCAL domain name". https is intrinsically useful over raw http, but the overhead of setting up centralized certs for non-public domains is just dumb.

    Give us a big global *.local cert we can all cheat with, so I don't have to blast my credentials in the clear when I log into my router's admin page.

  • shlant 3 days ago

    this is exactly what I do because mongo and TLS is enough of a headache. I am not dealing with rotating certificates regularly on top of that for endpoints not exposed to the internet.

    • SoftTalker 3 days ago

      Yep, letsencrypt is great for public-facing web servers, but for stuff that isn't a web server or doesn't allow outside queries, none of that "easy" automation works.

      • procaryote 3 days ago

        Acme dns challenge works for things that aren't webservers.

        For the other case perhaps renew the cert at a host allowed to do outside queries for the dns challenge and find some acceptable automated way to propagate an updated cert to the host that isn't allowed outside queries.
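
        For example (certbot with a DNS plugin on the host that's allowed out; the names and the copy script are placeholders):

            certbot certonly --dns-cloudflare \
              --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
              -d internal-app.example.com \
              --deploy-hook /usr/local/bin/push-cert-to-internal-host.sh   # hypothetical copy + reload script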

        • Yeroc 3 days ago

          Last time I checked there's no standardized API/protocol to deal with populating the required TXT records on the DNS side. This is all fine if you've out-sourced your DNS services to one of the big players with a supported API but if you're running your own DNS services then doing automation against that is likely not going to be so easy!

          • procaryote 2 days ago

            One pretty easy way to do it while running your own DNS is to put the zone files, or some input that you can build to zone files, in version control.

            There are lots of systems that allow you to set rules for what is required to merge a PR, so if you want "the tests pass, it's a TXT record, the author is whitelisted to change that record" or something, it's very achievable

        • SoftTalker 3 days ago

          I don't have an API or any permission to add TXT records to my DNS. That's a support ticket and has about a 24-hour turnaround best case.

          • Yeroc 3 days ago

              I was just digging into this a bit and discovered that acme.sh supports something called DNS alias mode (https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...), which allows you to add a static DNS record on your core domain that delegates validation to a second domain. This would allow you to set up a second domain with a DNS API (if permitted by company policy!)
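
              From the wiki it boils down to one static record plus a flag on the client (domains here are placeholders):

                  # one-time record in the zone you can't automate:
                  #   _acme-challenge.app.example.com.  CNAME  _acme-challenge.acme.example.net.

                  # then validate against the zone you *can* automate:
                  acme.sh --issue --dns dns_cf -d app.example.com \
                    --challenge-alias acme.example.net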

          • immibis 3 days ago

            Is this just because your DNS is with some provider, or is it something that leads from your organizational structure?

            If it's just because your DNS is at a provider, you should be aware that it's possible to self-host DNS.

            • SoftTalker 2 days ago

              It’s internal policy. We do run our own DNS.

              • procaryote 2 days ago

                But that's pretty much self-inflicted damage.

          • JackSlateur 3 days ago

            You have people paid to create DNS records ? Haha

            • dijit 3 days ago

                It's not practical to give everyone write access to the google.com root zone.

              Someone will fuck up accidentally, so production zones are usually gated somehow, sometimes with humans instead of pure automata.

              • JackSlateur 3 days ago

                Why not ?

                Giving write access does not mean giving unrestricted write access

                  Also, another way (which I built at a previous company) is to create a simple certificate provider (an API or whatever), integrated with whatever internal authentication scheme you are using, that can sign CSRs for you. An LE proxy, as you might call it

            • SoftTalker 2 days ago

              Yes we do. That’s not the only thing they do of course.

              • xorcist 2 days ago

                It also sounds like the right people to handle certificate issuance?

                If you are not in a good position in the internal organization to control DNS, you probably shouldn't handle certificate issuance either. It makes sense to have a specific part of the organization responsible.

          • procaryote 3 days ago

            That's not great, sorry to hear

      • bsder 3 days ago

        And may the devil help you if you do something wrong and accidentally trip LetsEncrypt's rate limiting.

        You can do nothing except twiddle your thumbs while it times out and that may take a couple of days.

  • JackSlateur 3 days ago

    Haa, yes ! We have that, too ! Accepted warning in browsers ! curl -k ! verify=False ! Glorious future to the hacking industry !

greatgib 4 days ago

As I said in another thread, basically this will kill any possibility of running your own CA for your own subdomain. Only the big ones embedded in browsers will have the right to their own CA certificates with whatever validity period they want...

And in term of security, I think that it is a double edged sword:

- everyone will be so used to certificates changing all the time, and no certificate pinning anymore, so the day when China, a company, or whoever serves you a fake certificate, you will be less able to notice it

- Instead of closed, read-only systems that connect outside only once a year or so to update their certificates, machines all around the world will now have to allow quasi-permanent connections to certificate servers to keep updating them. If a DigiCert or Let's Encrypt server, or the "cert updating client", is ever rooted or has a security issue, most servers around the world could be compromised in a very short time.

As a side note, I'm totally laughing at the following explanation in the article:

   47 days might seem like an arbitrary number, but it’s a simple cascade:
   - 47 days = 1 maximal month (31 days) + 1/2 30-day month (15 days) + 1 day wiggle room
So, 47 is not arbitrary, because 1 month + 1/2 month + 1 day are apparently not arbitrary values...

  • lolinder 3 days ago

    > everyone will be so used to certificates changing all the time, and no certificate pinning anymore, so the day when China, a company, or whoever serves you a fake certificate, you will be less able to notice it

    I'm a computing professional in the tiny slice of internet users that actually understands what a cert is, and I never look at a cert by hand unless it's one of my own that I'm troubleshooting. I'm sure there are some out there who do (you?), but they're a minority within a minority—the rest of us just rely on the automated systems to do a better job at security than we ever could.

    At a certain point it is correct for systems engineers to design around keeping the average-case user more secure even if it means removing a tiny slice of security from the already-very-secure power users.

  • gruez 4 days ago

    >As I said in another thread, basically that will kill any possibility to do your own CA for your own subdomain.

    like, private CA? All of these restrictions are only applied for certificates issued under the webtrust program. Your private CA can still issue 100 year certificates.

    • greatgib 4 days ago

      Let's suppose that I'm a competitor of Google and Amazon, and I want to have my Public root CA for mydomain.com to offer my clients subdomains like s3.customer1.mydomain.com, s3.customer2.mydomain.com,...

      • tptacek 3 days ago

        If you want to be a public root CA, so that every browser in the world needs to trust your keys, you can do all the lifting that the browsers are asking from public CAs.

      • gruez 4 days ago

        Why do you want this when there are wildcard certificates? That's how the hyperscalers do it as well. Amazon doesn't have a separate certificate for each s3 bucket, it's all under a wildcard certificate.

        • vlovich123 3 days ago

          Amazon did this the absolute worst way - all customers share the same flat namespace for S3 buckets which limits the names available and also makes the bucket names discoverable. Did it a bit more sanely and securely at Cloudflare where it was namespaced to the customer account, but that required registering a wildcard certificate per customer if I recall correctly.

        • zamadatix 3 days ago

          The only consideration I can think is public wildcard certificates don't allow wildcard nesting so e.g. a cert for *.example.com doesn't offer a way for the operator of example.com to host a.b.example.com. I'm not sure how big of a problem that's really supposed to be though.

    • anacrolix 3 days ago

      No. Chrome flat out rejects certificates that expire more than 13 months away, last time I tried.

  • nickf 3 days ago

    Certificate pinning to public roots or CAs is bad. Do not do it. You have no control over the CA or roots, and in many cases neither does the CA - they may have to change based on what trust-store operators say. Pinning to public CAs or roots or leaf certs, pseudo-pinning (not pinning to a key or cert specifically, but expecting some part of a certificate DN or extension to remain constant), and trust-store limiting are all bad, terrible, no-good practices that cause havoc whenever they are implemented.

    • szszrk 2 days ago

      Ok, but what's the alternative?

      Support for cert and CA pinning is in a much better state than I thought it would be, at least for mobile apps. I'm impressed by Apple's ATS.

      Yet, for instance, you can't pin a CA for just any domain; you always have to provide it up front for audit, otherwise your app may not get accepted.

      Doesn't this mean that it's not (realistically) possible to create cert pinning for small solutions? Like homelabs or app vendors that are used by onprem clients?

      We'll keep abusing PKI for those use cases.

  • precommunicator 3 days ago

    > everyone will be so used to certificates changing all the time, and no certificate pinning anymore

    Browser certificate pinning is deprecated since 2018. No current browsers support HPKP.

    There are alternatives to pinning: DNS CAA records and monitoring CT logs.
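
    CAA in particular is just a couple of DNS records (example values):

        # zone-file syntax: only Let's Encrypt may issue; violations get reported by mail
        #   example.com.  IN CAA 0 issue "letsencrypt.org"
        #   example.com.  IN CAA 0 iodef "mailto:security@example.com"
        dig +short CAA example.com   # check what's currently published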

    • blincoln 2 days ago

      Cert pinning is a very common practice for mobile apps. I'm not a fan of it, but it's how things are today. Seems likely that that will have to change with shorter cert lifetimes.

  • lucb1e 3 days ago

    > 47 [is?] arbitrary, but 1 month, + 1/2 month, + 1 day are not arbitrary values...

    Not related to certificates specifically, and the specific number of days is in no way a security risk, but it reminded me of NUMS generators. If you find this annoyingly arbitrary, you may also enjoy: <https://github.com/veorq/numsgen>. It implements this concept:

    > [let's say] one every billion values allows for a backdoor. Then, I may define my constant to be H(x) for some deterministic PRNG H and a seed value x. Then I proceed to enumerate "plausible" seed values x until I find one which implies a backdoorable constant. I can begin by trying out all Bible verses, excerpts of Shakespeare works, historical dates, names of people and places... because for all of them I can build a story which will make the seed value look innocuous

    From http://crypto.stackexchange.com/questions/16364/why-do-nothi...

  • jeroenhd 2 days ago

    > As I said in another thread, basically that will kill any possibility to do your own CA for your own subdomain

    Only if browsers enforce the TLS requirements for private CAs. Usually, browsers exempt user- or domain-controlled CAs from all kinds of requirements, like certificate transparency log requirements. I doubt things will be different this time.

    If they do decide to apply those limits, you can run an ACME server for your private CA and point certbot or whatever ACME client you prefer at it to renew your internal certificates. Caddy can do this for you with a couple of lines of config: https://caddyserver.com/docs/caddyfile/directives/acme_serve...

    Funnily enough, Caddy defaults to issuing 12-hour certificates for its local CA deployment.
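
    If memory serves, the Caddy version is roughly this (the hostname and the directory path are assumptions on my part):

        cat > Caddyfile <<'EOF'
        ca.internal.example.com {
                tls internal    # serve the ACME endpoint itself with Caddy's local CA
                acme_server     # internal ACME directory, roughly /acme/local/directory
        }
        EOF

        # then point any ACME client at it, e.g.
        certbot certonly --standalone \
          --server https://ca.internal.example.com/acme/local/directory \
          -d app.internal.example.com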

    > no certificate pinning anymore

    Why bother with public certificate authorities if you're hardcoding the certificate data in the client?

    > Instead of having closed systems, readonly, having to connect outside and update only once per year or more to update the certificates, you will have now all machines around the world that will have to allow quasi permanent connections to random certificate servers for the updating the system all the time.

    Those hosts needed a bastion host or proxy of sorts to connect to the outside yearly, so they can still do that today. But I don't see the advantage of using the public CA infrastructure in a closed system, might as well use the Microsoft domain controller settings you probably already use in your network to generate a corporate CA and issue your 10 year certificates if you're in control of the network.

  • yjftsjthsd-h 4 days ago

    If you're in a position to pin certs, aren't you in a position to ignore normal CAs and just keep doing that?

ghusto 4 days ago

I really wish encryption and identity weren't so tightly coupled in certificates. If I've issued a certificate, I _always_ care about encryption, but sometimes do not care about identity.

For those times when I only care about encryption, I'm forced to take on the extra burden that caring about identity brings.

Pet peeve.

  • tptacek 4 days ago

    There's minimal security in an unauthenticated encrypted connection, because an attacker can just MITM it.

    • SoftTalker 3 days ago

      Trust On First Use is the normal thing for these situations.

      • asmor 3 days ago

        TOFU equates to "might as well never ask" for most users. Just like Windows UAC prompts.

        • superkuh 3 days ago

          You're right most of the time. But there are two webs. And it's only in the latter (far more common) case that things like that matter.

          There is the web as it always has been on http/1.1: a hyperlinked set of html documents hosted on a mishmash of random commercial and personal servers. Then there is the modern http/2 and http/3, CA-TLS-only web, hosted as a service on some other website or cloud, mostly to do serious business and make money. The modern web's CA-TLS-only ID scheme is required due to the complexity and risk of automatic javascript execution in browsers.

          I wish we could have browsers that could support both use cases. But we can't because there's too much money and private information bouncing around now. Can't be whimsical, can't 'vibe code' the web ID system (ie, self signed not feasible in HTTP/3). It's all gotta be super serious. For everyone. And that means bringing in a lot of (well hidden by acme2 clients) complexity and overhead and centralization (everyone uses benevolent US based Lets Encrypt). This progressive lowering of the cert lifetimes is making the HTTP-only web even more fragile and hard to create lasting sites on. And that's sad.

          TOFU works for the old web just great. It's completely incompatible with the modern web because major browsers will only ever compile their HTTP/* libs with flags that prevent TOFU and self-signed. You could host a http/1.1 self-signed and TOFU but everyone (except geeks) would be scared away or incapable of loading it.

          So, TOFU works if you just want to do something like the "gemini" protocol, but instead of a new protocol you stick to original http and have a demographic of retro-enthusiasts and poor people. It's just about as accessible as gemini for most people (ie, not very) except for two differences. 1. Bots still love http/1.1 and don't care if it's plain text. 2. There's still a giant web of http/1.1 websites out there.

          • TheJoeMan 3 days ago

            Not to mention the use of web browsers for configuring non-internet devices! I mean things like managing a router from its LAN-side built-in web server: so many warnings you have to click through in Firefox nowadays. Hooking an iPhone to an IoT device, the iPhone hates that there's no "internet" and constantly tries to drop the WiFi.

    • steventhedev 4 days ago

      There is a security model where MITM is not viable - and separating that specific threat from that of passive eavesdropping is incredibly useful.

      • tptacek 4 days ago

        MITM scenarios are more common on the 2025 Internet than passive attacks are.

        • steventhedev 4 days ago

          MITM attacks are common, but noisy - BGP hijacks are literally public to the internet by their nature. I believe that insisting on coupling confidentiality to authenticity is counterproductive and prevents the development of more sophisticated security models and network design.

          • orev 4 days ago

            You don’t need to BGP hijack to perform a MITM attack. An HTTPS proxy can be easily and transparently installed at the Internet gateway. Many ISPs were doing this with HTTP to inject their own ads, and only the move to HTTPS put an end to it.

            • steventhedev 4 days ago

              Yes. MITM attacks do happen in reality. But by their nature they require active participation, which for practical purposes means leaving some sort of trail. More importantly, by decoupling confidentiality from authenticity, you can easily prevent eavesdropping attacks at scale.

              Which for some threat models is sufficiently good.

              • tptacek 4 days ago

                This thread is dignifying a debate that was decisively resolved over 15 years ago. MITM is a superset of the eavesdropper adversary and is the threat model TLS is designed to resist.

                It's worth pointing out that MITM is also the dominant practical threat on the Internet: you're far more likely to face a MITM attacker, even from a state-sponsored adversary, than you are a fiber tap. Obviously, TLS deals with both adversaries. But altering the security affordances of TLS to get a configuration of the protocol that only deals with the fiber tap is pretty silly.

                • pyuser583 3 days ago

                  As someone who had to set up monitoring software for my kids, I can tell you MITM are very real.

                  It’s how I know what my kids are up to.

                  It’s possible because I installed a trusted cert in their browsers, and added it to the listening program in their router.

                  Identity really is security.

                • steventhedev 3 days ago

                  TLS chose the threat model that includes MITM - there's no good reason that should ever change. All I'm arguing is that having a middle ground between http and https would prevent eavesdropping, and that investment elsewhere could have been used to mitigate the MITM attacks (to the benefit of all protocols, even those that don't offer confidentiality). Instead we got OpenSSL and the CA model with all its warts.

                  More importantly - this debate gets raised in every single HN post related to TLS or CAs. Answering with "my threat model is better than yours", or that my threat model is somehow incorrect, is even more silly than offering a configuration of TLS without authenticity. Maybe if we had invested more effort in 802.1X and IPsec then we would get those same guarantees that TLS offers, but for all traffic and for free everywhere, with no need for CA shenanigans or shortening lifetimes. Maybe in that alternative world we would be arguing about whether nonrepudiation is a valuable property or not.

                  • simiones 3 days ago

                    It is literally impossible to securely talk to a different party over an insecure channel unless you have a shared key beforehand or use a trusted third-party. And since the physical medium is always inherently insecure, you will always need to trust a third party like a CA to have secure communications over the internet. This is not a limitation of some protocol, it's a fundamental law of nature/mathematics (though maybe we could imagine some secure physical transport based on entanglement effects in some future world?).

                    So no, IPSec couldn't have fixed the MITM issue without requiring a CA or some equivalent.

                    • YetAnotherNick 3 days ago

                      The key could be shared in DNS records or could even literally be in the domain name like Tor. Although each approach has its pros and cons.

                      • tptacek 3 days ago

                        On this arm of the thread we're litigating whether authentication is needed at all, not all the different ways authentication can be provided. I'm sure there's another part of the thread somewhere else where people are litigating CAs vs Tor.

        • BobbyJo 4 days ago

          What does their commonality have to do with the use cases where they aren't viable?

    • jchw 4 days ago

      I mean, we do TOFU for SSH server keys* and nobody really seems to bat an eye at that. Today if you want "insecure but encrypted" on the web the main way to go is self-signed which is both more annoying and less secure than TOFU for the same kind of use case. Admittedly, this is a little less concerning of an issue thanks to ACME providers. (But still annoying, especially for local development and intranet.)

      *I mistakenly wrote "certificate" here initially. Sorry.

      • tptacek 4 days ago

        SSH TOFU is also deeply problematic, which is why cattle fleet operators tend to use certificates and not piecewise SSH keys.

        • jchw 4 days ago

          I've made some critical mistakes in my argument here. I am definitely not referring to using SSH TOFU in a fleet. I'm talking about using SSH TOFU with long-lived machines, like your own personal computers, or individual long-running servers.

          Undoubtedly it is not best practice to lean on TOFU for good reason, but there are simply some lower stakes situations where engaging the CA system is a bit overkill. These are systems with few nodes (maybe just one) that have few users (maybe just one.) I have some services that I deploy that really only warrant a single node as HA is not a concern and they can easily run off a single box (modern cheap VPSes really don't sweat handling ~10-100 RPS of traffic.) For those, I pre-generate SSH server keys before deployment. I can easily verify the fingerprint in the excessively rare occasion it isn't already trusted. I am not a security expert, but I think this is sufficient at small scales.
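
          Concretely, that pre-generation step is only a couple of commands (paths and the hostname are placeholders):

              # generate the host key before the machine ever goes online
              ssh-keygen -t ed25519 -N '' -f ./ssh_host_ed25519_key

              # note the fingerprint to compare on first connect...
              ssh-keygen -lf ./ssh_host_ed25519_key.pub

              # ...or skip the "first use" prompt entirely by pre-seeding known_hosts from the local copy
              echo "server.example.com $(cat ./ssh_host_ed25519_key.pub)" >> ~/.ssh/known_hosts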

          To be clear, there are a lot of obvious security problems with this:

          - It relies on me actually checking the fingerprint.

          - SSH keys are valid and trusted indefinitely, so it has to be rotated manually.

          - The bootstrap process inevitably involves the key being transmitted over the wire, which isn't as good as never having the key go over the wire, like you could do with CSRs.

          This is clearly not good enough for a service that needs high assurance against attackers, but I honestly think it's largely fine for a small to medium web server that serves some small community. Spinning up a CA setup for that feels like overkill.

          As for what I personally would do instead for a fleet of servers, personally I think I wouldn't use SSH at all. In professional environments it's been a long time since I've administered something that wasn't "cloud" and in most of those cloud environments SSH was simply not enabled or used, or if it was we were using an external authorization system that handled ephemeral keys itself.

          That said, here I'm just suggesting that I think there is a gap between insecure HTTP and secure HTTPS that is currently filled by self-signed certificates. I'm not suggesting we should replace HTTPS usage today with TOFU, but I am suggesting I see the value in a middle road between HTTP and HTTPS where you get encryption without a strong proof of what you're connecting to. In practice this is sometimes the best you can really get anyway: consider the somewhat common use case of a home router configuration page. I personally see the value in still encrypting this connection even if there is no way to actually ensure it is secure. Same for some other small scale local networking and intranet use cases.

          • tptacek 4 days ago

            I don't understand any of this. If you want TOFU for TLS, just use self-signed certificates. That makes sense for your own internal stuff. For good reason, the browser vendors aren't going to let you do it for public resources, but that doesn't matter for your use case.

            • jchw 4 days ago

              Self-signed certificates have a terrible UX and worse security; browsers won't remember the trusted certificate so you'd have to verify it each time if you wanted to verify it.

              In practice, this means that it's way easier to just use unencrypted HTTP, which is strictly worse in every way. I think that is suboptimal.

              • tptacek 4 days ago

                Just add the self-signed certificate. It's literally a TOFU system.

                • jchw 4 days ago

                  But again, you then get (much) worse UX than plaintext HTTP; it won't even remember the certificate. The thing that makes TOFU work is that you at least only have to verify the certificate once. If you use a self-signed certificate, you have to allow it every session.

                  A self-signed certificate has the benefit of being treated as a secure origin, but that's it. Sometimes you don't even care about that and just want the encryption. That's pretty much where this argument all comes from.

                  • tptacek 4 days ago

                    Yes, it will.

                    • jchw 3 days ago

                      I checked and you seem to be correct, at least for Firefox and Chromium. I tried using:

                      https://self-signed.badssl.com/

                      and when I clicked "Accept the risk and continue", the certificate was added to Certificate Manager. I closed the browser, re-opened it, and it did not prompt again.

                      I did the same thing in Chromium and it also worked, though I'm not sure if Chromium's are permanent or if they have a lifespan of any kind.

                      I am absolutely 100% certain that it did not always work that way. I remember a time when Firefox had an option to permanently add an exception, but it was not the default.

                      Either way, apologies for the misunderstanding. I genuinely did not realize that it worked this way, and it runs contrary to my previous experience dealing with self-signed certificates.

                      To be honest, this mostly resolves the issues I've had with self-signed certificates for use cases where getting a valid certificate might be a pain. (I have instead been using ACME with DNS challenge for some cases, but I don't like broadcasting all of my internal domains to the CT log nor do I really want to manage a CA. In some cases it might be nice to not have a valid internet domain at all. So, this might just be a better alternative in some cases...)

                      • tptacek 3 days ago

                        Every pentester that has ever used Burp (or, for the newcomers, mitmproxy) has solved this problem for themselves. My feeling is that this is not a new thing.

                • PhilipRoman 2 days ago

                  Not a TLS expert, but last time I checked, the support for limiting which domains a certificate is allowed to sign (name constraints) was questionable. I wouldn't want my router to be able to MITM any HTTPS connection just to be able to connect to its web interface securely.

      • arccy 4 days ago

        SSH server certificates should not be TOFU; the point of SSH certs is that you can trust the signing key.

        TOFU on SSH server keys... it's still bad, but fewer people are interested in intercepting SSH than TLS.

        • tptacek 4 days ago

          Intercepting and exploiting first-contact SSH sessions is a security conference sport. People definitely do it.

        • jchw 4 days ago

          I just typed the wrong thing, full stop. I meant to say server keys; fixed now.

          Also, I agree that TOFU on its own is certainly worse than having robust verification via the CA system. OTOH, SSH-style TOFU has some advantages over the CA system, too, at least without additional measures like HSTS and certificate pinning. If you are administering machines that you yourself set up, there is little reason to bother with anything more than TOFU because you'll cache the key shortly after the machine is set up and then get warned if a MITM is attempted. That, IMO, is the exact sort of argument in favor of having an "insecure but encrypted" sort of option for the web: small-scale cases where you can just verify the key manually if you need to.

      • pabs3 4 days ago

        You don't have to TOFU SSH server keys: there is a DNSSEC option (SSHFP records), or you can transfer the keys via a secure path, or you can sign the keys with a CA.

      • gruez 4 days ago

        >I mean, we do TOFU for SSH server certificates and nobody really seems to bat an eye at that.

        Mostly because SSH isn't something most people (e.g. your aunt) use, and unlike with HTTPS certificates, you're not connecting to a bunch of random servers on a regular basis.

        • jchw 4 days ago

          I'm not arguing for replacing existing uses of HTTPS here, just cases where you would today use self-signed certificates or plaintext.

      • hedora 4 days ago

        TOFU is not less secure than using a certificate authority.

        Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.

        • tptacek 4 days ago

          TOFU is less secure than using a trust anchor.

          • hedora 4 days ago

            That’s only true if you operate the trust anchor (possible) and it’s not an attack vector (impossible).

            For example, TOFU where “first use” is a loopback ethernet cable between the two machines is stronger than a trust anchor.

            Alternatively, you could manually verify + pin certs after first use.

            • tptacek 4 days ago

              There are a couple of these concepts --- TOFU (key continuity) is one, PAKEs are another, pinning a third --- that sort of float around and captivate people because they seem easy to reason about, but are (with the exception of Magic Wormhole) not all that useful in the real world. It'd be interesting to flesh out the complete list of them.

              The thing to think about in comparing SSH to TLS is how frequent counterparty introductions are. New counterparties in SSH are relatively rare. Key continuity still needlessly exposes you to a grave attack in SSH, but really all cryptographic protocol attacks are rare compared to the simpler, more effective stuff like phishing, so it doesn't matter. New counterparties in TLS happen all the time; continuity doesn't make any sense there.

              • hedora 3 days ago

                There are ~ 200 entries in my password manager. Maybe 25 are important. Pinning their certs would meaningfully reduce the transport layer attack surface for those accounts.

                • tptacek 3 days ago

                  Yes, these ideas bubble around because they all seem reasonable on their face. I was a major fan of pinning!

    • panki27 4 days ago

      How is an attacker going to MITM an encrypted connection they don't have the keys for, without having rogue DNS or something similar, i.e. faking the actual target?

      • Ajedi32 4 days ago

        It's an unauthenticated encrypted connection, so there's no way for you to know whose keys you're using. The attacker can just tell you "Hi, I'm the server you're looking for. Here's my key." and your client will establish a nice secure, encrypted connection to the malicious attacker's computer. ;)

        • notTooFarGone 2 days ago

          There are enough examples where this is just a bogus scenario. There are a lot of IoT cases that fall apart anyway when the attacker is able to do a MITM attack.

          For example, if the MITM requires physical access to the machine, you'd have to cover physical security first anyway. As long as that holds, who cares about some connection hijack? And if the data you are actually communicating isn't worth the encryption, but has to be encrypted because of regulation, you are just doing the dance without it being worth it.

      • oconnor663 4 days ago

        They MITM the key exchange step at the beginning, and now they do have the keys. The thing that prevents this in TLS is the chain of signatures asserting identity.

        • 2mlWQbCK 3 days ago

          You can have TLS with TOFU, like in the Gemini protocol. At least then, in theory, the MITM has to happen the first time you connect to a site. There is also the possibility of out-of-band confirmation of some certificate's fingerprint if you want to be really sure that some Gemini server is the one you hope it is.

        • panki27 4 days ago

          You can not MITM a key that is being exchanged through Diffie-Hellman, or have I missed something big?

          • Ajedi32 4 days ago

            Yes, Mallory just pretends to be Alice to Bob and pretends to be Bob to Alice, and they both establish an encrypted connection to Mallory using Diffie-Hellman keys derived from his secrets instead of each other's. Mallory has keys for both of their separate connections at this point and can do whatever he wants. That's why TLS only uses Diffie-Hellman for perfect forward secrecy after Alice has already authenticated Bob. Even if the authentication key gets cracked later Mallory can't reach back into the past and MITM the connection retroactively, so the DH-derived session key remains protected.
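
            A rough sketch of that interception with raw, unauthenticated X25519 (assuming the pyca/cryptography package; the names are just for illustration):

              from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

              # Alice and Bob each generate ephemeral DH keys...
              alice = X25519PrivateKey.generate()
              bob = X25519PrivateKey.generate()

              # ...but Mallory sits in the middle and substitutes his own.
              mallory_a = X25519PrivateKey.generate()  # used toward Alice
              mallory_b = X25519PrivateKey.generate()  # used toward Bob

              # Alice thinks she agreed on a key with Bob; it's Mallory's.
              alice_shared = alice.exchange(mallory_a.public_key())
              # Bob thinks he agreed on a key with Alice; also Mallory's.
              bob_shared = bob.exchange(mallory_b.public_key())

              # Mallory holds both session secrets and can decrypt/re-encrypt.
              assert mallory_a.exchange(alice.public_key()) == alice_shared
              assert mallory_b.exchange(bob.public_key()) == bob_shared

            Nothing in the exchange tells Alice or Bob that the public keys they received actually came from each other; only an identity layer (certificates, or a pre-shared/pinned key) closes that gap.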

          • oconnor663 3 days ago

            If we know each other's DH public key in advance, then you're totally right, DH is secure over an untrusted network. But if we don't know each other's public keys, we have to get them over that same network, and DH can't protect us if the network lies about our public keys. Solving this requires some notion of "identity", i.e. some way to verify that when I say "my public key is abc123" it's actually me who's saying that. That's why it's hard to have privacy without identity.

      • simiones 4 days ago

        Connections never start out encrypted; they always start as plain text. There are multiple ways of impersonating an IP even if you don't control DNS, especially if you are on the same local network.

        • Gigachad 3 days ago

          Double especially if it's the ISP or government involved. They can just automatically MITM and re-encrypt every connection if there are no identity checks.

        • gruez 4 days ago

          >Connections never start as encrypted, they always start as plain text

          Not "never", because of HSTS preload, and browsers slowly adding scary warnings to plaintext connections.

          https://preview.redd.it/1l4h9e72vp981.jpg?width=640&crop=sma...

          • simiones 4 days ago

            TCP SYN is not encrypted, and neither is Client Hello. Even with TCP cookies and TLS session resumption, the initial packet is still unencrypted, and can be intercepted.

            • haiku2077 4 days ago
              • simiones 3 days ago

                Oh, right, thanks for the correction!

                However, ECH relies on a trusted 3rd party to provide the key of the server you are intending to talk to. So, it won't work if you have no way of authenticating the server beforehand the way GP was thinking about.

              • EE84M3i 4 days ago

                Yes but this still depends on identity. It's not unauthenticated.

          • Ajedi32 4 days ago

            GP means unencrypted at the wire level. ClientHelloOuter is still unencrypted even with HSTS.

        • jiveturkey 3 days ago

          Chrome started doing HTTPS-first in April 2021 (v90).

          Safari did some half measures starting in Safari 15 (don't know the year) and now fully defaults to HTTPS-first.

          Firefox 136 (2025) now does HTTPS-first as well.

          • simiones 3 days ago

            That is irrelevant. All TCP connections start as a TCP SYN, that can be trivially intercepted and MITMd by anyone. So, if you don't have an out-of-band reason to trust the server certificate (such as trust in the CA that PKI defines, or knowing the signature of the server certificate), you can never be sure your TLS session is secure, regardless of the level of encryption you're using.

            • gruturo 3 days ago

              After the TCP handshake, the very first payload will be the HTTPS negotiation - and even if you don't use encrypted client hello / encrypted SNI, you still can't spoof it because the certificate chain of trust will not be intact - unless you somehow control the CAs trusted by the browser.

              With an intact trust chain, there is NO scenario where a 3rd party can see or modify what the client requests and receives beyond seeing the hostname being requested (and not even that if using ECH/ESNI)

              Your "if you don't have an out-of-band reason to trust the server cert" is a fitting description of the global PKI infrastructure, can you explain why you see that as a problem? Apart from the fact that our OSes and browser ship out of the box with a scary long list of trusted CAs, some from fairly dodgy places?

              let's not forget that BEFORE that TCP handshake there's probably a DNS lookup where the FQDN of the request is leaked, if you don't have DoH.

            • jiveturkey 2 days ago

              well yes! that is the entire point / methodology of TLS. Because you have a trust anchor, you can be sure that at the app layer the connection is "secure".

              of course the L3/L4 can be (non) trivially intercepted by anyone, but that is exactly what TLS protects you against.

              if simple L4 interception were all that is required, enterprises wouldn't have to install a trust root on end devices, in order to MITM all TLS connections.

              the comment you were replying to is

              > How is an attacker going to MITM an encrypted connection they don't have the keys for

              of course they can intercept the connection, but they can't MITM it in the sense that MITM means -- read the communications. the kind of "MITM" / interception that you are talking about is simply what routers do anyway!

    • IshKebab 3 days ago

      I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).

      On the other hand providing the option may give a false sense of security. I think the main reason SSH isn't MitM'd all over the place is it's a pretty niche service and very often you do have a separate authentication method by sending your public key over HTTPS.

      • saurik 3 days ago

        When I use a service over TLS on a network I don't trust, the premise is that I only will trust the connection if it has a certificate from a handful of companies trusted by the people who wrote the software I'm using (my browser/client and/or my operating system) to only issue said certificates to people who are supposed to have them (which these days is increasingly defined to be "who are in control of the DNS for the domain name at a global level", for better or worse, not that everyone wants to admit that).

        But like, no: the free Wi-Fi I'm using can't, in fact, MITM the encryption used by my connection... it CAN do a bunch of other shitty things to me that undermine not only my privacy but even undermine many of the things people expect to be covered by privacy (using traffic analysis on the size, timing, or destination of the packets that I'm sending), but the encryption itself isn't subject to the failure mode of SSH.

        • ongy 2 days ago

          The encryption itself may not be.

          Establishing the initial exchange of crypto key material can be.

          That's where certificates are important because they add identity and prevent spoofing.

          With TOFU, if the first use is on an insecure network, this exchange is jeopardized. And in this case, the encryption is not with the intended partner and thus does not need to be attacked.

      • woodruffw 3 days ago

        > I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).

        Hm? The reason I do use those services over a network I don't trust is because they're wrapped in authenticated, encrypted channels. The authenticated encryption happens at a layer above the network because I don't trust the network.

      • tikkabhuna 3 days ago

        But isn't that exactly the previous poster's point? On free WiFi someone can just MITM your connection; you would never know, and you'd think it's encrypted. It's the worst possible outcome. At least when there's no encryption, browsers can tell the user to be careful.

        • IshKebab 3 days ago

          They could still tell the user to be careful without authentication.

          He wasn't proposing that encryption without authentication gets the full padlock and green text treatment.

  • Ajedi32 4 days ago

    In what situation would you want to encrypt something but not care about the identity of the entity with the key to decrypt it? That seems like a very niche use case to me.

    • xyzzy123 4 days ago

      Because TLS doesn't promise you very much about the entity which holds the key. All you really know is that they control some DNS records.

      You might be visiting myfavouriteshoes.com (a boutique shoe site you have been visiting for years), but you won't necessarily know if the regular owner is away or even if the business has been sold.

      • Ajedi32 4 days ago

        It tells you the entity which holds the key is the actual owner of myfavouriteshoes.com, and not just a random guy operating the free Wi-Fi hotspot at the coffee shop you're visiting. If you don't care about that then why even bother with encryption in the first place?

        • xyzzy123 4 days ago

          True.

          OK I will fess up. The truth is that I don't spend a lot of time in coffee shops but I do have a ton of crap on my LAN that demands high amounts of fiddle faddle so that the other regular people in my house can access stuff without dire certificate warnings, the severity of which seems to escalate every year.

          Like, yes, I eat vegetables and brush my teeth and I understand why browsers do the things they do. It's just that neither I nor my users care in this particular case; our threat model does not really include the Mossad doing Mossad things to our movie server.

          • yjftsjthsd-h 4 days ago

            If you really don't care, sometimes you can just go plaintext HTTP. I do this for some internal things that are accessed over VPN links. Of course, that only works if you're not doing anything that browsers require HTTPS for.

            Alternatively, I would suggest Let's Encrypt with DNS verification. A little bit of setup work, but low ongoing maintenance and zero effort on clients.

            • smw 3 days ago

              Or just run tailscale and let it take care of the certs for you. I hate to sound like a shill, but damn does it make it easier.

          • akerl_ 3 days ago

            It seems like you have two pretty viable options:

            1. Wire up LetsEncrypt certs for things running on your LAN, and all the "dire certificate warnings" go away.

            2. Run a local ACME service, wire up ACME clients to point to that, make your private CA valid for 100 years, trust your private CA on the devices of the Regular People in your house.

            I did this dance a while back, and things like acme.sh have plugins for everything from my Unifi gear to my network printer. If you're running a bunch of servers on your LAN, the added effort of having certs is tiny by comparison.

      • arccy 4 days ago

        at least it's not evil-government-proxy.com that decided to mitm you and look at your favorite shoes.

        • xyzzy123 4 days ago

          Indeed and the system is practically foolproof because the government cannot take over DNS records, influence CAs, compromise cloud infrastructure / hosting, or rubber hose the counter-party to your communications.

          Yes I am being snarky - network level MITM resistance is wonderful infrastructure and CT is great too.

    • pizzafeelsright 3 days ago

      Seems logical.

      If we encrypt everything we don't need AuthN/Z.

      Encrypt locally to the target PK. Post a link to the data.

      • lucb1e 3 days ago

        What? I work in this field and I have no idea what you mean. (I get the abbreviations like authz and pk, but not how "encrypting everything" and "posting links" is supposed to remove the need for authentication)

  • mannyv 3 days ago

    All our door locks suck, but everyone has a door lock.

    The goal isn't to make everything impossible to break. The goal is to provide Just Enough security to make things more difficult. Legally speaking, sniffing and decrypting encrypted data is a crime, but sniffing and stealing unencrypted data is not.

    That's an important practical distinction that's overlooked by security bozos.

  • charcircuit 4 days ago

    Having them always coupled disincentivizes bad ISPs from MITMing the connection.

  • silverwind 3 days ago

    I agree; there needs to be a TLS without certificates. Pre-shared secrets would be much more convenient in many scenarios.

    • ryao 3 days ago

      How about TLS without CAs? See DANE. If only web browsers would support it.

      • pornel 3 days ago

        DANE is TLS with too-big-to-fail CAs that are tied to the top-level domains they own and can't be replaced.

        Separation between CAs and domains allows browsers to get rid of incompetent and malicious CAs with minimal user impact.

        • ryao 2 days ago

          DANE lets the domain owner manage the certificates issued for the domain.

          • pornel 15 hours ago

            This delegation doesn't play the same role as CAs in WebPKI.

            Without DNSSEC's guarantees, the DANE TLSA records would be as insecure as self-signed certificates in WebPKI are.

            It's not enough to have some certificate from some CA involved. It has to be a part of an unbroken chain of trust anchored to something that the client can verify. So you're dependent on the DNSSEC infrastructure and its authorities for security, and you can't ignore or replace that part in the DANE model.

  • panki27 4 days ago

    Isn't this exactly the reason why Let's Encrypt was brought to life?

  • grishka 4 days ago

    I want a middle ground. Identity verification is useful for TLS, but I really wish there was no reliance on ultimately trusted third parties for that. Maybe put some sort of identity proof into DNS instead, since the whole thing relies on DNS anyway.

    • immibis 3 days ago

      Makes it trivial for your DNS provider to MITM you, and you can't even use certificate transparency to detect it.

      • grishka 2 days ago

        You can use multiple DNS providers at once to catch that situation. You can have some sort of signing scheme where each authoritative server would sign something in turn to establish a chain of trust up to the root servers. You can use encrypted DNS, even if it is relying on traditional TLS certificates, but it can also use something different for identity verification, like having you use a config file with the public key embedded in it, or a QR code, instead of just an address.

  • Vegenoid 3 days ago

    Isn't identity the entire point of certificates? Why use certificates if you only care about encryption?

  • ryao 3 days ago

    If web browsers supported DANE, we would not need CAs for encryption.

    • Avamander 3 days ago

      DNSSEC is just a shittier PKI with CAs that are too big to ever fail.

      • immibis 3 days ago

        It is, but since we rely on DNS anyway, no matter what, and your DNS provider can get a certificate from Let's Encrypt for your site, without asking you, there's merit to combining them. It doesn't add any security to have PKI separate from DNS.

        However, we could use some form of Certificate Transparency that would somehow work with DANE.

        Also it still protects you from everyone who isn't your DNS provider, so it's valuable if you only need a medium level of security.

        • Avamander a day ago

          > It is, but since we rely on DNS anyway, no matter what, and your DNS provider can get a certificate from Let's Encrypt for your site, without asking you, there's merit to combining them.

          They can, but they'll also get caught thanks to CT. No such audit infrastructure exists for DANE/DNSSEC.

          > It doesn't add any security to have PKI separate from DNS.

          One can also get a certificate for an IP address.

        • ryao 2 days ago

          There is no need for a certificate from Let's Encrypt. DANE lets you put your own self-signed certificate into DNS, and it should be trusted because DNS is authoritative, although DNSSEC should be required to make it secure.

          • tptacek 2 days ago

            And yet no browser trusts it, and a single-digit percentage of popular zones (from the Tranco list) have signatures; this despite decades of deployment effort. Meanwhile, over 60% of all sites on the Internet have ISRG certificates.

captn3m0 4 days ago

This is great news. This would blow a hole in two interesting places where leaf-level certificate pinning is relied upon:

1. mobile apps.

2. enterprise APIs. I dealt with lots of companies that would pin the certs without informing us, and then complain when we'd rotate the cert. A 47-day window would force them to rotate their pins automatically, making the security theater even worse. Or, hopefully, they'll switch to CAA instead.

  • bearjaws 3 days ago

    Giving me PTSD for working in healthcare.

    Health Systems love pinning certs, and since we use an ALB with 90-day certs, they were always furious.

    Every time I was like "we can't change it" and "you do trust the CA, right?" Absolute security theatre.

  • DiggyJohnson 4 days ago

    Do you (or anyone) recommend any text based resources laying out the state of enterprise TLS management in 2025?

    It’s become a big part of my work and I’ve always just had a surface knowledge to get me by. Assume I work in a very large finance or defense firm.

  • grishka 4 days ago

    Isn't it usually the server's public key that's pinned? The key pair isn't regenerated when you renew the certificate.
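
    A pin in that sense is usually a hash of the SubjectPublicKeyInfo (SPKI), which survives renewals as long as the key pair is reused. A rough sketch of computing one (assuming the pyca/cryptography package; the host is just an example):

      import ssl, hashlib, base64
      from cryptography import x509
      from cryptography.hazmat.primitives import serialization

      def spki_sha256_pin(host: str, port: int = 443) -> str:
          # Fetch the leaf certificate the server presents (no validation here).
          pem = ssl.get_server_certificate((host, port))
          cert = x509.load_pem_x509_certificate(pem.encode())
          # Hash the DER-encoded SubjectPublicKeyInfo, HPKP/OkHttp style.
          spki = cert.public_key().public_bytes(
              serialization.Encoding.DER,
              serialization.PublicFormat.SubjectPublicKeyInfo,
          )
          return base64.b64encode(hashlib.sha256(spki).digest()).decode()

      # The pin stays stable across renewals that reuse the same key pair.
      print(spki_sha256_pin("example.com"))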

    • toast0 4 days ago

      Typical guidance is to pin the CA or intermediate, because in case of a key compromise, you're going to need to generate a new key.

      You should really generate a new key for each certificate, in case the old key is compromised and you don't know about it.

      What would really be nice, but is unlikely to happen, would be if you could get a constrained CA certificate issued for your domain and pin that, then issue your own short-term certificates from it. But if those were widespread, they'd need to be short-dated too, so you'd need to either pin the real CA or the public key, and we're back to where we were.

      • nickf 3 days ago

        I've said it up-thread, but never ever never never pin to anything public. Don't do it. It's bad. You, and even the CA have no control over the certificates and cannot rely on them remaining in any way constant. Don't do it. If you must pin, pin to private CAs you control. Otherwise, don't do it. Seriously. Don't.

        • toast0 3 days ago

          There's not really a better option if you need your urls to work with public browsers and also an app you control. You can't use a private CA for those urls, because the public browsers won't accept it; you need to include a public CA in your app so you don't have to rely on the user's device having a reasonable trust store. Including all the CAs you're never going to use is silly, so picking a few makes sense.

          • richardwhiuk 3 days ago

            You don't need both of those things. Give your app a different url.

        • ori_b 3 days ago

          Why should I trust a CA that has no control over the certificate chains?

          • nickf 3 days ago

            Because they operate in a regulated security industry where changes happen - sometimes beyond their control?

        • einsteinx2 3 days ago

          Repeating it doesn’t make it any more true. Cert providers publish their root certs, you pin those root certs, zero problems.

          • nickf 3 days ago

            Then the CA goes away, like Entrust. Huge problems. I speak (sadly) from experience.

          • Plasmoid 2 days ago

            They rotate those often enough.

  • 1a527dd5 3 days ago

    Dealing with enterprise is going to be fun. We work with a lot of car companies around the world, and a good chunk of them love to whitelist by thumbprint. That is going to be fun for them.

philsnow 3 days ago

> As a certificate authority, one of the most common questions we hear from customers is whether they’ll be charged more to replace certificates more frequently. The answer is no. Cost is based on an annual subscription […]

(emphasis added)

Pump the brakes there, digicert. Price is based on an annual subscription. CA costs will actually go up an infinitesimal amount, but they’re already nearly zero to begin with. Running a CA has got to be one of the easiest rackets in the world.

  • jwnin 2 days ago

    Costs to buy certs will not materially change. Costs to manage certs will increase.

bityard 3 days ago

I see that there is a timeline for progressive shortening, so if anyone has any "inside baseball" on this, I'm very curious to know:

Given that the overarching rationale here is security, what made them stop at 47 days? If the concern is _actually_ security, allowing a compromised cert to exist for a month and a half is I guess better than 398 days, but why is 47 days "enough"?

When will we see proposals for max cert lifetimes of 1 week? Or 1 day? Or 1 hour? What is the lower limit of the actual lifespan of a cert and why aren't we at that already? What will it take to get there?

Why are we investing time and money in hatching schemes to continually ratchet the lifespan of certs back one more step instead of addressing the root problems, whatever those are?

  • dadrian 2 days ago

    The root problem is certificate lifetimes are too long relative to the speed at which domains change, and the speed at which the PKI needs to change.

peanut-walrus 3 days ago

So the assumption here is that somehow your private key is easier to compromise than whatever secret/mechanism you use to provision certs?

Yeah not sure about that one...

ori_b 3 days ago

Can someone point me to specific exploits that this key rotation schedule would have stopped?

It seems to me like compromised keys are rare. It also seems like 47 days is low enough to be inconvenient, but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.

  • Avamander 3 days ago

    > Can someone point me to specific exploits that this key rotation schedule would have stopped?

    It's not only key mismanagement that is being mitigated. You also have to prove more frequently that you have control of the domain or IP in the certificate.

    In essence it brings a working method of revocation to WebPKI.

    > but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.

    Compared to a year?

    • ori_b 3 days ago

      > You also have to prove more frequently that you have control of the domain or IP in the certificate.

      That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.

      On the other hand, anyone that owns the domain can get a perfectly valid cert any time, no need to exploit anything. And given that nobody actually looks at the details of the cert owner in practice, that means that if you lose the domain, the new owner is treated as legit. No compromises needed.

      The only way to prevent that is to pin the cert, which this short rotation schedule makes harder, or pin the public key and be very careful to not regenerate your keys when you submit a new CSR.

      In short: Don't lose your domain.

      > Compared to a year?

      Typically these kinds of things have an exponential dropoff, so most of the exploitation would happen soon after the compromise. I don't think that shortening to this long a period, rather than (say) 24h, would make a material difference.

      But, again, I'm also not sure how many people were compromised via anything that this kind of rotation would prevent. It seems like most exploits depend on someone either losing control over the domain (again, don't do that; the current issuance model doesn't handle that), or just being phished via a valid cert on an unrelated domain.

      Do you have concrete examples of anyone being exploited via key mismanagement (or not proving often enough that they have control over a domain)?

      • Avamander a day ago

        > That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.

        It does, if someone gets temporary access, issues a certificate and then keeps using it to impersonate something. Now the malicious actor has to do it much more often, significantly increasing chances of detection.

      • kbolino 2 days ago

        I just downloaded one of DigiCert's CRLs and it was half a megabyte. There are probably thousands of revoked certificates in there. If you're not checking CRLs, and a lot of non-browser clients (think programming languages, software libraries, command-line tools, etc.) aren't, then you would trust one of those certificates if it was presented to you. With certificate lifetimes of 47 days instead of a year, 87% of those revoked certificates become unusable regardless of CRL checking.
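
        For what it's worth, a CRL check isn't much code for those clients; a rough sketch with the pyca/cryptography package (the URL is a placeholder, and real code would take it from the certificate's CRL Distribution Points extension and also verify the CRL's signature and freshness):

          import urllib.request
          from cryptography import x509

          # Placeholder URL; real code would read it from the certificate's
          # CRL Distribution Points extension.
          CRL_URL = "http://crl.example-ca.com/example.crl"

          def is_revoked(serial_number: int) -> bool:
              der = urllib.request.urlopen(CRL_URL).read()
              crl = x509.load_der_x509_crl(der)
              return crl.get_revoked_certificate_by_serial_number(serial_number) is not None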

        • ori_b 2 days ago

          Why is leaving almost 15% of bad certificates in play ok? If it was shortened to 48 hours, then it would make 99.5% of them unusable, and I suspect the real world impact would still be approximately zero.

          • kbolino 2 days ago

            It does not have to be perfect to be better. It's not great that 13% of revoked certificates would still be there (and get trusted by CRL-ignoring clients), but significantly smaller CRL files may get us closer to more widespread CRL checking. The shorter lifetime also reduces the window of time during which that same 87% of revoked certificates can be exploited. While I'd wager most certificates that get revoked are revoked for minor administrative mistakes and so are unlikely to be used in attacks, some revocations are still exploitable, and it's nearly impossible to measure the actual occurrence of such things at Internet scale without concerted effort.

            This reminds me a bit of trying to get TLS 1.2 support in browsers before the revelation that the older versions (especially SSL3) were in fact being exploited all the time directly and via downgrading. Since practically nobody complained (out of ignorance) and, at the time, browsers didn't collect metrics and phone home with them (it was a simpler time), there was no evidence of a problem. Until there was massive evidence of a problem because some people bothered to look into and report it. Journalism-driven development shouldn't be the primary way to handle computer security.

  • crote 3 days ago

    The 47 days are (mostly) irrelevant when it comes to compromised keys. The certificate will be revoked by the CA at most 24 hours after compromise becomes known, so a shorter cert isn't really "more secure" than a longer one.

    At least, that's what the rules say. In practice CAs have a really hard time saying no to a multi-week extension because a too-big-to-fail company running "critical infrastructure" isn't capable of rotating their certs.

    Short cert duration forces companies to automate cert renewal, and with automation it becomes trivial to rotate certs in an acceptable time frame.

throwaway96751 4 days ago

Off-topic: What is a good learning resource about TLS?

I've read the basics on Cloudflare's blog and MDN. But at my job, I encountered a need to upload a Let's Encrypt public cert to the client's trusted store. Then I had to choose between Let's Encrypt's root and intermediate certs, and between the key types RSA and ECDSA. I made it work, but it would be good to have an idea of what I'm doing. For example, why the RSA root key worked even though my server uses an ECDSA cert. Before I added the root cert to a trusted store, clients used to add fullchain.pem from the server and it worked too — why?

  • ivanr 3 days ago

    I have a bunch of useful resources, most of which are free:

    - If you're looking for a concise (yet complete) guide: https://www.feistyduck.com/library/bulletproof-tls-guide/

    - OpenSSL Cookbook is a free ebook: https://www.feistyduck.com/library/openssl-cookbook/

    - SSL/TLS and PKI history: https://www.feistyduck.com/ssl-tls-and-pki-history/

    - Newsletter: https://www.feistyduck.com/newsletter/

    - If you're looking for something comprehensive and longer, try my book Bulletproof TLS and PKI: https://www.feistyduck.com/books/bulletproof-tls-and-pki/

  • dextercd 4 days ago

    I learned a lot from TLS Mastery by Michael W. Lucas.

  • physicles 3 days ago

    Use ECDSA if you can, since it reduces the size of the handshake on the wire (keys are smaller). Don’t bake in intermediate certs unless you have a very good reason.

    No idea why the RSA key worked even though the server used ECDSA — maybe check into the recent cross-signing shenanigans that Let's Encrypt had to pull to extend support for very old Android versions.

    • throwaway96751 2 days ago

      I've been reading a little since then, and I think it worked with the RSA root cert because that cert was the trust anchor of the chain of trust for my server's ECDSA certificate.

  • pizzafeelsright 3 days ago

    Curious why you wouldn't have a Q and A with AI?

    If the information is relatively unchanged and the details well documented why not ask questions to fill in the gaps?

    The Socratic method has been the best learning tool for me and I'm doubling my understanding with the LLMs.

    • throwaway96751 2 days ago

      I think this method works best when you can verify the answer. So it has to be either a specific type of question (a request to generate code, which you can then run and test), or you have to know enough about the subject to be able to spot mistakes.

_bin_ 4 days ago

Is there an actual issue with widespread cert theft? That seems like the primary valid reason to do this, not forcing automation.

  • cryptonym 4 days ago

    Let's Encrypt dropped support for OCSP. CRLs don't scale well. Short-lived certificates are probably a way to avoid certificate revocation quirks.

    • Ajedi32 4 days ago

      It's a real shame. OCSP with Must-Staple seemed like the perfect solution to this; it just never got widespread support.

      I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.

      • NoahZuniga 3 days ago

        Also, Certificate Transparency is moving to a new standard (Sunlight CT) that has immediate merges. Google requires the maximum merge delay to be 1 minute or less, but they've said on Google Groups that they expect merges to be way faster.

      • lokar 3 days ago

        The log is not really for real time use. It’s to catch CA non-compliance.

  • dboreham 4 days ago

    I think it's more about revocation not working in practice. So the only solution is a short TTL.

  • trothamel 4 days ago

    I suspect it's to limit how long a malicious or compromised CA can impact security.

    • hedora 4 days ago

      Equivalently, it also maximizes the number of sites impacted when a CA is compromised.

      It also lowers the amount of time it'd take for a top-down change to compromise all outstanding certificates. (Which would seem paranoid if this wasn't 2025.)

      • lokar 3 days ago

        Mostly this. Today, if a big CA is caught breaking the rules, actually enforcing repairs (e.g. prompt revocation) is a hard pill to swallow.

    • rat9988 4 days ago

      I think OP is asking whether there have been many real scenarios in practice that pushed for this change.

  • chromanoid 4 days ago

    I guess the main reason behind this move is platform capitalism. It's an easy way to cut off grassroots internet.

    • gjsman-1000 4 days ago

      If that were true, we would not have Let's Encrypt and tools which can give us certificates in 30 seconds flat once we prove ownership.

      The real reason was Snowden. The jump in HTTPS adoption after the Snowden leaks was a virtual explosion; and set HTTPS as the standard for all new services. From there, it was just the rollout. (https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...)

      (Edit because I'm posting too fast, for the reply):

      > How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?

      Everyone is reliant on a 3rd party for the internet. It's called your ISP. They also take complaints and will shut you down if they don't like what you're doing. If you are using an online VPS, you have a second 3rd party, which also takes complaints, can see everything you do, and will also shut you down if they don't like what you're doing; and they have to, because they have an ISP to keep happy themselves. Networks integrating with 3rd party networks is literally the definition of the internet.

      • nottorp 4 days ago

        How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?

        Let's Encrypt... Cloudflare... useful services right? Or just another barrier to entry because you need to set up and maintain them?

        • icedchai 4 days ago

          You are always dependent on a 3rd party to some extent: DNS registration, upstream ISP(s), cloud / hosting providers, etc.

          • nottorp 4 days ago

            And now your list has 2 more items in it …

            • icedchai 2 days ago

              Does it? I need to get a cert from somewhere, whether that's Let's Encrypt for free, or some other company that charges $300/year for effectively the same thing.

      • chromanoid 4 days ago

        I dunno. Self-hosting w/o automation was feasible. Now you have to automate. It will lead to a huge amount of link rot or at least something very similar. There will be solutions but setting up a page e2e gets more and more complicated. In the end you want a service provider who takes care of it. Maybe not the worst thing, but what kind of security issues are we talking about? There is still certificate revocation...

        • icedchai 4 days ago

          Have you tried Caddy? Each TLS-protected site winds up being literally a couple of lines in a config file. Renewals are automatic. Unless you have a network / DNS problem, it is set-and-forget. It is far simpler than dealing with manual cert renewals, downloading the certificates, and restarting your web server (or forgetting to...)

          • chromanoid 4 days ago

            Yes, but only for internal stuff. I prefer Traefik at the moment. But my point is more about how people use Wix over free webspace and so on. While I don't agree with many of Jonathan Blow's arguments, news like this makes me think of his talk "Preventing the collapse of civilization" https://m.youtube.com/watch?v=ZSRHeXYDLko

            • ikiris 3 days ago

              Traefik without certmanager is just as self-inflicted a wound. It's literally designed to handle this for you.

              • chromanoid 2 days ago

                I have to use an internal cert out of my control anyway. For personal projects I switched to web hosts after some bad experiences. But I vividly remember setting up my VPS as a teen. While I understand the reasoning, it's always sad to see those simpler times go away. And sometimes I don't see the reasoning behind it, and suspect it's because some C-suites don't see big harm, since it ought to make things safer and those people that are left in the dust don't count anyway...

    • bshacklett 4 days ago

      How does this cut off the grassroots internet?

      • chromanoid 4 days ago

        It makes end-to-end responsibility more cumbersome. There were days when people just stored MS FrontPage output on their home server.

        • icedchai 4 days ago

          Many folks switched to Let's Encrypt ages ago. Certificates are way easier to acquire now than they were in "FrontPage" days. I remember paying hundreds of dollars and sending a fax for "verification."

          • whs 3 days ago

            Do they offer any long-term commitment for the API, though? I remember that they were blocking old cert manager clients that were hammering their server. You can't automate that (as it could be unsafe, like SolarWinds) and they didn't give a one-year window to do it manually either.

            • icedchai 3 days ago

              You do have a point. I still feel that upgrading your client is less work than manual cert renewals.

          • chromanoid 4 days ago

            I agree, but I think the pendulum just went too far on the tradeoff scale.

        • ezfe 3 days ago

          I've done the work to set up, by hand, a self-hosted Linux server that uses an auto-renewing Let's Encrypt cert and it was totally fine. Just read some documentation.

    • jack0813 4 days ago

      There are very convenient tools to do HTTPS easily these days, e.g. Caddy. You can use it to reverse-proxy any HTTP server and it will do the cert stuff for you automatically.

      • chromanoid 4 days ago

        Ofc, but you have to be quite tech-savvy to know this and to set it up. It's also cumbersome in many low-tech situations. There is certificate revocation; I would really like to see the threat model here. I am not even sure if automation helps or just shifts the threat vector to certificate issuing.

umvi 3 days ago

So does this mean all of our Chromecasts are going to stop working again once this takes effect, since (judging by Google's response during the week-long Chromecast outage earlier this year) Chromecast is run by a skeleton crew and won't have the resources to automate certificate renewal?

throw0101b 4 days ago

Justification:

> The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.

> The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.

Personally I don't really buy this argument. I don't think the web sites that most people visit (especially highly-sensitive ones like for e-mail, financial stuff, a good portion of shopping) change or become "less trustworthy" that quickly.

  • gruez 4 days ago

    The "less trustworthy" refers to key compromise, not the e-shop going rogue and start scamming customers or whatever.

    • throw0101a 4 days ago

      Okay, the key is compromised: that means they can MITM the trust relationship. But with modern algorithms you have forward secrecy, so even if you've sniffed/captured the traffic it doesn't help.

      And I would argue that MITMing communications is a lot harder for (non-nation-state) attackers than compromising a host, so trust compromise is a questionable worry.

      • gruez 4 days ago

        >And I would argue that MITMing communications is a lot hard for (non-nation state) attackers than compromising a host, so trust compromise is a questionable worry.

        By that logic, we don't really need certificates, just TOFU.

        • throw0101d 4 days ago

          > By that logic, we don't really need certificates, just TOFU.

          It works fairly well for SSH, but that tends to be a more technical audience. But doing an "Always trust" or "Always accept" is a valid option in many cases (often for internal apps).

          • tptacek 4 days ago

            It does not work well for SSH. We just don't care about how badly it works.

            • throw0101d 4 days ago

              > It does not work well for SSH. We just don't care about how badly it works.

              How "should" it work? Is there a known-better way?

              • tptacek 4 days ago

                Yes: SSH certificates. (They're unrelated to X509 certificates and the WebPKI).

                • throw0101d 4 days ago

                  > Yes: SSH certificates. (They're unrelated to X509 certificates and the WebPKI).

                  I am aware of them.

                  As someone in the academic sphere, with researchers SSHing into (e.g.) HPC clusters, this solves nothing for me from the perspective of clients trusting servers. Perhaps it's useful in a corporate environment where the deployment/MDM can place the CA in the appropriate place, but not with BYOD.

                  Issuing CAs to users, especially if they expire, is another thing. From a UX perspective, we can tie password credentials to things like on-site WiFi and website access (e.g., support wiki).

                  So SSH certs certainly have use-cases, and I'm happy they work for people, but TOFU is still the most useful in the waters I swim in.

                  • tptacek 4 days ago

                    I don't know what to tell you. The problem with TOFU is obvious: the FU. The FU happens more often than people think it does (every time you log in from a new or reprovisioned workstation) and you're vulnerable every time. I don't really care what you do for SSH (we use certificates) but this is not a workable model for TLS, where FUs are the norm.

                    • throw0101d 4 days ago

                      > I don't really care what you do for SSH (we use certificates) but this is not a workable model for TLS, where FUs are the norm.

                      It was suggested by someone else: I commented TOFU works for SSH, but is probably not as useful for web-y stuff (except for maybe small in-house stuff).

                      Personally I'm somewhat sad that opportunistic encryption for the web never really took off: if folks connect on 80, redirect to 443 if you have certs 'properly' set up, but even if not, do an "Upgrade" or something to move to HTTPS. Don't necessarily indicate things are "secure" (with the little icon), but scramble the bits anyway: no false sense of security, but make it harder to tap glass in bulk.

    • xurukefi 3 days ago

      Nobody forces you to change your key for renewals.

avodonosov 3 days ago

First impression: with automation and short-lived certificates, the Certifying Authorities become similar to Identity Providers / OpenID Providers in OpenID / OpenID Connect. The certificates are tokens.

And a significant part of the security is concentrated in the way Certifying Authorities validate domain ownership (the so-called challenges).

Next, maybe clients can run those challenges directly, instead of relying on certificates? For example, when connecting to a server, the client sends two unique values, and the server must create a DNS record <unique-val-1>.server.com with a record value of <unique-val-2>. The client checks that such a record was created, and thus the server has proven it controls the domain name.

Auth through DNS, that's what it is. We will just need to speed up the DNS system.
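
Very roughly, the client-side check in such a scheme might look like this (a sketch only, using the dnspython package; the ask_server_to_publish callback and the values are made up, and the replies below point out why this alone isn't sufficient):

  import secrets
  import dns.resolver  # dnspython

  def server_controls_domain(domain: str, ask_server_to_publish) -> bool:
      # The client picks two random values and asks the server (over the
      # still-untrusted channel) to publish them in its DNS zone.
      val1, val2 = secrets.token_hex(16), secrets.token_hex(16)
      ask_server_to_publish(val1, val2)

      # Then it checks, via its own resolver, that <val1>.<domain> has a
      # TXT record containing <val2>.
      try:
          answers = dns.resolver.resolve(f"{val1}.{domain}", "TXT")
      except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
          return False
      return any(val2 in rr.to_text() for rr in answers)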

  • NicolaiS 2 days ago

    This will not work, as any attacker that can MITM the client (a likely scenario for end users) can also MITM this "certificate issuing" setup and issue their own cert.

    The reason an attacker can't MITM Let's Encrypt (or similar ACME issuers) is because they request the challenge response from multiple locations, making sure a simple MITM against them doesn't work.

    A fully DNS-based "certificate setup" already exists: DANE, but that requires DNSSEC, which isn't widely used.

    • avodonosov 2 days ago

      You are right that the scheme I described is vulnerable, even without MITM. fakeserver.com, upon receiving a request from the client, can simply send an equal request to server.com, which creates the needed DNS record, and thus the real client is "convinced" that fakeserver.com controls the DNS.

      But that's just a nuance that could be fixed. I elaborate a little more on what I mean in https://news.ycombinator.com/item?id=43712754

      Thx for pointing to DANE.

  • fpoling 3 days ago

    That does not work, as DNS is insecure. DNSSEC is not there and may never be.

    • ryandv 3 days ago

      But this is already basically how Let's Encrypt challenges certificate applicants over ACME DNS01 [0].

      I would be more concerned about the number of certificates that would need to be issued and maintained over their lifecycle - which now scales with the number of unique clients challenging your server (or maybe I misunderstand, and maybe there aren't even certificates any more in this scheme).

      Not to mention the difficulties of assuring reasonable DNS response times and fresh, up-to-date results when querying a global eventually consistent database with multiple levels of caching...

      [0] https://letsencrypt.org/docs/challenge-types/#dns-01-challen...

      • avodonosov 2 days ago

        In the scheme I described, where the client directly runs the challenges, certificates are not issued at all.

        I am not saying this scheme is really practical currently.

        That's just an imaginary situation coming to mind, illustrating the increased importance of domain ownership validation procedures used by Certifying Authorities. Essentially the security now comes down to the domain ownership validation.

        Also, a correction: the server does not simply put <unique-val-2>, it puts sha256(<unique-val-2> || '.' || <fingerprint of the public key of the account>).

        Yes, the ACME protocol uses account keys. The private key signs requests for new certs, and the public key fingerprint used during domain ownership validation confirms that the challenge response was intended for that specific account.
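
        Concretely, the DNS-01 TXT value ends up being computed roughly like this (a sketch; both input values below are placeholders):

          import base64, hashlib

          def b64url(data: bytes) -> str:
              return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

          # Placeholders: the CA supplies the token; the thumbprint is the
          # SHA-256 JWK thumbprint of the ACME account's public key.
          token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
          account_thumbprint = b64url(hashlib.sha256(b"example-account-jwk").digest())

          key_authorization = f"{token}.{account_thumbprint}"
          # RFC 8555: the DNS-01 TXT record is the base64url SHA-256 digest
          # of the key authorization string.
          txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())
          print(txt_value)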

        I am not suggesting ACME can be trivially broken.

        I just realized that the risk of TLS certs breaking is not just the risk of public-key crypto being broken; it also includes the risks of the domain ownership validation protocols.

    • detaro 3 days ago

      And would be replacing the CA PKI with an even more centralized PKI.

nickf 3 days ago

Don't forget the lede buried here - you'll need to re-validate control over your DNS names more frequently too. Many enterprises are used to doing this once-per-year today, but by the time 47-day certs roll around, you'll be re-validating all of your domain control every 10 days (more likely every week).

xyst 3 days ago

I don’t see any issue here. I already automate with ACME so rotating certificates on an earlier basis is okay. This should be like breathing for app and service developers and infrastructure teams.

Side note: I wonder how much pressure this puts on providers such as LetsEncrypt, especially with the move to validate IPs. And more specifically IPv6…

  • ShakataGaNai 3 days ago

    Because there are lots of companies, large and small, which haven't gotten that far. Lots of legacy sites/services/applications.

    I don't disagree with you that it should be super common. But it's surprisingly not in many businesses. Heck, Okta (nominally a large security company) still sends out notifications every time they change certificates and publishes a copy of their current certs on GitHub: https://github.com/okta/okta-pki - How do they do the actual rotation? No idea, but... I'd guess it's not automatic with that level of manual notification/involvement. (Happy to be proven wrong though.)

mystraline 2 days ago

I'm sure this will be buried, but SSL is supposed to provide encryption. That's it.

Self-signed custom certs also do that. But those are demonized.

SSL also tries to define an IP/DNS certification of ownership, kind of.

There's also a distinct difference between 'this cert expired last week', 'this cert doesn't exist', and a MITM attack. Expired? Just give a warning, not a scare screen. MITM? Sure, give a big scary OHNOPE screen.

But, yeah, 47 days is going to wreak havoc on network gear and weird devices.

  • kbolino 2 days ago

    If there was no IP/DNS ownership verification, how would you even know you had been MITMed? You think the attacker is going to set a flag to let you know the certificate is fake?

    The only real alternative to checking who signed a certificate is checking the certificate's fingerprint hash instead. With self-signed certificates, this is the only option. However, nobody does this. When presented with an unknown certificate, people will just blindly trust it. So self-signed certificates at scale are very susceptible to MITM. And again, you're not going to know it happened.

    Encryption without authentication prevents passive snooping but not active and/or targeted attacks. And the target may not be you, it may be the other side. You specifically might not be worth someone's time, but your bank, along with all of its other customers, probably is.

    OCSP failed. CRLs are not always being checked. Shorter expiry largely makes up for the lack of proper revocation. But expiration must consequently be treated as no less severe than revocation.

    • mystraline 2 days ago

      True, I get where you're coming from, but I think there are more problems even in those implied answers.

      Homoglyph attacks are a thing. And I can pay $10 for a homoglyph name, no issues. I can get a webserver on a VM and point DNS at it. From there I can get a Let's Encrypt cert, use Nginx to proxy to the real domain, install a mailserver, and send out spoofed mails. You can even set up SPF/DKIM/DMARC and have a complete attested chain.

      And it's all based on a fake DNS name, using nothing more than things like a Cyrillic 'o'.

      And the self-signed issue is also what we see with SSH. And that mostly just works too.

      • kbolino 2 days ago

        The security situation with SSH is actually kind of dismal. You're right that standard SSH server configurations are generally equivalent to self-signed certificates, but the trust model often used there is known as TOFU ("trust on first use") and is regarded by people who practice computer security as fundamentally broken. It persists only because the problem is hard to solve, it is still better than nothing, and SSH gets targeted for MITM a lot less than HTTPS (SSH is targeted much more by drive-by attacks looking for weak passwords).

        TLS with Web PKI is a significantly more secure system when dealing with real people, and centralized PKI systems in general are far more scalable (though hardly perfect!) compared to decentralized trust systems, with common SSH practice near the extreme end of decentralized. Honestly, the general state of SSH security can only be described as "working" due more to a lack of effort from attackers than to the hygienic practices of sysadmins.

        Homoglyph attacks are a real problem for HTTPS. Unfortunately, the solutions to that class of problem have not succeeded. Extended Validation certificates ended up a debacle; SRP and PAKE schemes haven't taken off (yet?); and it's still murky whether passkeys are here to stay. And a lot of those solutions still boil down to TOFU since, essentially, they require you to have visited the site at least once before. Plus, there remain fallback options that are easier to phish against. Maybe these problems would have been more solvable if DNSSEC succeeded, but that didn't happen either.

        • tptacek 2 days ago

          It's hard to think of a real-world problem that PAKEs solve for HTTPS.

          • kbolino 2 days ago

            I see it as useful as part of a layered strategy: X.509 certificates establish the authenticity of the server for that domain, while the SRP/PAKE would establish the authenticity of the legal entity you're actually trying to reach. In the case of a homoglyph-assisted phisher, this would prevent them from obtaining the real password or any credential that would be useful to attack the real target, and also warn the user not to trust them period. However, the current layering of HTTPS doesn't make it possible to enforce the use of secure password exchange, and so I think passkeys are a better solution, because they allow us to remove the password modality entirely.

            I'm not entirely sure about how effective passkeys would be against homoglyph-assisted MITM though. Assuming you've visited the legitimate domain before and established your passkey at that time, your passkey wouldn't be selected by the browser for the fake domain. But if you started with the fake domain, and logged in through it using a non-passkey method (including first sign up or lost-credential recovery), then I would think the attacker could just enroll his own passkey on your behalf. Now, if we layered passkeys on top of mTLS, then we could almost entirely eliminate the MITM risk!

            • tptacek 2 days ago

              As you note, we already have a system that uses more appropriate cryptography (than a PAKE) to solve this: FIDO.

              You've lost me at mTLS here. At some point it starts to feel like we're advocating for security protocols just so we can fit them all in somewhere.

              • kbolino 2 days ago

                That was a bit tongue-in-cheek, sorry. I've worked in mTLS shops and it's definitely not practical for the public Internet.

                Ultimately, I think the practical solution to homoglyphs is in the presentation layer, whether it be displaying different scripts in different ways, warning when scripts are mixed, or some other kind of UX rather than protocol change. The only protocol change I can think of to address them would be to go back to ASCII only (and even that is more of a presentation issue since IDNs are just Punycode).

                • nickf 2 days ago

                  mTLS is going to be a problem soon, arguably bigger than this lifetime reduction. Most server certs today have clientAuth EKU and can be used for mTLS. That stops next year.

trothamel 4 days ago

Question: Does anyone have a good solution for renewing Let's Encrypt certificates for websites hosted on multiple servers? Right now, I have one master server that the others forward the well-known requests to, and then I copy the certificate over when I'm done, but I'm wondering if there's a better way.

  • nullwarp 4 days ago

    I use DNS verification for this; then the server doesn't even need to be exposed to the internet.

    • magicalhippo 3 days ago

      And if changing the DNS entry is problematic, for example the DNS provider used doesn't have an API, you can redirect the challenge to another (sub)domain which can be hosted by a provider that has an API.

      I've done this and it works very well. I had a Digital Ocean droplet so used their DNS service for the challenge domain.

      https://letsencrypt.org/docs/challenge-types/#dns-01-challen...
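
      A quick way to sanity-check that the delegation is actually in place (a sketch using dnspython; the domain names are placeholders):

        # Check that _acme-challenge on the real domain is a CNAME pointing
        # at the zone you can automate (placeholder names, not real ones).
        import dns.resolver  # pip install dnspython

        answer = dns.resolver.resolve("_acme-challenge.example.com", "CNAME")
        for rr in answer:
            print("challenge delegated to:", rr.target)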

    • samgranieri 2 days ago

      I use dns01 in my homelab with step-ca. works like a charm, and it's my private certificate authority

  • hangonhn 4 days ago

    We just use certbot on each server. Are you worried about the rate limit? LE rate limits based on the list of domains. So we send the request for the shared domain and the domain for each server instance. That makes each renew request unique per server for the purpose of the rate limit.

  • noinsight 4 days ago

    Orchestrate the renewal with Ansible - renew on the "master" server remotely, but pull the new key material to your orchestrator and then push it to your server fleet. That's what I do. It's not "clean" or "ideal" to my tastes, but it works.

    It also occurred to me that there's nothing(?) preventing you from concurrently having n valid certificates for a particular hostname, so you could just enroll distinct certificates for each host. Provided the validation could be handled somehow.

    The other option would maybe be doing DNS-based validation from a single orchestrator and then pushing that result onto the entire fleet.
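
    The push step itself can be quite small. A sketch using Fabric over SSH; the hosts, paths, and reload command are placeholders for whatever your fleet actually runs:

      # Renew centrally, then copy the new material out and reload each node.
      from fabric import Connection  # pip install fabric

      HOSTS = ["web1.example.com", "web2.example.com"]
      FILES = [
          "/etc/letsencrypt/live/example.com/fullchain.pem",
          "/etc/letsencrypt/live/example.com/privkey.pem",
      ]

      for host in HOSTS:
          with Connection(host) as c:
              for path in FILES:
                  c.put(path, remote=path)      # copy renewed cert + key
              c.sudo("systemctl reload nginx")  # pick up the new files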

  • pornel 3 days ago

    I copy the same certbot account settings and private key to all servers and they obtain the certs themselves.

    It is a bit funny that LetsEncrypt has non-expiring private keys for their accounts.

  • navigate8310 4 days ago

    Have you tried certbot? Or if you want a turnkey solution, you may try Caddy or Traefik that have their own automated certificate generation utility.

  • throw0101b 4 days ago

    getssl was written with a bit of a focus on this:

    > Get certificates for remote servers - The tokens used to provide validation of domain ownership, and the certificates themselves can be automatically copied to remote servers (via ssh, sftp or ftp for tokens). The script doesn't need to run on the server itself. This can be useful if you don't have access to run such scripts on the server itself, e.g. if it's a shared server.

    * https://github.com/srvrco/getssl

compumike 3 days ago

(Shameless self-promotion) We set up our https://heiioncall.com/ monitoring to give our on-call rotation a non-critical “it can wait until Monday” alert when there are 14 days or less left on our SSL certificates, and a critical alert “do-not-disturb be damned” when 48 hours left until expiry. Because cert-manager got into some weird state once a few years ago, and I’d rather find out well in advance next time.

Edit: it’s configured under Trigger -> Outbound Probe -> “SSL Certificate Minimum Expiration Duration”
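
For anyone rolling their own check instead, the core of it is only a few lines. A rough sketch (the 14-day and 48-hour thresholds are just the ones we happen to use):

  # Check a host's certificate expiry from a system independent of the renewal.
  import socket, ssl, time

  def days_until_expiry(host: str, port: int = 443) -> float:
      ctx = ssl.create_default_context()
      with socket.create_connection((host, port), timeout=10) as sock:
          with ctx.wrap_socket(sock, server_hostname=host) as tls:
              not_after = tls.getpeercert()["notAfter"]
      return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

  days = days_until_expiry("example.com")
  if days < 2:
      print(f"CRITICAL: cert expires in {days:.1f} days")
  elif days < 14:
      print(f"WARNING: cert expires in {days:.1f} days")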

kevincox 2 days ago

To me, the certificate lifetime itself isn't really the issue; what I care about is the time between being allowed to renew and the time until expiry.

Right now Let's Encrypt recommends renewing your 90d certificates every 60 days, which means that there is a 30 day window between recommended to renew and expiry. This feels relatively comfortable to me. A long vacation may be longer than 30 days but it is rare and there is probably other maintenance that you should be doing in this time (although likely routine like security updates rather than exceptional like figuring out why your certificate isn't renewing).

So if 47 days ends up meaning renew every 17 days and still have that 30 day buffer, I would be quite happy. But what I fear is that they will recommend (and set rate limits based on) renewing every 30 days with a 17 day buffer, which is getting a little short for comfort IMHO. While big organizations will have a 24h oncall and medium organizations will have many business hours to figure it out, it sucks for individuals who want to go away for a few weeks without worrying about debugging their certificate renewal until they get home.

1970-01-01 4 days ago

Your 90-day snapshot backups will soon become 47-day backups. Take care!

  • gruez 4 days ago

    ???

    Do people really backup their https certificates? Can't you generate a new one after restoring from backup?

  • belter 4 days ago

    This is going to be one of the obvious traps.

    • DiggyJohnson 4 days ago

      To care about stale certs on snapshots or the opposite?

      • belter 4 days ago

        Both. One breaks your restore, the other breaks your trust chain.

procaryote 3 days ago

If this is causing you pain, certbot with the ACME DNS challenge is pretty easy to set up to get you certs for your internal services. There are tools for many different DNS providers like Route 53 or Cloudflare.

I tend to have secondary scripts that check if the cert in certbot's dir is newer than whatever is installed for a service, and if so install it. Some services prefer the cert in certain formats, some want to be reloaded to pick up a new cert, etc., so I put that glue in my own script and run it from cron or a systemd timer.
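
The glue itself can stay tiny. A sketch of the idea in Python rather than shell (paths and the reload command are examples, not anything standard):

  # Install certbot's cert for a service only when it's newer, then reload.
  import os, shutil, subprocess

  SRC = "/etc/letsencrypt/live/example.com/fullchain.pem"
  DST = "/etc/myservice/tls/fullchain.pem"

  if not os.path.exists(DST) or os.path.getmtime(SRC) > os.path.getmtime(DST):
      shutil.copy2(SRC, DST)  # some services need a format conversion here instead
      subprocess.run(["systemctl", "reload", "myservice"], check=True)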

  • merb 3 days ago

    The problem is more or less devices that do not support DNS challenges, or that only support Let's Encrypt and not the ACME protocol generally (to chain ACME servers, etc.).

    • cpach 3 days ago

      What kind of devices are you thinking of? Like switches and other network gear?

      • JackSlateur 3 days ago

        I've deployed LE on IPMI (Dell, Supermicro), so that's not a good excuse! As long as you have a way to "script" something on your devices (via SSH, API or whatever), you are good to go.

      • merb 2 days ago

        FortiGate 50G (a version higher than 7.0 probably fixes that, but no idea when that will be released), some Synology NAS models, and there are tons of other boxes like that.

  • AtNightWeCode 2 days ago

    Yeah, but the problem as I see it is not renewing the certs. Some systems become unstable or need to reboot during installation of new certificates. I've worked on systems where it takes hours to install and start using new certificates.

borgster 2 days ago

The ultimate internet "kill switch": revoke all certificates. Modern browsers refuse to render the page.

raggi 4 days ago

It sure would be nice if we could actually fix dns.

arisudesu 2 days ago

Having short-lived certificates is good; replacing them too often is not. This is trivial for single-host deployments, which just run certbot or another ACME client for each subdomain. But for sophisticated setups where a certificate for a domain (or multiple domains, or a wildcard) must be shared across a fleet of hosts, it is a burden.

There are no ready-made tools available to automate such deployments. Especially if a certificate must be the same for each of the hosts, fingerprint included. Having a single, authoritative certificate for a domain and its wildcard subdomains deployed everywhere is much simpler to monitor. It does not expose internal subdomains in certificate transparency logs.

Unfortunately, the organizations (and people) involved in these decisions do not provide such tools in advance.

  • lo0dot0 2 days ago

    I agree. There should be a process in place for checking whether changes are ready to be rolled out, and one of the checks should be a working, open-source prototype implementation that shows your systems can still be managed.

dsr_ 3 days ago

Daniel K Moran figured out the endgame:

"Therefore, the Lunar Bureau of the United Nations Peace Keeping Force DataWatch has created the LINK, the Lunar Information Network Key. There are currently nine thousand, four hundred and two Boards on Luna; new Boards must be licensed before they can rent lasercable access. Every transaction--every single transaction--which takes place in the Lunar InfoNet is keyed and tracked on an item-by-item basis. The basis of this unprecedented degree of InfoNet security is the Lunar Information Network Key. The Key is an unbreakable encryption device which the DataWatch employs to validate and track every user in the Lunar InfoNet. Webdancers attempting unauthorized access, to logic, to data, to communications facilities, will be punished to the full extent of the law."

from The Long Run (1989)

Your browser won't access a site without TLS; this is for your own protection. TLS certificates are valid for one TCP session. All certs are issued by an organization reporting directly to a national information security office; if your website isn't in compliance with all mandates, you stop getting certs.

rhodey 3 days ago

After Let's Encrypt made the change to disable reminder emails, I decided I would be moving my personal blog from my VPS to AWS specifically. I just today made the time to make the move, and 10 minutes after, I find this.

I could probably have done more with Let's Encrypt automation to stay with my old VPS, but given that all my professional work is with AWS, it's really less mental work to drop my old VPS.

Times they are a changing

  • mystified5016 3 days ago

    Why not just automate your LetsEncrypt keys like literally everyone else does? It's free and you have to go out of your way to do it manually.

    Or just pay Amazon, I guess. Easier than thinking.

webprofusion 2 days ago

Pretty sure this only refers to publicly trusted certs. What percentage of public certs are still being manually managed?

I've been in the cert automation industry for 8 years (https://certifytheweb.com) and I do still hear of manual work going on, but the majority of stuff can be automated.

For stuff that genuinely cannot be automated (are you sure you're sure?), these become monthly maintenance tasks, something cert management tools are also now starting to help with.

We're planning to add tracking tasks for manual deployments to Certify Management Hub shortly (https://docs.certifytheweb.com/docs/hub/), for those few remaining items that need manual intervention.

elric 2 days ago

This sounds an awful lot like security theatre.

> The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.

This is patently nonsensical. There is hardly any information in a certificate that matters in practice, except for the subject, the issuer, and the expiration date.

> Shorter lifetimes mitigate the effects of using potentially revoked certificates.

Sure, and if you're worried about your certificates being stolen and not being correctly revoked, then by all means, use a shorter lifetime.

But forcing shorter lifetimes on everyone won't end up being beneficial, and IMO will create a lot of pointless busywork at greater expense. Many issuers still don't support ACME.

jonathantf2 4 days ago

A welcome change if it gives some vendors a kick up the behind to implement ACME.

  • ShakataGaNai 3 days ago

    There is no more choice. No one is going to buy from (example) GoDaddy if they have to login every 30 days to manually get a new certificate. Not when they can go to (example) digicert and it's all automatic.

    It goes from a "rather nice to have" to "effectively mandatory".

    • jonathantf2 3 days ago

      I think GoDaddy supports ACME - but if you're using ACME you might as well use Let's Encrypt and stop paying for it.

      • schlauerfox 3 days ago

        Depends on the kind of certificate you need.

junaru 4 days ago

> For this reason, and because even the 2027 changes to 100-day certificates will make manual procedures untenable, we expect rapid adoption of automation long before the 2029 changes.

Oh yes, vendors will update their legacy NAS/IPMI/whatever to include certbot. This change will have the exact opposite effect - expired self signed certificates everywhere on the most critical infrastructure.

  • xnyanta 3 days ago

    I have automated IPMI certificate rotation set up through Let's Encrypt and ACME via the Redfish API. And this is on 15-year-old gear running HP iLO 4. There's no excuse for not automating things.

  • panki27 4 days ago

    People will just roll out almost forever-lasting certificates through their internal CA for all systems that are not publicly reachable.

    • throw0101d 4 days ago

      > through their internal CA

      Nope. People will create self-signed certs and tell people to just click "accept".

      • Avamander 3 days ago

        They're doing it right now and they'll continue doing so. There are always scapegoats for not automating.

zephius 4 days ago

Old SysAdmin and InfoSec Admin perspective:

Dev guys think everything is solvable via code, but hardware guys know this isn't true. Hardware is stuck in fixed lifecycles, and firmware is not updated by the vendors unless it has to be - and in many cases updated poorly. No hardware I've ever come across that supports SSL/TLS (and most do nowadays) offers any automation capability for updating certs. In most cases, certs are manually - and painfully - updated with esoteric CLI cantrips that require dancing while chanting to some ancient I.T. god for mercy, because the process is poorly (if at all) documented and often broken. No API call or middleware is going to solve that problem unless the manufacturer puts it in.

In particular, load balancers are some of the worst at cert management, and remember that not everyone uses F5 - there are tons of other cheaper and popular alternatives, most of which are atrocious at security configuration management. It's already painful enough to manage certs in an enterprise, and this 47-day lifecycle is going to break things. Hardware vendors are simply incompetent and slow to adapt to security changes. And not everyone is 100% in the cloud - most enterprises are only partially in that pool.

  • tptacek 4 days ago

    I think everybody involved knows about the likelihood that things are going to break at enterprise shops with super-expensive commercial middleboxes. They just don't care anymore. We ran a PKI that cared deeply about the concerns of admins for a decade and a half, and it was a fiasco. The coders have taken over, and things are better.

    • zephius 4 days ago

      That's great for shops with Dev teams and in house developed platforms. Those shops are rare outside Silicon Valley and fortune 500s and not likely to increase beyond that. For the rest of us, we are at the mercy of off the shelf products and 3rd party platforms.

      • tptacek 4 days ago

        I suggest you buy products from vendors who care about the modern WebPKI. I don't think the browser root programs are going to back down on this stuff.

        • nickf 3 days ago

          This. Also, re-evaluate how many places you actually need public trust that the webPKI offers. So many times it isn't needed, and you make problems for yourself by assuming it does. I have horror stories I can't fully disclose, but if you have closed networks of millions of devices where you control both the server side and the client side, relying on the same certificate I might use on my blog is not a sane idea.

        • whs 3 days ago

          Agree. My company was cloud first, and when we built the new HQ buying Cisco gear and VMware (as they're the only stack several implementers are offering) it felt like we were sending the company 15 years backwards

        • zephius 4 days ago

          I agree, and we try, however that is not a currently widely supported feature in the boring industry specific business software/hardware space. Maybe now it will be, so time will tell.

          • ignaloidas 3 days ago

            Hey, you now have a specific cost to point to when arguing for/against solutions that have this problem. "each deployment will cost us at least 12 specialist hours per year just replacing the certificates" is a non-negligible cost that even the least tech-minded people will understand, and it can be a good lever for requiring the support.

          • ikiris 3 days ago

            Reverse proxies exist. If you don’t like having to do that then have requirements for standards of the past 10 years in your purchasing.

  • cpach 4 days ago

    > Hardware vendors are simply incompetent and slow to adapt to security changes.

    Perhaps the new requirements will give them additional incentives.

    • zephius 4 days ago

      Yeah, just like TLS 1.2 support. Don't even get me started on how that fiasco is still going.

  • yjftsjthsd-h 4 days ago

    Sounds like everything is solvable via code, and the hardware vendors just suck at it.

    • zephius 4 days ago

      In a nutshell, yes. From a security perspective, look at Fortinet as an egregious example of just how bad. Palo Alto also has some serious internal issues.

    • dijit 3 days ago

      Not really; a lot of those middleware boxes are doing some form of ASIC offloading for TLS, and the PROM that holds the cert(s) is not rated for heavy writes… thus writing is slow, blocking, and will wear your hardware out.

      The larger issue is actually our desire to deprecate cipher suites so rapidly though, those 2-3 year old ASICs that are functioning well become e-waste pretty quickly when even my blog gets a Qualys “D” rating after having an “A+” rating barely a year ago.

      How much time are we spending on this? The NSA is literally already in the walls.

  • Havoc 3 days ago

    At the same time I don’t think it’s reasonable to make global cert decisions like this based on what some crappy manufacturer failed to implement in their firmware. The issue there is clearly the crap hardware (though the sysadmins that have to deal with it have my condolences)

jezek2 2 days ago

Thanks for the heads up. I will adjust my cron jobs to run every week instead of every month.

I need it to run more frequently to gain more time in case there is an error, as I tend to ignore the error e-mails for multiple weeks due to fatigue from handling various kinds of certificates.

Personally, I also have an HTTP mirror for my more important projects, for when availability is more important than security of the connection.

rfmoz 2 days ago

This could be addressed in a more progressive way.

For example, EV certs had the green bar, which was a soft way to promote their presence/use over the normal ones. That bar started as strong evidence in the URL box and lost that look over time.

Something like that would let the owner decide and, maybe, users push for its use because it feels more secure, rather than the CA pushing it directly.

readthenotes1 4 days ago

I wonder how many forums run by the barely able are going to disappear or start charging.

I fairly regularly run into expired-cert problems because the admin is doing it as yak shaving for a secondary hobby.

  • ezfe 3 days ago

    Why would they start charging? Auto-renewing certificates with Let's Encrypt are easy to do.

    • dijit 3 days ago

      As long as you only have a single server, or a DNS server that has an API.

      Even certbot got deprecated, so my IRC network has to use some janky shell scripts to rotate TLS… I'm considering going back to traditional certs because I geo-balance the DNS, which doesn't work for Let's Encrypt.

      The issue is actually that I have multiple domains handled multiple ways, and they all need to be Let's Encrypt capable for it to work and generate a combined cert with SANs attached.

iJohnDoe 4 days ago

Getting a bit ridiculous.

  • dboreham 4 days ago

    Looks like a case where there are tradeoffs to be made, but the people with authority over the decision have no incentive to consider one side of the trade.

  • bayindirh 4 days ago

    Why?

    • nottorp 4 days ago

      The logical endgame is 30 second certificates...

      • krunck 4 days ago

        Or maybe the endgame could be: creation of a centralized service that all web servers are required to be registered with and connected to at all times in order to receive their (frequently rotated) encryption keys. Controllers of said service then have kill switch control of any web service by simply withholding keys.

        • nottorp 4 days ago

          Exactly. And all in the name of security! Think of the children!

      • bayindirh 4 days ago

        For extremely sensitive systems, I think a more logical endgame is 30 minutes or so. 30 seconds is practically continuous generation.

        A semi-distributed (intercity) Kubernetes cluster can reasonably change its certificate chain every week, but it needs an HSM if it's done internally.

        Otherwise, for a website, once or twice a year makes sense if you don't store anything snatch-worthy.

        • nottorp 4 days ago

          > once or twice a year makes sense

          You don't say. Why are the defaults already 90 days or less then?

          • bayindirh 4 days ago

            Because most of the sites on the internet store much more sensitive information compared to the sites I gave as an example, and can afford one or two certificates a year.

            90 days makes way more sense for the "average website" which handles members, has a back office exposed to the internet, and whatnot.

            • nottorp 4 days ago

              That's not the average website, that's a corporate website or an online store.

              Why do you think all the average web sites have to handle members?

              • bayindirh 4 days ago

                Give me examples of websites which don't have any kind of member system in place.

                Forums? Nope. Blogging platforms? Nope. News sites? Nope. WordPress-powered personal page? Nope. Mailing lists with web-based management? Nope. They all have members.

                What doesn’t have members or users? Static webpages. How much of the web is a completely static web page? Negligible amount.

                So most of the sites have much more to protect than meets the eye.

                • ArinaS 3 days ago

                  > "Negligible amount."

                  Neglecting the independent web is exactly what led to it dying out and the Internet becoming a corporate, algorithm-driven analytics machine. Making it harder to maintain your own independent website, which does not rely on any 3rd party to host or update, will just make fewer people bother.

                • nottorp 4 days ago

                  I could argue that all your examples except forums do not NEED members or users... except to spy on you and spam you.

                  • bayindirh 4 days ago

                    I mean, a news site needs its journalists to log in. Your own personal WordPress needs a user for editing the site. The blog platform I use (Mataroa) doesn't even have detailed statistics, yet it serves many users, so it needs user support.

                    The web is a bit different than you envision/think.

                    • ArinaS 3 days ago

                      > "I mean, a news site needs their journalists to login."

                      Why can't this site just upload HTML files to their web server?

                      • nottorp 3 days ago

                        Why can't this site have their CMS entirely separated from the public facing web site, for that matter? :)

                        > Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing...

                        Any non predatory practices you can add to the list?

                        • bayindirh 3 days ago

                          I think you were trying to reply to me.

                          I'm not a web developer, and I don't do anything similar on my pages, blog posts, whatever, so I don't know.

                          The only non-predatory way to do this is being honest/transparent and not pulling tricks on people.

                          However, I think A/B testing can be used in a non-predatory way in UI testing, by measuring negative comments between two new versions, assuming that you genuinely don't know which version is better for the users.

                      • bayindirh 3 days ago

                        Two operational requirements:

                        1. Journalists shall be able to write new articles and publish them ASAP, possibly from remote locations.

                        2. Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing... So you need a data structure which can be modified non-destructively and autonomously.

                        Plus many more things, possibly. I love static webpages as much as the next small-web person, but we have small-web, because the web is not "small" anymore.

        • panki27 4 days ago

          That CRL is going to be HUGE.

          • psz 4 days ago

            Why do you think so? Keep in mind that revoked certs are not included in CRLs once expired (because they are not valid any more).

      • saltcured 3 days ago

        I was thinking about this with my morning coffee.. the asymptotic end game would be that every TLS connection requires an online handshake with Connection Authorities to validate the server identity synchronously, right?

        But on a more serious note, can someone more familiar with these standards and groups explain the scope of TLS certificates they mean for these lifetime limits?

        I assume this is only server certs and not trust root and intermediate signing certs that would get such short lifetimes? It would be a mind boggling nightmare if they start requiring trust roots to be distributed and swapped out every few weeks to keep software functioning.

        To my gen X internet pioneer eyes, all of these ideas seem like easily perverted steps towards some dystopian "everything is a subscription" access model...

        • woodruffw 2 days ago

          > the asymptotic end game would be that every TLS connection requires an online handshake with Connection Authorities to validate the server identity synchronously, right?

          The article notes this explicitly: the goal here is to reduce the number of online CA connections needed. Reducing certificate lifetimes is done explicitly with the goal of reducing the Web PKI's dependence on OCSP for revocation, which currently has the online behavior you're worried about here.

          (There's no asymptotic benefit to extremely short-lived certificates: they'd be much harder to audit, and would be much harder to write scalable transparency schemes for. Something around a week is probably the sweet spot.)

          • saltcured 2 days ago

            I understand the optimization curve you are talking about. But, my coffee and I think my answer is more accurate as the theoretical asymptote as you reduce certificate lifetimes... you can never really have a zero lifetime certificate in a TLS connection, but you can reduce it to the handshake sequence necessary to establish the connection and its authenticated symmetric cipher.

            • woodruffw 2 days ago

              Yes. But the point is that isn’t going to happen. It would directly undermine the goal of eliminating the stability and scaling issues with OCSP.

    • jodrellblank 4 days ago

      https://mathematicalcrap.com/2022/08/14/the-great-loyalty-oa...

      "When they voiced objection, Captain Black replied that people who cared about security would not mind performing all the security theatre they had to. To anyone who questioned the effectiveness of the security theatre, he replied that people who really did owe allegiance to their employer would be proud to take performative actions as often as he forced them to. The more security theatre a person performed, the more secure he was; to Captain Black it was as simple as that."

CommanderData 3 days ago

Why bother with such a long staggered approach?

There should be one change, from 365 to 47 days. This industry doesn't need constant changes, which will force everyone to automate renewals anyway.

  • datadrivenangel 3 days ago

    Enterprises are like lobsters: You gotta crank the water temperature up slowly.

AlfeG 3 days ago

We'll see how Azure FD handles this. We've opened more tickets than expected with support about certs not updating automatically...

zelon88 4 days ago

This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. While identity and safeguarding against MITM is important, identity is not the primary purpose certificates serve in the real world. At least that is not how they are used or why they are purchased.

They are purchased to provide encryption. Nobody checks the details of a cert and even if they did they wouldn't know what to look for in a counterfeit anyway.

This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."

  • pornel 3 days ago

    Browsers check the identity of the certificates every time. The host name is the identity.

    There are lots of issues with trust and social and business identities in general, but for the purpose of encryption, the problem can be simplified to checking the host name (it's effectively an out-of-band async check that the destination you're talking to is the same destination that independent checks saw, so you know your connection hasn't been intercepted).

    You can't have effective TLS encryption without verifying some identity, because you're encrypting data with a key that you negotiate with the recipient on the other end of the connection. If someone inserts themselves into the connection during key exchange, they will get the decryption key (key exchange is cleverly done so that a passive eavesdropper can't get the key, but it can't protect against an active eavesdropper — other than by verifying the active participant is "trusted" in a cryptographic sense, not in a social sense).

  • chowells 4 days ago

    I think it's absolutely critical when I'm sending a password to a site that it's actually the site it claims to be. That's identity. It matters a lot.

    • zelon88 4 days ago

      Not to users. The user who types Wal-Mart into their address bar expects to communicate with Wal-Mart. They aren't going to check if the certificate matches. Only that the icon is green.

      This is where the disconnect comes in. You and I know that the green icon doesn't prove identity. It proves certificate validity. But that's not what this is "sold as" by the browser or the security community as a whole. I can buy the domain Wаl-Mart right now and put a certificate on it that says Wаl-Mаrt and create the conditions for that little green icon to appear. Notice that I used U+0430 instead of the letter "a" that you're used to.

      And guess what... The identity would match and pass every single test you throw at it. I would get a little green icon in the browser and my certificate would be good. This attack fools even the brightest security professionals.

      So you see, identity isn't the value that people expect from a certificate. It's the encryption.

      Users will allow a fake cert with a green checkmark all day. But a valid certificate with a yellow warning is going to make people stop and think.

      • chowells 4 days ago

        Well, no. That's just not true.

        I care that when I type walmart.com, I'm actually talking to walmart.com. I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.

        Preventing local DNS servers from fucking with users is critical, as local DNS is the weakest link in a typical setup. They're often run by parties that must be treated as hostile - basically whenever you're on public wifi. Or hell, when I'm using my own ISP's default configuration. I don't trust Comcast to not MitM my connection, given the opportunity. I trust technical controls to make their desire to do so irrelevant.

        Without the identity component, any DNS server provided by DHCP could be setting up a MitM attack against absolutely everything. With the identity component, they're restricted to DoS. That's a lot easier to detect, and gets a lot of very loud complaints.

        • BrandoElFollito 4 days ago

          You use words that are alien to everyone. Well, there is a small uncertainty in "everyone", and it is there where the people who actually understand DHCP, DoS, etc. live. This is a very, very small place.

          So no, nobody will ever look at a certificate.

          When I look at them, as a security professional, I usually need to rediscover where the fuck they moved the certs details again in the browser.

          • chowells 4 days ago

            Who said a word about looking at a certificate?

            I said exactly the words I meant.

            > I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.

            Without the identity component, I can't trust that those things I care about are insulated from local interference. With the identity component, I say it's fine to connect to random public wifi. Without it, it wouldn't be.

            That's the relevant level. "Is it ok to connect to public wifi?" With identity validation, yes. Without, no.

            • hedora 4 days ago

              When you say identity, you mean “the identity of someone that convinced a certificate authority that they controlled walmart.com’s dns record at some point in the last 47 days, or used some sort of out of band authentication mechanism”.

              You don’t mean “Walmart”, but 99% of the population thinks you do.

              Is it OK to trust this for anything important? Probably not. Is OK to type your credit card number in? Sure. You have fraud protection.

              • chowells 3 days ago

                So what you're saying is that you actually understand the identity portion is critical to how the web is used and you're just cranky. It's ok. Take a walk, get a bite to eat. You'll feel better.

                • hedora 3 days ago

                  I’m not the person you were arguing with. Just explaining your misunderstanding.

      • JambalayaJimbo 3 days ago

        Right, so misrepresenting your identity with similar-looking URLs is a real problem with PKI. That doesn't change the fact that certificates are ultimately about asserting your identity; it's just a flaw in the system.

      • aseipp 3 days ago

        Web browsers have had defenses against homograph attacks for years now, my man, dating back to 2017. I'm somewhat doubtful you're on top of this subject as much as you seem to be suggesting.

  • racingmars 2 days ago

    > This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. [...] identity is not the primary purpose certificates serve in the real world.

    Identity is the only purpose that certificates serve. SSL/TLS wouldn't have needed certificates at all if the goal was purely encryption: key exchange algorithms work just fine without either side needing keys (e.g. the key related to the certificate) ahead of time.

    But encryption without authentication is a Very Bad Idea, so SSL was wisely implemented from the start to require authentication of the server, hence why it was designed around using X.509 certificates. The certificates are only there to provide server authentication.

  • gruez 4 days ago

    >This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. While identity and safeguarding against MITM is important, identity is not the primary purpose certificates serve in the real world. At least that is not how they are used or why they are purchased.

    "example.com" is an identity just like "Stripe, Inc"[1]. Just because it doesn't have a drivers license or article of incorporation, doesn't mean it's not an identity.

    [1] https://web.archive.org/web/20171222000208/https://stripe.ia...

    >This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."

    Certbot is trivial to set up yourself, and deploying it in production isn't so hard that you need to be "Google / AWS / Azure" to do it. There's plenty of IaaS/PaaS services that have letsencrypt, that are orders of magnitude smaller than those hyperscalers.

paradite 2 days ago

All I care about as a certbot user is what do I need to do.

Do I need to update certbot in all my servers? Or would they continue to work without the need to update?

Havoc 3 days ago

Continually surprised by how emotional people get about cert lifetimes.

I get that there are some fringe cases where it’s not possible but for the rest - automate and forget.

iqandjoke 2 days ago

The poor vendor folks will need to come on site more often to fix cert issues.

aaomidi 4 days ago

Good.

If you can't make this happen, don't use WebPKI and use internal PKI.

wnevets 3 days ago

Has anyone managed to calculate the increase power usage across the entire internet this change will cause? Well, I suppose the environment can take one more for the team.

  • margalabargala 3 days ago

    The single use of AI to generate that video of Trump licking Elon Musk's feet, used significantly more power than this change will cause to be used over the next decade.

    It's great to be environmentally conscious, but if reducing carbon emissions is your goal, complaining about this is a lot like saying that people shouldn't run marathons, because physical activity causes humans to exhale more CO2.

    • wnevets 3 days ago

      > The single use of AI to generate that video of Trump licking Elon Musk's feet, used significantly more power than this change will cause to be used over the next decade.

      We are effectively talking about the entire world wide web generating multiple highly secure cryptographic key pairs every 47 days. That is a lot of CPU cycles.

      Also you not picking up on the Futurama quote is disappointing.

      • margalabargala 3 days ago

        > We are effectively talking about the entire world wide web generating multiple highly secure cryptograph key pairs every 47 days. That is a lot of CPU cycles.

        We aren't cracking highly secure key pairs. We're making them.

        On my computer, to create a new 4096-bit key takes about a second, in a single thread. For something I now have to do fewer than 8 times per year. On a 16-core CPU with a TDP of 65 watts, we can estimate that this took 0.0011 watt-hours.

        Yes, there are a lot of websites, close to a billion of them. No, this still is not some onerous use of electricity. For the whole world, this is an additional usage of a bit over 9000 kWh annually. Toss up a few solar panels and you've offset the whole planet.
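
        For anyone who wants to redo the arithmetic, roughly (same assumptions as above):

          # Back-of-the-envelope estimate: ~1 CPU-second per key on one core
          # of a 16-core, 65 W part; ~1e9 sites; fewer than 8 renewals a year.
          watts_per_core = 65 / 16                    # ~4.1 W
          wh_per_keygen = watts_per_core * 1 / 3600   # ~0.0011 Wh per key
          total_kwh = 1_000_000_000 * 8 * wh_per_keygen / 1000
          print(round(total_kwh))                     # ~9000 kWh per year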

        • wnevets 2 days ago

          > On my computer, to create a new 4096-bit key takes about a second, in a single thread. For something I now have to do fewer than 8 times per year. On a 16-core CPU with a TDP of 65 watts, we can estimate that this took 0.0011 watt-hours.

          but you think it would take a decade for the entire internet to use as much power as a single AI video?

          • margalabargala 2 days ago

            No, doing out the math I see I was being hyperbolic.

            That one AI video used about 100kWh, so about four days worth of HTTPS for the whole internet.

        • detaro 2 days ago

          renewing a certificate does not involve making a new keypair either... It's merely a pair of signatures, one for the CSR and one by the CA.
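
          Whether a given client reuses the key is a configuration choice, but nothing about renewal requires a new one. A sketch of building a renewal CSR from an existing private key with the Python `cryptography` package (paths and names are placeholders):

            # Build a CSR from an already-existing key; no new keypair is made.
            from cryptography import x509
            from cryptography.hazmat.primitives import hashes, serialization
            from cryptography.x509.oid import NameOID

            with open("/etc/myservice/tls/privkey.pem", "rb") as f:
                key = serialization.load_pem_private_key(f.read(), password=None)

            csr = (
                x509.CertificateSigningRequestBuilder()
                .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
                .add_extension(x509.SubjectAlternativeName([x509.DNSName("example.com")]), critical=False)
                .sign(key, hashes.SHA256())
            )
            print(csr.public_bytes(serialization.Encoding.PEM).decode())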

      • detaro 3 days ago

        When you generate a new cert you do not generate a new keypair every time.

riffic 3 days ago

ah this is gonna piss off a few coworkers today but it's a good move either way.

thyristan 3 days ago

Semi-related question: where is the Let's Encrypt workalike for S/MIME?

Lammy 3 days ago

This sucks. I'm actually so sick of mandatory TLS. All we did was get Google Analytics and all the other spyware running “““securely””” while making it that much harder for any regular person to host anything online. This will push people even further into the arms of the walled gardens as they decide they don't want to deal with the churn and give up.

  • ShakataGaNai 3 days ago

    First... 99% of people have zero interest in hosting things themselves anyway, as in on their own server. The GeoCities era was possibly the first and last time that "having your own page" was cool, and that was basically killed by social media.

    As for certs... maybe at the start it was hard, but it's really quite easy to host things online, with a valid certificate. There are many CDN services like Cloudflare which will handle it for you. There are also application proxies like Traefik and Caddy which will get certs for you.

    Most people who want their own site today, will use Kinsta or SquareSpace or GitHub pages any one of thousands of page/site hosting services. All of whom have a system for certificates that is so easy to use, most people don't even realize it is happening.

    • Lammy 3 days ago

      lol at recommending Cloudflare and Microsoft (Github) in response to a comment decrying spyware

      Every single thing you mentioned is plugged in to the tier-1 surveillance brokers. I am talking plain files on single server shoved in a closet, or cheap VPS. I don't often say this but I really don't think you “get” it.

      • jonathantf2 3 days ago

        A "regular person" won't/can't deal with a server, VPS, anything like that. They'll go to GoDaddy because they saw an advert on the TV for "websites".

        • Lammy 3 days ago

          They absolutely can deal with the one-time setup of one single thing that's easy to set up auto-pay for. It's so many additional concepts when you add in the ACME challenge/response because now you have to learn sysadmin-type skills to care for a periodic process, users/groups for who-runs-what-process and who-owns-what-cert-files, software updates to chase LE/ACME changes or else all your stuff breaks, etc.

          Your attitude is so dismissive to the general public. We should be encouraging people to learn the little bits they want to learn to achieve something small, and instead we are building this ivory tower all-or-nothing stack. For what, job security? Bad mindset.

  • ezfe 3 days ago

    Lol this is literally not true. I've set up self-hosted websites with no knowledge (just reading tutorials) and TLS is by far not the hardest step.

    • Lammy 3 days ago

      A brand-new setup is not relevant to what I'm talking about. Try ignoring your entire infrastructure for a few years and see if you still think that lol

0xbadcafebee 3 days ago

I hate this, but I'm also glad it's happening, because it will speed up the demise of Web PKI.

CAs and web PKI are a bad joke. There's too many ways to compromise security, there's too many ways to break otherwise-valid web sites/apps/connections, there's too many organizations that can be tampered with, the whole process is too complex and bug-prone.

What Web PKI actually does, in a nutshell, is prove cryptographically that at some point in the past, there was somebody who had control of either A) an e-mail address or B) a DNS record or C) some IP space or D) some other thing, and they generated a certificate through any of these methods with one of hundreds of organizations. OR it proves that they stole the keys of such a person.

It doesn't prove that who you're communicating with right now is who they say they are. It only proves that it's someone who, at some point, got privileged access to something relating to a domain.

That's not what we actually want. What we actually want is to be assured this remote host we're talking to now is genuine, and to keep our communication secret and safe. There are other ways to do that, that aren't as convoluted and vulnerable as the above. We don't have to twist ourselves into all these knots.

I'm hopeful changes like these will result in a gradual catastrophe which will push industry to actually adopt simpler, saner, more secure solutions. I've proposed one years ago, but nobody cares, because I'm just some guy on the internet and not a company with a big name. Nothing will change until the people with all the money and power make it happen, and they don't give a shit.

msie 3 days ago

Eff this shit. I'm getting out of sysadmin.

ocdtrekkie 5 days ago

It's hard to express how absolutely catastrophic this is for the Internet, and how incompetent a group of people have to be to vote 25/0 for increasing a problem that breaks the Internet for many organizations yearly by a factor of ten for zero appreciable security benefit.

Everyone in the CA/B should be fired from their respective employers, and we honestly need to wholesale plan to dump PKI by 2029 if we can't get a resolution to this.

  • dextercd 4 days ago

    CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.

    It's really not that hard to automate renewals and monitor a system's certificate status from a different system, just in case the automation breaks and for things that require manual renewal steps.

    I get that it's harder in large organisations and that not everything can be automated yet, but you still have a year before the certificate lifetime goes down to 200 days, which IMO is pretty conservative.

    With a known timeline like this, customers/employees have ammunition to push their vendors/employers to invest into automation and monitoring.

    • ocdtrekkie 4 days ago

      It's actually far worse for smaller sites and organizations than large ones. Entire pricey platforms exist around managing certificates and renewals, and large companies can afford those or develop their own automated solutions.

      None of the platforms which I deal with will likely magically support automated renewal in the next year. I will likely spend most of the next year reducing our exposure to PKI.

      Smaller organizations dependent on off the shelf software will be killed by this. They'll probably be forced to move things to the waiting arms of the Big Tech cloud providers that voted for this. (Shocker.) And it probably won't help stop the bleeding.

      And again, there's no real world security benefit. Nobody in the CA/B has ever discussed real world examples of threats this solves. Just increasingly niche theoretical ones. In a zero cost situation, improving theoretical security is good, but in a situation like this where the cost is real fragility to the Internet ecosystem, decisions like this need to be justified.

      Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.

      This is a group of people who have hammers and think everything is a nail, and unfortunately, that includes a lot of ceramic and glass.

      • dextercd 4 days ago

        I think most orgs can get away with free ACME clients and free/cheap monitoring options.

        This will be painful for people in the short term, but in the long term I believe it will make things more automated, more secure, and less fragile.

        Browsers are the ones pushing for this change. They wouldn't do it if they thought it would cause people to see more expired certificate warnings.

        > Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.

        Representatives are not voting against the wishes/instructions of their employer.

        • ocdtrekkie 4 days ago

          I mean to give you an example of how far we are from this: IIS does not have built-in ACME support, and in the enterprise world it is basically "most web servers". Sure, you can add some third party thing off the Internet to do it, but... how many banks will trust that?

          Unfortunately the problem is likely too removed from understanding for employers to care. Google and Microsoft do not realize how damaging the CA/B is, and probably take the word of their CA/B representatives that the choices that they are making are necessary and good.

          I doubt Satya Nadella even knows what the CA/B is, much less that he pays an employee full-time to directly #### over his entire customer base and that this employee has nearly god-level control over the Internet. I have yet to see an announcement from the CA/B that represented a competent decision that reflected the reality of the security industry and business needs, and yet... nobody can get in trouble for it!

          • dextercd 4 days ago

            Let's Encrypt lists 10 ACME clients for Windows / IIS.

            If an organisation ignores all those options, then I suppose they should keep doing it manually. But at the end of the day, that is a choice.

            Maybe they'll reconsider now that the lifetime is going down or implement their own client if they're that scared of third party code.

            Yeah, this will inconvenience some of the CA/B participant's customers. They knew that. It'll also make them and everyone else more secure. And that's what won out.

            The idea that this change got voted in due to incompetence, malice, or lack of oversight from the companies represented on the CA/B forum is ridiculous to me.

            • ocdtrekkie 4 days ago

              > Let's Encrypt lists 10 ACME clients for Windows / IIS.

              How many of those are first-party/vetted by Microsoft? I'm not sure you understand how enterprises or secure environments work; we can't just download whatever app someone found on the Internet that solves the issue.

              • dextercd 4 days ago

                No idea how many are first-party or vetted by Microsoft. Probably none of them. But I really, really doubt you can only run software that ticks one of those two boxes.

                Certify The Web has a 'Microsoft Partner' badge. If that's something your org values, then they seem worth looking into for IIS.

                I can find documentation online from Microsoft where they use YARP w/ LettuceEncrypt, Caddy, and cert-manager. Clearly Microsoft is not afraid to tell customers how to use third-party solutions.

                Yes, these are not fully endorsed by Microsoft, so they're much harder to get approval for. If an organisation really makes it impossible, then they deserve the consequences of that. They're going to have problems with 397-day certificates as well. That shouldn't hold the rest of the industry back. We'd still be on 5-year certs by that logic.

                • ocdtrekkie 4 days ago

                  [flagged]

                  • dextercd 4 days ago

                    Stealing a private key or getting a CA to misissue a certificate is hard. Then actually making use of this in a MITM attack is also difficult.

                    Still, oppressive states or hacked ISPs can perform these attacks on small scales (e.g. individual orgs/households) and go undetected.

                    For a technology the whole world depends on for secure communication, we shouldn't wait until we detect instances of this happening. Taking action to make these attacks harder, more expensive, and shorter lasting is being forward thinking.

                    Certificate Transparency and Multi-Perspective Issuance Corroboration are examples of innovations that improved security without bothering people.

                    Problem is, the benefits of these improvements are limited if attackers can keep using the stolen keys or misissued certificates for 5 years (plus potentially whatever the DCV reuse limit is).

                    Next time a DigiNotar, Debian weak keys, or Heartbleed-like event happens, we'll be glad that these certs exit the ecosystem sooner rather than later.

                    • ocdtrekkie 4 days ago

                      [flagged]

                      • dang 3 days ago

                        Can you please follow the site guidelines when posting to HN, regardless of how wrong anyone else is or you feel they are? You broke them more than once in this thread (e.g. in this comment, in https://news.ycombinator.com/item?id=43698063, and arguably in your root post to the thread too - https://news.ycombinator.com/item?id=43687459).

                        I'm sure you have legit reasons to feel strongly about the topic and also that you have substantive points to make, but if you want to make them on HN, please make them thoughtfully. Your argument will be more convincing then, too, so it's in your interests to do so.

      • JackSlateur 3 days ago

        I hope you understand how funny this is.

        This ballot was entirely expected.

        The whole industry has been moving in this direction for the last decade.

        So there is not much to say.

        Except that if you waited until the last moment, well, you will have to be in a hurry. (Non-)actions have consequences :)

        I'm glad about this decision because it'll hammer down a bit on those who are resisting, those who still have a human perform yearly renewals. Let's see how stupid it can get.

    • xyzzy123 4 days ago

      Can you point to a specific security problem this change is actually solving? For example, can we attribute any major security compromises in the last 5 years to TLS certificate lifetime?

      Are the security benefits really worth making anything with a valid TLS certificate stop working if it is air-gapped or offline for 48 days?

      > CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.

      They're not incompetent and they're not "evil", and this change does improve some things. But the companies behind the top level CA ecosystem have their own interests which might not always align with those of end users.

      • dextercd 4 days ago

        If a CA or subscriber improves their security but had an undetected incident in the past, the attacker holds a 397-day cert today and can reuse the domain control validation for the next 397 days, meaning they can MITM traffic for effectively 794 days.
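
        Spelled out, since the arithmetic is easy to miss (the day numbers below are just the worst case implied by those two 397-day windows):

            * Day 0: attacker passes domain control validation and gets a 397-day cert
            * Day ~397: just before the DCV reuse window closes, they get a second 397-day cert with no new validation
            * Day ~794: only now does the second cert expire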

        CAs have now implemented MPIC. This may have thwarted some attacks, but those attackers still have valid certificates today and may be able to request a new certificate without any domain control validation being performed for over a year.

        BGP hijackings have been uncovered in the last 5 years and MPIC does make this more difficult. https://en.wikipedia.org/wiki/BGP_hijacking

        New security standards should come into effect much faster, both for fixes against attacks we know about today and for new ones that are discovered and mitigated in the future.

        • xyzzy123 3 days ago

          People who care deeply about this can use 30 day certs right now if they want to.

          • dextercd 3 days ago

            Sure, but it's even better if everyone else does too, including attackers that mislead CAs into misissuing a cert.

            CAs used to be able to use WHOIS for DCV. The fact that this option was taken away from everyone is good. It's the same with this change, and you have plenty of time to prepare for it.

            • xyzzy123 3 days ago

              > including attackers that mislead CAs into misissuing a cert.

              I thought we had CT for this.

              > CAs used to be able to use WHOIS for DCV. The fact that this option was taken away from everyone is good.

              Fair.

              > It's the same with this change, and you have plenty of time to prepare for it.

              Not so sure on this one; I think it's basically the result of a security "purity spiral". Yes, it will achieve better certificate hygiene, but it will also create a lot of security busywork, effort that could be better spent on parts of the ecosystem that have much worse problems. The decision to make something that was opt-in mandatory forcibly allocates other people's labour.

              • dextercd 2 days ago

                CT definitely helps, but not everyone monitors it. This is an area where I still need to improve. But even if you detect a misissued cert, it cannot reliably be revoked with OCSP/CRL.
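
                Even a rough, unofficial check is better than nothing. As a minimal sketch (crt.sh is a third-party CT search service, and the domain and JSON field names below are assumptions about its current output format, not an official API contract), a scheduled script could list the certificates logged for a domain so you can flag any you didn't request:

                    import json
                    import urllib.request

                    DOMAIN = "example.com"  # hypothetical domain to watch

                    # crt.sh aggregates Certificate Transparency logs and exposes a JSON search endpoint.
                    url = "https://crt.sh/?q=%25." + DOMAIN + "&output=json"
                    with urllib.request.urlopen(url, timeout=30) as resp:
                        entries = json.load(resp)

                    # One line per logged certificate; anything you didn't request is worth investigating.
                    for entry in entries:
                        names = entry.get("name_value", "").replace("\n", " ")
                        print(entry.get("id"), entry.get("not_after"), entry.get("issuer_name"), names)

                Dedicated CT monitoring services do essentially the same thing continuously and alert you instead of printing.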

                --

                The maximum cert lifetime will gradually go down. The CA/B forum could adjust the timeline if big challenges are uncovered.

                I doubt they expect this to be necessary. I suspect that companies will discover that automation is already possible for their systems and that new solutions will be developed for most remaining gaps, in part because of this announced timeline.

                This will save people time in the long run. It is forced upon you, and that's frustrating, but you do have nearly a year before the first change. It's not going down to 47 days in one go.

                I'm not saying that no one will renew certificates manually every month. I do think it'll be rare, and even more rare for there to be a technical reason for it.

      • sidewndr46 4 days ago

        According to the article:

        "The goal is to minimize risks from outdated certificate data, deprecated cryptographic algorithms, and prolonged exposure to compromised credentials. It also encourages companies and developers to utilize automation to renew and rotate TLS certificates, making it less likely that sites will be running on expired certificates."

        I'm not even sure what "outdated certificate data" could be. The browser by default won't negotiate a connection with an expired certificate.

        • xyzzy123 4 days ago

          > I'm not even sure what "outdated certificate data" could be...

          Agree.

          > According to the article:

          Thanks, I did read that, but it's not quite what I meant. Suppose a security engineer at your company proposes that users should change their passwords every 49 days to "minimise prolonged exposure from compromised credentials" and to encourage the uptake of password managers and passkeys.

          How to respond to that? It seems a noble endeavour. To prioritise, you would want to know (at least):

          a) What are the benefits - not mom & apple pie and the virtues of purity, but brass tacks - e.g. how many account compromises do you believe would be prevented by this change, and what is the annual cost of those? How is that trending?

          b) What are the cons? What's going to be the impact of this change on our customers? How will this affect our support costs? User retention?

          I think I would have a harder time justifying the cert lifetime proposal than the "ridiculously frequent password changes" proposal. Sure, it's more hygienic, but I can't easily point to any major compromises in the past 5 years that would have been prevented by shorter certificate lifetimes. Whereas I could at least handwave in the direction of users who got "password stuffed" to justify ridiculously frequent password changes.

          The analogy breaks down in a bad way when it comes to evaluating the cons. The groups proposing to decrease cert lifetimes bear nearly none of the costs of the proposal; for them it is externalised. They also have little to no interest in use cases that don't involve "big cloud", because those don't make them any money.

          • dextercd 4 days ago

            "outdated certificate data" would be domains you no longer control. (Example would be a customer no longer points a DNS record at some service provider or domains that have changed ownership).

            In the case of OV/EV certificates, it could also include the organisation's legal name, country/locality, registration number, etc.

            Forcing people to change passwords increases the likelihood that they pick simpler, algorithmic passwords so they can remember them more easily, reducing security. That's not an issue with certificates/private keys.

            Shorter lifetimes on certs are a net benefit. 47 days seems like a reasonable balance between not having bad certs stick around for too long and having enough time to fix issues when you detect that automatic renewal has failed.

            The fact that it encourages people to prioritise implementing automated renewals is also a good thing, but I understand that it's frustrating for those with bad software/hardware vendors.

    • bsder 3 days ago

      > They didn't do this because they're incompetent but because they think it'll improve security.

      No, they did it because it reduces their legal exposure. Nothing more, nothing less.

      The goal is to get the rotation time low enough that certificates will rotate out before legal procedures to stop them from being rotated can kick in.

      This does very little to improve security.

      • dextercd 3 days ago

        Apple introduced this proposal. Why would they care about a CA's legal exposure?

        Lowering the lifetime of certs does mean that orgs will be better prepared to replace bad certs when they occur. That's a good thing.

        More organisations will now take the time to configure ACME clients instead of trying to convince CAs that they're too special to have their certs revoked, or even starting embarrassing court cases, which has only happened once as far as I know.

        Theories that involve CAs, Google, Microsoft, Apple, and Mozilla having ulterior motives and not considering potential downsides of this change are silly.

      • nickf 3 days ago

        That isn’t at all true.

  • rcxdude 5 days ago

    A large part of why it breaks things is that it only happens yearly. If you rotate certs at a regular pace, you actually get good at it and it stops breaking. (Basically everything I've set up with Let's Encrypt has needed zero maintenance, for example.)

    • ocdtrekkie 5 days ago

      So at a 47-day cadence, it's true we'll have to regularly maintain it: we'll need to hire another staff member to constantly do nothing but that. (Most of the software we use does not support automated rotation yet. I assume some will due to this change, but certainly not 100%.)

      And it probably won't avoid problems either. Yes, the goal is automation, but a couple of weeks ago I was trying to access a site from an extremely large infrastructure security company which rotates its certificates every 24 hours. Their site was broken, and the subreddit about the company was full of complaints about it. Turns out automated daily rotation just means 365 more opportunities for breakage a year.

      Even regular processes break, and now we're multiplying the breaking points... and again, for no real security benefit. There's, like... never ever been a case where a certificate leak caused a breach.

      • Avamander 3 days ago

        > So at a 47-day cadence, it's true we'll have to regularly maintain it: we'll need to hire another staff member to constantly do nothing but that. (Most of the software we use does not support automated rotation yet. I assume some will due to this change, but certainly not 100%.)

        This is fundamentally a skill issue. If a human can replace the certificate, so can a machine. Write a script.
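
        As a minimal sketch of what such a script could look like (Python purely as an example; the host name, threshold, and "certbot renew" call are assumptions, stand-ins for whatever ACME client your environment actually allows): check how many days the currently served certificate has left, and kick off a renewal when it drops below a threshold.

            import socket
            import ssl
            import subprocess
            import time

            HOST = "www.example.internal"  # hypothetical host whose cert we check
            RENEW_BELOW_DAYS = 14          # renew when fewer than this many days remain

            def days_until_expiry(host, port=443):
                # Connect, complete the TLS handshake, and read the leaf cert's notAfter field.
                ctx = ssl.create_default_context()
                with socket.create_connection((host, port), timeout=10) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host) as tls:
                        cert = tls.getpeercert()
                expires = ssl.cert_time_to_seconds(cert["notAfter"])
                return int((expires - time.time()) // 86400)

            if days_until_expiry(HOST) < RENEW_BELOW_DAYS:
                # Hand off to whichever ACME client is approved in your environment;
                # "certbot renew" is just one common example.
                subprocess.run(["certbot", "renew", "--quiet"], check=True)

        Run something like that daily from cron or Task Scheduler and the renewal cadence largely stops mattering.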

  • arp242 3 days ago

    You can disagree with all of this, but calling for everyone involved to be fired is just ridiculous and mean-spirited.

    • rglover 3 days ago

      Is it? This is the crux of the problem with a lot of institutions. There's little to no professional accountability for bad moves anymore. It used to be that doing a good job and taking pride in one's work was all you needed to do to keep your job.

      Now? It's a spaghetti of politics and emotional warfare. Grown adults who can't handle being told that they might not be up to the task and it's time to part ways. If that's the honest truth, it's not "mean," just not what that person would like to hear.

aristofun 3 days ago

Oh no, a bunch of stupid bureaucrats came up with another dumb idea. What a surprise!

belter 4 days ago

Are the 47 days to please the current US Administration?

  • eesmith 4 days ago

    Based on the linked-to page, no:

        47 days might seem like an arbitrary number, but it’s a simple cascade:
    
        * 200 days = 6 maximal month (184 days) + 1/2 30-day month (15 days) + 1 day wiggle room
        * 100 days = 3 maximal month (92 days) + ~1/4 30-day month (7 days) + 1 day wiggle room
        * 47 days = 1 maximal month (31 days) + 1/2 30-day month (15 days) + 1 day wiggle room
vasilzhigilei 3 days ago

I'm on the SSL/TLS team @ Cloudflare. We have great managed certificate products that folks should consider using as certificate validity periods continue to shorten.

  • bambax 3 days ago

    Simply having a domain managed by Cloudflare makes it magically HTTPS; yes, the traffic between the origin server and Cloudflare isn't encrypted, so it's not completely "secure", but for most uses it's good enough. It's also zero-maintenance and free.

    Keep up the good work! ;-)

    • vasilzhigilei 3 days ago

      Thanks! You can also set up free origin certs to make Cloudflare edge to origin connections encrypted as well.

  • tinix 3 days ago

    yeah I'm convinced this is the real reason for these changes...

    perverse incentives indeed.

  • Lammy 3 days ago

    SSL added and removed here ;-)

  • lucb1e 3 days ago

    Is this a joke (as in, that you don't actually work there) to make CF look bad for posting product advertisements in comment threads, or is this legit?

    • vasilzhigilei 3 days ago

      It's one of my first times posting on HN; I thought this could be relevant, helpful info for someone. Thanks for pointing out that it sounds salesy; rereading my comment, I see it too now.

    • bambax 2 days ago

      People who are proud of the work they do are rare enough that we shouldn't punish them for it.