I 100% recommend watching that talk near the top of the article. It's a short talk, about 20 minutes, but even if you only have time to watch the first 3, that's where the most entertaining part is, so it's still worth it.
So, it sounds like (please correct me if I misunderstood something) the issue is:
    # cache signing objects and get chain
    cache, chain = parse_chain_from_cert(full_cert)
    root = chain[-1]
    validate_root_authenticity(root)
    # walk the chain from the leaf up to the root
    for cert in chain[:-1]:
        parent = cert.parent()
        # Wrong signing object retrieved here:
        signing_object = cache[parent.public_key]
        signing_object.validate(cert)
So the leaf's signing object is used instead of the root's, as they share the same public key, but it doesn't contain the parameters of the curve.
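A runnable toy version of that collision (all the names here are mine, just to make the failure concrete):

    # The cache is keyed on bare public-key bytes alone, so only one
    # signing object can exist per key, whatever its curve parameters.
    cache = {}
    cache[b"pubkey-bytes"] = "root signing object"  # real curve params
    cache[b"pubkey-bytes"] = "leaf signing object"  # attacker-chosen curve

    # Whichever was stored last wins; the root's object is unreachable.
    assert cache[b"pubkey-bytes"] == "leaf signing object"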
Interested in how exactly it was fucked up, I looked at crypt32.dll. It's more like this (heavily adjusted to demonstrate the problem without >9k unrelated details):
    from the_heavy_lifting_crypto_heavy_module import verify_signature

    def is_known_trusted_ca(cert):
        # BOOM: the lookup is keyed on the public key alone
        return cert.pubkey in root_ca_cache

    def verify_cert(cert, issuer):
        # BOOM: same here
        if verified_cache.get((cert.pubkey, issuer.pubkey)):
            return True
        if not verify_signature(cert, issuer):
            return False
        verified_cache[(cert.pubkey, issuer.pubkey)] = True
        return True

    # This has to be a search instead of simple chain-walking because servers
    # do not send a "chain"; they only send several additional certificates
    # and let the client find a trusted path out of them. It is also how
    # cross-signing works. This justifies using the caches above.
    def try_to_build_cert_chain(cert, cert_store):
        if is_known_trusted_ca(cert):
            return [cert]
        for possible_issuer in cert_store.find_certs_with_dn(dn=cert.issuer_dn):
            if verify_cert(cert, possible_issuer):
                chain = try_to_build_cert_chain(possible_issuer, cert_store)
                if chain:
                    return [cert] + chain
        return None
Then someone tasked with (why?!) "adding unnamed curve support" added (correct!) code to `the_heavy_lifting_crypto_heavy_module.verify_signature`, not knowing that it broke an assumption made in another module.
The exact details don't matter much, though; you can build a working exploit even without knowing any of the above.
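For what it's worth, here's a minimal sketch of the shape a fix could take (my guess, not Microsoft's actual patch): make everything that identifies the key part of the cache key, so an explicit curve can never alias a named one.

    # `curve_oid` and `explicit_params` are illustrative attribute names.
    def key_identity(cert):
        return (cert.pubkey, cert.curve_oid, cert.explicit_params)

    def is_known_trusted_ca(cert):
        # no BOOM: identical key bits on different curves no longer collide
        return key_identity(cert) in root_ca_cache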
Servers are supposed to send a chain ("SHOULD" is the strongest language the RFC uses about chains), but in practice the only way to get where you're going as a client is to treat the first certificate as the leaf and everything else as hints that might help you construct a chain.
The reason is that a server doesn't know which roots you actually trust. Maybe it thinks you trust X, and that X->A->B->C, so it sends X->A, A->B, and B->C; but you actually trust A, so you only needed A->B and B->C. X->A doesn't help you make a decision, so your code needs to disregard it.
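In code terms, reusing the toy `try_to_build_cert_chain` sketch upthread, everything after the leaf is a hint pool you merge into your search space, not an ordered chain you walk:

    def validate_server_certs(server_certs, local_store):
        leaf, *hints = server_certs
        # merged_with is a hypothetical helper: the hints become search
        # candidates, but nothing is trusted because of its position
        store = local_store.merged_with(hints)
        return try_to_build_cert_chain(leaf, store)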
That's a general description of the problem, but the more pragmatic question to ask is "why do we support explicit curve parameters at all?". In practice, nobody uses them; virtually no organizations are even qualified to use them. Many platforms don't support them at all; it's not as if there's a serious compatibility argument for them.
The right response to this vulnerability is to jettison support for anything but named curves.
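Concretely, that means a validator allowlists named-curve OIDs and hard-fails anything carrying explicit ECParameters. A sketch (the `spki` field names are mine):

    # Named curves only; explicit parameters are rejected outright.
    ALLOWED_CURVE_OIDS = {
        "1.2.840.10045.3.1.7",  # P-256
        "1.3.132.0.34",         # P-384
    }

    def curve_is_acceptable(spki):
        return spki.explicit_params is None and spki.curve_oid in ALLOWED_CURVE_OIDS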
This is probably an inside-baseball argument for people who don't pay attention to elliptic curve implementation details. But it's easy to quickly understand.
Go to SAFECURVES.CR.YP.TO; you'll see a big long list of curves, each with their parameters and some implementation details. In reality, only a subset of these curves are commonly used (P-224, P-256, Secp256k1, and Curve25519 probably account for 95% of all curves).
Further, most specific curves are implemented not as parameters plugged into a generic elliptic curve scalar multiplication routine, but rather as curve-specific code (one of the major reasons we have different curves is to allow specific implementation optimizations). A browser that supports Curve25519 certainly uses Curve25519 code, not generic code.
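For illustration, Python's `cryptography` package works exactly this way: its EC API only accepts named-curve classes, and there's no public constructor for supplying your own Weierstrass coefficients.

    from cryptography.hazmat.primitives.asymmetric import ec

    # SECP256R1 (P-256) selects a vetted, curve-specific code path; there
    # is no "here are my p, a, b, G, n" entry point.
    key = ec.generate_private_key(ec.SECP256R1())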
The "explicit curve" feature that blew up on Microsoft is designed to allow people to specify their own parameters, not for well-known curves but for curves they themselves generated. Nobody does that. There was a time, in the long long ago, when people thought that might be a good idea; nobody thinks that anymore; everyone thinks the opposite thing. If anything, we're trying to get rid of a bunch of curves. Certainly we shouldn't be randomly supplying new Weierstrass curves.
So that's the problem here. A lot of cryptography engineers were probably surprised this CryptoAPI feature even worked. We had considered it a while ago as a cryptopals challenge (and Sean actually did add it as one in Set 8 a few years ago), but we never dreamed anyone would be crazy enough to let this bug happen in the real world.
My feeling is that there has been a change in crypto. In the past, crypto was a la carte.
Over time, this has been shown to open up big security vulnerabilities such as this one, along with issues like key agility in TLS.
I think the tendency today is to make crypto more omakase: there are a few well-studied, well-implemented algorithms that you use for everything.
A small part of the problem is that the omakase options do change over time with changes in threat models and knowledge of vulnerabilities. One example is the recent push away from the NIST curves to the more openly designed Curve25519, in part because of reasonable concerns that the NIST curves were backdoored by the NSA for easier cryptanalysis. So you still need "low level" a la carte tools underlying and backing the omakase experience.
Part of the problem with the Windows CryptoAPI in question is that it has to serve both customers: the omakase sort that need the best option, and the a la carte needs of higher-level tools serving as the omakase option. That is, for some traditional/legacy C/C++ users, CryptoAPI is about as high-level as they will get, and it's up to the documentation to steer its consumers to the best omakase options. But crypt32.dll is also the low-level backend of tools like WinRT's Windows.Security.Cryptography and .NET's System.Security.Cryptography (on Windows).
The push towards omakase, getting applications away from micromanaging the a la carte stuff, definitely seems like the smart path. But handling the onion of what needs to be supported at a low level to keep the high-level omakase shop running smoothly still has a lot of pitfalls.
Without addressing the thing about the NIST P-curves being backdoored, which is an extremely dubious proposition, I'll just say that whether you trust NIST or not, nobody should be using custom curves with TLS. Cryptographers have spent an inordinate amount of time working out, proposing, and competing with each other about what the best curves are, and what the best mechanisms for generating them are, and there are a wealth of named curves to pick from.
I'd be happier if nobody used the P-curves (they're unsafe for other reasons). But that's got nothing to do with whether people should literally be plugging their own Weierstrass coefficients and base points into X.509.
Standards should immediately deprecate support for explicit curve parameters.
> Without addressing the thing about the NIST P-curves being backdoored, which is an extremely dubious proposition
I don't believe it exactly either; I just know it as a zeitgeist reason why some of the dialog about curve changes has been happening. In hindsight, I suppose I'd have been better off not mentioning it.
> Standards should immediately deprecate support for explicit curve parameters.
I agree, and that wasn't the intent of my post if it came across that way. While it's mentioned elsewhere in the thread that many named curves are implemented standalone, my understanding is that not all of them are.
I was only suggesting that crypt32.dll is intended to be low-level enough that some of the higher-level APIs Microsoft has built that do focus on named curves only (such as the UWP/WinRT and .NET APIs I mentioned) may rely on the explicit parameter support in the lower-level library. I don't envy Microsoft the task of balancing what belongs in a low-level API like that against backwards compatibility, "upwards" compatibility with higher-level components, etc.
I believe that there are users who do produce keypairs with arbitrary explicit curves: the NSA and its equivalents in other countries. Such agencies tend to like secret algorithms, and generating your own curve for internal use is a somewhat sane way to do that for ECC.
In other words, I would not be too surprised if the NSA found this bug by accident, because it caused some internal “secret” cryptosystem to not work at all, rather than by actively looking for it.
The X.509 structure does put all of this together; it's just that cryptographers tend to refer to the actual numbers that vary from one key to another as the "public key" as a shorthand. They know the rest of the context is important, it's just not what they're talking about right now.
In RSA, for example, there's a public exponent. Yours is almost invariably going to be 65537, and most of the time nobody will even mention this parameter, but the "same" key with exponent 3 would actually be a different key than yours.
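A runnable toy demonstration (textbook numbers, nowhere near real key sizes): a signature produced for one exponent doesn't verify under another, so the exponent really is part of the key's identity.

    n = 3233                 # 61 * 53, a classic textbook RSA modulus
    d = 2753                 # private exponent matching e = 17
    sig = pow(42, d, n)      # "sign" the message 42

    assert pow(sig, 17, n) == 42  # verifies under e = 17
    assert pow(sig, 3, n) != 42   # the "same" modulus with e = 3 rejects it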
In code, such shorthand becomes a bug.
If you only allow a handful of named algorithms, the bug is essentially useless to adversaries. Microsoft's non-standard decision to allow parametrised curves turns it into a huge attack surface.
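One concrete way to avoid the shorthand-as-cache-key trap, sketched with Python's `cryptography` package (assuming `cert` is an `x509.Certificate`): key caches on the full DER-encoded SubjectPublicKeyInfo, which includes the algorithm and its parameters, never on the bare key bits.

    from cryptography.hazmat.primitives import serialization

    def cache_key(cert):
        # SPKI DER covers algorithm OID + parameters + key bits, so keys
        # on different curves can never collide in the cache
        return cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )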
Yeah, I get that! There would still be a named curve though, and things do change over time.
Sometimes I like to force things in such a way that you can’t avoid dealing with some issues. So, by putting these items in the public key, you are forced to look at them and deal with them when you write a library.
Often, engineers will write code correctly because they know all the context. But the next engineer will likely have less and less context as time goes by.
Thus forcing context acknowledgement, while seemingly redundant, might be essential to avoiding future bugs.
Correct me if I'm wrong, but it seems like Firefox (and maybe Chrome) would never have this vulnerability, as they use their own cryptography libraries for everything.
Maybe. The question isn't "are you using your own cryptography libraries for cryptographic operations." The question is "are you using Windows's native certificate chain validation API, or did you write your own?"
Even if Firefox/Chrome use their own implementations of cryptographic algorithms, there's a decent chance they are still using Windows's native API for certificate chain validation in order to plug into the enterprise certificate management features that come baked into Windows, which IT teams sometimes use to distribute internal PKIs. Without having seen how Firefox and Chrome handle their chain building, I can't say "yes" or "no" definitively.
Firefox uses only its own code, NSS, which is a Free Software crypto stack. This achieves consistency across OS platforms for Firefox. If Mozilla distrusts a sketchy CA then Firefox users are protected on a Mac just the same as in Linux, and in Windows.
Firefox also uses NSS to do certificate validation. In newer releases, a config pref lets you ask Firefox to use Windows' local trust store to find any CA roots that are additionally trusted on your PC, e.g. "enterprise" certificates or even a local cert on somebody's web development setup. But the validation of those certificates is still done by NSS. If your corporation has (as one of my ex-employers did) really half-arsed certificates, they'll be rejected by NSS even though Windows is happy with them and Firefox has fetched them from the Windows trust store. This is potentially annoying, but it probably also means security at your place of work is shot to hell.
Chrome, on the other hand, hands all normal certificate validation to Windows so as to deliver the same experience as other Windows products. This makes it a better drop-in replacement, but it outsources trust decisions (on Windows) to Microsoft. In this case, that's bad news.
Chrome will still reject some certs on its own: either they claim to be logged in Certificate Transparency (and the proofs won't match for a bogus cert), or they don't claim to be logged but purport to have been created after logging became mandatory. But bad guys could work around this (for the next year or so) by picking date ranges for which CT logging wasn't yet mandatory but a compliant certificate wouldn't yet be expired. A proof of concept of this has been done for Chrome on unpatched Windows.
Patching either Chrome or Windows to current fixes the problem for Chrome, but obviously patching Chrome doesn't fix other Windows software.
Looking at it, they don't even load crypt32.dll, so unless one has a modified version of Firefox or an external plugin, it seems unlikely they have the vulnerability.
Any software which might validate ECC certificates is affected. That could include IIS if your setup uses client certs or if you have software which connects to other sites over HTTPS, or similar situations.
It could also affect document signing (less commonly used, this is digital signatures not electronic signatures) or code signing (e.g. updates for software other than Windows itself might be affected, or installing new software you haven't separately validated as safe).
Off topic, but this vulnerability was discovered by the NSA, right? Then you're not really in a position to brand the vulnerability, or you come off as trying to take credit where credit isn't due. (I mean, you have every right to make a website for it, but giving it a logo and saying "Why even patch if it doesn't have a logo?" as if it now has an official logo seems to be taking it a bit far.)
Please excuse me if I got the origin story wrong.
Edit: Also, from the website:
> Who’s behind this?
> Trail of Bits
That could certainly be (mis)interpreted as taking credit for the vulnerability.
You would be misinterpreting them, since their page explicitly credits NSA, cites other people's POCs, and even the HN thread where I wrote down Thomas Pornin's (correct) guess as to what the vulnerability was.
The branding thing is a joke, about the fact that neither Microsoft nor NSA branded the vulnerability. Lighten up.
The test site doesn't specify whether we have to use a specific browser to test the vulnerability. I somehow doubt this affects all desktop browsers: IE, Edge, Chrome, and Firefox.
The Trail of Bits CA is unlikely to be in your trust store, so you will always get an error message. For example Chromium on this Linux lappy gives me a certificate error relating to the CA but I can still access the site. After importing the CA to my trust store I can no longer access the site at all because I get a proper fatal error relating to the certificate itself and not the CA.
I have problems getting Windows update to work under Arch Linux but I'm hoping to find something in the AUR.
There are a number of other brands for it too: CurveBall, NSACrypt, ChainOfFools, and more. None of them have dedicated sites though, so Whose Curve Is It Anyway is obviously the best.
Do you remember the CVE number of this bug? How about the Citrix bug from last week? Numbers are a poor way to communicate, and there's very strong anecdotal evidence that using a real name speeds up patching rates.
Plus, the whosecurve.com website includes the information you want if you only have 10 seconds and it's by the same authors as the blog linked in the OP.
The fact that their writeups all credit NSA and link to other people's descriptions of the vulnerability has the "benefit" of ensuring that you have to want that misconception in order to end up with it.
Why is that ironic? Well, Trail of Bits also published this article last year:
https://blog.trailofbits.com/2019/07/08/fuck-rsa/
(Note, the points there are still valid. I just find it funny.)