Do We Need an Alternative to HTTPS and TLS?

“Do We Need an Alternative to HTTPS and TLS?”  This question came up in the Personal Clouds list recently.  Thanks to the well-publicized problems with Certificate Authorities, variations on this question are a common theme among many of the communities in which I participate.  The CA has become the whipping boy for all the ills of authentication and network security.  Let’s just get rid of it, right?  It’s not that simple.

Although we may dislike certain aspects of a CA or other central authority, any replacement system would need to provide certain functions of that role or else mitigate the issues introduced by their absence.

The X.509 cert binds a bare asymmetric key to an identity and policies.  The CA’s role…

  • Manages the namespace to ensure no duplicates in the DN or serial number.
  • Enforces policies against the identity and the restrictions bound into the cert.
  • Vouches for the identities bound to the certs.
  • Binds the keys, identity, and policy prior to first use, ensuring that they do not change.
  • Provides a revocation service.
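Those roles can be sketched as a toy registry in Python. This is illustrative only (the class and names are my own invention; a real CA also involves chains, CRLs/OCSP, and audited procedures), but it shows that each bullet above is a concrete check that has to live somewhere:

```python
# Toy model of the CA roles listed above: namespace management,
# binding identity to a key before first use, and revocation.
# Hypothetical sketch only; not a real CA.

class ToyCA:
    def __init__(self):
        self.issued = {}      # serial -> (dn, public_key)
        self.revoked = set()  # revoked serial numbers

    def issue(self, serial, dn, public_key):
        # Namespace management: refuse duplicate serials or DNs.
        if serial in self.issued:
            raise ValueError("duplicate serial number")
        if any(existing_dn == dn for existing_dn, _ in self.issued.values()):
            raise ValueError("duplicate DN")
        # Bind key and identity prior to first use.
        self.issued[serial] = (dn, public_key)

    def revoke(self, serial):
        self.revoked.add(serial)

    def is_valid(self, serial):
        # Vouching: the binding exists and has not been revoked.
        return serial in self.issued and serial not in self.revoked

ca = ToyCA()
ca.issue(1, "CN=example.com", "pubkey-A")
```

Any replacement scheme that drops the registry still has to answer the questions the registry answered.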

I’ve been spending a lot of time lately in Steve Gibson’s newsgroups where they are developing a new authentication technology based on bare 256-bit random keys.  Members of the group are extremely fond of calculating highly improbable birthday attack odds and saying things like “Bruce Schneier says ‘trust the math’”.  The assertion is that it is OK that the system allows dupe keys because of the “vanishingly small” chance of one ever occurring.  I do trust the math.  I hold the code in somewhat lower esteem.  As a security architect it doesn’t matter to me how small the chance of a dupe is.  If the identity *can* be ambiguous, I assume that a dupe *will* occur and design controls to mitigate the impact of that occurrence.  The CA system was built on similar principles.  It assumed that dupes would occur, and several layers of controls, in addition to randomness and large keyspaces, were included in the architecture to detect and eliminate them.
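For scale, the birthday-bound odds the group is leaning on are easy to check (the trillion-key population is my own illustrative assumption, not a figure from the newsgroup):

```python
# Back-of-the-envelope birthday bound for bare 256-bit random keys.
keyspace = 2 ** 256
n = 10 ** 12  # assumed population: a trillion keys ever generated

# P(at least one collision) <= n*(n-1) / (2 * keyspace)
p_collision = n * (n - 1) / (2 * keyspace)
print(f"collision probability <= {p_collision:.2e}")  # on the order of 1e-53
```

Trusting that number is trusting the math. The point above is that the controls matter when the code fails, not the math: a weak random number generator produces duplicates at rates the birthday bound says are impossible.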

When I consider replacing the CA with a decentralized identity scheme the first thing I start asking is how the proposed system, lacking management of its namespace (or keyspace), deals with ambiguous identifiers.  One thing I’ve come to realize is that anonymity means ambiguity.  A system relying on bare keys and nobody to manage the namespace cannot distinguish between two holders of the same bare key.  Assuming this is solved (or at least considered), then I start looking at how the proposed system addresses some of these other issues.
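One way a system can disambiguate holders of the same bare identifier is to bind a secondary secret to the identifier at first use. A toy sketch, with illustrative names and a salted password hash standing in for the secondary control (this is my own example, not a scheme from any of the systems discussed here):

```python
import hashlib
import hmac
import os

registry = {}  # identifier -> (salt, verifier)

def bind(identifier, secret):
    # First use: bind a secret to the bare identifier so later
    # presenters must prove they hold it, not just know the identifier.
    if identifier in registry:
        raise ValueError("identifier already bound")
    salt = os.urandom(16)
    verifier = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    registry[identifier] = (salt, verifier)

def verify(identifier, secret):
    # Without a check like this, two holders of the same bare
    # identifier are indistinguishable.
    salt, verifier = registry[identifier]
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, verifier)

bind("a3f9c2e1", "correct horse battery staple")
```

A bare identifier alone supports only the claim “I’ve seen this identifier before”; the bound secret is what upgrades that to a uniqueness claim.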

HTTPS provides a few things that can be easily duplicated and a few things that cannot.  Since the user typically authenticates by sending credentials within the secure TLS tunnel, an anonymous protocol that generates a random session key can work.  It doesn’t matter if two TLS sessions generate the same session key so long as it cannot be predicted.  That part we can do easily.  On the other hand, we also use TLS to authenticate the server.  Here we really *do* care about ambiguities of the identifiers, the policies bound to the certificates, the ability to revoke keys, that the identity provider relied on has no stake in the connection, etc.
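The split between the two jobs is visible in how a TLS client is configured. A minimal sketch using Python’s standard `ssl` module:

```python
import ssl

# Authenticated TLS: the server's certificate chain and hostname are
# verified against trusted CAs. This is where namespace management,
# bound policies, and revocation all matter.
authenticated = ssl.create_default_context()

# Encryption-only TLS: still negotiates an unpredictable session key,
# but accepts any certificate, so the server's identity is ambiguous.
# (Order matters: check_hostname must be disabled before verify_mode.)
encrypted_only = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
encrypted_only.check_hostname = False
encrypted_only.verify_mode = ssl.CERT_NONE
```

Both contexts produce the same confidential tunnel; only the first one answers “am I talking to the server I meant?”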

I’m not suggesting that working towards alternatives to the CA system is bad or hopeless.  But whatever we might consider to replace HTTPS, we need to remember there’s a baby in the CA bath water.  It may be hard to see through the blinding glare of all the crypto math, but it’s in there and is worth saving.

Comments

  1. Interesting article. I personally think that any security mechanism relying on a central authority, or any other kind of centralization, can and should be replaced.

    I think one of the key phrases of your article is when you say “One thing I’ve come to realize is that anonymity means ambiguity”. First of all, with this phrase you confuse decentralization with anonymity. (Look at Bitcoin: Bitcoin is anything but anonymous in nature.)

    When you want to decentralize, the most efficient way to do it is to stop thinking in terms of two computers communicating and start thinking in terms of a message being sent in a network of interconnected peers, from a starting point to a destination. Because the network of peers will act as the central authority.

    The role played by the CA is useful, and does not need to be removed but replaced by a 100% reliable mechanism. And to stop with the “Globally speaking, most of the CAs are fine, there are only minor exceptions / Trust us, the certificates are kept private / We need to hire a security engineer to help us configure the webserver with HTTPS because it is too complicated”.

    The CAs can and should be replaced by a network of peers and a protocol. The peers would be the web browsers of internet users – why wouldn’t they support such a protocol? From there, you can see that it would be easy to take the “traceroute” into account in this protocol, the path between two peers, to detect when two people use the same key, if you really can’t trust the maths… It would solve the issue with duplicate keys.

    It’s worth saying that you would be adding overhead and extra server computations for something with less than a 10E-30 chance of occurring (it’s even smaller than that; I don’t have the exact number right now). And you have to realize that, for example, Tox, a secure P2P messaging protocol, relies on it to authenticate its users on the WHOLE network. But here we are saying not only that 1) two identical keys will be generated, but also that 2) the malicious user will be lucky enough to target the right server at the right time, reducing this probability even more….

    If you are interested in the duplicate-keys topic, I would recommend you visit the wiki of the tox.im project; they explain their solution in a very comprehensive way (on their wiki/github websites).

    Regarding server authentication, when there is no duplicate key, then there is no problem authenticating the server. I think about it in terms of two-way authentication, with private/public keys on both sides, some kind of checksum controls, etc… In your protocol you can also enhance things with extra features like relying on the overall trust of the network (if many nodes give a good reputation to the candidate server node, assume it is the legitimate server node).

    The foundation of this approach is that if you remove CAs without replacing them with a network of peers, you actually do not remove them; you simply make the server become its own central authority. And then a lot of new problems may arise (what about DNS spoofing, etc.…)

    But effectively replacing HTTPS-TLS/SSL with such a system would make a great difference. Many more websites would be secured, since there would be no overhead and no extra costs associated with security. When we are able to do without intermediaries, and when it significantly improves efficiency, solves problems, and simplifies things, we should really do it.

    • …when you say “One thing I’ve come to realize is that anonymity means ambiguity”. First of all, with this phrase you confuse decentralization with anonymity.

      Not at all. If I accept anonymous identifiers to post to my forum, for example, I know only that I see the same identifier on successive visits. If the identifier is sufficiently long and random-appearing, I might even assume that it was created with sufficient entropy as to practically guarantee no dupes. However, I have no way to distinguish between two people using the same identifier. A holder of that identifier can safely repudiate any posting because there is no assurance that the person is the only holder of that identifier. That ambiguity exists regardless of whether the identifier was issued by a central authority, a collective, or an individual.

      In cases where such assurances are important, additional controls must be used. Associating a password with an identifier provides additional assurance of uniqueness; binding the identifier to a certificate, more assurance yet. The framework of additional controls is what allows someone to claim a unique identity. Without these, the only claim that can be made is “I’ve seen this identifier before”, which may be sufficient for forum postings, but not for high-value applications, because of the inherent ambiguity.

      The role played by the CA is useful, and does not need to be removed but replaced by a 100% reliable mechanism.

      I believe that you are confusing math with life here. It’s possible to create proofs of mathematical concepts but security systems are real-world implementations conceived, created, and operated by humans. If the criterion is a 100% reliable replacement for the CA we will never replace the CA. I believe a more practical objective is to find something an order of magnitude better than the CA system at a comparable cost.

      The CAs can and should be replaced by a network of peers and a protocol. The peers would be the web browsers of internet users – why wouldn’t they support such a protocol? From there, you can see that it would be easy to take the “traceroute” into account in this protocol, the path between two peers, to detect when two people use the same key, if you really can’t trust the maths… It would solve the issue with duplicate keys.

      This gets back to my assertion that it is necessary to add secondary controls to a bare identifier wherever the ambiguity issue needs to be solved – which is exactly what is proposed here. Unfortunately, the method described doesn’t at all solve the issue of duplicate keys. The browser on my laptop or phone connects to the Internet from all over the world and through a variety of different proxies, depending on where I’m working at the time. Any system capable of uniquely recognizing me on subsequent connections would explicitly not be able to use attributes of my connection to do so.

      If you are interested in the duplicate-keys topic, I would recommend you visit the wiki of the tox.im project; they explain their solution in a very comprehensive way (on their wiki/github websites).

      Thanks for the pointer. I’ll have a look!
