Do We Need an Alternative to HTTPS and TLS?

“Do We Need an Alternative to HTTPS and TLS?”  This question came up on the Personal Clouds list recently.  Thanks to the well-publicized problems with Certificate Authorities, variations on this question are a common theme among many of the communities in which I participate.  The CA has become the whipping boy for all the ills of authentication and network security.  Let’s just get rid of it, right?  It’s not that simple.

Although we may dislike certain aspects of a CA or other central authority, any replacement system would need to provide certain functions of that role or else mitigate the issues introduced by their absence.

The X.509 cert binds a bare asymmetric key to an identity and a set of policies (a concrete look at those bindings follows the list below).  The CA’s role…

  • Manages the namespace to ensure no duplicate DNs or serial numbers.
  • Enforces policies against the identity and restrictions bound into the cert.
  • Vouches for the identities bound to the certs.
  • Binds the keys, identity, and policy prior to first use, ensuring that they do not change.
  • Provides a revocation service.
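
To make those bindings concrete, here is a minimal sketch that opens a certificate and prints the fields the CA signed.  It assumes Python with the third-party cryptography package and a local PEM file named example.pem, both of which are illustrative stand-ins rather than anything prescribed by the CA system itself.

    # Sketch: inspect what a CA binds into an X.509 certificate.
    # Assumes the third-party "cryptography" package and a local file
    # "example.pem"; both are illustrative.
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    with open("example.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print("Serial number:", cert.serial_number)            # unique within the CA's namespace
    print("Subject DN:", cert.subject.rfc4514_string())    # the identity being vouched for
    print("Issuer DN:", cert.issuer.rfc4514_string())      # who is doing the vouching
    print("Valid:", cert.not_valid_before, "to", cert.not_valid_after)

    # Policy and revocation pointers ride along as extensions under the CA's signature.
    for oid in (ExtensionOID.KEY_USAGE,
                ExtensionOID.CERTIFICATE_POLICIES,
                ExtensionOID.CRL_DISTRIBUTION_POINTS):
        try:
            print(cert.extensions.get_extension_for_oid(oid).value)
        except x509.ExtensionNotFound:
            pass    # not every cert carries every extension

Everything printed there was fixed under the issuer’s signature before first use; take away the CA and each of those guarantees has to come from somewhere else.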

I’ve been spending a lot of time lately in Steve Gibson’s newsgroups, where they are developing a new authentication technology based on bare 256-bit random keys.  Members of the group are extremely fond of calculating highly improbable birthday attack odds and saying things like “Bruce Schneier says ‘trust the math’”.  The assertion is that it is OK that the system allows dupe keys because of the “vanishingly small” chance of one ever occurring.  I do trust the math.  I hold the code in somewhat lower esteem.  As a security architect, it doesn’t matter to me how small the chance of a dupe is.  If the identity *can* be ambiguous, I assume that a dupe *will* occur and design controls to mitigate the impact of that occurrence.  The CA system was built on similar principles.  It assumed that dupes would occur, and several layers of controls, in addition to randomness and large keyspaces, were included in the architecture to detect and eliminate them.
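
For what it’s worth, the math the group trusts is easy to check.  The sketch below is my own back-of-the-envelope illustration (not anything from Gibson’s design) using the standard birthday-bound approximation; it shows just how small the collision probability for random 256-bit identifiers really is, and why the argument has to shift from how unlikely a dupe is to what the system does when one occurs anyway.

    # Sketch: birthday-bound estimate for duplicate 256-bit random identifiers.
    # P(collision) is approximately n*(n-1) / (2 * 2**bits) for n identifiers
    # drawn uniformly at random (valid while the result is much less than 1).
    def collision_probability(n: int, bits: int = 256) -> float:
        return n * (n - 1) / (2 * (2 ** bits))

    # Even an absurdly generous population, a trillion keys for each of
    # ten billion people, leaves the odds around 4e-34.
    print(collision_probability(10**10 * 10**12))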

When I consider replacing the CA with a decentralized identity scheme, the first thing I start asking is how the proposed system, lacking management of its namespace (or keyspace), deals with ambiguous identifiers.  One thing I’ve come to realize is that anonymity means ambiguity.  A system relying on bare keys, with nobody to manage the namespace, cannot distinguish between two holders of the same bare key.  Assuming this is solved (or at least considered), I then start looking at how the proposed system addresses some of the other issues.

HTTPS provides a few things that can be easily duplicated and a few things that cannot.  Since the user typically authenticates by sending credentials inside the secure TLS tunnel, an anonymous protocol that generates a random session key works fine for that purpose.  It doesn’t matter if two TLS sessions generate the same session key so long as it cannot be predicted.  That part we can do easily.  On the other hand, we also use TLS to authenticate the server.  Here we really *do* care about ambiguities of the identifiers, the policies bound to the certificates, the ability to revoke keys, and that the identity provider being relied on has no stake in the connection.
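
To make the easy half concrete, here is a minimal sketch of an anonymous ephemeral key agreement, using X25519 from the Python cryptography package purely as a stand-in for the TLS key exchange (it is not the TLS handshake itself).  Two parties that know nothing about each other end up with the same unpredictable session key, which is exactly the part that needs no CA.

    # Sketch: anonymous ephemeral key agreement (stand-in for the TLS key exchange).
    # Assumes the third-party "cryptography" package; names are illustrative.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    client_eph = X25519PrivateKey.generate()    # fresh, unauthenticated keypair
    server_eph = X25519PrivateKey.generate()

    # Each side combines its private key with the other side's public key.
    client_secret = client_eph.exchange(server_eph.public_key())
    server_secret = server_eph.exchange(client_eph.public_key())
    assert client_secret == server_secret       # same shared secret on both ends

    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"demo session").derive(client_secret)
    print(session_key.hex())

    # The session key is unpredictable to an eavesdropper, but nothing here tells
    # the client it reached the real server; an active man-in-the-middle could
    # play either role.  That gap is what server authentication has to close.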

I’m not suggesting that working towards alternatives to the CA system is bad or hopeless.  But whatever we might consider to replace HTTPS, we need to remember there’s a baby in the CA bath water.  It may be hard to see through the blinding glare of all the crypto math, but it’s in there and is worth saving.

Comments

  1. Dipl. Ing.(BA) Frank Gerlach says

    Hello T.Rob,
    of course I would be glad to communicate with you in personal email. Please let me know your address.

    Having said that, I was under the impression you wanted to spawn an open discussion here. Hopefully I do not drive away your customers with my comments 🙁

    • I love feedback here but WordPress blog comments are a horrible format for a deep tech threaded discussion. Happy to have the discussion in public but perhaps in a real discussion forum or list server. Meantime I’ll send you my contact info.

  2. Dipl. Ing.(BA) Frank Gerlach says

    Hello again,
    thanks for your reasoned response regarding MST. I accept that MST certainly is radically different from the “general direction” of commercial cryptography. I also accept that my arguments are largely theoretical at this point and should be tested in practice. But before you can change a problematic system, you must reason about it rationally. In many human endeavours we have seen overly complex approaches fail. TLS cryptography and modern operating systems in general seem to fall into this category. We currently see a multitude of security issues which, in my opinion, can in the end be traced to overwhelming complexity. The standards we have designed are too complex for humans to realize securely. Time and again, simplicity has proven to be a virtue in engineering; software engineering is no exception. For example, MITM attacks vanish as soon as you do not use public key cryptography. A bank account would only divulge its contents if the counterparty knew the secret symmetric key attached to the account. Even if an attacker could fool somebody into entering their account number/password into a forged form, they could do little with it. Secondly, the security of CAs vanishes as an issue as soon as we do not use public keys. Of course we still need a “trusted key source”, which would be the postal system or the branch offices of the bank/telco/utility. As I said, in most countries a lot of trust is already vested in the postal system, without systemic failure so far!
    I concede that “cheap” internet services like Twitter, Facebook, or the freemail services benefit quite a lot from public key cryptography. Maybe a “tiered” approach would be useful: a simple, highly secure system like MST for professional requirements like banking, and a less secure system for the “free internet services”.
    Again, I do not claim to be in possession of absolute truth and many of my arguments are partially based on conjecture, but we certainly have reason to question the basic assumptions of TLS given the serious incidents and revelations of the past 10 years or so. MST is an attempt to drastically improve the state of affairs and I acknowledge that it is in some ways radical and “expensive”. I firmly believe, though, that great engineering needs to question every assumption when faced with big problems. When you are deep down a rabbit hole, it is time to consider moving backwards.

    • When I was CTO of Qredo our product was much more mature than MST. We had a turnkey key lifecycle management solution, key renewal, account recovery, and more. We had independent security lab certification. We had investors from the industries we targeted and got enterprise architects and C-Suite execs to take pitch meetings on the strength of those connections and backing. This post was based on what I learned from having walked this path as far as the board room (and pub and restaurant) with a product that was a lot more mature and complete than MST is now but still didn’t clear the bar.

      I’m feeling a bit constrained here in the comments section. If you’d like to carry on over email or voice I’m happy to do so. I have your email so say the word and I’ll send contact info. If not, I’m happy to carry on as a spectator from the sidelines and give you the last word here. And again, sincere best wishes in getting MST further than I got with Qredo. It’s still early days and all of the issues I raised are addressable or you may find a way to nullify them. Either would be a good outcome.

  3. Thanks for the pointer to MST. It’s the “requires a pre-shared key” part that is the most obvious concern. That puts it in a different class from public/private key crypto. Different threat models, different use cases.

    Yes, there’s some overlap but to make the claim that it provides the same security assurances as TLS requires some rather critical assumptions about the nature of the keys, the keyspace, the secure exchange of said keys, the system’s behavior in the presence of dupe keys, etc.

    Much of the assurance of TLS originates in the human processes outside of the protocol itself that are used to create trust anchors. Lots of things that want to “replace TLS” are concerned only with the protocol, and fail to describe the trust architecture in which the replacement lives.

    TLS provides a framework that brings together certificates, handshake protocols, and ciphers. The ISO-specified human processes and controls surrounding the certificates provide roots of trust, and the crypto elements build on those to provide a framework of assurance.

    MST simply omits the root of trust in TLS by starting with the assumption of the existence of pre-shared keys. Someone needs to back-fill that part if there are to be claims of equivalence. I do not see that in the MST project. Perhaps it will come later.

    • Dipl. Ing.(BA) Frank Gerlach says

      Hello, thanks for your reply to my post about MST. Arguably, the enormous complexity of public-key cryptography and the supporting infrastructure (CAs etc.) is exactly the weakness of TLS.

      TLS implementations are huge (30,000 to 500,000 lines of code) and historically bug-riddled. CAs have been hacked more than once, and bogus TLS certificates have been issued for sites like gmail.com.

      Just sending a pre-shared key to the end users via the good old paper postal system appears to be much more secure than TLS. For any serious application (e.g. banking, telecom billing, electricity billing, gas billing etc) sending a postal letter is not a cost issue.

      • Phrases like “the weakness of TLS” tend to trouble me. For one thing, I’m not sure it’s possible to talk in terms of “the” weakness of any mature security architecture. The larger the attack surface, the more weaknesses can be found, and yes, all those bad things you describe have happened. But even if a new security architecture addresses those specific things and makes them less likely to happen, that’s no assurance that it doesn’t suffer from completely different weaknesses that are equally bad or worse, unless it describes an architecture with roots of trust and how assurances are built up from there.

        For example, most of my customers are Fortune 500 enterprises and tend to lean toward banking, retail, and insurance. One of the things a central trust anchor like a CA does is ensure a consistent level of implementation for everyone who relies on it. Left to their own devices, my Fortune 500 customers don’t consistently implement security. SMB shops lack the expertise, and because the incremental cost of that expertise has a high floor, they tend not to hire it. They enjoy some consistency inasmuch as their implementations are consistently bad. One client of mine explained that if they insisted that all their customers follow the laws about encrypting patient health data, they would go out of business because their many smaller customers simply can’t do it.

        In the system you describe, the postal system is not the root of trust in provisioning symmetric keys securely. It merely adds assurance and, at least in the US, it has never been tested as the primary transport of the digital keys on which a majority of digital transactions depend. Should it be pressed into service to do that, I suspect we’d find lots of heretofore unknown weaknesses. (We do use the US postal system as a secure repository of personal address information, and in that function it unwittingly facilitates a lot of identity theft, so it isn’t as attractive to me as it seems to be to you.) The root of trust is the system by which the keys are generated and provisioned, the keyspace maintained, the revocation facilities operated, and so on. That such a system exists, is robust, and can be operated by entities who can barely function even when these tasks are delegated to a CA is an underlying assumption that “the good old paper postal system appears to be much more secure” doesn’t appear to take into account.

        Again, I’m not saying MST is bad or that it doesn’t serve a purpose. Only that it does not replace TLS until it describes the missing parts of the trust architecture fully. TLS was able to delegate part of that to X.509 which was already an extant and mature standard. MST neither references whatever extant robust external foundation on which it relies for secure lifecycle management of bare symmetric keys, nor specifies such a thing, but does assume it exists. It seems reasonable to want to fill in those gaps before relying on MST at scale or claiming it is sufficient to replace the underpinnings of the majority of digital transactions around the globe, IMHO.

        • Dipl. Ing.(BA) Frank Gerlach says

          Hello, I would like to comment on the security of the postal system and on key material dissemination in general.

          I can only write about Germany, but at least here we put a lot of trust into it. For example, banking access credentials (passwords, transaction authentication numbers) are sent via the postal system. If we could not trust the postal system here, the entire internet banking system (as it is) could not be trusted.

          Finally, nothing stops banks and similar institutions from handing out symmetric keys in their branch offices, with an ID card as the authentication of the customer.

          We should really question whether we need the Public-Key Mumbo-Jumbo in all of these cases.

          • I wish the project the best. Really. But even if we agree to disagree about the completeness of the security model, displacing the CA trust architecture with something new is a few orders of magnitude above just having a superior transport and an optimistic take on key lifecycle management. The CA trust model sets a bar that doesn’t get any easier to clear just because we now hold it in low regard. Public/private key encryption filled a need that existed because secure symmetric key distribution turned out to be an almost insoluble problem. MST tosses out the key distribution solution and addresses the gap by claiming that, thanks to the Post, it’s a solved problem. I just don’t see that as clearing the bar set by the CA trust model.

            Prove me wrong on this, please. Just do it in the market and not here in the comments section. I would love to see us evolve beyond the CA trust model. Nothing would please me more than to have to retract these comments and admit I was shortsighted and lacked the vision to see MST as the solution. Should that day come I’ll happily buy you a beer and celebrate. And you can hold me to that.

  4. You might have a look at MST (Minimal Secure Transport) https://github.com/DiplIngFrankGerlach/MST.

    It provides the same security assurances as TLS, but requires a pre-shared key. Also, it is very small (less than 1000 LoC, without AES) and can therefore easily be reviewed by an expert.

  5. Interesting article. I personally think that any security mechanism relying on a central authority, or on any other kind of centralization, can and should be replaced.

    I think one of the key phrases of your article is when you say “One thing I’ve come to realize is that anonymity means ambiguity”. First of all with this phrase you confuse decentralization with anonymity. (Look at Bitcoin; Bitcoin is anything but anonymous in nature.)

    When you want to decentralize, the most efficient way to do it is to stop thinking in terms of two computers communicating and start thinking in terms of a message being sent in a network of interconnected peers, from a starting point to a destination, because the network of peers will act as the central authority.

    The role played by the CA is useful, and does not need to be removed but rather to be replaced by a 100% reliable mechanism. And we should stop with the “Globally speaking, most of the CAs are fine, there are only minor exceptions / Trust us, the certificates are kept private / We need to hire a security engineer to help us configure the webserver with HTTPS because it is too complicated”.

    The CAs can and should be replaced by a network of peers and a protocol. The peers would be the web browsers of internet users – why wouldn’t they support such a protocol? From there, you can see that it would be easy to take the “traceroute” into account in this protocol, the path between two peers, to detect when two people use the same key, if you really can’t trust the maths… It would solve the issue with duplicate keys.

    It’s worth saying that you would be adding overhead and extra server computations for something with a less than 10E-30 chance of occurring (it’s even less than that; I don’t have the exact number right now). And you have to realize that, for example, Tox, a secure P2P messaging protocol, relies on it to authenticate its users on the WHOLE network. But here we are saying that 1) two identical keys will be generated, but also that 2) the malicious user will be lucky enough to target the right server at the right time, reducing this probability even more….

    If you are interested in the duplicate keys topic, I would recommend you visit the wiki of the tox.im project; they explain their solution in a very comprehensive way (on their wiki/github websites).

    Regarding server authentication, when there is no duplicate key, then there is no problem authenticating the server. I think about it in terms of two-way authentication, with private/public keys on both sides, some kind of checksum controls, etc… In your protocol you can also enhance things with extra features like relying on the overall trust of the network (if many nodes give a good reputation to the candidate server node, assume it is the legitimate server node).

    The foundation of this approach is that if you remove CAs without replacing them with a network of peers, you actually do not remove them; you simply make the server become its own central authority. And then, a lot of new problems may arise (what about DNS spoofing etc… )

    But effectively replacing HTTPS-TLS/SSL with such a system will make a great difference. Many more websites would be secured, since there would be no overhead and no extra costs associated with security. When we are able to do without intermediaries, and when it significantly improves efficiency, solves problems, and simplifies things, we should really do it.

    • …when you say “One thing I’ve come to realize is that anonymity means ambiguity”. First of all with this phrase you confuse decentralization with anonymity.

      Not at all.  If I accept anonymous identifiers to post to my forum, for example, I know only that I see the same identifier on successive visits.  If the identifier is sufficiently long and random-appearing, I might even assume that it was created with sufficient entropy as to practically guarantee no dupes.  However, I have no way to distinguish between two people using the same identifier.  A holder of that identifier can safely repudiate any posting because there is no assurance that the person is the only holder of that identifier.  That ambiguity exists regardless of whether the identifier was issued by a central authority, a collective, or an individual.

      In cases where such assurances are important, additional controls must be used.  Associating a password with an identifier provides additional assurance of uniqueness.  Binding the identifier to a certificate, more assurance yet.  The framework of additional controls is what allows someone to claim a unique identity.  Without these, the only claim that can be made is “I’ve seen this identifier before”, which may be sufficient for forum postings, but not for high-value applications, because of the inherent ambiguity.
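
      To put that in concrete terms, here is a minimal sketch of the difference, with hypothetical names and in-memory storage of my own invention: a bare identifier can only ever answer “have I seen this string before?”, while a secondary secret established at first use lets the system reject a second party who shows up holding the same identifier.

          # Sketch: a bare identifier vs. an identifier with a secondary control.
          # Hypothetical names and in-memory storage, purely for illustration.
          import hashlib, hmac, os

          registry = {}   # identifier -> (salt, password_hash) set at first use

          def seen_before(identifier: str) -> bool:
              # All a bare identifier supports: "I have seen this string before."
              return identifier in registry

          def register(identifier: str, password: str) -> bool:
              if identifier in registry:
                  return False    # the claim on this identifier is already taken
              salt = os.urandom(16)
              digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
              registry[identifier] = (salt, digest)
              return True

          def authenticate(identifier: str, password: str) -> bool:
              # The secondary control: only the party who set the password passes,
              # even if someone else turns up holding the same identifier.
              if identifier not in registry:
                  return False
              salt, stored = registry[identifier]
              candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
              return hmac.compare_digest(candidate, stored)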

      The role played by the CA is useful, and does not need to be removed but rather to be replaced by a 100% reliable mechanism.

      I believe that you are confusing math with life here.  It’s possible to create proofs of mathematical concepts, but security systems are real-world implementations conceived, created, and operated by humans.  If the criterion is a 100% reliable replacement for the CA, we will never replace the CA.  I believe a more practical objective is to find something an order of magnitude better than the CA system at a comparable cost.

      The CAs can and should be replaced by a network of peers and a protocol. The peers would be the web browsers of internet users – why wouldn’t they support such a protocol? From there, you can see that it would be easy to take the “traceroute” into account in this protocol, the path between two peers, to detect when two people use the same key, if you really can’t trust the maths… It would solve the issue with duplicate keys.

      This gets back to my assertion that it is necessary to add secondary controls to a bare identifier wherever the ambiguity issue needs to be solved – which is exactly what is proposed here. Unfortunately, the method described doesn’t at all solve the issue of duplicate keys. The browser on my laptop or phone connects to the Internet from all over the world and through a variety of different proxies, depending on where I’m working at the time. Any system capable of uniquely recognizing me on subsequent connections would explicitly not be able to use attributes of my connection to do so.

      If you are interested in the duplicate keys topic, I would recommend you visit the wiki of the tox.im project; they explain their solution in a very comprehensive way (on their wiki/github websites).

      Thanks for the pointer. I’ll have a look!
