Shedding light on the “going dark” problem

My theory about the “going dark” problem is the opposite of the official government explanation. They claim that they need to be able to read the communications of bad actors. (“Bad actors” in the security sense here, not the Hollywood sense.) But the back doors they’ve engineered have more to do with weakening the keys than with breaking the algorithms. The mitigations are simple: introduce additional entropy while generating keys, use uncommonly long keys, and use protocols with Perfect Forward Secrecy. Anyone serious about preventing eavesdropping can reasonably expect to do so with a bit of work.
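To make the Perfect Forward Secrecy point concrete, here is a minimal sketch of ephemeral Diffie-Hellman key agreement, the mechanism behind PFS. Everything in it is illustrative: the prime is a toy parameter (a Mersenne prime, far too small for real use), and real systems would use vetted groups or elliptic curves via an audited library. The point it demonstrates is that each session derives a fresh, independent key, so compromising one session key reveals nothing about past or future sessions.

```python
import secrets
import hashlib

# Toy finite-field Diffie-Hellman parameters -- NOT secure, for illustration only.
# Real deployments use standardized 2048-bit+ groups or elliptic curves.
P = 2 ** 127 - 1   # small Mersenne prime, keeps the sketch readable
G = 3

def ephemeral_keypair():
    """Generate a fresh private/public pair for a single session."""
    priv = secrets.randbelow(P - 2) + 1   # new randomness every session
    return priv, pow(G, priv, P)

def session_key(my_priv, their_pub):
    """Both sides compute the same shared secret, then hash it into a key."""
    shared = pow(their_pub, my_priv, P)   # (g^a)^b == (g^b)^a mod P
    return hashlib.sha256(shared.to_bytes(16, "big")).hexdigest()

# Two separate sessions between the same two parties:
a1, A1 = ephemeral_keypair(); b1, B1 = ephemeral_keypair()
a2, A2 = ephemeral_keypair(); b2, B2 = ephemeral_keypair()

k1_alice, k1_bob = session_key(a1, B1), session_key(b1, A1)
k2_alice, k2_bob = session_key(a2, B2), session_key(b2, A2)

assert k1_alice == k1_bob and k2_alice == k2_bob   # each side agrees
assert k1_alice != k2_alice   # session keys are independent: forward secrecy
```

Because the private values are discarded after each session, an eavesdropper who records the ciphertext and later obtains a long-term key still cannot reconstruct past session keys.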

If that’s true, then what’s the big deal about lots of ordinary people who are *not* surveillance targets also using encryption?


When Internet traffic is mostly transparent, the encrypted traffic, and in particular its endpoints, stand out like fluorescent dye under UV light. Even though the content can’t be read, knowing where it originates and where it ends up is valuable intel. It reveals the topology of relationships and provides context.

A perfect example was the recent case of Harvard student Eldo Kim, who sent the school an anonymous bomb threat in order to get out of his final exams. He sent the threat using the anonymous Guerrilla Mail service, which he accessed over Tor. He was caught almost instantly. Authorities simply looked for cloaked connections around the time the email was received and found one originating from Harvard’s campus Wi-Fi. They didn’t need to read the communications, merely connect the dots. Had Harvard students routinely used Tor for all their traffic, and had there been hundreds of such connections when the email was received, identifying the specific connection from which it originated would have been much more difficult.

So, what does all this mean? If it were widely accepted that there are encryption technologies the government can’t crack, there would be more demand for them for personal use. People would object to their comms being decrypted if the stated objective of also rendering the bad actors’ comms into plaintext were not actually achieved. It is in the government’s best interest for us all to simply declare that privacy is dead, give up all hope, and allow them to continue decrypting all traffic on the trunk lines without objection. But that doesn’t mean they are right in their approach or that we need to buy into the BS. Unlike the Kim case, a snapshot from a single point in time isn’t sufficient to reveal a complex web of relationships. This approach requires collecting all our data, over time and in plaintext, in order to retroactively find those contextual relationships. The problem is, once they have it, abuses always follow. We saw that with the TSA’s misuse of images from the “nude-o-meter” scanners, and with the NSA’s abuse of surveillance to spy on spouses, ex-lovers, and celebrities. As God Himself recently tweeted, “With great power comes zero accountability.”

If we accept that privacy is dead and act on that belief, then it truly is. The alternative, if we choose not to buy the BS, is to assume that VRM and the Personal Cloud can and should use privacy-enhancing technology regardless of whether the result blocks government surveillance. Privacy was never absolute in the age of atoms, but it was sufficiently expensive that deep surveillance had to be targeted. Our goal as technologists in the age of bits should not be to choose between no privacy and absolute privacy, but to preserve existing barriers and erect new ones that make deep surveillance expensive enough that it must be selectively targeted. This is the only system for which we have law and policy frameworks in place. If we abandon efforts to preserve the barriers that exist and to erect new ones, our present body of privacy law and policy becomes useless. That this body of law and policy has been weakened in the shift from atoms to bits is no reason to abandon it entirely; rather, it makes urgent the task of preserving it so that it has time to evolve rather than collapse outright.

Efforts to preserve and enhance barriers to surveillance should aim at the top of the food chain. The higher the barrier to surveillance, the more selective surveillance must be when performed, and the better it fits within our social justice framework. That someone at the top of the food chain may be unimpeded by privacy barriers does not negate the value of having them in place or of developing new ones; it only makes these ongoing efforts more important and more urgent. Among those who profess that privacy is dead, it is impossible to distinguish the malicious from the merely misguided, so we are forced to assume the worst case. Assume anyone telling you privacy is dead has an agenda to sell. It may be packaged nicely and labeled “Fertilizer,” but deep down inside we know it’s bullshit.


  1. Great post, T.Rob. In an ideal world privacy would be perfectly balanced with accountability. But in this one, we may have to settle for something that’s more consistent with the balance of power between the state and the individual, which our laws and customs assumed for centuries.


  1. […] who report on them – overstating the problem? T. Rob Wyatt of IoPT Consulting thinks so. Wyatt wrote in December 2013 that most Internet traffic is completely transparent, causing encrypted traffic to […]
