Today Doc Searls reposted Dave Winer’s three-part post challenging the need for HTTPS Everywhere. Dave writes:
There’s no doubt it will serve to crush the independent web, to the extent that it still exists. It will only serve to drive bloggers into the silos.
Those are pretty strong claims, and Dave’s posts are worth a read. In my opinion, though, they come to an entirely wrong conclusion, despite some valid points and a “sky is falling” delivery. Why wrong? Consider how you might prioritize security in a software development project. This is something I tell my consulting clients, but I’m going to give it to you for free:
Defaults matter – even in software projects
Imagine two software development projects in which the finished product will handle sensitive data. The way most projects work is that all the needed functionality is designed up front. Then the team estimates how much time and money that will cost. These are subtracted from the due date and the budget, and what’s left covers “non-functional” requirements, documentation, testing, contingency, and of course anything above the most basic security. We’ll call this one Project A. Now consider Project B. The project manager for this project has front-loaded a great many security controls into the cost and timeline.
Let’s assume now that the project manager for each project has made a mistake regarding security. In the case of Project A, where security is minimal, the mistake is that some critical security control has been left out. Getting it added on will impact the timeline and the budget, and require the PM to go fight for the necessary funding and approvals. If the PM lacks the political goodwill to make the fight, or the request is declined, the result is that customer data is exposed.
The mistake in the case of Project B is that security has been over-provisioned. If this is discovered early enough, the removal of that control and its associated costs makes the project come in faster and cheaper than originally planned. If the error is discovered after the control is fully baked in and isn’t easy to remove, the result is that the project may have overpaid a bit (but still delivered on time and on budget, all other things being equal), and no customer data is exposed.
We know that anything involving humans is going to have defects and errors, especially something like a software development project. When such a mistake occurs in Project A, very strong incentives act to discourage correction of an error that exposes customers to potentially catastrophic risk. Project A depends heavily on there being no such errors, despite their inevitability. Compare that to Project B, which assumes errors will be made but arranges things so that the risk is very small and restricted to the company rather than the customers.
Sadly, the Project A approach is almost always the one taken. Often when I suggest Project B to a client, it is pointed out to me that competitors using the Project A method will deliver new products faster and at greater profit. While this is true, the consequence is that for security and other “non-functional” requirements, competition on profit margin is a race to the bottom. Security and customer risk are externalities.
Secure by default
When I first joined IBM, a colleague named Keys Botzum had for several years been working to move IBM’s WebSphere Application Server to a “secure by default” posture, and he had made significant progress. I am by no means a WAS expert, so I have only one example to relate.
Early versions of WAS shipped with a default certificate and a default password. These made it easy to get a new system up and running in a development environment. Administrators were advised to change the certificate and password before deploying systems to Production, especially when these systems were exposed to requests from the Internet.
Sadly, it was possible for many years to scan the web for WAS servers, then use the default certificate and password from your own WAS instance to remotely administer some company’s web application. Not because the servers were impossible to secure, but because people didn’t know how or didn’t bother. One client told me that using the default certificate and password was the “prevailing best practice,” which was the standard to which they’d be held in court. They knew of the vulnerability and how to correct it, but didn’t act to do so.
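Checking for this class of problem is straightforward, which makes the inaction all the more frustrating. As a rough illustration, here’s a minimal Python sketch that tests whether a server is presenting a known default certificate by comparing fingerprints. The fingerprint value is a hypothetical placeholder; a real audit would use the fingerprint taken from a fresh, out-of-the-box install:

```python
import hashlib
import socket
import ssl

# Hypothetical placeholder: in practice, use the SHA-256 fingerprint of the
# default certificate taken from a fresh install of the product in question.
DEFAULT_CERT_SHA256 = "replace-with-known-default-fingerprint"

def uses_default_cert(host: str, port: int = 443) -> bool:
    """Return True if the server presents the known default certificate."""
    ctx = ssl.create_default_context()
    # Default certs are typically self-signed, so skip chain validation for
    # the probe; we only want the raw certificate bytes.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest() == DEFAULT_CERT_SHA256
```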
WebSphere Application Server is of course just one example of infrastructure software that has moved to a secure-by-default posture. The trend has even extended to consumer-facing things like wireless routers, most of which now come equipped with a unique password printed on a label affixed to the device. If the password is changed, resetting the device to factory defaults puts it back. Internet and eCommerce security is vastly improved because the defaults in the back-end server components have been moving toward secure-by-default and enable meaningful security. Administrators have to deliberately choose to disable it for things to become insecure. This is the “Project B” approach applied to the user-facing security model, and it’s working.
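The router approach is easy to picture in code. Here’s a sketch of the idea in Python, assuming nothing about any particular vendor’s firmware:

```python
import secrets

# The router-style fix for default credentials: generate a unique password
# per device at manufacture time and print it on the label, instead of
# shipping one shared default that nobody ever changes.
def per_device_password(length: int = 12) -> str:
    alphabet = "abcdefghjkmnpqrstuvwxyz23456789"  # omits look-alike characters
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(per_device_password())  # e.g. "kq7mwd3hx2np", unique to this unit
```

A factory reset simply restores that same stored per-device credential, so the device is never left in an open state.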
Bad security remains the norm
Despite improvements in the back end, crappy web site security remains the norm. As a rule, I report issues I find with web site security to the owners of the site, at a minimum. Though my intent is to get the problems fixed, I’ve been responsible for the closure of several sites over the years. Site closures are by far the exception; most site owners, something like 90%, simply ignore my report. A very few site owners have actually fixed the issues, and that’s sufficient incentive for me to keep at it.
It’s a Sisyphean task because web sites and web developers are now a market commodity and the vast majority of participants don’t understand web security in depth or even how to verify assurances that “it’s in there.” Most site owners will never contract for an external review or penetration test until after a breach.
While you might think this is limited to mom-and-pop stores, in my experience bad security is the norm among large enterprises and governments as well. Naturally, I have real-life examples.
State of NC
A year or so ago my broker emailed PDF documents to me for the transfer of my 401k. These had the account numbers, my SSN, personal address, etc., and were a signature away from transferring my entire life’s savings to another account. The forms were emailed unencrypted, so I dropped the broker and filed a breach complaint with the State of NC. It turns out that the State of NC’s breach reporting web site runs over HTTP.
The report form asks for lots of detailed information about the breach, including how it happened, the systems affected, what measures have been taken to prevent a recurrence, and so forth. The NC Department of Justice presumably uses this information to figure out what action to take. Hackers, on the other hand, would see it as a source of new targets to breach, since the victim is asked to provide details on how they were breached the first time and any new security measures implemented since.
Out of curiosity I tried typing https:// into the URL for the breach reporting form and got a time-out. So it isn’t just that the NCDOJ doesn’t default to HTTPS for this information; they don’t even support it.
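For anyone who wants to run the same check, the probe is trivial. A minimal sketch in Python (the hostname below is a made-up stand-in, not the actual NCDOJ address):

```python
import socket

def https_status(host: str, timeout: float = 10.0) -> str:
    """Probe port 443 and report whether anything is listening at all."""
    try:
        with socket.create_connection((host, 443), timeout=timeout):
            return "port 443 open: HTTPS is at least offered"
    except socket.timeout:
        return "timed out: port likely filtered, HTTPS not offered"
    except ConnectionRefusedError:
        return "refused: host is up but runs no HTTPS listener"

print(https_status("breach-report.example.gov"))  # hypothetical hostname
```

A time-out, as opposed to a refusal or a certificate error, is the signature of a site where HTTPS was never provisioned at all.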
Hilton Hotels
I’ve written extensively about how I tried for several years to get Hilton to fix a flaw in which account credentials were transmitted over HTTP, despite assurances in their Privacy Policy and Terms of Service that technical protections were in place to prevent this specific thing. I complained to Hilton through their Tech Support and their PR people, and demonstrated the problem to every property manager at every hotel at which I stayed during that time.
When that didn’t work, I tried to get IBM’s account team to raise the issue (Hilton was an IBM client at the time). Still getting nowhere, I complained to the Commonwealth of Massachusetts, which had at the time some of the toughest data privacy laws in the country. Despite my ability to demonstrate the vulnerability, I was told I had no grounds to make the report. (I’m not a MA citizen, and no breach of MA citizen data was known.)
After all this failed, I tried the FTC. In this case I made a video demonstrating the problem and posted it on YouTube. Note that this is the exact problem for which the FTC proceeded against Wyndham Hotels, a chain of 90 hotels at the time. Perhaps they didn’t want to go after a larger chain with greater resources; I don’t know. What I do know is that I was unable to get even the courtesy of a meaningful reply from the FTC.
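The frustrating part is how easy the flaw is to demonstrate: you only have to look at where a login form actually submits its fields. A crude, self-contained sketch of that check (the URL is hypothetical, and the regex is far too naive for production use):

```python
import re
import urllib.request

def insecure_form_actions(page_url: str) -> list[str]:
    """Return form actions that would submit their fields over plain HTTP."""
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    actions = re.findall(r'<form[^>]*action="([^"]*)"', html, re.IGNORECASE)
    return [a for a in actions
            if a.startswith("http://")            # explicit plaintext target
            or (page_url.startswith("http://")    # relative action on an
                and not a.startswith("https://"))]  # HTTP page stays HTTP

print(insecure_form_actions("http://login.example.com/"))  # hypothetical URL
```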
A secure-by-default web is a better web
Up to now the web has been developed using the Project A method. We leave off even the most basic security controls, and we do this at every level, including large enterprise and government. We make no provision for citizens to get known security holes plugged. That service is reserved for victims, by which point it’s too late. Even victim complaints may fall on deaf ears unless there are half a million other victims in the same breach. The magnitude of a security vulnerability has no bearing on an individual’s ability to get corrective action. The only criteria that count are the number of affected victims and the dollar amount, and these are counted only after the breach has occurred.
This pretty much sucks for ordinary people, who increasingly find that critical goods and services are best procured online. You may not like the insecure web, but opting out simply isn’t practical.
Our experiment in elective HTTPS-as-best-practice has failed miserably. This is true even among organizations that should know better, and despite potentially catastrophic impact on data breach victims.
As users of the web, we aren’t any good at teasing out which functions on which sites should be secure and which are safely transmitted in the clear. The Project B “secure-by-default” method addresses this. Yes, it’s true that HTTPS all the time and on every site may still be bad in all the ways Dave Winer points out. But it will be a whole lot less bad, because when we make a mistake and accidentally secure something that didn’t need to be secured, people’s lives aren’t ruined. That seems to me a better outcome than what we have now.
Bottom Line
It comes down to this:
- Any web site that requires a login or stores personally identifiable data needs to be on HTTPS. Not just the login form, not just the pages that handle data, and not just a few elements of the page. All of it.
- Until the average user can make that distinction reliably, it’s better to have all sites use HTTPS than to make HTTPS optional, hope the right sites apply it, and hope users are sufficiently skilled to pick out the ones that do it wrong. (A minimal sketch of what HTTPS-by-default looks like in practice follows this list.)
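For concreteness, here is what “HTTPS by default” looks like at the application layer: a minimal Python WSGI sketch (not any particular site’s code) that bounces plaintext requests to HTTPS and sets HSTS so browsers stop asking:

```python
# A minimal WSGI middleware sketch of HTTPS-by-default: redirect any
# plain-HTTP request, and send HSTS so the browser remembers the choice.
def https_everywhere(app):
    def middleware(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "localhost")
            location = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        def add_hsts(status, headers, exc_info=None):
            headers.append(("Strict-Transport-Security",
                            "max-age=31536000; includeSubDomains"))
            return start_response(status, headers, exc_info)
        return app(environ, add_hsts)
    return middleware
```

Wrap an existing WSGI app with `https_everywhere(app)` and the whole site, not just the login form, is covered. (In practice the redirect usually belongs at the proxy or load balancer, but the principle is the same.)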
We can have plaintext HTTP by default or encrypted HTTPS by default. There is no subtly nuanced gray area in the middle in which the appropriate sites reliably have HTTPS. Our failed expectation that there is creates millions of new victims every year. I too worry about erosion of the open web. I’m just not willing to keep sacrificing data breach victims by the millions on the premise that an open web and a secure web are mutually exclusive. They aren’t.