All Your Accounts Are Belong To Us

Would you give your account ID, password, account numbers, email address, home address, and all your other sensitive personal information to random strangers? No? Are you sure? Scripts embedded in a web page or app allow the script provider to record every keystroke and every mouse movement you make on the page.

So why are so many of the scripts on account management pages hosted by 3rd parties?

[Read more…]

In defense of HTTPS Everywhere

Today Doc Searls reposted Dave Winer’s three part post challenging the need for HTTPS Everywhere.  Dave writes:

There’s no doubt it will serve to crush the independent web, to the extent that it still exists. It will only serve to drive bloggers into the silos.

Those are some pretty strong claims, and Dave’s posts are worth a read.  In my opinion, though, they come to an entirely wrong conclusion despite some valid points and a “sky is falling” delivery.  Why wrong?  Consider how you might prioritize security in a software development project.  This is something I tell my consulting clients, but I’m going to give it to you for free:

[Read more…]

Enable-Javascript.com

Today, for the first time, a web site I visited directed me to http://www.enable-javascript.com/.  The site is supposed to be a service for webmasters who need an easy and accurate way to tell site visitors how to enable Javascript in the browser.  Though at first glance that may seem like a great idea and a useful service, it is just the opposite.

This is bad on so many levels.

  • The site makes no mention of any of the many good reasons why you would want Javascript disabled.
  • It doesn’t ask the user to consider how or why Javascript came to be disabled in their browser in the first place, and the implications of reversing that action.
  • It fails to consider the possibility that Javascript is enabled but is being blocked by a plug-in or add-on, in which case the instructions will be useless.
  • It offers no information on any of the tools that allow you to enable Javascript on a site-by-site basis.

The only function of the site is to tell visitors how to enable Javascript globally for a variety of browsers, as if that were universally a Good Thing.  There is no attempt whatsoever to explain the issues with sufficient depth to allow the visitor to make an informed decision about enabling Javascript.  Considering that Javascript generally has to be manually disabled, who is the target audience?  People who used to know why they wanted scripts disabled but have since forgotten?

And who is the target audience among webmasters?  If the site is usable without script, visitors have no reason to enable script, therefore no reason to visit http://www.enable-javascript.com/.  Presumably, this service is targeted to webmasters whose sites fail to provide content with script disabled.  The webmaster who links to this site is saying to their visitors “My content is so valuable that it is worth the risk to you of turning on Javascript for all sites on the Internet, including those that host active malware such as phishing sites and malvertising networks.”

Or, more likely, it is aimed at webmasters whose advertising fails to render with scripts disabled. In that case the webmaster who links to this site is saying to their visitors “My content is so valuable to me that it is worth imposing the risk to you of turning on Javascript for all sites on the Internet, including those that host active malware such as phishing sites and malvertising networks.”

Who is the more naive one in this exchange?  The visitor who follows the link and enables Javascript globally?  Or the webmaster who genuinely thinks this is a good idea and implements it?

I use some script on my web sites but my approach is to not collect personal data so there’s nothing for me to lose, and to not monetize traffic with dynamic ad networks known to carry malvertising.  I want people to feel comfortable white-listing my site if they want it to be responsive and mobile-friendly, to see the slider on the home page, or to use the social media functions.  I will happily point them to NoScript, Ghostery, Privacy Badger, and more.  But the content doesn’t rely on scripts.  (Possibly The Odd is Silent does since WordPress hosts it, but I try to minimize the impact, including paying them to remove their ads.)

I would never ask you to enable scripts globally to view my content.  And I can’t help but wonder about anyone who would.

Vendor entitlement run amok

My main issue with vendors turning us into instrumented data sources isn’t the data so much as the lack of consent. My Fitbit knows a lot about me, but it’s an add-on that I self-selected and it provides value to me. The tracking in my browser is not something I can easily avoid, since the browser is now an integral part of my life. Between those extremes are lots of IoT devices for which you can currently choose a private version, but that choice is rapidly disappearing. You can still buy a dumb light switch but not a dumb car, for example. Your shiny new GT phones home.

Among the vendors who seem to feel an entitlement to our data is Microsoft, whose Windows 10 is basically a box of spyware disguised as a user-productivity-gaming-and-cat-video-watching platform. I’ve already written about the issues there, how to mitigate them, and the disheartening number of those “features” that can’t be disabled. Yet as bad as all that is, this latest revelation still managed to surprise me across several metrics: the lack of consent, the extent of the invasion, the degree of exposure, the fact that it’s already been exploited to infect user devices, the fact that the entity who exploited it is a “legitimate” vendor, and the fact that said “legitimate” vendor egregiously exposed the exploit to the Internet. [Read more…]

Intentcasting…to a roach?

OK, so it’s a robot and not a roach. But it is a robot that *looks* a lot like a roach. Researchers at Bielefeld University are experimenting with emergent behavior on a robot platform they named Hector. Their software thus far has been reactive. The new software aims to give the robot “what if” capabilities to solve problems it has not been programmed for. This would imbue the robot with independent goal-directed behavior – i.e. robot intentions.

But beyond that, “they have now developed a software architecture that could enable Hector to see himself as others see him.” In other words, they gave it theory of mind and their ultimate goal is for it to be able to sense the intentions of humans and take these into account when formulating responses and actions. They want it to be self-aware. Though the rest of the world will probably see in this the parallels to Skynet of Terminator fame, the more interesting part to me is the notion that it will sense human intention.

Perhaps this is because the current crop of “smart” devices seems very autistic to me.  Though they have a wide range of apparent intelligence, they respond only to what they can directly sense, and only within a context of which they are the center.  The inability to make inferences about humans, and in particular to understand their intentions, is a typically autistic cognitive deficit.  While it is possible to emulate this to some extent, it is often perceived as inauthentic and creepy, which may be why I write about it so much.

Bielefeld University’s robot Hector is close to being self-aware

The quest by the marketing industry to provide targeted messaging tailored to your specific interests and intentions very much parallels the autistic experience.  Any given product or brand seeks to better understand how it is perceived by humans.  Or to put it another way, products and brands lack theory of mind and the ability to infer human emotions and intentions from non-verbal communication.  Like any autistic person, they attempt to mitigate their cognitive deficits by gathering data, observing reactions, forming a model of human behavior, calculating appropriate responses, then improving data sources and refining the model over time.  When humans do this we call it vocational training and independence skills.  When vendors do this we call it ad-tech.  Both groups tend to wonder why people at large often perceive it as creepy.

Hector is essentially autistic.  With the addition of self-awareness and the ability to infer human intentions, Hector may cross the line to creepy.  We’ll find out shortly.

JT (Jibo Terrestrial) phone home!

The consciousness of most of our iconic sci-fi robots like C3PO and Robbie was modeled after that of humans – it was self-contained and part of the robot itself. Even though the Star Wars bots could access the networked world, they didn’t send their sensor data back to a central mother ship to be interpreted, processed, and turned into instructions for the robot to follow, then transmitted back. Everything happened locally. Contrast this with our real-world robots that use the mother ship architecture. Siri, Cortana, Alexa, Google [x], Jibo, Pepper, etc. all phone home more often than ET.  If you use these products, their vendors have access to all the data they send back to the mother ship.  Because that data is potentially very valuable, it would be naive to believe that it will be discarded once its benefit to you, the user, has been realized.
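To make the contrast concrete, here is a toy Python sketch of the two architectures. The function names and the “obstacle” scenario are my own invention; the only point is where the sensor data goes.

```python
# Toy contrast of the two robot architectures: local processing
# versus the "mother ship" round trip. Names are illustrative only.

def local_robot(sensor_reading):
    """Decide on-device; the sensor data never leaves the robot."""
    return "stop" if sensor_reading == "obstacle" else "go"

def mothership_robot(sensor_reading, vendor_cloud):
    """Ship the sensor data off-device; the vendor sees and keeps it."""
    vendor_cloud.append(sensor_reading)  # the vendor now has your data
    return "stop" if sensor_reading == "obstacle" else "go"

cloud_log = []
print(local_robot("obstacle"))                  # stop -- nothing retained
print(mothership_robot("obstacle", cloud_log))  # stop -- but...
print(cloud_log)                                # ['obstacle'] kept by the vendor
```

Identical behavior from the user’s point of view; completely different data footprint.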

It remains to be seen how the software coming out of Bielefeld will work, but one hopes that some aspect of self-awareness will be so incompatible with latency as to strongly favor local processing. If that is true and the new robot architecture is more like science fiction of yesteryear than the science fact of today, there is some hope that someone, somewhere on the planet will finally use intention detection in a non-creepy way that primarily benefits the individual and not the vendor.  It might also give us insights that will improve the lives of autistic people by helping us learn to infer human behavior in non-creepy ways.

On the other hand, if you ever read about Hector in Ad Age, we are all doomed. Skynet will have awoken. And it will have a really good deal for you.

 

A version of this post which more deeply explores the autism connection is posted on my Ask-An-Aspie blog here.

Surprising security issue at Host Gator

I recently signed up for – and promptly dumped – Host Gator.  The QOS (Quotient of Suckage) was off the chart but in this post I’ll focus on a surprising security exposure that was revealed in the process.

[Read more…]

What is your definition of personal?

Over at the Cloud Ramblings blog, John Mathon provides his list of Breakout MegaTrends that will explode in 2015.  There’s an entry in there about Personal Cloud rising to prominence.  Yay!  John and I often see eye to eye on our visions of the near future of computing and Personal Cloud is definitely huge in that future.  But it seems that once you get past the name “Personal Cloud,” our visions begin to diverge.  I’d like to explain how they diverge, why my vision is better, and beseech John and all the other pundits, analysts and trade journalists out there to adopt a slightly stricter interpretation of what, exactly, constitutes “personal.”

[Read more…]

The latest malvertising incident and why you should care

Today’s news from Net-Security.org is that newly discovered malware was found on Google’s ad network, and its purpose is to hijack your router’s DNS settings, causing all devices behind your firewall to use poisoned DNS resolvers. That means even if *you* run NoScript, AdBlockPlus, HTTPS Everywhere, Ghostery, and antivirus, and avoid sketchy sites, a visitor on your guest network or even some anonymous neighbor leeching off your wireless signal can compromise your router.

Awesome.

Article: Attackers change home routers’ DNS settings via malicious code injected in ads
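The heart of detecting this kind of hijack is trivially simple, which makes it all the more galling. Here’s a minimal Python sketch; the addresses are illustrative, and in practice you’d have to scrape the current resolver settings out of your router’s admin page yourself:

```python
# Hypothetical sketch: flag DNS resolvers that the router's owner
# never configured. Addresses below are illustrative examples only.

TRUSTED_RESOLVERS = {"192.168.1.1", "9.9.9.9"}  # what you set up

def find_rogue_resolvers(current_resolvers):
    """Return any resolver addresses not on the owner's trusted list."""
    return sorted(set(current_resolvers) - TRUSTED_RESOLVERS)

# A hijacked router silently swaps in an attacker-controlled resolver:
print(find_rogue_resolvers(["203.0.113.66", "9.9.9.9"]))  # ['203.0.113.66']
```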

If all my ads were not so personalized and relevant, I’d be upset about this. But it’s SO worth it, right?

The funny thing is that the attackers have FAR more privileged access to your device and your data than do the malvertisers, and yet so far they just want to take over your device and empty your bank account. If the attackers ever decide to go after your *data*, they’ll not only find out your daughter is pregnant before you do, they’ll make her pay $100 to not tell you about it. Then you’ll get an email asking what your conservative employer might think of your risque purchase history. If they clean out your bank account and ruin you, it’s a one-time profit. But if they blackmail you with your data, they get a long-term income stream. They get a pension fund. Forget about calf-cow relationships. Start thinking ant-aphid.

But we’re good because there are lines – somewhere – that even creepy, invasive, malvertising adtech won’t cross and that will stop the spread of cybercrime over advertising infrastructure. Right? We’re good because the adtech industry is hard at work distancing themselves from organized crime and building security, accountability and user choice into the advertising system.

“Wake up T.Rob, you’re daydreaming again!”

Oh, right. I live in Bizarro World where adtech doesn’t acknowledge any responsibility for building the rails malware rides in on. They would side with us in our battle against organized cybercrime, except they are too busy making advertising even more invasive: Targeted Online Marketing Got Creepier Again!

Note the exclamation point at the end.  Almost seems like the author is excited about this in a good way.  In fact, that’s the case.

So if you think of it – yes, it is very creepy. It goes to the extent that marketers will start knowing more about you than you do yourself.

But on the other hand we think it’s a great step forward. First of all it means that marketers are interested in finding out what we want to be offered. They are actually listening to us. Secondly this also means more targeted communications. Instead of being bombarded with advertisements you have zero interest in, you may find that eventually you start enjoy advertising as it fits seamlessly into what you are looking for.

But the Adtech folks aren’t stopping with impressively better tech, they are hitting new efficiency levels as well, as noted in Obama-Grade Ad Tech Coming to a Local Campaign Near You. “It’s been a challenge for even mid-range campaigns to be able to afford these online advertising capabilities. Today, it doesn’t matter if you’re running for city council or congress, because now you can reach voters in one of the most effective ways possible regardless of your campaign budget.”

Or if you go to Ad:Tech NYC next week, you can learn about the new frontier of tracking consumers offline in Behavioral Breadcrumbs: New Tools to Read Digital Signals:

Most traditional digital tracking and measurement only works as long as a consumer sits in front of a browser. What happens when they disconnect? A new breed of technologies helps extend scalable insight into consumer behaviors beyond the screen. From RFID to Wifi to optical tracking, this panel will discuss methods that identify consumer behaviors, help test and ultimately measure.

Key Takeaways:

  1. Market to consumers using signals they’re pushing.
  2. Track behaviors using consumer signals.
  3. Create a type of interactivity and measurability in your campaigns.

I’m sorry, but I’m not PUSHING signal to your RFID reader, WiFi access point, or optical recognition tracker.

If you want to know what consumers pushing signals looks like, go talk to the folks at Customer Commons, whose QR-coded badges broadcast the intention to not be tracked in exactly these ways.  Does your optical tracker honor these signals?  I’m guessing not.

If you want to know what consumers pushing signals looks like, talk to the Respect Network who are building a platform specifically to exchange user-generated signal with marketers and businesses.

If you want to know what consumers pushing signals looks like, talk to me or my colleagues at Qredo who are building out the world’s first and best fully-encrypted, end-to-end communications and Personal Cloud platform that is mutually authenticated at the endpoints and yet the data and metadata are completely anonymous in the cloud servers.  We’re all about quality signal.

Most of all, if you want to know what consumers pushing signals looks like, read The Intention Economy.  Here’s a hint: when we customers push signals, it’s intentional, deliberate, and we like you for receiving them. If you have to hunt for the signal, if we don’t like that you received it, if stealth is involved, if it feels at all creepy to any of the participants, it probably isn’t being pushed.

I’m not going to reach anyone who honestly believes that signals received over passive RFID scans, Wifi hotspot scanning, and optical recognition tracking are being “pushed” by consumers.  However, there must be some marketing and advertising people who realize how incredibly wrong that characterization is and why.  To those people I plead: please side with the consumers against organized cybercrime.  Quit acting as the R&D arm of cybercriminals who watch you lay the tracks, then ride them straight to your audience, poisoning the well for everyone involved.

We are on the verge of computerizing the consumer side of commerce.  When we computerized the supply side 30 or so years ago, it transformed the world.  But the consumer side is much larger and the transformation potentially that much richer.  Consumers want to build systems that send you signal.  Stop trying to sneak in and steal it and just partner with us.  Once we have some trust and accountability between us, organized cybercrime will have to do their own R&D.  And if you are wondering how to make those connections you’re in luck.  The next Internet Identity Workshop is next week.  The place is practically littered with common ground for us to meet on.

Marketers and advertisers, now you get to choose who you want to work with and for.  The customers, entrepreneurs, and identity geeks in the VRM community at IIW?  Or organized cybercrime?  Choose wisely because you’re running out of Mulligans on these compromised ad networks.

Is it bigger than a lolcat?

One of the currently popular Internet memes poses the question of what would be most difficult about today’s society to explain to a time traveler from the 1950’s.  The reply calls us all out on our frivolous use of the massive amount of computing power available to all of us.  The sentiment mirrors my VRM Day presentation at IIW where I lamented that we could have built consumer-side apps to transform commerce but instead we created Angry Birds.

We’ve all heard that the amount of computing power it took to support the manned moon missions is now available in a calculator, or we have at least heard some similar comparison.  There is justifiable incredulity that computing power is now so cheap and plentiful that not only can we afford to squander it, but squandering it has become our very highest expression of that power.

Consider for a moment the example used in the meme to make the point.  While the vision of putting that wasted capacity to work doing research is noble (I’m an enthusiastic supporter of World Community Grid), it seems rather uninspired. Basically, we should be looking at Wikipedia instead of lolcats, according to the meme.  Despite that rather pedestrian example, the point is so compelling as to go viral. I wonder what impact it would have if there were an even better example of personal empowerment than Wikipedia.

You might rightly ask, as a friend recently did, “If consumer-side business apps are so compelling, how come nobody builds them the right way?  How come nobody buys the ones we have, crappy though they might be?”  Good questions, and ones I believe we know some of the answers to.  I’ve identified two root causes.

Architecture

The biggest problem is the prevailing architecture.  When we first applied computing to commerce on the vendor side, the hardware and software cost millions of dollars.  Financing the systems was possible only by spreading the cost across very large customer populations.  At the time, customer data was not inherently valuable, but rather was a byproduct of the system.  When I worked as a computer operator for an insurance company in the 1980’s, you may have had a decade-old policy but I guarantee we weren’t keeping all that data online.  Data was expensive, so we kept online only what was required to conduct day-to-day business, and we archived only what was required to meet compliance obligations.  If it wasn’t required for daily operations or compliance, we destroyed it.  Data was an expensive cost of doing business, but it was less expensive than manual processing, so it was tolerated as a necessary evil.

Eventually, the growth of computing power and shrinking cost of storage gave us the ability to analyze all that data and suddenly it was no longer an expense but a new source of profit.  Companies found they had untapped gold mines in their vaults and set out to unlock all that value.  But it wasn’t enough.  Soon they began to tweak systems to proactively collect ever more data, from every possible point of interaction with the customer.  Save it all, let SAS sort it out.  Unfortunately, the moment in time when we collectively realized that data is valuable was also the moment when corporations had more of it than ever before and consumers had none.  This locked in the proportions and model for distribution of data.  Which is to say there is no distribution per se, just vendors with all your data and you with none.  All variations on this model start from this default position.

The corporations have come to believe that consumer data is their birthright.

The result is that the discussion around consumer’s access to their own data is framed in terms of “releasing” it to consumers, but only subsets, under strict terms, and usually under tight constraints on what the consumer can do with it.  The consumer is expected to be thankful for whatever access to their data they are granted.  The corporation is, after all, doing the consumer a favor, right?  (Say “yes” or we revoke your API key.)

In the absence of a better model, all new designs are based on businesses synthesizing new sources for ever more valuable consumer data.  These include your browser, your phone, your car, and so on.  But if you were to build out commercial software platforms from scratch in an environment with cheap, ubiquitous computing devices and high-quality open-source software, would this vendor-owns-all-data architecture even be possible?  If you tried to build Amazon from scratch today and a competitor said “we’ll give you the same market place, the same inventory and the same prices, but we’ll also give you machine-readable copies of all transactional data” someone would build apps to capture and analyze all that data, the app builder would get rich, the competitor market vendor would get rich, the loyal customer would get functionality, and Amazon would be forced to also give you your data or go extinct.  Unfortunately, Amazon achieved dominance without any competitive pressure to give you access to your own data, and so they don’t do that.  The same is true of every other large vendor.

The premise of Vendor Relationship Management, or VRM, is that with access to their own data, consumers could apply computing power to problems of commerce and of managing their lives.  We do this now to some extent, but we have a million different vendors holding our data and charging a million different subscriptions for the privilege.  We can’t integrate across these silos and we are locked into specific vendors because the accumulated data is not portable.  The vision of VRM is to consolidate all that data into a personal cloud.  I may still buy a book from Amazon but my personal cloud lists that book in a collection that also includes books I purchased from my local independent bookstore.  Receipts for all these purchases are captured at the time of sale and loaded into my personal cloud without any manual intervention on my part.  The same is true of all my other purchases, utility bills, mortgage or rent payment, car payment, etc.  Having captured all this data, I can analyze my own family’s spending and consumption patterns over time.  If the consumer-side analytics software is good enough, I might even discover that my daughter is pregnant before Target does.
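As a toy illustration of that consolidation, here’s a Python sketch.  The receipt schema and vendor names are my own invention; the point is that once receipts from every vendor land in one place, the analysis becomes trivial:

```python
from collections import defaultdict

# Receipts from different vendors, landing in one personal cloud.
# The schema and vendors are invented; amounts are in cents so the
# arithmetic stays exact.
receipts = [
    {"vendor": "Amazon",          "category": "books",     "cents": 1499},
    {"vendor": "Indie Bookstore", "category": "books",     "cents": 2250},
    {"vendor": "Power Co.",       "category": "utilities", "cents": 8710},
]

def spending_by_category(receipts):
    """Summarize household spending across every vendor at once."""
    totals = defaultdict(int)
    for receipt in receipts:
        totals[receipt["category"]] += receipt["cents"]
    return dict(totals)

print(spending_by_category(receipts))  # {'books': 3749, 'utilities': 8710}
```

Note that the book purchases from Amazon and the local store are analyzed together – something neither vendor’s silo can do.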

So, the first big issue we need to overcome is the inertia present in the prevailing big-data, corporate silo architectures.  In the absence of a viable competing architecture, corporations have little incentive to change, and why should they?  That data is valuable and any accountant will tell you that giving away valuable, income-producing assets means less profit.  Of course, it’s actually not a zero-sum game like a balance sheet.  Digital data can be copied without diminishing the value of the original copy and if giving it away makes consumers more loyal then the result is more, not less, profit.  Convincing data-hoarding corporations to exploit abundance rather than scarcity is the first step.

Cost/Benefit

The second problem is the cost/benefit equation.  One of the reasons we look at lolcats and play Angry Birds is because these activities do not require constant vigilance of us.  Just the opposite, in fact.  Leisure pursuits have become the highest expression of computing power because they relieve us of the stress of daily life.  We need software business tools that behave the same way.  The lack of enthusiasm for the current and previous crops of Internet of Things “smart” devices and business software designed for lay persons is due in large part to the danger inherent in the usual implementations of these things.

Online banking, for example, requires of the user a much higher level of security hygiene than does Angry Birds.  Worse, you must practice this vigilance not just while signed onto the bank, but at all times when using a device that might someday be used to sign onto your bank.  Online banking comes with the advertised functionality, but also incurs the cost of acquiring and practicing online safety habits.  If the banking app is reasonably good, the cost/benefit nets to the positive side, but it can be a close call.  On the other hand, it’s almost all upside and virtually no downside to seeing a cat not quite make that leap to the counter top. (The cat may beg to differ.)

One of the most important reasons today’s software is so insecure is that all the incentives in the system reward lax security.  If you spend $1M on security, your competitor who spends nothing is much more profitable, as reflected in their superior financial performance.  In order to compete, you too must skimp on security.  You’ll regret it if you suffer a breach but, despite the headlines, that’s actually a relatively rare event.  Predictably, this drives a race to the bottom.  Investment in software security is now mostly a post-breach phenomenon, and eternal vigilance is your cost of online banking, or any other non-leisure activity that involves even a modest amount of personal risk.

A different sort of cost/benefit issue exists in so-called “smart” devices, the best (i.e. worst) example of which is lighting.  The first requirement of any “smart” device is to act like the thing it replaces.  What the first crop of device manufacturers failed to realize is that a bulb and a switch are different parts of the same system.  You should therefore improve them as a system.  Making either of these operate from your phone is cool, but not something you’d actually want to use – as the mechanical switch and dumb bulbs installed in your house today probably attest.  What manufacturers like Philips and Belkin brought to market are a bunch of “smart” switches that operate dumb bulbs, and a bunch of “smart” bulbs which require you to duct tape the dumb wall switch to the ‘ON’ position.  Nobody offers a smart bulb/switch set.  After the novelty of controlling the light from the phone wears off, many people decide “smart” devices are actually pretty stupid and then uninstall them.  The requirements of using the phone handset to control the lights and of duct-taping the wall switch to the ‘ON’ position, plus the loss of basic lighting control when the Internet is down, combined with the extravagant retail price of the hardware, all add up to an operational cost which far outweighs the benefit of “smart” lighting.

But a truly smart switch is really the user interface to send a signal, and a truly smart bulb is a receiver and actuator of such signals.  If the switch passes power through to the bulb at all times, if flipping it sends the required signal, and if the bulb then receives the signal and performs simple on/off actions, then the pair of devices can directly replace the equivalent dumb versions of a switch and bulb.  Anyone who has ever operated a standard toggle switch and bulb will be able to operate this smart lighting system without any training or the need to whip out a phone. The truly smart lighting works without Internet connectivity because the signalling is all local, which means the device talks to you first rather than to the manufacturer.  If you can replace the dumb switch and bulb with smart versions and cannot tell the difference in normal operation, then there is no incremental cost but considerable benefit in doing so.
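That design is simple enough to sketch in a few lines of Python.  The class and method names are invented for illustration; the point is that the entire signal path is local:

```python
# A minimal sketch of switch-as-signaler, bulb-as-actuator.
# No vendor server in the loop; all signaling stays in the house.

class SmartBulb:
    """Receives local signals and actuates; power is always present."""
    def __init__(self):
        self.lit = False

    def receive(self, signal):
        if signal == "toggle":
            self.lit = not self.lit

class SmartSwitch:
    """Passes power through at all times; flipping it only sends a signal."""
    def __init__(self, bulb):
        self.bulb = bulb

    def flip(self):
        self.bulb.receive("toggle")  # local signal, never leaves the home

bulb = SmartBulb()
switch = SmartSwitch(bulb)
switch.flip()
print(bulb.lit)  # True -- works with or without an Internet connection
```

Anyone who has flipped a toggle switch already knows how to operate this system, which is the whole point.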

Of course, such a system must be capable of local signaling, which in turn implies you get the data before the device manufacturer.  In fact, there’s a possibility you might block the data from getting back to the manufacturer and just keep it local, if you know a bit about networking.  The notion that you might be the first and only user of your own data is called personal sovereignty.  Where I live in the United States, the Constitution was supposed to guarantee the sovereignty of citizens.  The Constitution never anticipated digital technology, though.  Not only does the prevailing software architecture not recognize your sovereignty as the first owner of your own data, it is more accurate to say that your sovereignty is a direct and urgent threat to the prevailing architecture.

In the absence of personal clouds, consumers as first owners of their own data is unthinkable.
In the presence of personal clouds and cloud apps, consumers as first owners of their data is inevitable.

Think about that for a moment. Nearly all of the computing infrastructure on the planet is designed on the dual premises that 1) data is valuable; and 2) whoever builds the device or service has an absolute right to the data, even to the exclusion of the person to whom the data applies.  So it’s my TV but LG’s data.  It’s my home automation but it’s Insteon’s data.  It’s my cart full of groceries, but even though I was there physically participating in the sale, it’s still the store’s data and only the store’s data.

Changing this situation is the reason for Personal Clouds.  Putting all that spare computing power to better use will require all those vendors to provide not just an e-receipt, but specifically a machine-readable e-receipt, or an API that we can use to fetch our data on demand.  The chicken-and-egg of this situation is that without apps, there is little demand to squirrel away our transaction data, and without a bunch of data there is little interest from developers.  However, it only takes a few seeds to make an alternative architecture take off.   It takes someone who believes enough to build the platform and trust that people, data and apps will come.
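A machine-readable e-receipt needn’t be anything exotic, either.  Here’s a purely hypothetical sketch using JSON – no standard schema is implied:

```python
import json

# A purely hypothetical machine-readable e-receipt. Amounts are in
# cents; the field names are invented for illustration.
e_receipt = json.loads("""
{
  "merchant": "Example Grocery",
  "date": "2014-10-20",
  "items": [
    {"sku": "001", "desc": "milk",  "cents": 349},
    {"sku": "002", "desc": "bread", "cents": 289}
  ]
}
""")

# Once the receipt is structured data, an app can do the rest.
total_cents = sum(item["cents"] for item in e_receipt["items"])
print(total_cents)  # 638
```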

For example, imagine being the first grocery loyalty company to make the line-item purchase data available to consumers.  The moment we have a basic app to display, search, and summarize line-item grocery data, that company will instantly become the most profitable loyalty program vendor on the planet.  Other loyalty program vendors will wonder why they ever thought customer data was a zero-sum asset and they too will start giving it away just to remain viable.  The more consumers take custody of their own data and extract value from it, the more value corporations will realize in sharing transaction data directly with the other transaction participants.

Similarly, in a world where consumers get to choose whether device data gets out of their home network and back to the device vendor, devices that require a connection to the vendor to function will find few buyers and eventually end up on the discount rack at the back of the store.  In that environment, device manufacturers will change their business model to provide value-added services, friendly dashboards, and great analytics in order to earn the customer’s trust and a share of the data.  They’ll need to give you a good reason to let it out, because the personal cloud is private by default and it will take some affirmative action on your part before they see that data.

What’s next

The good news is that the only thing keeping us locked into the current architecture is inertia.  There’s a lot of infrastructure built around what Doc Searls calls the calf-cow model, but one or two good applications built on a new person-centric architecture can be the trickle that becomes a flood and eventually displaces the old model.  I spent last week with a group of people dedicated to doing exactly that.  The technology needed to build a person-centric platform has been around for a while.  The only thing missing was someone with sufficient faith in the new business model to plug the pieces together with the controls facing the user and then trust the user to drive it.  Because this threatens the existing model and potentially shakes up entire industries, there will be considerable pushback.  Those who benefit from the current system want to keep that calf-cow relationship in place.  They want you to be wholly dependent on them for all your information needs, even information that you generate.  They won’t like a new architecture in which you get to keep as private some of the data they now take for granted.

But we can’t keep walking around with the power of a 1980s mainframe on our hip reserved exclusively for cartoon games and crazed cats.  Even in the absence of some better alternative, we have this vague uneasiness and a bit of guilt that those wasted MIPS could have been put to good use.  We want the Internet of Things, but we want it to serve people.  We want the Internet of People and Things. (Hence the name of my company, IoPT Consulting.) When we transact business, we want our own copy of that transaction automatically delivered to our personal cloud.  We want applications to help us index, search, sort, summarize and analyze all our newfound data.  And when we get all that, vendors clinging to the calf-cow model will have to get on board or get put out to pasture.  Then that spare computing power will provide some real benefit beyond distracting us from the real world.  We’ll use it to make the real world better.

This is the mission of the group I’m calling “The Santa Barbara Crew” until I’m out from under non-disclosure:  providing serious, business-grade software, built on VRM principles, using personal clouds as the data store, with the least possible risk and maximum benefit to users.  This will transform commerce even more than computerizing the vendors did.  The Internet of Things, if built correctly with people at the center, will transform the world more than commerce ever did. We (I say “we” because I’m participating in this project) plan to deliver all those things.  It won’t be compromised based on what we think some vendor or other will or won’t accept.  It’s designed based on what you or I would insist on if we were building out commercial IT infrastructure today from scratch.  More importantly, it’s the thing I think you’d want to use if given the choice.  Get ready, because that choice is coming your way soon.

What will you do when that opportunity comes?
Will you remain a calf, forever stuck in the vendor’s pasture?
Or will you claim your own sovereign future?
Shedding the light on the “going dark” problem

My theory about the “going dark” problem is the opposite of the official government explanation. The government claims it needs to be able to read the communications of bad actors. (“Bad actors” in the security sense here, not the Hollywood sense.) But the back doors they’ve engineered have more to do with weakening keys than with breaking the underlying algorithms.  The mitigations are therefore simple: introduce additional entropy when generating keys, use uncommonly long keys, and use protocols that provide Perfect Forward Secrecy.  Anyone serious about preventing eavesdropping can reasonably expect to do so with a bit of work.
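The forward-secrecy part of those mitigations can be sketched with a toy ephemeral Diffie-Hellman exchange: each session draws a fresh private key from the OS entropy source and throws it away afterward, so compromising a long-term key later reveals nothing about past sessions. The parameters below are far too small for real use, and any production system should rely on a vetted crypto library rather than anything like this sketch:

```python
import secrets

# Toy ephemeral Diffie-Hellman, for illustration only.  Real deployments use
# standardized groups (e.g. the RFC 3526 MODP primes) at 2048+ bits; this
# modulus is tiny so the numbers stay readable.
P = 0xFFFFFFFB  # a prime, but FAR too small for actual security
G = 5

def ephemeral_keypair():
    """Fresh private key per session from the OS CSPRNG, discarded afterward.
    That per-session freshness is what gives Perfect Forward Secrecy."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = ephemeral_keypair()   # Alice's session keys
b_priv, b_pub = ephemeral_keypair()   # Bob's session keys

# Each side derives the same shared secret; only the public halves ever
# cross the wire, and the private halves are never reused.
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)
```

Because the private exponents exist only for the lifetime of one session, a recorded ciphertext stream cannot be retroactively decrypted by seizing anyone's long-term key, which is precisely the property that frustrates bulk collection.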

If that’s true, then what’s the big deal about lots of ordinary people who are *not* surveillance targets also using encryption?

[Read more…]