Intentcasting…to a roach?

OK, so it’s a robot and not a roach. But it is a robot that *looks* a lot like a roach. Researchers at Bielefeld University are experimenting with emergent behavior on a robot platform they named Hector. Their software thus far has been reactive. The new software aims to give the robot “what if” capabilities to solve problems it has not been programmed for. This would imbue the robot with independent goal-directed behavior – i.e. robot intentions.

But beyond that, “they have now developed a software architecture that could enable Hector to see himself as others see him.” In other words, they gave it a theory of mind, and their ultimate goal is for it to sense the intentions of humans and take them into account when formulating responses and actions. They want it to be self-aware. Though the rest of the world will probably see in this the parallels to Skynet of Terminator fame, the more interesting part to me is the notion that it will sense human intention.

Perhaps this is because the current crop of “smart” devices seems very autistic to me.  Though they have a wide range of apparent intelligence, they respond only to what they can directly sense, and only within a context of which they are the center.  The inability to make inferences about humans, and in particular to understand their intentions, is a typically autistic cognitive deficit.  While it is possible to emulate this to some extent, it is often perceived as inauthentic and creepy, which may be why I write about it so much.

Bielefeld University’s robot Hector is close to being self-aware

The quest by the marketing industry to provide targeted messaging tailored to your specific interests and intentions very much parallels the autistic experience.  Any given product or brand seeks to better understand how it is perceived by humans.  Or to put it another way, products and brands lack theory of mind and the ability to infer human emotions and intentions from non-verbal communication.  Like any autistic person, they attempt to mitigate their cognitive deficits by gathering data, observing reactions, forming a model of human behavior, calculating appropriate responses, then improving data sources and refining the model over time.  When humans do this we call it vocational training and independence skills.  When vendors do this we call it ad-tech.  Both groups tend to wonder why people at large often perceive it as creepy.

Hector is essentially autistic.  With the addition of self-awareness and the ability to infer human intentions, Hector may cross the line to creepy.  We’ll find out shortly.

JT (Jibo Terrestrial) phone home!

The consciousness of most of our iconic sci-fi robots like C3PO and Robbie was modeled after that of humans – it was self-contained and part of the robot itself. Even though the Star Wars bots could access the networked world, they didn’t send their sensor data back to a central mother ship to be interpreted, processed, and turned into instructions for the robot to follow, then transmitted back. Everything happened locally. Contrast this with our real-world robots that use the mother ship architecture. Siri, Cortana, Alexa, Google [x], Jibo, Pepper, etc. all phone home more often than ET.  If you use these products, their vendors have access to all the data they send back to the mother ship.  Because that data is potentially very valuable, it would be naive to believe that it will be discarded once its benefit to you, the user, has been realized.

It remains to be seen how the software coming out of Bielefeld will work, but one hopes that some aspect of self-awareness will be so incompatible with latency as to strongly favor local processing. If that is true and the new robot architecture is more like science fiction of yesteryear than the science fact of today, there is some hope that someone, somewhere on the planet will finally use intention detection in a non-creepy way that primarily benefits the individual and not the vendor.  It might also give us insights that will improve the lives of autistic people by helping us learn to infer human behavior in non-creepy ways.
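A rough back-of-the-envelope check of that latency argument, with illustrative numbers of my own (the control rate and round-trip times are assumptions, not measurements from Hector or any cloud assistant):

```python
# Why a reactive robot cannot wait on the mother ship: a control loop
# that must react within its own "reflex" budget has no room for a
# network round trip. All numbers below are illustrative assumptions.
control_rate_hz = 100                 # a modest reactive control loop
budget_ms = 1000 / control_rate_hz    # 10 ms per cycle

local_inference_ms = 2                # on-board processing (assumed)
cloud_round_trip_ms = 80              # network RTT + server time (assumed)

print(f"per-cycle budget: {budget_ms:.0f} ms")
print(f"local fits:  {local_inference_ms <= budget_ms}")   # True
print(f"cloud fits:  {cloud_round_trip_ms <= budget_ms}")  # False
```

If any part of self-awareness has to run at reflex speed, the arithmetic alone pushes it onto the robot.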

On the other hand, if you ever read about Hector in Ad Age, we are all doomed. Skynet will have awoken. And it will have a really good deal for you.

 

A version of this post which more deeply explores the autism connection is posted on my Ask-An-Aspie blog here.

Surprising security issue at Host Gator

I recently signed up for – and promptly dumped – Host Gator.  The QOS (Quotient of Suckage) was off the chart but in this post I’ll focus on a surprising security exposure that was revealed in the process.

Roadie further blurs the lines between atoms and bits

The business model behind Roadie sounds simple enough: fill all that unused cargo space in commuter cars with goods for delivery.  But look a bit deeper and it is potentially transformative.

Online advertising is the new digital cancer

Many news reports of late have described malware being delivered through advertising networks. But that leaves the impression that the AdTech itself is benign and merely being hijacked for nefarious purposes. While it may have started out that way, that is definitely not the case today. Kaspersky Lab mentions several times in its latest report that adware has become so aggressive and intrusive, and exhibits such bad behavior, that it now classifies the adware code itself as malicious.

According to AdWeek, global advertising revenues have reached $512B, though revenue growth is forecast to decline in 2015.  Meanwhile, cybercrime is estimated to cost the global economy $445B annually, and that cost is rising steadily: technology keeps advancing, and because victims pay the price over many years, the victim pool grows relentlessly year over year.

Online advertising has escaped its digital Hayflick limit – the natural cap on how many times a cell line can divide – and is spreading out of control. Online advertising is the new digital cancer.

The Marketing/Cybercrime symbiosis

What would you do if you suddenly realized that your business model was indistinguishable from organized crime?  Or, worse, what if you realized that your business directly harmed people economically and physically?  Web Marketing has evolved into the R&D lab for organized cybercrime and currently sits in exactly that unfortunate position.  Here is the life cycle we are stuck in at present:

  1. Users find new ways to block ads and preserve (or at least fortify) their privacy.
  2. Marketing devises new adtech to circumvent user controls.
  3. Cybercriminals exploit adtech to deliver malicious payloads.
  4. Lather, Rinse, Repeat.

News reports of people whose bank accounts are emptied or identities stolen by cybercriminals are all too familiar.  Mostly forgotten, however, is that when some high-level SSL certificates were compromised a few years back, forged certificates were found proxying the communications of dissidents inside repressive countries to Twitter, Google, Facebook and so on.  What people thought was completely secure communication was in fact transparent to the authorities.  It is a certainty that people came to physical harm after the Certificate Authority was breached, and that breach was a result of malvertising.

How did we get here?

The problem is that Marketing believes that it is in the business of creating content and cannot get past that worldview. The reality is that in the age of popular press, then broadcast radio and television, Marketing was reinvented as the world’s first micropayment system. Diverting a timeslice of the attention that a massive audience paid to the program content, and substituting commercial content, created a revenue stream out of thin air. With a large enough audience the aggregate value of attentional time slices could be monetized predictably and in sufficient quantity to fund both the content and the overhead of the micropayment system that generates the revenue.

What Marketing has lost sight of (or perhaps never realized) is that its primary business is distilling and aggregating micropayments to fund content, not the creation of content itself. Yes, it’s called “marketing” and that implies signal from advertisers to consumers. But marketing delivered in the context of program content is invisible if nobody likes the program content. No matter what you spend on a Superbowl ad, people who don’t like football won’t watch the game to see the ad.  Funding content is primary.  Making content is a means to that end, but in the age of the Internet it need not be the only one.

How can it be fixed?

In the world of bulk print media, and of broadcast radio and TV, signal went only one way, and advertising content was required to close the signaling loop: it created an information stream back from consumers in the form of increased sales and revenue.  In the world of atoms, that indirection was actually necessary to prevent consumers from responding en masse and overwhelming the seller.

But we do not need that anymore. The Internet closes the signaling loop much more effectively. Consumers can send signal upstream without overwhelming the recipient in the process.  We are finally in a position to skip the commercial content and just pay directly for program content. But people don’t want to manage a million subscriptions and vendors don’t necessarily want to do that either.  This is especially true when the lowest practical direct payment is significantly greater than the value of the content provided.  So we still need to aggregate micro-revenue streams and distribute funds back to content creators.  The difference is that we no longer need that to be driven by marketing content.

In a world where it is possible to track every second of content performance, directly funding content through subscription aggregation should be easy to do transparently, accountably, and without the invasive malicious technology. Marketing owns this space, but only by historical legacy.  Unless they remove the blinders with “content creator” printed on the inside, they’ll soon cede it to someone else. Content creators just need funding, and if they could get it without annoying the crap out of their patrons, and especially without delivering malvertising along with their content, they would happily do so. Content creators do not need to sell Bud Light.  They just need funds.

Will Marketing step up?

When it comes to building a subscription aggregation ecosystem, Marketing currently holds a marginal advantage in its existing relationships with content creators and distribution outlets.  This would help in the construction of a subscription bundling ecosystem if only Marketing realized they need to build it. But the Internet is quickly commoditizing those relationships, so time is of the essence. Direct funding of program content is coming whether Marketing builds it or not. If they wait too long, they lose their main delivery channel as content goes ad-free.

Isn’t Marketing also content?

Creation of content, that thing Marketing seems to believe is their primary business model, is still required but as a subordinate function. It has been pointed out many times that sellers have a need to get information about their products out to the buying public and Marketing fills that need. Fair enough. But if you are in the market for a widget then marketing information about widgets is the program content and it will be sought out on that basis.

Anyone surfing the web ad-free who is in the market for widgets will – surprise! – want to compare widget features, read reviews on widgets, check widget prices, look for things that might fit their needs better than widgets, etc. The role of Marketers for these people will be to make sure that the information exists and is easy to find. Their role will not be to invade the privacy of potential consumers, attempting to claim every possible attentional timeslice by bombarding the consumer from all sides every waking second with brand messages.  In an ad-free environment consumers will self-select to receive Marketing content at the point in time that it is relevant to them.

 When advertising is voluntary and opt-in, *all* advertising is precisely targeted and extremely valuable.

Let me repeat that for Marketers whose attention timeslice I didn’t get the first time:

When advertising is voluntary and opt-in, *all* advertising is precisely targeted and extremely valuable.  No Big Data crunching required. No invasive ad-tech required. No need to cover every visual or auditory blank space with branding.  Furthermore, assuming the system monetizes sales rather than clicks or impressions, Every. Single. View. Or. Click. Is. Legitimate. Full stop.

Our current opt-out approach and consequent oversupply of marketing messages drives the incremental value of individual ads ever lower.  But it is a mistake to believe that the value of an ad can never be less than zero.  An oversupply of ads can in fact create negative value, especially when delivered coercively as is explained in the next section.

An autistic point of view

There is a relatively new model of autism called the Intense World Theory.  Past theories of autism have assumed it arises from functional deficits in the human brain.  But Intense World Theory posits that much of typical autistic behavior results from over-stimulation.  This model does a much better job of explaining things such as texture sensitivity, physical agitation such as hand flapping or head banging in response to strong stimuli, and situational escalation leading to autistic meltdowns.

Marketing when and where a consumer requests it is an essential service.  Marketing as it is practiced today on the web is more like a zombie apocalypse.  Nobody actually wants to be attacked from all sides, relentlessly, by mindless things that just want a piece of your brain, but Marketing refuses to believe that and plows ahead undeterred.  When we put up defenses, Marketing invents new tech to circumvent them and tells us it has an absolute right to do so.  This is an “essential truth” as one marketer recently put it.  When we get infected and come to harm through malvertising, Marketing disclaims any responsibility.

Ask anyone familiar with autism and Intense World Theory to predict the consumer response to Marketing’s current approach of carpet-bombing the consumer’s attentional landscape.  Marketers tell us that the web depends on this model, that everyone involved is better off for it, and that they have a right to get their branding messages into our field of attention.  But Intense World Theory tells us that beyond a certain point, people begin to feel violated, overwhelmed and out of control.  They withdraw from the stimulation or find ways to defeat it, even to the point of self-destructive behavior if the stimulus is intense enough.

Head banging, hand flapping and body tics are how an autistic increases signal in order to drown out noise.  Adblock Plus, Ghostery and other consumer-side controls perform the same function with respect to Marketing.  Escalation of confrontation leads to a meltdown in the case of an autistic person, or to Congressional hearings in the case of invasive adtech.  The parallels are obvious and the outcomes predictable.

You don’t need to be autistic to respond this way.  Dial up the unwanted stimulus enough and everyone eventually gets to this point.  Don’t believe me?  Watch the reactions to the sound of fingernails on a blackboard.  This is the first time in history that it has been possible to so thoroughly invade an individual’s cognitive space so we have not previously driven neurotypical people to autistic defensive behavior.  Now that we are beginning to do so, we should recognize the response as predictable given the level of stimulus and move to change the approach. At the very least Marketing needs to dial down the stimulus.  Better yet, Marketing should relate to people as respected peers rather than as subjects.  Our attention is a privilege, not a right.

Suggestions

Marketing needs to reinvent itself as a funding aggregator for content first, and as the delivery of brand messages second.

  • Create content subscription bundles so a single subscription reduces or eliminates ads across most or all web properties.  Cybercrime cannot ride in on your rails once you rip up the track.
  • Remunerate providers proportionally.
  • Make sure independent content providers can get paid on par with large providers.  Some might even say indie content is more valuable.
  • Stop with the invasive adtech already.  We hate it and we hate you for it.
  • Make it easy for prospective consumers to find your brand messages when they are actually in the market for something.
  • Turn your commercial content into program content.  Remember the people who aren’t football fans who don’t watch the game to see the ads?  They do go watch them on YouTube and vote on them in contests.  We don’t mind brand messages if the content is compelling.  (Clue!)
  • Finally, and this applies to pretty much any business, if your business model is indistinguishable from and directly enables organized crime, don’t spend a minute rationalizing the harm caused.  CHANGE THE MODEL.
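As a sketch of what “remunerate providers proportionally” might look like inside a subscription aggregator: split a pool of subscription revenue among content providers by share of attention. The provider names, the attention metric (seconds of content consumed), and the split logic are all my own illustrative assumptions, not a real system.

```python
# Minimal proportional-remuneration sketch: allocate a subscription pool
# (in integer cents) to providers by seconds of content consumed.
def distribute(pool_cents, seconds_by_provider):
    """Allocate pool_cents proportionally to seconds consumed per provider."""
    total = sum(seconds_by_provider.values())
    if total == 0:
        return {p: 0 for p in seconds_by_provider}
    # Integer cents; any rounding remainder goes to the largest provider
    # so the shares always sum exactly to the pool.
    shares = {p: pool_cents * s // total for p, s in seconds_by_provider.items()}
    remainder = pool_cents - sum(shares.values())
    top = max(seconds_by_provider, key=seconds_by_provider.get)
    shares[top] += remainder
    return shares

usage = {"big-outlet.example": 120_000, "indie-blog.example": 30_000}
print(distribute(1000, usage))  # a $10.00 subscription split 4:1
# → {'big-outlet.example': 800, 'indie-blog.example': 200}
```

Note that the indie provider is paid at exactly the same rate per second of attention as the large one, which is the "paid on par" point above.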

Marketers, the countdown clock is ticking.  Will you continue on the current path, eventually driving the public to a meltdown?

Marketing Week’s flawed IoT survey

A few hours ago, Marketing Week published an article in their Trends section titled Smart Homes Lack Consumer Connection.  Although I’m an eager proponent of Internet of Things, I don’t find much insight or any actionable conclusions here for a number of reasons that I’ll explain below.  Do you find it insightful or helpful?  Does your answer change after you read this post?

What, no privacy concerns?

When it comes to people declining to install “smart” devices, the breakdown of their reasons as provided in the article is:

45% - Cost
44% - Unimportant
23% - Complexity
21% - Inappropriate data collection
18% - Intrusive
 3% - None of the above

Apparently it is possible to drastically reduce the ranking of privacy concerns by distinguishing between “too intrusive” and “data being collected and used inappropriately.”  That’s quite a fine line to draw given the lack of granularity in the other categories.  Instead of “Privacy – 39%,” which would have trumped Complexity, we get two separate line items falling below everything else on the list except “None of the Above.”  On the one hand it’s great that the study authors found something nuanced to look at.  On the other hand, gaaaaaaa!
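To make the recombination concrete, here is the arithmetic using the percentages from the article. The merged “Privacy” category is my construction, and summing the two line items assumes their respondents do not overlap, which the article does not confirm:

```python
# Reasons for declining smart devices, as reported in the Marketing Week
# article. The merged grouping below is my own, not the study's.
reasons = {
    "Cost": 45,
    "Unimportant": 44,
    "Complexity": 23,
    "Inappropriate data collection": 21,
    "Too intrusive": 18,
    "None of the above": 3,
}

# Merge the two privacy-related line items into one category.
combined = {
    "Cost": reasons["Cost"],
    "Relevance": reasons["Unimportant"],
    "Privacy": reasons["Inappropriate data collection"] + reasons["Too intrusive"],
    "Complexity": reasons["Complexity"],
}

for name, pct in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {pct}%")
# Privacy (39%) now ranks third, within 6 points of Cost (45%)
# and 16 points ahead of Complexity (23%).
```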

What does “cost” mean here?

For example, a plethora of issues appears to be lumped into “cost.”  We all know that “cost” really means “cost versus benefit,” and the article fails to distinguish whether people actually like the devices on offer in their current form – i.e., see the devices as highly beneficial.  Maybe respondents love the devices but lack the funds to buy them, in which case a plausible ROI demonstration is appropriate.  A good example of this is 40 watt equivalent LED bulbs that used to cost $30 ~ $50.  Now that they sell for < $10 they have gone mainstream.

That seems to be the direction the authors are going when they discuss energy-saving devices and use the phrase “save money” four times in the article.  On the other hand, “cost” may mean it isn’t worth paying the price for the devices on offer because the additional benefits derived simply aren’t compelling.  A good example of this was when there were no 100 watt equivalent LED bulbs or 3-way LED bulbs.  You had to pay a lot more money for something that wasn’t as functional as before.  Kinda like buying a “smart” bulb and then having to duct tape the wall switch to the On position and use your phone to control it, or having no control over a “smart” device when the Internet goes out.  Too bad the study authors didn’t see the need to find any nuance here.

Relevance

Revolv Hub

This image embodies much of what’s wrong with IoT. Rather than replacing existing devices with functionally equivalent smart devices that add enhancements, today’s IoT expects you to buy new types of devices, designed as though you want to feature them in your decor, and requires you to control everything from your phone.

Just as a raftload of sins are hidden under “cost” in the study, so too are they aggregated under “not considered important in my life.”  Does that mean “not considered important enough to find a place to put this new device on display so my friends will know how cool I am” (see the Revolv hub photo in the article) or “because I’m a Luddite,” or something in between?

Notification

Every single person who enables the buzzer on the washer and dryer has indicated their desire for those devices to notify them.  Everyone whose telephone is not set to mute, whose doorbell is operational, who uses an alarm clock, or who uses a kitchen timer has indicated a desire for notifications.  It is impossible to argue that notifications themselves are unimportant, so what is it about these notifications that is not compelling or relevant?  Perhaps it is because the notification destination is almost always the phone and ambient notification devices are never used?  Of course, use of ambient notification systems would require integrations with a wider variety of devices, and Industry seems well aware that the Internet of Things is not about that.  No, the IoT is apparently about controlling, rather than enabling, all your device integrations.  That may be a significant part of the problem, but you’d never know it from reading this study, which never considers whether the prevailing device architecture is part of the problem.  The article not only fails to provide any insight in this area, it doesn’t seem to recognize that there’s any nuance to be found.

Actuation

The other side of smart devices is actuation.  The primary time most of us wish for actuation is along the lines of “did I turn off the [insert name of device here] before I left the house?”  We’ve had device-issued notifications forever, even to some extent remotely, but we have not had a lot of “smart” actuation before.  For many people “not considered important in my life”  probably means exactly what you’d think and what the article suggests: we haven’t had this capability up to now and we don’t generally sit around wishing we did.

But “not considered important in my life” could also mean that the functionality of the devices on offer is perceived as laughable.  “You want me to replace a perfectly good wall switch with…my phone? BWAHAHAHAHAHA!”  This is the group into which I fall.  Admittedly this conclusion requires an informed and tech-savvy consumer.  However, targeting the portion of the market who do not understand the problem with this creates an incentive and business model based on keeping them clueless, and which also happens to facilitate the device-as-data-collection-portal paradigm.  Anyone but me have a problem with this approach?  Anyone else believe that devices should first act like the analog thing they replace and then provide enhancements as a secondary function?

It is also possible that “not considered important in my life” means “the device on offer doesn’t have the integrations that would make it compelling and traps me in a walled garden making it unlikely I’ll ever get the desired integrations.”    Call me crazy but when my deaf aunt comes to visit, I might actually want the doorbell, fire alarm, toaster, washer and dryer to talk to the house lighting so she can receive notifications just like everyone else in the home.  Anyone else believe that all devices should have open APIs so that prosumers and integrators can build compelling functionality with the mesh?  Or believe that a mesh of connectivity across all these unlike devices from different vendors needs to exist in order to realize the potential of IoT?  Maybe doing that would make IoT more relevant to the average consumer.  The study or authors, not sure which, or both, don’t seem to care how the “not important” category breaks out or whether the architecture is part of the reason people decline to buy.  Too bad.  We might have learned something by drilling into these issues.
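A toy sketch of the kind of mesh an open API would enable: unlike devices publish events to a shared bus, and anyone can wire new integrations into it. The bus, topic names, and devices are hypothetical, not any real product's API.

```python
# Toy event mesh: devices publish to topics, and a prosumer or integrator
# can route any event to any other device without a vendor's blessing.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload=None):
        for handler in self._subs[topic]:
            handler(payload)

bus = EventBus()
flashes = []

# Integrations for a deaf visitor: route audible events to the lights.
bus.subscribe("doorbell.pressed", lambda _: flashes.append("hall lights flash"))
bus.subscribe("dryer.finished", lambda _: flashes.append("bedroom lights flash"))

bus.publish("doorbell.pressed")
print(flashes)  # → ['hall lights flash']
```

The point is that the doorbell vendor never had to anticipate the lighting integration; an open bus lets the mesh grow from the edges.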

Privacy – it’s in there

The one area in which the authors found some nuance was privacy concerns.  It is unfortunate that the result of granularity in this category is to drastically understate the relevance of privacy in consumer minds as compared to the other categories.  The effect is apparent in the summary that Marketing Week uses when referring to the article from elsewhere on the site: Consumers cite cost and lack of usefulness as barriers to adoption.  No, they didn’t.  If you combine both of the Privacy categories, there is a total of only 6 percentage points separating Cost (45%), Relevance (44%), and Privacy (39%).  Complexity (23%), the next closest category, comes in a distant 16 points below Privacy.  The concerns expressed cluster around Cost, Relevance and Privacy as the barriers to adoption.  Odd that privacy would get dropped like that.

Perhaps when your audience is an industry driven by the collection and analysis of consumer data, to suggest that consumers have significant privacy concerns is taboo.  Or perhaps the researchers genuinely wanted to drill down in this area because it is important, created sub-categories for privacy, but that intention got lost in publication.  Hard to say what is going on and since the usefulness of the conclusions varies so widely depending on how you read the intent here, any credence we each give the study will tend to align with our own confirmation bias.  Anyone can interpret the results according to their own views and that, for me anyway, renders the results meaningless.

Does anyone other than me believe that devices should default to not sending data to the vendor and instead allow the device owner to optionally enable vendor access to the data based on receiving something of value in return?  That model would not only significantly improve consumer perceptions of data collection and intrusion, it would actually contribute to consumer confidence in IoT privacy.
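As a sketch of that default-off model, a device's data-sharing policy might look like the following. The class and field names are invented for illustration; no real vendor ships this.

```python
# Default-off data sharing: nothing leaves the device until the owner
# opts in, per data stream, ideally in exchange for something of value.
from dataclasses import dataclass, field

@dataclass
class PrivacyPolicy:
    # The shipping default: vendor telemetry disabled.
    consented_streams: set = field(default_factory=set)

    def opt_in(self, stream):
        """Owner grants the vendor access to one named data stream."""
        self.consented_streams.add(stream)

    def may_send(self, stream):
        return stream in self.consented_streams

policy = PrivacyPolicy()
assert not policy.may_send("temperature")   # vendor gets nothing by default

policy.opt_in("temperature")                # owner trades data for, say, a discount
assert policy.may_send("temperature")
assert not policy.may_send("occupancy")     # consent is per-stream, not blanket
```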

Spin doctoring

Marketing Week Screen Shot

The article features an infographic, followed by this opening text.

I’m forced to make a lot of assumptions here because the study isn’t linked from the article and not accessible through Google search or anywhere else that I’ve found.  Since we do not have access to the study or information about its origins, we have to work with what’s in front of us.  Unfortunately, what’s in front of us doesn’t hold up well under close inspection.

Strangely, the first words in the article (at least those that aren’t a headline) are “The study, seen exclusively by Marketing Week, reveals…”  To which study are they referring, and what do they mean by “seen exclusively by”?

Are they trying to imply that someone independently and spontaneously funded this research without Marketing Week’s involvement and then gave Marketing Week exclusive access to it? The headline mentions “new research,” a non-specific phrase which could be plural or singular and suggests no connection exists between the reporter and the news being reported.  The rhetorical device of starting the article copy by back-referencing an unnamed but specific study from among all the available “new research”, and the passive construction using “seen exclusively by” combine to reinforce the suggestion that this is independent news reporting. So too do the references to “Source: Gekko” as the authors of the research.

If all that is true, then who commissioned the research?  And how did it end up as a Marketing Week exclusive and with their branding all over it?  Did Marketing Week vet the provenance of the study before publishing it?  Or did they in fact commission it themselves?  Why not just tell us the origins, scope and charter of the study or make it available, unless the intent is to deliberately put some spin on it?

To be fair, my suspicions of deliberate spin doctoring assume that the article was written by someone whose core competency is the use of the English language in the art of persuasion, for example a marketing professional or an experienced reporter in that field.  Someone like that doesn’t end up with a product like this by accident.  On the other hand, one could (some might say should) give Marketing Week the benefit of the doubt and assume that the unusual rhetorical construction isn’t deliberate framing but rather a case of sloppy-as-hell writing and editing that managed to get past all the approvals required for a high-profile feature article.  Hey, it could happen.  Decide for yourself.  Got a different interpretation?  Let me know about it in the comments.

Personal conclusions

My issues with the methodology, the article’s interpretation of the results and the apparent framing lead me to conclude that there’s enough of an agenda showing through to distrust the whole thing.  I would have much preferred that the authors drill deeper into the broad spectrum of reasons consumers give for not buying today’s IoT devices.  Very few devices on offer today combine compelling functionality, an open API, the ability to operate when disconnected from the Internet, and integration with other devices.  Any study today would therefore be constrained by consumer perceptions of the crippled proprietary devices we have now as representative of the possibilities of IoT, and thus would be marginally useful at best.  But it would at least be more useful than the study presented.

Is it bigger than a lolcat?

One of the currently popular Internet memes poses the question of what would be most difficult about today’s society to explain to a time traveler from the 1950s.  The reply calls us all out on our frivolous use of the massive amount of computing power available to all of us.  The sentiment mirrors my VRM Day presentation at IIW, where I lamented that we could have built consumer-side apps to transform commerce but instead we created Angry Birds.

We’ve all heard that the amount of computing power it took to support the manned moon missions is now available in a calculator, or we have at least heard some similar comparison.  There is justifiable incredulity that computing power is now so cheap and plentiful that not only can we afford to squander it, but squandering it has become our very highest expression of that power.

Consider for a moment the example used in the meme to make the point.  While the vision of putting that wasted capacity to work doing research is noble (I’m an enthusiastic supporter of World Community Grid), it seems rather uninspired. Basically, we should be looking at Wikipedia instead of lolcats, according to the meme.  Despite that rather pedestrian example, the point is so compelling as to go viral. I wonder what impact it would have if there were an even better example of personal empowerment than Wikipedia.

You might rightly ask, as a friend recently did, “If consumer-side business apps are so compelling, how come nobody builds them the right way?  How come nobody buys the ones we have, crappy though they might be?”  Good questions, and ones I believe we know some of the answers to.  I’ve identified two root causes.

Architecture

The biggest problem is the prevailing architecture.  When we first applied computing to commerce on the vendor side, the hardware and software cost millions of dollars.  Financing the systems was possible only by spreading the cost across very large customer populations.  At the time, customer data was not inherently valuable; it was a byproduct of the system.  When I worked as a computer operator for an insurance company in the 1980s, you may have had a decade-old policy, but I guarantee we weren’t keeping all that data online.  Data was expensive, so we kept online only what was required to conduct day-to-day business, and we archived only what was required to meet compliance obligations.  If it wasn’t required for daily operations or compliance, we destroyed it.  Data was an expensive cost of doing business, but it was less expensive than manual processing, so it was tolerated as a necessary evil.

Eventually, the growth of computing power and the shrinking cost of storage gave us the ability to analyze all that data, and suddenly it was no longer an expense but a new source of profit.  Companies found they had untapped gold mines in their vaults and set out to unlock all that value.  But it wasn’t enough.  Soon they began to tweak systems to proactively collect ever more data, from every possible point of interaction with the customer.  Save it all, let SAS sort it out.  Unfortunately, the moment in time when we collectively realized that data is valuable was also the moment when corporations had more of it than ever before and consumers had none.  This locked in the proportions and model for distribution of data.  Which is to say there is no distribution per se, just vendors with all your data and you with none.  All variations on this model start from this default position.

The corporations have come to believe that consumer data is their birthright.

The result is that the discussion around consumer’s access to their own data is framed in terms of “releasing” it to consumers, but only subsets, under strict terms, and usually under tight constraints on what the consumer can do with it.  The consumer is expected to be thankful for whatever access to their data they are granted.  The corporation is, after all, doing the consumer a favor, right?  (Say “yes” or we revoke your API key.)

In the absence of a better model, all new designs are based on businesses synthesizing new sources for ever more valuable consumer data.  These include your browser, your phone, your car, and so on.  But if you were to build out commercial software platforms from scratch in an environment with cheap, ubiquitous computing devices and high-quality open-source software, would this vendor-owns-all-data architecture even be possible?  If you tried to build Amazon from scratch today and a competitor said “we’ll give you the same marketplace, the same inventory and the same prices, but we’ll also give you machine-readable copies of all transactional data,” someone would build apps to capture and analyze all that data, the app builder would get rich, the competitor market vendor would get rich, the loyal customer would get functionality, and Amazon would be forced to also give you your data or go extinct.  Unfortunately, Amazon achieved dominance without any competitive pressure to give you access to your own data, and so they don’t do that.  The same is true of every other large vendor.

The premise of Vendor Relationship Management, or VRM, is that with access to their own data, consumers could apply computing power to problems of commerce and of managing their lives.  We do this now to some extent, but we have a million different vendors holding our data and charging a million different subscriptions for the privilege.  We can’t integrate across these silos and we are locked into specific vendors because the accumulated data is not portable.  The vision of VRM is to consolidate all that data into a personal cloud.  I may still buy a book from Amazon, but my personal cloud lists that book in a collection that also includes books I purchased from my local independent bookstore.  Receipts for all these purchases are captured at the time of sale and loaded into my personal cloud without any manual intervention on my part.  The same is true of all my other purchases, utility bills, mortgage or rent payment, car payment, etc.  Having captured all this data, I can analyze my own family’s spending and consumption patterns over time.  If the consumer-side analytics software is good enough, I might even discover that my daughter is pregnant before Target does.
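To make that consolidation concrete, here is a minimal sketch of the kind of consumer-side analytics a personal cloud would enable once receipts from many vendors land in one place.  The record layout, vendor names, and category labels are invented for illustration; nothing here reflects any real vendor’s schema.

```python
from collections import defaultdict

# Hypothetical receipt records as they might land in a personal cloud.
# Every field name here is illustrative, not any vendor's actual format.
receipts = [
    {"vendor": "Amazon", "date": "2014-06-03", "category": "books", "total": 24.99},
    {"vendor": "Main St. Books", "date": "2014-06-10", "category": "books", "total": 18.50},
    {"vendor": "City Utilities", "date": "2014-06-15", "category": "utilities", "total": 86.40},
]

def spending_by_category(records):
    """Sum spending across all vendors, grouped by category."""
    totals = defaultdict(float)
    for r in records:
        totals[r["category"]] += r["total"]
    return dict(totals)

print(spending_by_category(receipts))
```

The point of the exercise is that the Amazon purchase and the independent-bookstore purchase end up in the same "books" bucket, something no single vendor silo can show you today.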

So, the first big issue we need to overcome is the inertia present in the prevailing big-data, corporate silo architectures.  In the absence of a viable competing architecture, corporations have little incentive to change, and why should they?  That data is valuable and any accountant will tell you that giving away valuable, income-producing assets means less profit.  Of course, it’s actually not a zero-sum game like a balance sheet.  Digital data can be copied without diminishing the value of the original copy and if giving it away makes consumers more loyal then the result is more, not less, profit.  Convincing data-hoarding corporations to exploit abundance rather than scarcity is the first step.

Cost/Benefit

The second problem is the cost/benefit equation.  One of the reasons we look at lolcats and play Angry Birds is because these activities do not require constant vigilance of us.  Just the opposite, in fact.  Leisure pursuits have become the highest expression of computing power because they relieve us of the stress of daily life.  We need software business tools that behave the same way.  The lack of enthusiasm for the current and previous crops of Internet of Things “smart” devices and business software designed for lay persons is due in large part to the danger inherent in the usual implementations of these things.

Online banking, for example, requires of the user a much higher level of security hygiene than does Angry Birds.  Worse, you must practice this vigilance not just while signed onto the bank, but at all times when using a device that might someday be used to sign onto your bank.  Online banking comes with the advertised functionality, but also incurs the cost of acquiring and practicing online safety habits.  If the banking app is reasonably good, the cost/benefit nets to the positive side, but it can be a close call.  On the other hand, it’s almost all upside and virtually no downside to seeing a cat not quite make that leap to the counter top. (The cat may beg to differ.)

One of the most important reasons today’s software is so insecure is that all the incentives in the system reward lax security.  If you spend $1M on security, your competitor who spends nothing is much more profitable, as reflected in their superior financial performance.  In order to compete, you too must skimp on security.  You’ll regret it if you suffer a breach but, despite the headlines, that’s actually a relatively rare event.  Predictably, this drives a race to the bottom.  Investment in software security is now mostly a post-breach phenomenon, and eternal vigilance is your cost of online banking, or any other non-leisure activity that involves even a modest amount of personal risk.

A different sort of cost/benefit issue exists in so-called “smart” devices, the best (i.e. worst) example of which is lighting.  The first requirement of any “smart” device is to act like the thing it replaces.  What the first crop of device manufacturers failed to realize is that a bulb and a switch are different parts of the same system, and you should therefore improve them as a system.  Making either of these operate from your phone is cool, but not something you’d actually want to use – as the mechanical switches and dumb bulbs installed in your house today probably attest.  What manufacturers like Philips and Belkin brought to market are a bunch of “smart” switches that operate dumb bulbs, and a bunch of “smart” bulbs that require you to duct-tape the dumb wall switch to the ‘ON’ position.  Nobody offers a smart bulb/switch set.  After the novelty of controlling the light from the phone wears off, many people decide “smart” devices are actually pretty stupid and uninstall them.  The requirement to use the phone handset to control the lights, the need to duct-tape the wall switch to the ‘ON’ position, and the loss of basic lighting control when the Internet is down, combined with the extravagant retail price of the hardware, all add up to an operational cost that far outweighs the benefit of “smart” lighting.

But a truly smart switch is really the user interface to send a signal, and a truly smart bulb is a receiver and actuator of such signals.  If the switch passes power through to the bulb at all times, if flipping it sends the required signal, and if the bulb then receives the signal and performs simple on/off actions, then the pair of devices can directly replace the equivalent dumb versions of a switch and bulb.  Anyone who has ever operated a standard toggle switch and bulb will be able to operate this smart lighting system without any training or the need to whip out a phone.  The truly smart lighting works without Internet connectivity because the signaling is all local, which means the device talks to you first rather than to the manufacturer.  If you can replace the dumb switch and bulb with smart versions and cannot tell the difference in normal operation, then there is no incremental cost but considerable benefit in doing so.
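The switch-as-sender, bulb-as-receiver model above can be sketched in a few lines.  This is a toy for illustration only: the “signal” is a plain method call standing in for whatever local radio or powerline protocol a real device would use, and the class and signal names are invented.

```python
# Minimal sketch of local switch/bulb signaling -- no Internet round-trip.

class SmartBulb:
    def __init__(self):
        self.powered = True   # the switch passes power through at all times
        self.lit = False

    def receive(self, signal):
        # the bulb actuates only on signals it understands
        if self.powered and signal == "toggle":
            self.lit = not self.lit

class SmartSwitch:
    def __init__(self, bulb):
        self.bulb = bulb      # paired locally at install time

    def flip(self):
        # flipping the switch sends the signal; the bulb does the rest
        self.bulb.receive("toggle")

bulb = SmartBulb()
switch = SmartSwitch(bulb)
switch.flip()
print(bulb.lit)
```

Because the pairing lives entirely between the two devices, the system degrades gracefully: no cloud outage can stop the switch from toggling the bulb.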

Of course, such a system must be capable of local signaling, which in turn implies you get the data before the device manufacturer does.  In fact, if you know a bit about networking, you might block the data from getting back to the manufacturer entirely and just keep it local.  The notion that you might be the first and only user of your own data is called personal sovereignty.  Where I live in the United States, the Constitution was supposed to guarantee the sovereignty of citizens.  The Constitution never anticipated digital technology, though.  Not only does the prevailing software architecture not recognize your sovereignty as the first owner of your own data, it is more accurate to say that your sovereignty is a direct and urgent threat to the prevailing architecture.

In the absence of personal clouds, consumers as first owners of their own data is unthinkable.
In the presence of personal clouds and cloud apps, consumers as first owners of their data is inevitable.

Think about that for a moment. Nearly all of the computing infrastructure on the planet is designed on the dual premises that 1) data is valuable; and 2) whoever builds the device or service has an absolute right to the data, even to the exclusion of the person to whom the data applies.  So it’s my TV but LG’s data.  It’s my home automation but it’s Insteon’s data.  It’s my cart full of groceries, but even though I was there physically participating in the sale, it’s still the store’s data and only the store’s data.

Changing this situation is the reason for Personal Clouds.  Putting all that spare computing power to better use will require all those vendors to provide not just an e-receipt, but specifically a machine-readable e-receipt, or an API that we can use to fetch our data on demand.  The chicken-and-egg of this situation is that without apps, there is little demand to squirrel away our transaction data, and without a bunch of data there is little interest from developers.  However, it only takes a few seeds to make an alternative architecture take off.   It takes someone who believes enough to build the platform and trust that people, data and apps will come.
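What “machine-readable” might look like in practice: a sketch of an e-receipt as plain JSON that a point-of-sale system could push to (or a personal cloud could fetch from) the vendor.  No existing standard is implied; every field name here is made up for illustration.

```python
import json

# One possible shape for a machine-readable e-receipt.
# The schema is invented for illustration -- no standard is implied.
receipt = {
    "merchant": "Example Grocery",
    "timestamp": "2014-06-21T14:32:00Z",
    "currency": "USD",
    "line_items": [
        {"sku": "0001", "description": "milk, 1 gal", "qty": 1, "price": 3.49},
        {"sku": "0002", "description": "bread", "qty": 2, "price": 2.25},
    ],
    "total": 7.99,
}

# Serialized this way, the receipt survives transport intact and can be
# parsed by any app in the personal cloud without screen-scraping.
payload = json.dumps(receipt)
parsed = json.loads(payload)
print(parsed["total"])
```

Contrast this with today’s emailed HTML receipts, which are readable by humans but have to be scraped, brittlely, before any software can use them.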

For example, imagine being the first grocery loyalty company to make the line-item purchase data available to consumers.  The moment we have a basic app to display, search, and summarize line-item grocery data, that company will instantly become the most profitable loyalty program vendor on the planet.  Other loyalty program vendors will wonder why they ever thought customer data was a zero-sum asset and they too will start giving it away just to remain viable.  The more consumers take custody of their own data and extract value from it, the more value corporations will realize in sharing transaction data directly with the other transaction participants.

Similarly, in a world where consumers get to choose whether device data gets out of their home network and back to the device vendor, devices that require a connection to the vendor to function will find few buyers and eventually end up on the discount rack at the back of the store.  In that environment, device manufacturers will change their business model to provide value-added services, friendly dashboards and/or great analytics in order to earn the customer’s trust and a share of the data.  They’ll need to give you a good reason to let that data out, because the personal cloud is private by default and it will take some affirmative action on your part before they see it.

What’s next

The good news is that the only thing keeping us locked into the current architecture is inertia.  There’s a lot of infrastructure built around what Doc Searls calls the calf-cow model, but one or two good applications built on a new person-centric architecture can be the trickle that becomes a flood and eventually displaces the old model.  I spent last week with a group of people dedicated to doing exactly that.  The technology needed to build a person-centric platform has been around for a while.  The only thing missing was someone with sufficient faith in the new business model to plug the pieces together with the controls facing the user and then trust the user to drive it.  Because this threatens the existing model and potentially shakes up entire industries, there will be considerable pushback.  Those who benefit from the current system want to keep that calf-cow relationship in place.  They want you to be wholly dependent on them for all your information needs, even information that you generate.  They won’t like a new architecture in which you get to keep as private some of the data they now take for granted.

But we can’t keep walking around with the power of a 1980s mainframe on our hip reserved exclusively for cartoon games and crazed cats.  Even in the absence of some better alternative, we have this vague uneasiness and a bit of guilt that those wasted MIPS could have been put to good use.  We want the Internet of Things, but we want it to serve people.  We want the Internet of People and Things. (Hence the name of my company, IoPT Consulting.) When we transact business, we want our own copy of that transaction automatically delivered to our personal cloud.  We want applications to help us index, search, sort, summarize and analyze all our new-found data.  And when we get all that, vendors clinging to the calf-cow model will have to get on board or get put out to pasture.  Then that spare computing power will provide some real benefit beyond distracting us from the real world.  We’ll use it to make the real world better.

This is the mission of the group I’m calling “The Santa Barbara Crew” until I’m out from under non-disclosure:  providing serious, business-grade software, built on VRM principles, using personal clouds as the data store, with the least possible risk and maximum benefit to users.  This will transform commerce even more than computerizing the vendors did.  The Internet of Things, if built correctly with people at the center, will transform the world more than commerce ever did.  We (I say “we” because I’m participating in this project) plan to deliver all those things.  It won’t be compromised based on what we think some vendor or other will or won’t accept.  It’s designed based on what you or I would insist on if we were building out commercial IT infrastructure today from scratch.  More importantly, it’s the thing I think you’d want to use if given the choice.  Get ready, because that choice is coming your way soon.

What will you do when that opportunity comes?
Will you remain a calf, forever stuck in the vendor’s pasture?
Or will you claim your own sovereign future?

 

Isolation within the Personal Cloud

Tools for segmenting the network are approaching consumer-grade price points. Pictured: TP-Link TL-SG1024DE-V1-011 Gigabit Switch


This is a bit preliminary because I haven’t had much time to work on my office network re-wire project and don’t have a lot of screen time with my brand new hardware.  However, I found a device that should help those of you in the Personal Cloud community who are busy building prototypes, testing, and hacking.  I didn’t realize it, but the price of a managed switch is down to the $150 range.  When I first started buying gigabit switches, the 5- or 8-port units were at least $100 and a managed switch was $400 to $500.

I just picked up a 24-port managed gigabit switch.  It’s the “friendly” SMB version, which I suppose means it is a bit light on features compared to a full L2 or L3 managed switch.  However, it was only $150 and supports VLANs, so you can segment off a bank of ports into a separate network – perfect for those Internet of Things devices you don’t trust, for guest wireless access, for isolating your beta testing network from your critical business workstation/laptop, etc.  And it is serious where it counts – a 48 Gbps backplane allows full-duplex gigabit traffic on all 24 ports simultaneously, according to the spec sheet.  For my purposes, it has port mirroring, so I can snoop on all those IoT devices and see whether the next wave of LG TVs phones home like the current ones do, and whether any of the other devices outed at DEF CON and elsewhere ever get fixed.

[Read more…]

Industry still puzzling over consumer reaction to tracking

Industry is still wondering what went wrong with tracking.


Frank Hayes over at Storefront Backtalk asks “When Is Data Collection Creepy?”  That’s a really good question now that ordinary people are waking up to the possibility that anyone and everyone can track them online and in real life.  The post touches on, but doesn’t quite illuminate, the biggest difference: atoms versus bits.  When surveillance was physical, Newtonian physics limited what could be done.  We didn’t need laws or policies stating that you couldn’t surveil all of the people all of the time, because doing so wasn’t physically possible.  Because we have never had that capability before, we have no experience with it from a policy-making standpoint.

[Read more…]

My RBAC Manifesto

No one component taken out of context makes the Personal Cloud.


I’ve been following the Role Based Access Control thread on the Personal Clouds List and just sort of biting my tongue so as not to sidetrack any productive discussion there.  However, I cringe every time a new email comes out comparing Clique Space to RBAC.  One is a model, one is an implementation.  To compare them is like saying “China is not capitalism.”

I have issues on several levels with the whole discussion.  First, I believe that Role Based Access Control will be essential to the Personal Cloud architecture.  With all of the different functions proposed for the Personal Cloud, the other types of access control don’t seem scalable.  Furthermore, there is no “personal cloud” if all the parts of it are developed in a vacuum.  Even though your component of the Personal Cloud may be simple enough not to require RBAC, how will it fit into the greater architecture?  For example, a smart light switch may have one role – either you can access it or not.  That’s a use case that screams out for simple Access Control Lists right up until you try to integrate it into a larger home automation system.  It isn’t so much that the switch now needs roles, but rather that the ability to manipulate or inquire on the switch from within the home automation system is itself a role of that larger system.  So as a designer the question becomes: in a larger cloud context where the owner manages using RBAC, do you want your device or component to be the only thing that requires the homeowner to program specific Access Control Lists?  How user-friendly is that?

My answer to this is that as designers we need to recognize up front that the complexity of the Personal Cloud requires something more manageable than individual access control lists and then design our components to live in that greater context.
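To make the ACL-versus-RBAC distinction concrete, here is a toy sketch.  The user names, role names, permission strings, and helper functions are all invented for illustration; a real RBAC system would add sessions, constraints, and role hierarchies.

```python
# ACL style: every device keeps its own list of allowed users,
# so the homeowner programs each device individually.
switch_acl = {"hall_light": {"alice", "bob"}}

def acl_allows(user, device):
    return user in switch_acl.get(device, set())

# RBAC style: the home-automation system grants permissions to roles,
# and users hold roles; the switch itself knows nothing about users.
role_permissions = {
    "resident": {"lighting.operate", "thermostat.set"},
    "guest": {"lighting.operate"},
}
user_roles = {"alice": {"resident"}, "carol": {"guest"}}

def rbac_allows(user, permission):
    return any(permission in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

print(acl_allows("alice", "hall_light"))         # per-device bookkeeping
print(rbac_allows("carol", "lighting.operate"))  # one rule covers every light
```

Adding a guest under RBAC means assigning one role; adding a guest under ACLs means editing the list on every device they may touch, which is exactly the manageability gap the paragraph above describes.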

[Read more…]