I’ve spilled many bits in this blog about the difference between vendor-driven creepy malvertising ad-tech and consumer-driven intentcasting and Vendor Relationship Management. The vendor-driven model is the one where you are surveilled from all sides and the data compiled, analyzed, sliced, diced, massaged, correlated and enhanced until the vendor has a good idea which things you will respond to viscerally, and can then attempt to manipulate you with them. This model is based on exploitation of human biases and vigilance fatigue. Vendor Relationship Management (VRM), on the other hand, is about the consumer broadcasting intent and preferences to a market that can respond accordingly. This model is based on fulfillment of the consumer’s self-directed interests and desires.
Somewhere in the middle are consumer surveys: direct customer input, wholly vendor driven. Or at least many people, vendors and customers alike, think these are somewhere in the middle. Me? I’m a sucker for surveys since they are about as close to VRM as it gets most of the time these days. I fill them out in bulk in hopes of detecting some whiff of VRM in one of them, and now and then I’m rewarded for my effort. But only once in a blue moon. Sadly, virtually all surveys I’ve seen fail to rise to a level that might qualify as anything close to VRM and most are just plain clueless.
For purposes of this post, I’ll refer to a survey I just completed from Qualtrics on behalf of the Grazie loyalty program of the Venetian Hotel in Las Vegas. I have no particular complaint with either Qualtrics or The Venetian. In fact, I rather like the hotel and am a bit sad that IBM’s conference has outgrown the venue after all these years. It’s just that the clueless survey they sent me was the straw that broke the camel’s bank – sorry, back. I meant camel’s back.
Here are some ways in which surveys are clueless.
They are about brands, not customers
As a computer programmer I optimize code by putting the most important decisions and most frequently selected options first in the workflow. Similarly, if large numbers of respondents will be excluded from a survey, it is most efficient to do that early on, before using up valuable computing resources on data that will be tossed anyway. The same holds true, although to a lesser extent, for the resources required to gather the user input. Why waste the respondent’s time and server resources if they are not the target market?
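As a rough illustration of what I mean, here is a minimal sketch of the “qualify early” pattern. The program names and questions are hypothetical, not the actual Grazie workflow: cheap screening questions run first so that non-target respondents never reach the expensive part.

```python
# A minimal sketch (hypothetical program names and questions, not the actual
# Grazie workflow) of "qualify early": cheap screening runs first so that
# non-target respondents never reach the expensive part of the survey.
QUALIFYING_PROGRAMS = {"hotel", "airline", "credit card", "casino"}

def qualifies(loyalty_programs: set) -> bool:
    """Return True only if the respondent matches the target profile."""
    return bool(loyalty_programs & QUALIFYING_PROGRAMS)

def run_survey(loyalty_programs: set, full_questions: list):
    if not qualifies(loyalty_programs):
        return None                      # early exit: nothing stored, no follow-up pages
    # Only qualified respondents see (and spend time answering) the long part.
    return [f"ASK: {q}" for q in full_questions]

# A grocery-only respondent is screened out before the long questionnaire ever loads.
print(run_survey({"grocery"}, ["How often do you stay on-property?"]))  # None
print(run_survey({"hotel"}, ["How often do you stay on-property?"]))    # ['ASK: ...']
```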
Such optimizations tend to be reflected in the design of the user input workflow, which is why the first round of such surveys is often composed of qualifying questions. The Grazie survey appears to conform to that pattern. The first round asks whether you participate in loyalty programs: grocery/pharmacy? Retail? Dining? How about credit card? Airline? Hotel? Car rental? Interestingly, it is necessary to select something to get to the next screen. There’s an “other” box where you can write in something to indicate “doesn’t apply” or list a loyalty program you use that probably wasn’t a potential partner and thus wasn’t mentioned in the survey.
Inherent in this approach is the notion of knowing ahead of time who matters and, perhaps more importantly, who does not, and it would be understandable if you came to suspect that much of the time you fall into the latter category. Very precise parameters define the target audience and considerable effort is expended to filter respondents on those criteria. Some surveys are more inclusive than others, but when one starts with what appears to be a bunch of related qualifying questions, that is a strong clue as to which parameters are important to the sponsoring brand.
The traditional survey model is the brand asking questions and a sample of consumers responding. In the VRM version, it’s the consumers talking and the brands listening. Unless people are paid to take the survey, their self-selection and gift of time and attention is an indication that they represent the potential market. Some of those people will be outside the “target” market as it is defined today, but that target market reflects the future of the brand, whereas the VRM market reflects the future of the market. Ideally these should coincide; if they do not, the traditional survey is not optimized to detect it. Which would you rather tap into?
The survey is about feasibility, not opportunity
As a rule, consumer surveys are designed to determine whether a strategy that has already been decided is feasible, not to collaborate with customers on what the strategy should be. In the case of the Grazie survey, the purpose of the project appears to be to improve the performance of the hotel’s loyalty program by making it more relevant in some very specific ways. For instance, they want to know what topics you care about and when you might wish to receive information about them. That’s a fairly transparent attempt to figure out what might get people to read their emails instead of discarding them unread, but let’s get one thing straight: Opt-In is Good. Sure, it is still surveillance ad-tech, but I actually like that the option is there.
They also wanted to know when you would like those emails and they differentiated between “when you are here” and “when you are at home.” You mean it’s possible to get the emails only when I am in the hotel and might actually want to read them? Sign me up! If the Venetian were participating in a VRM marketplace, this is one feature I would have specified on my own. The hotel knows when I’m registered. That is a pretty clear signal of my intent to show up and spend money. If they want to make it worth my while to spend more of that money on-property than off, then when I show up is the perfect time to tell me. Not bothering me when I’m not there acknowledges that my time is precious.
There’s an important lesson hidden in plain sight there. Tailored advertising isn’t just about what you are interested in, but also when you are interested in it, and “never” is a legitimate option. I might be interested in the Grazie loyalty program only when I’m at the hotel and if I never go back and never receive another email the personalization has done its job. Thus we have both an indication of interest and a desired frequency of zero routine contact, and this is not merely valid but optimal. This is surveillance marketing at its absolute best.
On the other hand, what if the one thing a respondent is concerned about isn’t addressed by the survey? Worried about privacy? Wish the loyalty program was multi-casino? Unfamiliar with the games? Too bad, because that is all out of scope. The survey in a nutshell is this:
Of these very specific things we are considering implementing, which, if any, interest you?
A VRM market would still have feasibility surveys, but consumers could be, indeed would be, much more involved in the process at the brainstorming stage. A very good example is IBM’s Request For Enhancement (RFE) Community. Customers describe what features they believe are missing in a software product and other customers vote on the requests. Any requests that generate a strong response become candidates, and a few are accepted as new feature requirements. Once a feature is accepted as a requirement, customers are provided a more traditional survey of the type “which of these alternative implementations are you most interested in?” and access to Beta testing programs.
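To make that mechanic concrete, here is a rough sketch of that kind of community queue. The field names and the vote threshold are my invention, not IBM’s actual RFE implementation: customers file requests, other customers vote, and requests with enough support graduate to candidate status.

```python
# A rough sketch of an RFE-style community queue (field names and the vote
# threshold are invented, not IBM's actual implementation).
from dataclasses import dataclass

@dataclass
class FeatureRequest:
    title: str
    votes: int = 0

class RFEQueue:
    def __init__(self, candidate_threshold: int = 50):
        self.threshold = candidate_threshold
        self.requests = []

    def submit(self, title: str) -> FeatureRequest:
        request = FeatureRequest(title)
        self.requests.append(request)
        return request

    def vote(self, request: FeatureRequest) -> None:
        request.votes += 1

    def candidates(self):
        # Requests with strong community support become candidates for implementation.
        return [r for r in self.requests if r.votes >= self.threshold]

queue = RFEQueue(candidate_threshold=2)
req = queue.submit("Earn airline miles for on-property spending")
queue.vote(req)
queue.vote(req)
print([c.title for c in queue.candidates()])  # ['Earn airline miles for on-property spending']
```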
The difference is that the traditional survey is typically the tail end of a process that involves first guessing what the customers want and trying to validate the guesses, while the VRM version involves nurturing a community whose passion for the product fuels a shared dialog in which the vendor participates. As either a vendor or a customer, which of these would you rather rely on to guide the product you make or use? As a vendor or as one of the more vocal customers, which would you rather participate in?
You are a number, not a free human being
With apologies to Patrick McGoohan, yes, you are a number. The hallmark of a traditional survey is the degree to which it is machine-readable. Computers are best at recording discrete sets of enumerated responses, and a survey optimized for that architecture will reflect it. Natural language processing is still cutting edge and has not yet been applied to processing survey responses in bulk. If you have ever read a Google Voice transcription, you know why. When a survey offers only yes/no, multiple choice, and few or no write-in fields, you have been reduced to one of a finite number of response combinations. Survey respondents are like so many Pachinko balls traversing a near-infinite number of paths that all resolve to a handful of outcomes.
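To see just how small that finite number really is, here is a toy illustration; the questions and options are invented, not taken from the Grazie survey. With nothing but enumerated answers, the entire universe of possible respondents is simply the cartesian product of the option lists.

```python
# A toy illustration of the finite response space: with only enumerated answers,
# every possible respondent is one tuple in a cartesian product.
# (Questions and options are invented for illustration.)
from itertools import product

questions = {
    "member_of_a_loyalty_program": ["yes", "no"],
    "visits_per_year": ["0", "1-2", "3+"],
    "preferred_contact": ["email", "sms", "none"],
}

all_possible_respondents = list(product(*questions.values()))
print(len(all_possible_respondents))  # 2 * 3 * 3 = 18 distinct "people"
```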
There are some types of survey that are almost always user-friendly, and you are probably familiar with them. If you have attended a conference or classroom training, chances are you received surveys composed of relatively few bubble questions to fill in and a lot of blank lines to write on. Teaching effectively (or badly) is hard to capture in multiple-choice questions, so in this industry actual people spend time evaluating the written responses of other actual people. As an instructor I want the survey to be able to capture the difference between my session getting bad marks because I didn’t prepare versus because the room was distractingly hot. Machine-readable surveys (of practical length) have no way to capture the variety of useful input on offer from respondents.
Getting back to Vegas
For me, the dehumanizing effect of the traditional machine-readable survey is most pronounced when I get the feeling that the brand is off target. In the case of the Grazie survey, I would love to point them at the target, but there are no opportunities to do so. The more the survey drags on, the more I feel I am just another faceless drone contributing to making the product worse. Imagine a logging crew foreman asking “which of these trees should we chop down?” and providing a survey with choices like Ash, Birch, Cherry, and Other. Meanwhile I am desperately looking for the box to indicate “you are in the wrong damn forest!” But because the survey is about feasibility, not opportunity, because it exists to validate an existing strategy rather than to discover what I want from the brand, because it is not prepared to receive my unique and individual response, no such box exists.
Here’s the thing: I will never be a good candidate for the Grazie loyalty program as it is currently envisioned by the Venetian and Qualtrics. When I go to Vegas I set aside $20 or $50 a day for gaming. When it is spent, it’s spent and that’s it. At the end of the day if I’m up, that money goes in the safe. I always start the next day out with money I brought from home, using the planned daily stake. Even though the casino is statistically guaranteed to take all my money, on an entertainment value dollar-per-minute basis this is still cheaper than many Vegas shows or eating at almost any Vegas buffet.
But this level of spending will never be sufficient to reach any of the Grazie reward tiers, even if they carried points over for a decade. So for every question asking “if we gave points for shopping with retail partner X, would you spend more money?” I had to answer no.
At no point did they ever ask “if we gave you points toward your preferred loyalty program, would you spend more?” That’s too bad, because the answer in that case is definitely and enthusiastically yes. The hardest part of traveling for my wife is the flight across time zones, so I save up airline points and use them to get us into first class. I accumulate those points from credit cards, hotel stays, and affiliate purchases. When I’m at a Hilton property, I get two points per dollar for charging the room on an affiliate card and take all my hotel points as airline points. When I stay at the Venetian, I get credit card points regardless of where I spend the money and no lodging points directed to my frequent flyer program. All other things being equal, there is no reason to stay on-property for meals or entertainment.
Would I spend more money on-property in exchange for airline points? When I’m traveling alone, I’ll pick points over variety every time. Even when my wife is with me, we’ll probably at least look for reasons to dine where we get miles, and chances are a bit more money flows to the property. If there are 10,000 more like me and the hotel gets only $100 more out of each of us, that’s an extra million dollars spent on-property, and I’m guessing those are very conservative numbers.
But neither Qualtrics nor the Venetian asked that question, nor was there a single free-form question asking for uncategorized comments. It takes an incredible amount of hubris to compose a customer survey entirely out of machine-readable questions, yet this is the norm.
Similarly, it is tempting to say it takes an incredible amount of blindness for IBM to ignore the application of Watson to developing VRM-style markets. But in fairness, they applied Watson technology first to health, then donated considerable Watson resources to African NGOs, both of which are justifiably higher priorities. The VRM marketplace lies at the intersection of natural-language parsing and mass customization. It cuts across all verticals. It is today the quintessential “$0 Trillion market” to which Doc Searls often refers. There is no better tool for VRM today than Watson, and at the moment it is IBM’s market to lose; they just don’t appear to see it yet.
(Note: In IBM’s case, I expect an increasing amount of survey evaluation to be performed by Watson or its descendants. Watson not only understands natural language, it is an engine purpose-built for teasing correlations out of large natural-language datasets such as might be compiled in an open-ended survey. I tried several times while at IBM to interest colleagues in using Watson for VRM market intelligence, without luck. But I believe that’s just a matter of prioritization, that it will come to pass eventually, and that when it does it will transform commerce.)
But whether it is Watson or something else, brands will soon start to figure out how to listen to customers without all the invasive surveillance and manipulation. We will at least feel like we are perceived as people instead of numbers because we will drive the interaction, not the other way around.
Getting from surveys to VRM
Until that day comes, if I were Qualtrics, or the Venetian, or anyone else asking valued customers for their time to fill in a survey, I would include at least one free-form text field not associated with any category. If you want to get fancy, include a field for comments in each category. Do this even if you ignore the data. Because sometimes we customers like to think you just might care about what we think, and the survey as it is currently designed, without any opportunity to write in what we care about, tells us exactly the opposite.
You might think it is a bit cynical of me to suggest gathering comments with no plans to read them, and perhaps it is. I believe it is easier to sell the idea today based on the psychological effect on respondents. The presence of an open text comment field or three gives us the impression, accurate or otherwise, that we have individual influence. Making our experience better can only make the survey itself better, so that should be sufficient to sell the idea.
Of the companies who follow this advice, some will retain those unread comments and therein lies the rub: like nature, value abhors a vacuum. The accumulation of unread comments represents un-mined value and, unlike intelligence that must be refined at high cost and then is only approximated at best, this is high-signal data begging to be parsed in bulk. The kind of job for IBM’s Watson, or whatever is next. The value of that trove rises with each new element added until it intersects the cost of parsing it. The mere act of collecting useful data ensures that it will, eventually, be used.
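Even long before Watson gets involved, a first pass over that trove can be nearly free. Here is a deliberately crude sketch of what the simplest possible bulk parse might look like; the comments and theme keywords are invented for illustration.

```python
# A deliberately crude sketch of mining an accumulated pile of free-form
# comments: even a keyword tally surfaces themes the multiple-choice questions
# never asked about. (Comments and theme keywords are invented.)
from collections import Counter
import re

comments = [
    "Wish the loyalty program earned airline miles instead of casino points.",
    "Please stop emailing me when I'm not staying at the hotel.",
    "Privacy: who exactly sees my spending data?",
]

themes = {"airline": "partner points", "privacy": "privacy", "email": "contact frequency"}

tally = Counter()
for comment in comments:
    for keyword, theme in themes.items():
        if re.search(keyword, comment, re.IGNORECASE):
            tally[theme] += 1

print(tally.most_common())  # themes nobody thought to put on the survey
```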
So, first, add open-ended questions and free-form text fields to surveys. Second, consider adopting IBM’s RFE Community model or something like it. Among other things, you can capture the survey comments as RFE entries and outsource parsing of them to the community. The community might eventually replace most surveys altogether. If done right, including reputational weighting that rewards success, such a community self-organizes into a customer-led profit and loyalty foundry. And who doesn’t want that?
I’ll keep filling out surveys, and I’ll keep hoping brands and pollsters get a clue. Ironically, the ones who include a free-form field to receive such a suggestion are precisely the ones who don’t need it. As for everyone else, we can only hope they get hit over the head with a clue stick. Perhaps we should organize a raiding party. Who’s with me?