Is there anyone in your company who wants to use VoC/CX surveys as marketing tools rather than as customer listening tools?

It’s a scary and sad thought, but the answer to that question more often than not is “Yes.” I don’t know how many clients I’ve had to talk off the marketing ledge and get them to focus on the task at hand: use VoC/CX surveys to genuinely listen to your customers and, of course, act on their feedback.

Hat tip to Tareq Krayim for the nudge to provide my thoughts on this topic and write today’s post.

Tareq shared with me an article that suggested, nay, outright stated that “the easiest way to grow sales and double customer loyalty is to send a survey and then do nothing with the feedback.” The article is based on research that was summarized in an HBR article in May 2002. Oh my, where to begin.

First, don’t believe everything you read on the Internet. Especially from someone who admits that she is not market research savvy, as the author of that article did. Second, it’s always a good idea to refer to the original source document to understand how/why people interpret what is being said. (Which is why I’ve only provided you the link to the HBR article; you can also download their full research findings here.) Third, in research, sample size and context are so important.

In a nutshell, two researchers conducted a field experiment among a sample of one U.S. financial services firm’s customers (those enrolled in its CRM program). They had a control group and a test group, the latter of which participated in a 10-to-12-minute customer satisfaction survey conducted by phone. Participants were asked to rate program features, e.g., estate planning, account monitoring, and retirement planning, in addition to their overall satisfaction with the firm. The control group wasn’t surveyed at all. The researchers then tracked certain metrics (purchase behavior, defection, and profitability) for the two groups for a year and found that the test group was 3x more likely to have opened new accounts, was less than half as likely to defect, and was more profitable than the control group.

How could this be? From the HBR article, their theories were as follows:

  • satisfaction surveys appeal to customers’ desire to be coddled, reinforcing positive feelings they may already have about the surveying organization and making them more likely to buy its products. 
  • surveys may also increase people’s awareness of a company’s products and thereby encourage future purchases. 
  • the very process of asking people their opinions can induce them to form judgments that otherwise wouldn’t occur to them, i.e., measurement-induced judgments, as they are called, can influence later behavior.

I would argue that the first and third bullet points could go in either direction, i.e., positive or negative feelings or judgments.

In the full research findings, the researchers spell out their hypotheses in detail and build their case, but they also spend a chunk of real estate outlining caveats and explaining that further research is necessary. Troubling to me was this (bolding is mine):

“… this research was limited to highly satisfied customers of a single firm in one industry. It is possible that these results may be subject to the vicissitudes pertaining to the firm itself or to the financial services industry more generally. Similar studies need to be undertaken across firms and in other industries to replicate our findings and to provide generalizability.

“Finally, it is also important to point out that the participants of our study had a formal ongoing relationship with the firm and were among its more profitable and higher potential customers. These factors may have magnified the positive effects of satisfaction measurement on participants.”

It’s troubling because when people take research findings at face value without knowing anything about how the research was conducted, among what audience, or what its potential limitations are, they draw broad conclusions and generalizations that can be detrimental to those who buy into them. Do we even know if this research is repeatable? It was a single industry, a single data point, a single point in time. Was this, in fact, marketing research and not a customer satisfaction survey, per se? The research was labeled as a “customer satisfaction survey,” though that casts a wide net; I would argue that it would have been better labeled as a program or product satisfaction survey, which says “marketing” to me.

Back to my original question: Have you tried to design your VoC/CX surveys, only to find that your marketing department wants to sabotage the survey with marketing research or include marketing-related questions? (BTW, I’m referring specifically to customer satisfaction/experience surveys.) If this is a challenge you’ve had to face, here are a couple of things to keep in mind for your VoC/CX surveys.

  • As pointed out in the HBR article, selling under the guise of conducting marketing research (“sugging”) is illegal. If you plan to use your survey to sell a product or service, you must disclose that upfront. And I’d argue that it’s really no longer VoC/CX at that point; call it marketing.
  • Twelve years later (that research was published in 2002), customers have seen a lot of surveys; I think they know which ones are designed to genuinely gather feedback about the experience versus trying to introduce/sell a product.
  • On a similar note, if you start incorporating marketing questions alongside your CX questions in your VoC/CX surveys (and then do nothing with the feedback to the “real” CX questions), customers will see through that, and you can watch your response rates tank over time.
  • Always state the purpose of your survey and what you plan to do with the feedback.
  • Surveys are a touchpoint. Make sure the survey experience is a good one for your customers. Be genuine. Ask for feedback, and use it to make changes or improvements.
  • When you design a survey, make sure that every question has a purpose. The purpose should be about the experience, and the question should be actionable, i.e., you can act on it by either improving something or recognizing someone for a job well done.
  • Every question should have an owner, too. Who is responsible for the action to be taken or the outcome of that question? Without a purpose or an owner, the question is wasting real estate.
  • Be careful even with satisfaction surveys. Ask the questions in a way that says you’re genuinely interested in whether the customer was satisfied with the product, service, or experience. Don’t frame them in a way that comes across as salesy or introduces a product that doesn’t exist. (We don’t know how the questions were framed or asked in that research.) Better yet, ask only about products or services you know the customer owns or uses.
  • Just don’t do it. Ask about the experience, and leave it at that.
  • Most importantly, after you ask, act! The article that Tareq sent me stated, “… and do nothing with the feedback.” That’s a big mistake.

VoC/CX surveys should not be used for marketing purposes. At all. If you are asking for feedback about the experience, stay the course and stay focused on the purpose at hand. If you want to sell more or increase loyalty, deliver a great experience and let your customers speak for you or be an extension of your sales team; don’t do it in your VoC/CX surveys.

USA Today has come out with a new survey – apparently, three out of every four people make up 75% of the population. -David Letterman