Today I’m pleased to present a guest post from Sarah Simon.

This post marks another installment in Sarah’s series on lessons from the high country.

What the Mountain Teaches
After our winter ascent of Cooper Mountain and Ruby Mountain via the southwest ridgeline of Cooper, the plan is to descend the eastern slopes of Ruby Mountain and follow the snow-covered mining road back to the parking area.  At the summit of Ruby, my partner announces: “That descent route will take us forever and it’s getting late. Let’s descend the south ridge, it’s a shortcut.”  (Ugh, there’s that word!) Looking at the map, I agree we could cut almost two miles off our descent by taking the more direct route, but I express serious concerns that, by doing so, we are heading into unknown territory and that, based on the topographic map, some of the slopes on this newly chosen descent are quite steep. We calmly argue, but argue nonetheless. “I’m not taking the original route; we’ll be here all night,” he declares and marches off into the unknown. “Great,” I mumble into the wind. The third member of our party is on skis and has already descended via “Plan A” – I have no chance of catching up to her on snowshoes, so I follow Mr. Shortcuts.

The descent ridge begins gently enough, but after we select the wrong fork on the ridge, the terrain rapidly steepens and grows increasingly loose. “Rock!” I yell as fist-sized projectiles tumble down the slope from under my boots. We flirt dangerously with a snow couloir. While entering the snow would provide relief from the loose talus and scree, the slope is above 30 degrees in angle (38 degrees is considered “perfect” for a snow slide) and shows signs of previous avalanche release. A herd of bighorn sheep traverses below us, calmly observing a pair of two-legged fools. The slope momentarily relents. After catching our breath, however, we discover what lies between us and the return trail: an endless slope of medium-angle snow. Colorado is infamous for its unstable snowpack and touchy, avalanche-prone mountain slopes. “Now what?” It’s already mid-afternoon, and the winter sun is getting lower on the horizon. We agree to pick the safest line possible and begin our descent. In an attempt to minimize our exposure to sliding snow, we do our best to avoid avalanche trigger points, such as tree wells and visible rocks, and maintain a line with minimal loading above us. We are sinking up to our hips in snow; the going is agonizingly slow, and the air is tense. I utter a few choice words I would not want my mother to hear.

We sigh in relief upon regaining the trail. We move back to the Jeep as quickly as our tired legs will carry us. Our skier friend has been waiting in the cold for an hour. “What happened to you guys?” she queries, and – exhausted – we tell her all about our “shortcut.”

What This Means for Customer Experience Management
“Let’s take this shortcut.”  These words can – rightly so – put any seasoned mountaineer on red alert.  Ed Viesturs notes there are “no shortcuts to the top,” and I’ll add “there are no shortcuts back down from the summit, either.”  In the mountains, fatigue, fear, or time pressure can encourage the use of shortcuts.  Too often, these shortcuts lead us, in the end, to expend unnecessary energy, enter sketchy terrain, or stay on the mountain longer than planned.

Unfortunately, I have witnessed too many customer experience practitioners opt for the proverbial shortcut with results as painful, tiring, and scary as what I experienced on the slopes of Ruby Mountain.  Here are some common CEM shortcuts to avoid.

Not outlining a program architecture
Many voice of customer practitioners rush to “get a survey out there” without considering VoC as a whole, across the organization. Instead of gaining an understanding of touchpoints, priorities, and pain points and then prioritizing which touchpoints to measure, panicked practitioners simply decide they need a survey now and launch away. The result can be a feedback initiative plagued by insight blind spots, where customer priorities take a back seat to corporate concerns. Good voice of customer program architecture can be derived from service model mapping (internally facing); even better architecture evolves from multi-faceted customer journey mapping (which includes external customer input). Even if budget or resource constraints dictate that only a small number of touchpoints can be measured, it is important to know what we don’t know – to identify our feedback program blind spots. Solid program architecture makes it easier to go back later, as budget and time allow, to fill information blank spots with customer, partner, and employee feedback. Too often, practitioners skip important foundational elements like journey mapping and program architecture assessments in the name of expediency and cost savings. The truth is: laying a solid programmatic foundation can make customer experience efforts more efficient and effective in the long run.

Surveying customers with no end-point in mind
Far too many programs are still focused on capturing the customer voice with no plan in place for driving action. Plenty of energy goes into building the listening program and moving data around, but limited attention is paid to what happens next. How do we turn data into insights? What does it take to get insights into the hands of the change agents who can make improvements?  How do we respond to negative survey scores, ad-hoc customer criticism, or social media complaints?  If your voice of customer program is gathering more data than your customer experience improvement efforts can digest and utilize, then perhaps it’s time to go on a data diet.  Narrow the focus to making improvements in the areas that are both high priority for your customers and offer reasonable feasibility and ROI for your company. Stop collecting data you have no time or structure to act on. Always build customer surveys and other listening channels with the end in mind.  If it’s unclear what to do with the results, or if you’re uncertain that the company is prepared to handle customer opinions on a tactical or strategic level, then take a step back and resist the urge to hoard customer data with the intent to act on it someday.

Not securing business unit buy-in, cooperation, and good tidings
Did you hear the story about the CEM team that does nothing but collect customer data, irrigating the corporate fields with a regular flow of analysis consumed by eager executives, managers, and front-line staff self-governing their way to customer experience bliss?  No?  Me neither.

Build it and they’ll come, right?  Not so fast.  As customer experience management practitioners, we believe in the inherent good of customer insights and earnestly hope that our colleagues in the business units and executive suite champ at the bit for customer data, action plans at the ready.  The truth is that our internal stakeholders have day jobs, and customer experience management can feel like one more burden on their heavily laden plates. Failure to secure their buy-in and sponsorship can result in apathy, indifference…or even hostility.  To overcome this, we need to build and reinforce their buy-in to our customer experience improvement efforts. Stakeholder interviews (initial and ongoing), transparent communication of VoC results and program changes, even business lunches or cups of coffee, all work to give your stakeholders a sense of input into the program, of insiders’ knowledge, and help them feel like they’ve got some skin in the game.

Surveys and key performance indicators aren’t aligned
When I ask clients to compare their survey questions to their key performance indicators, the results are often startling: There is little to no overlap between the questions used to solicit customer sentiment and the metrics used internally to measure success.  KPIs with no link to customer feedback result in a focus on internal processes and blindness to what the customer wants and needs.  A customer survey completely divorced from corporate or team KPIs is essentially dead on arrival as a tool for improving the customer experience.

For instance, let’s say the two primary performance measures for your help desk agents are first call resolution and call handling time, both captured in your issue tracking software. The analysis of your post-event technical support survey reveals that the two primary drivers of overall customer satisfaction with a support event are whether the agent makes the client feel like their issue is a priority and the client’s assessment of the agent’s communication skills. What a disconnect!  There’s no need to play a game of either/or: both internal process metrics and customer assessments are valid if used to drive positive business outcomes and an enhanced customer experience. But ensure that the two types of metrics harmonize to change the behavior of your employees and how you do business. Survey data and key performance indicators cannot fly in separate orbits.

Customer experience management is hard work, and it is often tempting to take shortcuts.  Shortcuts seem full of promise, but they frequently lead to frustrating dead-ends, fatigue, and wasted time, putting the success of our program at risk.

“Let’s take this shortcut.” These words should – rightly so – put any customer experience management practitioner on red alert.

Sarah Simon is a career insights professional with 16 years of experience in the feedback industry. Her specialties include VoC architecture, journey mapping, developing linkages to business performance, reduction of customer defection, and results analysis and communication, along with expert survey design skills.  She is the survivor of a botched early-generation “big data mining” operation and is happy to live to tell about it.