The New Measure of Customer Service Success
Using customer experience as a competitive differentiator is a goal for many businesses, but it's often easier said than done.
As proof, more than two-thirds of business leaders recently surveyed by Forrester Research said their firms have set this as a goal, but more than half lack a definitive strategy for getting there.
That's because when it comes to improving customer service operations, many companies lack the right information.
In their attempts to determine the success—and ultimate value—of their contact centers, companies have traditionally looked at customer service purely from a financial vantage point. They have applied business-centered goals, like cutting costs, making money, and beating the competition, to their contact centers, and used indirect metrics, such as automation and containment rates, script adherence, and average handling times, as the guiding principles by which these contact centers were measured.
"The basic concept of contact center management was how fast you could answer the call and get the customer off the phone. That was the driving [key performance indicator]," says Maggie Klenke, founding partner of the Call Center School in Lebanon, Tenn.
Metrics like these are out of step with the prevailing business shift from a business-centric approach to a customer-centric one, from one-size-fits-all experiences to hyper-contextualized experiences that give customers what they want, when and how they want it.
In light of these changes, contact center leaders should determine whether traditional customer service metrics still serve them well today.
"Sure, there's a bottom line that you have to manage to, but that can't be the only thing," says Peggy Carlaw, vice president of Impact Learning Systems, a customer service training and consulting firm in San Luis Obispo, Calif. "Leadership today has to say that customer satisfaction is important and find a happy medium between customer satisfaction and cost."
Bruce Belfiore, CEO of BenchmarkPortal, a contact center research and consulting firm in Santa Barbara, Calif., agrees. "A great contact center operation reflects its management's passion for balancing the demands of high quality and low costs," he says.
To do this effectively requires measuring and benchmarking, something Belfiore says can deliver "a crackerjack profile of a contact center's operations that can inspire management to move forward aggressively."
While more organizations are focusing on measuring customer experiences, there's still room for improvement. A third of the companies in the Forrester survey don't evaluate the relationship between experience quality and business outcomes, use job-specific customer experience metrics to evaluate employee performance, or share customer experience metrics and models with employees. This "makes it harder for even well-meaning employees to tell if they're doing the right thing," Forrester analysts concluded in their "State of Customer Experience 2012" report.
The Complete Picture
As more organizations improve the quality of their interactions, task completion rate is emerging as the most meaningful guidepost for contact center performance. This metric looks at the percentage of callers who were able to accomplish their goals through the interaction.
Carlaw calls task completion "the number-one driver of customer satisfaction" today.
Task completion also has a direct correlation with another important metric—abandon rate, which is closely tied to how quickly calls are answered: The longer it takes to answer a call, the higher the abandon rate, which could then inflate future call volumes, resulting in even higher abandon rates, and so on.
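Both figures reduce to simple ratios over the call log. Below is a minimal sketch in Python, using hypothetical record fields; in practice, completion is usually derived from post-call surveys or IVR exit codes rather than a ready-made flag.

```python
# Minimal sketch: task completion rate and abandon rate from a call
# log. Field names are hypothetical, not any platform's schema.

calls = [
    {"answered": True,  "task_completed": True},
    {"answered": True,  "task_completed": False},
    {"answered": True,  "task_completed": True},
    {"answered": False, "task_completed": False},  # abandoned in queue
]

offered = len(calls)
answered = [c for c in calls if c["answered"]]

abandon_rate = (offered - len(answered)) / offered
completion_rate = sum(c["task_completed"] for c in answered) / len(answered)

print(f"Abandon rate: {abandon_rate:.0%}")             # 25%
print(f"Task completion rate: {completion_rate:.0%}")  # 67%
```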
In cases when automation is involved, high abandon rates can point to problems with the interactive voice response (IVR) system. It could mean that the prompts and responses the system generated weren't helpful or appropriate. Or even worse, perhaps the system failed to hear and understand what the caller said, thereby routing the call down the wrong path.
But regardless of whether an agent or IVR picks up the call, measuring abandon rates is an inexact science, because many other factors could influence the caller's decision to hang up before completing his task. A caller's tolerance can easily be influenced by his patience and degree of motivation (the importance of the call and the issue that needs to be discussed); the availability of other self-help options, such as an FAQ section on the company Web site; the amount of time he has available; and whether he is paying for the call. His history with the contact center—whether he has to wait a long time whenever he calls or whether he got right through the last time—is also a determining factor.
Finishing First
That's why when evaluating task completion and abandon rates, companies also need to look at how tasks are completed. Are most issues being resolved entirely through self-service, with one agent, or with several agents and their supervisors?
First contact resolution measures how often customer issues are resolved on the first try. It is the driver for excellence in any customer service organization and has a powerful and positive ripple effect on all other performance and financial metrics.
"It's simple: Customers want to call, not wait too long, get someone on the phone who can help, and get done with the business at hand," Carlaw says. "And they want the first-line reps to be able to handle their issues without having to refer to their supervisors."
Not surprisingly, customer loyalty and satisfaction drop significantly after a customer has to place a second call for help—and just about disappear after the third call, according to a consumer survey conducted by Customer Care Measurement and Consulting, an Alexandria, Va., firm, and Arizona State University's Carey School of Business. The study found that frustrated consumers had to contact companies an average of 4.4 times to get their issues resolved.
Experts agree that first contact resolution is one of the most telling metrics available, but that it can also be difficult to quantify. An agent is unlikely to know that a caller had already visited the company's Web site for information, or that the caller had sent an email to the company about the same problem two weeks ago. But that doesn't have to be the case. "Sophisticated CRM systems can unlock this information," says David Raia, senior research analyst at BenchmarkPortal.
Tammy Cossairt, vice president of client strategy at Telerx, a contact center outsourcing firm serving clients in the pharmaceutical and consumer packaged goods industries, says low-tech methods, such as an agent specifically asking a caller if she's contacted the company before about the issue, can be used as well.
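Where contact histories are logged, one rough way to approximate first contact resolution is to count first contacts that draw no follow-up from the same customer about the same issue within some window. Below is a minimal sketch under those assumptions; the record fields and the seven-day window are hypothetical choices, not an industry standard.

```python
from datetime import datetime, timedelta

# Hypothetical contact records: (customer_id, issue, timestamp).
contacts = [
    ("c1", "billing", datetime(2012, 5, 1)),
    ("c1", "billing", datetime(2012, 5, 3)),  # repeat: first try failed
    ("c2", "login",   datetime(2012, 5, 2)),  # no repeat: resolved
    ("c3", "billing", datetime(2012, 5, 4)),  # no repeat: resolved
]

WINDOW = timedelta(days=7)  # arbitrary follow-up window

def first_contact_resolution(contacts):
    """Share of first contacts with no follow-up on the same issue."""
    first, repeated = {}, set()
    for cust, issue, ts in sorted(contacts, key=lambda r: r[2]):
        key = (cust, issue)
        if key in first and ts - first[key] <= WINDOW:
            repeated.add(key)   # same issue resurfaced within the window
        else:
            first[key] = ts     # treat as a new issue episode
    return 1 - len(repeated) / len(first)

print(f"Estimated FCR: {first_contact_resolution(contacts):.0%}")  # 67%
```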
Keep It Simple
This feeds into another emerging metric—customer effort. The Customer Effort Score, developed by the Corporate Executive Board's Customer Contact Council, tracks the amount of time and effort that customers have to put into solving their post-sales problems. This includes cognitive, emotional, physical, and time elements, and presumes that the more effort a customer has to expend in each of these areas, the less satisfied he will be with the interaction.
"If you can take care of their call without them having to jump through hoops, customers will be really satisfied," Raia adds.
Customer effort can be negatively affected by many events and activities, including dealing with an IVR that offers lots of menus and choices, completing a complex process to verify an identity, being asked to repeat information within the call, or talking with agents who use a lot of jargon that then needs to be translated. Customer effort scores also look at whether a company provides accurate information about its products, services, and policies, and makes sure all the necessary information is readily available across all channels.
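The scoring itself is usually just an average of survey responses. Here is a minimal sketch, assuming the original CEB-style question ("How much effort did you personally have to put forth?") rated on a 1-to-5 scale where lower is better; question wording and scales vary across implementations.

```python
# Minimal sketch: averaging Customer Effort Score responses.
# Assumes a CEB-style 1-5 scale (1 = very low effort, 5 = very high
# effort), so lower averages are better. Scales vary in practice.

responses = [1, 2, 2, 4, 1, 3, 5, 2]

ces = sum(responses) / len(responses)
print(f"Average effort score: {ces:.2f} (lower is better)")  # 2.50

# Many teams also track the share of high-effort interactions,
# since those are the ones most likely to erode satisfaction.
high_effort = sum(1 for r in responses if r >= 4) / len(responses)
print(f"High-effort interactions: {high_effort:.0%}")  # 25%
```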
A Matter of Time
Customer effort is also tied to some of the other more common metrics, such as average handle time, waiting time, and the amount of time a caller is placed on hold by the agent. Average handle time measures the total amount of time the customer spends on the phone, from start to finish, and is a useful indicator of overall contact center efficiency. It often correlates highly with customer satisfaction.
Agent-generated hold looks at the percentage of total call volume in which agents put customers on hold, as well as how many times and for how long customers are held. For the customer, a few minutes on hold can seem far longer than they are.
"If you keep a customer on hold for two minutes and everyone [at the company] keeps him on hold for thirty seconds, you have real problems," Raia notes.
But the length of time on the phone by itself can be deceiving in some respects, according to Carlaw. "Look at the talk times and how much it can cost, and then compare that to the revenue generated," she says. "An agent could have been on the phone for four minutes longer, but she [could have] made so much more in extra sales."
Another widely used metric is Satmetrix's Net Promoter Score, which suggests how likely customers are to recommend a company to others. It asks just one question: "How likely is it that you would recommend our company to a friend or colleague?" The customer responds with a rating on a scale of 0 to 10. Those ratings are then divided into three groups: Promoters (9 or 10), Passives (7 or 8), and Detractors (0 to 6). The percentage of detractors is subtracted from the percentage of promoters to give the Net Promoter Score. The metric is reportedly used by about 69 percent of companies today.
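The arithmetic behind the score is straightforward to reproduce. A minimal sketch of the published formula, using made-up ratings:

```python
# Minimal sketch: Net Promoter Score from 0-10 survey ratings.
# Promoters rate 9-10, passives 7-8, detractors 0-6; the score is
# the percentage of promoters minus the percentage of detractors.

ratings = [10, 9, 9, 8, 7, 6, 3, 10, 5, 9]

promoters = sum(1 for r in ratings if r >= 9)
detractors = sum(1 for r in ratings if r <= 6)

nps = (promoters - detractors) / len(ratings) * 100
print(f"NPS: {nps:.0f}")  # 5 promoters, 3 detractors -> 20
```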
And finally, a good way to anticipate how satisfied callers will be is to gauge the job satisfaction of the agents. Customer satisfaction falls as agent dissatisfaction rises, according to Raia.
"If agents are happy to come to work each day, they are likely to do a better job of treating your customers [well]," Carlaw adds. In other words, happy agents are more likely to equal happy customers.
This has far-reaching implications well after the phone call ends. Research from Gartner reveals that customers who felt pleased, appreciated, important, or special during an interaction with a business were likely to recommend the company to friends and relatives 31.7 percent of the time, and to purchase more products or services from that company 19.1 percent of the time. Conversely, those who felt let down, frustrated, angry, ignored, or confused were likely to complain about the company to friends and relatives 25.6 percent of the time, switch to another company 20.1 percent of the time, and scale back their purchases from the company 9.5 percent of the time.
The Methodology
Across the industry, there is some disagreement as to the best time to conduct surveys to gauge a customer's satisfaction with a recent service interaction. One school of thought suggests that it's best to present a survey option to customers right at the end of calls to capture their immediate impressions. This also enables the company to quickly respond to customer complaints or problems. Plus, it's easier for organizations to coach agents "when the call is still fresh in their minds," Cossairt adds.
Other experts suggest surveying customers days or even weeks after the interaction, giving callers sufficient time to see if their issues were truly resolved. That information might not be available until they receive their next month's bill, for example.
Waiting that long, though, can defeat the purpose of the survey. According to Klenke, if too much time elapses between the initial call and the survey, details about which agent handled the call and what was said can be lost.
"You need to get calls that have closed within the past two weeks," Raia believes. "If you go beyond that, the caller will have forgotten what happened. The closer you get tothe actual phone call, the more accurate your information will be."
Disagreement also exists over survey methods. Some say surveys should be voice-based so they can capture comments verbatim, as well as the emotions that go with them. Others say an email survey is just as effective and far cheaper to conduct.
But what is not disputed is the need to always give the customer the option to participate in the survey, and then to keep the survey simple. Klenke suggests three basic questions: Was the agent helpful? Were you satisfied with the interaction? Would you recommend us to a friend?
After that, be prepared to apply a filter to the results. "You will get very polarized views from people who were either very satisfied or very dissatisfied and not much from people who were in the middle," Klenke explains.
And while some companies rely on third parties to conduct customer satisfaction surveys, they might not always be necessary. "It depends on how you're using the results," Carlaw states. "If you're using [the data] to promote to customers how great your customer service is, having a third party do [the survey] can validate your claims."
Regardless of what metrics a company uses and who conducts the research, experts warn against placing too much stock in a single metric. After all, you wouldn't prepare an elaborate recipe and only measure one of the ingredients.
From Bad to Worse
Verint Systems has come out with a list of words or phrases that can spell disaster when customers use them during a support call. Alone, these words might seem innocuous, but they can be far from harmless when put into context.
Bad
1. customer
2. you people
3. all my friends
4. let me speak
5. you promised
6. explain to me
7. bear with me
Worse
1. any four-letter word
2. lawyer
3. supervisor
4. ridiculous/absurd
5. my statement
6. your fault
7. not good enough
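A list like this is only actionable if calls are actually screened for the phrases. As a purely illustrative sketch (not Verint's method, which relies on far more sophisticated speech analytics), a transcript could be flagged with simple substring matching; the profanity check ("any four-letter word") is omitted here.

```python
# Illustrative only: flagging a transcript against the watch list
# above with crude substring matching. Real speech analytics weigh
# context; the profanity item is not modeled.

BAD = ["customer", "you people", "all my friends", "let me speak",
       "you promised", "explain to me", "bear with me"]
WORSE = ["lawyer", "supervisor", "ridiculous", "absurd",
         "my statement", "your fault", "not good enough"]

def flag(transcript):
    text = transcript.lower()
    return ([p for p in WORSE if p in text],
            [p for p in BAD if p in text])

worse, bad = flag("This is ridiculous. Let me speak to your supervisor.")
print("worse:", worse)  # ['supervisor', 'ridiculous']
print("bad:", bad)      # ['let me speak']
```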
News Editor Leonard Klie can be reached at lklie@infotoday.com.