When was the last time you or one of your store managers asked, "Why aren't our program scores improving?" I bet no one had the answer.
While this question can apply to virtually any scored corporate program, I want to focus on customer experience measurement (CEM) programs. There are three primary models:
- Overt audit: operational checks, health and safety, brand protection
- Covert audit: mystery shops, age and ID verification
- Customer feedback: customer satisfaction and intercept surveys
Lately, we’ve seen companies employing CEM programs fall into three categories: they are not measuring the right things; they are measuring correctly but are not effectively using the information; or they are measuring correctly and then effectively using the information.
Unfortunately, the third category seems to be a bit elusive. We see three common pitfalls: treating the score as the result of the program and focusing on improving it; too much measurement relative to analysis; and a less-than-optimal mix of program types and frequencies.
Score as the Result
The objective of any CEM program should be to capture information that can help you draw conclusions and improve the overall execution and in-store experience. In other words, it’s not just about the score or grade. The score is a measurement, not the goal itself. For example, if your program consistently scores 97% to 98%, it’s likely time to change what you are measuring to drive the score down by finding areas to improve. Of course, companies rarely do things to drive down the scores because they have tied bonuses and incentives to the numbers, so the focus has become all about the score.
My point is that the score reflects only what you’re measuring. If your goal is high scores, you are likely not measuring your potential weak points.
To truly mine your pain points, your organization needs to center on genuine improvement, not the scores themselves. If scores are consistently low and eight areas need improvement, perhaps tackle three of them and make only those the focus for a period of time. Don’t get caught up in the scores, and don’t even talk about the other five items; just focus on executing the selected three better. By focusing on a few items, you should see improvement and greater consistency.
I would rather have my bonus tied to the sales and profits that come as a result of the improvements than to the program scores anyway.
Measurement vs. Analysis
Do you ever feel like your data is quicksand—a pit of grains that you don’t know what to do with, lest you get sucked in? I am starting to dislike the term “Big Data” because it tends to be overused. To me, it really just means more rows and columns in a spreadsheet. Instead of “more data,” pursue actionable intelligence. For example, do you want to know that your customers consistently think your restrooms are dirty, or do you want to know that your restroom standards are being executed but fall below the expectations of your customers? I believe the latter leads to a much more clearly defined next action item.
Also, if you are performing a covert program each month but an overt program only once a year, you are likely not drawing the right conclusions from your analysis. Two possible issues undercut your analysis: either the data from various programs is not collected close enough together to draw effective conclusions, or a program is measured too frequently for any change to take place before you measure again.
What if instead of doing one measurement per month over a quarter in each location, you did three measurements in the first month of a quarter and then spent the rest of the quarter focusing on the thing (or few things) that clearly need improvement? At the start of the next quarter, you can then see how you did and if it’s time to adjust the focus, or if you need to stay the course for a while longer.
So don’t just look for confirmation of the things you do well. Look into your potential weaknesses, seek out a few solutions and measure them to see if they are boosting store performance and meeting consumer expectations. It’s ultimately about creating a winning experience that repeats itself day after day.
Cameron Watt is president and CEO of Intouch Insight/Service Intelligence. Reach him at email@example.com.