4 min read
11 Apr

Continuing our series of competitive marketing blogs on pitfalls and best practices of win/loss, today we’ll examine the best practice of using win/loss to prioritize top competitive issues for your company. This blog was originally published here.

Ben: Ken, tell me why this topic isn’t too obvious. I mean, who doesn’t prioritize top competitive issues?

Ken: I think everyone agrees that this is what they should do, but many people struggle with it, especially in competitive intelligence and competitive marketing. CI pros are always on the hunt for a competitive discovery, some scoop that will help their company. The trouble is, it’s actually very hard to know whether some nugget of competitive insight is really a big deal. More often than not, CI teams struggle when they present TOO MUCH information. Their stakeholders can’t use it all. If everything is important, then nothing is important. Before long, people tune out what the CI teams have to say.

Ben: Yes, I have seen this many times. It’s the data vs. intelligence [insights] issue, isn’t it? I never thought about it in terms of priorities, but I can see how this is connected. So, how does this apply to win/loss?

Ken: Win/loss is no different. You need a way to reliably distinguish major from minor issues uncovered in research if you want to do anything meaningful with what you learn. Say you lose a deal: you talk to the customer, and you find three key issues that caused the loss. Simply reporting those issues isn’t enough. The next deal could have completely different issues, or what was a major issue in one deal could be minor in another. You need to prioritize and push hardest for solutions to issues that will cause major problems for most deals. And because every company has limited resources, you need a way to calm people down about issues that really don’t matter when it comes to winning business. In other words, you need to use win/loss to help set priorities for the company.

Ben: That makes sense. Win/loss needs to uncover trends as much as it uncovers news. So, what do win/loss practitioners need to know?

Ken: A lot of it comes down to how you rank or score the issues you discover. This isn’t rocket science, and almost everyone who tries to do any kind of quantitative analysis will come up with a scoring system of some sort.

Ben: You mean, like high/medium/low priority? Or score from 1-5 or 1-10?

Ken: Yes, and any of these will do. But from there, the best practices are more subtle. There are two we should talk about. First, you need an interviewing strategy that minimizes presumptive scoring bias. This bias is very common and typically takes the form of win/loss analysis with aggregate scores for abstract categories such as product, marketing, and sales. This creates the false impression that these categories equally impact outcomes—which of course they do not. Scoring of this type is a red flag.

Ben: Can you give me an example?

Ken: Sure. Let’s say you are up against a competitor with an almost identical product in a largely commoditized market. In such a situation, you can expect factors like price and sales engagement to play a larger role than they otherwise would. By contrast, a relatively young market will have products that are very different, and customers are less concerned about price than about getting the product with the right features for their needs. Two completely different situations. But if your win/loss scoring methodology a priori reports out on such categories as “price”, “sales”, “features”, etc., then you are already misrepresenting the distinctive reality of these different markets.

Ben: Ah, yes, you would be giving more attention to price, perhaps, than customers themselves do.

Ken: Exactly, so you will have a very hard time reconciling priorities from the results because you’ve already made some big assumptions about what the priorities will be before you even start. And to make things worse, scoring arbitrary rollup categories such as product, marketing, and sales tends to reinforce siloed thinking about solutions, whereas most competitive problems can be fixed, or at least mitigated, with a collaborative plan across those functions. But that’s a discussion for another blog!

Ben: So, what do you recommend?

Ken: Here’s what we do. First, we find out from the customer the set of key criteria they ended up using to make their final decision. We begin by asking about these criteria unprompted, and then probe for things that they don’t mention until we’ve identified all the important criteria for the decision. Then we ask the respondent to score the importance of these criteria. That gives us a simple way to find all the issues and rank their relative importance. Note that we ask them to score only the criteria where there were significant differences amongst the vendors. This keeps the number of criteria under control and optimizes our use of time. 

Ben: Nice. So, in some markets, price is very important. In others, much less so. This approach can adapt.

Ken: And then, as the interview proceeds, we can spend more time discussing the issues which the customer has ranked higher in importance. We will usually explore the issues in descending order of importance so that we cover the most important issues first in case we get cut off or run out of time.
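To make that approach concrete, here is a minimal sketch of how the elicited criteria might be captured and ordered for discussion. Everything in it is hypothetical and ours, not taken from any actual win/loss tool: the field names, the 1–10 scale, and the example criteria are invented purely for illustration.

```python
# Illustrative sketch only: capturing decision criteria elicited in a
# win/loss interview and ordering them for discussion. The field names,
# scale, and example criteria below are hypothetical.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    importance: int        # respondent's 1-10 importance score
    differentiated: bool   # did the vendors differ significantly on this?


# Criteria as the respondent named (or confirmed, when probed) them,
# scored by the respondent, not presumed in advance by the interviewer.
criteria = [
    Criterion("integration with existing stack", importance=9, differentiated=True),
    Criterion("price", importance=4, differentiated=True),
    Terion := Criterion("vendor support responsiveness", importance=7, differentiated=True),
    Criterion("brand reputation", importance=6, differentiated=False),
]

# Keep only criteria where vendors actually differed, and explore them in
# descending importance so the biggest issues are covered first.
discussion_order = sorted(
    (c for c in criteria if c.differentiated),
    key=lambda c: c.importance,
    reverse=True,
)

for c in discussion_order:
    print(f"{c.importance:>2}  {c.name}")
```

The point of the sketch is the shape of the data, not the code: the criteria and their weights come from the respondent, so a commoditized market and a young feature-driven market will naturally produce different lists rather than being forced into the same preset categories.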

Ben: Now, you said there were two things to get right. I get that you need to identify all the issues and get their relative importance. What’s the second thing?

Ken: Statistical significance. If you do, say, a dozen or more interviews, you will unearth 10, 20, or maybe even more issues that you can start to analyze across interviews. Some issues will be strengths for you, some weaknesses. But not all issues will have equal statistical significance. Those which many customers talk about are, all other things being equal, more significant than those which just one or two have mentioned.

Ben: That makes sense, but how do you quantify it?

Ken: We use Student’s t-test to determine statistical confidence in findings. Many years ago, Richard Case hired a statistician at Stanford Research Institute to develop the model and code we use to determine the statistical significance of scores in a win/loss study. It’s integrated into our analytics and reporting, so it’s easy to distinguish issues which are statistically significant—and therefore deserve serious attention—from those which are merely anecdotal.

Ben: This sounds complicated. Don’t you find that people’s eyes glaze over when you get into this?

Ken: You don’t need to know how it works in order to benefit from it. I think everyone understands right away that the importance of an issue depends on both how much it impacts a customer and how many customers are impacted.
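The model Ken describes isn’t public, but for readers who want to see the basic idea, here is a minimal sketch of how a Student’s t-test can separate a significant issue from an anecdotal one. The scores, the neutral midpoint, and the threshold are all invented for illustration; this is not the actual SRI-derived model.

```python
# Illustrative sketch only: a one-sample Student's t-test checking whether
# an issue's scores across interviews differ meaningfully from a neutral
# midpoint. All values below are hypothetical.
from scipy import stats

# Hypothetical 1-10 scores for one issue, one score per interview.
issue_scores = [8, 7, 9, 6, 8, 7, 9, 8, 6, 7, 8, 9]

NEUTRAL_MIDPOINT = 5.5  # center of a 1-10 scale
ALPHA = 0.05            # conventional significance threshold

t_stat, p_value = stats.ttest_1samp(issue_scores, NEUTRAL_MIDPOINT)

if p_value < ALPHA:
    print(f"Statistically significant (t={t_stat:.2f}, p={p_value:.4f})")
else:
    print(f"Merely anecdotal so far (t={t_stat:.2f}, p={p_value:.4f})")
```

An issue scored highly by many respondents produces a small p-value and earns serious attention; the same score from only one or two respondents does not, which is exactly the frequency-times-impact intuition Ken describes.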

Ben: Well, that’s certainly my experience. Once you start presenting findings in this way, your recommendations take on more weight. They’re not just another opinion; they stand on solid ground.

Ken: Yes, and they have to be, if you are going to have any influence on the direction of the company.

Ben: Great stuff, Ken. I enjoyed our chat and look forward to next time.
