Continuing our series of blog posts on the pitfalls and best practices of win/loss, today we’ll examine the best practice of measuring the return on investment of a win/loss program. This post was originally published here.
Ben: I’m a big believer in measuring ROI of a program, but I think this one is going to be a challenge. You know the saying, “Success has a thousand fathers, while failure is an orphan?” Call me a cynic, but even if a win/loss program does a good job and sales go up, how can I take credit for that? And how would I measure my contribution?
Ken: Well, I totally agree with you that win/loss doesn’t accomplish anything operating in a vacuum! Unless it influences actions and decisions taken by the salespeople, the marketing team, and the product managers, then it’s pointless. But if a company’s overall competitive performance can be shown to improve at the time that the effects of a win/loss initiative kick in, then you can assess its contribution. And although it can be difficult to credibly measure, there are ways to do it that we can talk about. Surprisingly, though, I find that usually the ROI of a win/loss program is not measured at all!
Ben: I can relate to that, but you’d think that most companies would require some sort of financial justification to start a program.
Ken: In my experience, that’s not the case. Usually, a win/loss program initiative is driven by the vision of a senior leader in a company who simply believes that the intelligence you get from running a win/loss program is necessary for running a successful business. Or, a series of competitive losses will ring alarm bells, and someone gets assigned to do win/loss. That’s how these things start. The problem is that once the crisis has passed, or when the company fails to meet growth objectives for whatever reason, or when there are leadership changes, then the business value of win/loss along with every other corporate program is questioned and challenged. This is when win/loss programs lacking a rigorous ROI measurement system become highly vulnerable to cutbacks or cancellation.
Ben: Yeah, a change in priorities or a financial shortfall could definitely impact a win/loss initiative.
Ken: And when that happens, finance has to look for ways to reduce operating costs, so this is to be expected. However, if you can show that your program is instrumental to delivering on revenue goals, then you are in much better shape. Still, this is surprisingly hard to do.
Ben: So, what are the key things? Where would you start?
Ken: Start by measuring baseline competitive performance before or at least as soon as you introduce the win/loss program! Without that, it’s impossible to measure how much the program has contributed to resulting improvements. Not doing this can be a big mistake. And I am speaking from personal experience here. The first time I ran a win/loss program I didn’t bother. And everything was fine until we got a new CEO and he wanted numbers on everything. Which I didn’t have… So, the next time I set up a program, I was much more diligent and carefully measured the company’s competitive win rate before we started doing any win/loss interviews. It takes a quarter or two to get a new program going so you do have time to gather those baseline metrics. You just have to know to do it.
Ben: What are the key metrics?
Ken: From a strategic perspective, it’s market share. This is the big-picture number. But most companies rely on third parties to measure it, and even then it’s updated only once a year, at most. From an operational perspective, you need something you can measure yourself and keep tabs on at least quarterly. For me, the key metric is the competitive win rate: simply the number of competitive wins divided by the total number of competitive deals.
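The metric Ken describes can be sketched in a few lines. This is a minimal illustration, not any real CRM export: the field names (`competitive`, `outcome`) and record shape are assumptions made up for the example.

```python
# Minimal sketch of the competitive win rate: wins divided by total
# competitive deals. The dict fields here are illustrative, not a real schema.

def competitive_win_rate(deals):
    """Return wins / total over COMPETITIVE deals only."""
    competitive = [d for d in deals if d["competitive"]]
    if not competitive:
        return 0.0
    wins = sum(1 for d in competitive if d["outcome"] == "won")
    return wins / len(competitive)

deals = [
    {"competitive": True,  "outcome": "won"},
    {"competitive": True,  "outcome": "lost"},
    {"competitive": False, "outcome": "won"},   # renewal: no contest, excluded
    {"competitive": True,  "outcome": "won"},
]
print(competitive_win_rate(deals))  # 2 of 3 competitive deals won
```

Note that the renewal is dropped before the division, which is the point Ken makes next: only contested deals belong in either the numerator or the denominator.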
Ben: Sounds simple. Why did you say it’s hard?
Ken: It’s harder than you might think, for two reasons. First, you need to focus exclusively on COMPETITIVE deals. So, you should exclude renewals, add-ons, expansions, etc., where there is no competitive contest. If your CRM system tracks this, then great. In my experience, very few do, and even when they appear to, the data can be uneven depending on how the salespeople fill it in. A good proxy is “new name account,” which is often more reliably tracked in CRM systems. The point is that we don’t want just any deal counted in the competitive win rate, just the ones where a competitor tried to take it away!
Ben: OK, competitive deals only. What is the other reason it’s hard?
Ken: You only want to count competitive losses, that is, deals lost to competitors. Now, here’s where it gets tricky: what do you do with deals where the team DIDN’T compete, where they qualified out early because they realized it wasn’t a good fit? And, presumably, someone else won… What about those? Aren’t they competitive losses? My answer here is NO: you should exclude early qual-outs from the count of total competitive deals, because you should only be measuring your competitive win rate on deals that make business sense. You don’t want salespeople chasing deals that don’t make sense for you or the customer. Part of the sales team’s job is to drop those early.
Ben: So how do you count those in the CRM system?
Ken: Every organization has its own spin on this, but most CRM systems have a sales stage model in which a deal moves through a series of steps, from the first level of prospecting, through qualification, to closed (either won or lost). You only want to count those lost deals that made it to the “qualified” stage. Deals that jumped over the “qualified” stage to closed should be excluded. Unfortunately, this is not easy to count with Salesforce. You need a Salesforce guru to help you set up the report. It can be done, so don’t take “no” for an answer if that’s what you’re told at first. It’s critical for measuring the competitive win rate in a meaningful way.
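The counting rule Ken just described can be sketched as a filter: include every win, but include a loss only if the deal reached the “qualified” stage. The stage names and record layout below are illustrative assumptions, not Salesforce’s actual schema.

```python
# Hedged sketch of the rule above: count wins, and count losses only if the
# deal passed through "qualified". Field names are made up for illustration.

def reached_qualified(deal):
    # In a real CRM report this would come from stage history; here we just
    # check whether "qualified" appears in the deal's recorded stages.
    return "qualified" in deal["stage_history"]

def countable_competitive_deals(deals):
    """Wins, plus only those losses that made it past qualification."""
    return [
        d for d in deals
        if d["outcome"] == "won"
        or (d["outcome"] == "lost" and reached_qualified(d))
    ]

pipeline = [
    {"outcome": "won",  "stage_history": ["prospecting", "qualified", "closed"]},
    {"outcome": "lost", "stage_history": ["prospecting", "qualified", "closed"]},
    {"outcome": "lost", "stage_history": ["prospecting", "closed"]},  # early qual-out: excluded
]
print(len(countable_competitive_deals(pipeline)))  # 2 deals count toward win rate
```

The early qual-out drops out of the denominator entirely, so it neither helps nor hurts the measured win rate, which is exactly the behavior Ken argues for.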
Ben: So, we divide the competitive wins by the total number of competitive deals and get the competitive win-rate. Then what?
Ken: Assuming that’s your baseline performance, you now have a way to measure the impact of competitive improvement. For example, say your baseline win rate is 60% (you win 6 of 10 competitive deals), which is not unreasonable. Then, after running the win/loss program for a while, you raise that to 70% (7 of 10 competitive deals won). Again, a very reasonable assumption. To keep the numbers simple, if your total revenue at stake in competitive deals is $100M, then the improvement was worth about $10M. Now you have the basis for an ROI calculation, depending on how much the win/loss program costs to run. It’s very easy to financially justify a program that delivers $10M of sales for a few hundred thousand dollars.
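The arithmetic above works out as follows. The revenue and rate figures are the blog’s own examples; the $300K program cost is an assumed stand-in for “a few hundred thousand dollars.”

```python
# Worked version of Ken's example. Only the program cost is an assumption;
# the other figures come straight from the discussion above.
revenue_at_stake = 100_000_000   # total revenue in competitive deals
baseline_rate = 0.60             # win 6 of 10 competitive deals
improved_rate = 0.70             # after running the win/loss program

incremental_revenue = (improved_rate - baseline_rate) * revenue_at_stake
program_cost = 300_000           # assumed: "a few hundred thousand dollars"
roi_multiple = (incremental_revenue - program_cost) / program_cost

print(f"${incremental_revenue / 1e6:.0f}M incremental revenue")
```

A 10-point lift on $100M at stake yields roughly $10M of incremental revenue against a few hundred thousand in program cost, which is the justification Ken is pointing at.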
Ben: Any final gotchas?
Ken: You have to be honest with the numbers. Only include revenue from competitive deals. More mature businesses will naturally have less competition and more renewals, though if competitors start siphoning off your renewals business, that can change. So a win/loss program can be financially justified throughout the business lifecycle, but it will usually have the greatest impact while the market is still growing and attracting a feeding frenzy of competition, with companies vying to be the market-share leader before things settle down.