29 Jan

Fintech expert Dan Hubscher talks with PSP's Richard Case about why win/loss interviews must be conducted with customers directly, and why you can't just ask the reps to tell you why you won or lost. This interview was originally published here.

Dan:   Of course, the focus of PSP’s method is getting win/loss insight by talking directly to customers. What’s your view on getting input from a client’s sales reps?

Richard:   The reps have a lot to say that the interview sponsor needs to hear, but it’s dangerous to rely on the reps’ input exclusively.  Do get their input, of course, about what the company can do to improve, but don’t ask them to read the mind of the customer.  You need to ask the customers directly about both wins and losses.

Dan:   I understand you did a study at Gartner that compared reps’ vs. customers’ opinions on won and lost deals.   Can you share what you found?

Richard:   I was doing research for a very large client, one of the top three computer and software companies at the time.  We had interviewed 64 customers, all blind and anonymous: the people we interviewed didn’t know who we were working for, they just knew it was Gartner.  It was global, with companies in the US, Canada, EMEA, and APAC.  32 wins and 32 losses, so we had a good data set.  We had produced verbatim transcripts for all the interviews and had provided a detailed, action-oriented analysis.  The client’s management wanted to know what the sales reps had to say about these same 64 deals.  So, we proceeded to interview the 64 sales teams, sometimes the lead rep and sometimes multiple members of the team, and asked them to put themselves in the shoes of the customer and answer our questions as they felt the customer would.  We then compared their responses to the actual customers’ responses.

Dan:   What did you learn?

Richard:   It was very interesting. The results were very different for wins and losses.

Losing reps’ responses did not match the customers’ feedback at all.  They didn’t know their buyers’ decision criteria, or when they did, they didn’t have a clear sense of their relative importance or priorities. And they didn’t know how the customers rated the vendors.  So, they lost the business.  As a rule, the losing reps blamed price or product deficiencies, never their own sales skills.  They blamed everyone else in the company for the loss.

Winning sales reps did match the customers’ sense of which issues mattered most and came much closer to the customers’ ratings of the vendors.  However, winning reps overrated the importance of price and generally overestimated the strength of their competition.  And, as a rule, they rated their own sales skills very highly and thought those skills mattered more to the outcome than the customers did. Human nature.

As an aside, before I joined Gartner, when I was director of competitive intelligence at Digital Equipment Corporation, I interviewed lots of reps to determine the effectiveness of my competitive intelligence group.  Based on their input, my team was totally unnecessary and a waste of money.  So was every other corporate group in the company, except sales.

Dan:   Interesting.  Human nature, indeed.  What findings did you present to your client?

Richard:  If you depend on losing sales reps to give you input, you won’t learn the real reasons for losing.  If you implement changes based on reps’ misinformation, you’ll waste tons of money.   Listening to the winning reps is much more informative, but you need to discount what they say about pricing and the strength of the competitors.

Dan:   Did you reveal to the rep or the client who the customer or person interviewed was?

Richard:   No, we shared the findings, but we redacted identifying information from the verbatim interview transcripts and didn’t identify the customers in our report. Withholding that information gives the respondents a level of confidentiality.

Dan:   Did you reveal your insights to the reps at that time?

Richard:   Yes, they got all the anonymized transcripts and the analysis. Both the reps and their managers. Also, we did a short presentation at quarterly sales meetings.

Dan:   How did the reps react?

Richard:   Winning reps were enthusiastic supporters of the program and offered additional contacts for us to interview, both wins and losses.  Losing reps were also supportive, but not quite as much.  Management decided they needed both rep input and interviews conducted by an outside party.

Dan:   As you’ve continued to develop win/loss analysis techniques across thousands of industry interviews, how do the results compare to that early study at Gartner? Have you discovered fundamentally new truths, or have those fundamental truths remained largely the same?

Richard:  The lesson about not relying solely on losing reps’ input has held up.  The typical reasons my clients lose deals are still the same.  And even winning sales reps don’t know the whole story.  Human nature hasn’t changed.

Richard:  Dan, have you seen this kind of thing in FinTech?

Dan: Yes, of course.  I can think of a FinTech product whose name had two distinctly opposing connotations: one connotation was “winning”; the other sounded similar to a specific form of “cheating” that can happen in financial markets.  The latter often came up as a sales objection, particularly cited by customers to the reps in lost deals, but was notably infrequent in won deals.  So the internal debate centered on one question: was the product name a real barrier, or an easy excuse from the prospect masking the real reason for a loss?  As I was on the inside, I never found the answer, and we hadn’t conducted anonymous third-party interviews to find the truth.  In hindsight, that would have been revealing.

Richard: Very interesting.  Product names are important.

Dan:  Well, that’s it for today. Thanks for sharing your experience with us, Richard.  I look forward to our next talk.
