25 Mar

Continuing our competitive marketing series of blogs on the pitfalls and best practices of win/loss, today we’ll examine the pitfall of canned, robotic, or overly scripted interviews. This blog was originally published here.

Ben: So far, Ken, you and I have talked about some of the organizational and programmatic issues of doing win/loss. Any of those can sink a program. Our topic today has to do with how you do the interview itself--what you say to the respondent when you have them on the line.

Ken: Yes, and there are books on the topic of how to interview people. It’s a deep and really interesting area. Even in a commercial win/loss interview, you’ve got to involve your subject emotionally to elicit meaningful and truthful responses. And here, the main way you can set yourself up for failure is with an overly scripted interview, or what I call a “robotic interview.”

Ben: You mean when the interviewer mechanically reads off a list of prepared questions?

Ken: Yes, that’s it. We’ve all been on the receiving end of a phone survey of this type. The moment you realize that this is what’s happening, all you can think about is, “When is this going to be over?” If it’s just a few questions, then I suppose you might get away with it, but for a 30–60-minute interview, like the ones we do, it is intolerable for the respondent, especially with the senior-level decision-makers we are interviewing. It’s painful for the interviewer, too, because you can tell that the poor respondent is bored and is trying to get away.

Ben: Are you speaking from experience here, Ken?

Ken: Sure, of course! Early in my career, when I didn’t know better, I tried interviewing customers this way. It’s natural that you prepare some questions and work through your list. But there are techniques you can use to prevent this from ruining an interview. The challenge, though, is that using these techniques does require some domain knowledge about the topic of the interview. You can’t just farm it out to non-experts who don’t understand what they are asking about.

Ben: Richard Case told me they tried that when he was at Gartner--using non-experts to do win/loss interviews.

Ken: That’s right. And it didn’t work out. They tried to use junior-level people to do scripted interviews instead of having the senior Gartner analysts do it. Richard listened in to hear how those interviews went, and he found that the respondents got angry.

Ben: Angry?

Ken: Yes, respondents got frustrated because they would say something really interesting or important, but these comments would go over the interviewers’ heads. The interviewers would simply move on to the next question. I mean, you could say, “The sky is green” and the interviewer would say, “Interesting. My next question is…” Dumb. Not what you expect if you take a call from Gartner.

Ben: So, what are some pointers for avoiding this trap?

Ken: We structure our customer interviews in three parts. The first is an introduction, which establishes the context of the purchase. It’s an icebreaker and gets you some basic information. It’s okay to use some canned questions here, like, “What motivated you to look into solutions in this space?” and “What was your role in the selection process?” These questions are open-ended, so they get the respondent talking and opening up, and they give you enough background information to intelligently start discussing the main thing--how they made their decision. And that brings us to the second part: the detailed discussion of the decision itself.

Ben: So how do you do that without canned questions?

Ken: The key here is to focus on finding out the criteria used to make the decision and how the respondent judged their options vis-à-vis those criteria. I come prepared with a long list of potential criteria based on my knowledge of the market and briefs from my client. But before I mention any of these, I want to hear from respondents unprompted, so that I don’t put words in their mouths. So, to hear it in their own words, I simply ask them, “Why did you go with X?” Generally speaking, they might say that the product was easier to use, faster to deploy, less expensive, already the standard in the company, etc. I note all of these down. Then I ask whether the other vendors had any important advantages, even though they didn’t go with them, and here I might hear about some capabilities that were important, even if not compelling enough to sway the decision.

Ben: Ah, so now you have the list of criteria for the decision?

Ken: Most of them, but usually not all. I next ask about unmentioned criteria that often do matter. Sometimes, the respondent just forgot about them. But, just as important, I use this to probe for issues that my client thinks should have been important--usually their secret-sauce differentiators. If the respondent doesn’t mention these or thinks they aren’t important, I find out why--of course, my clients want to know. After that, I set aside things that didn’t matter in the decision or where there weren’t any significant differences. This gets us to focus on the handful of issues that stood out in the end: the reasons for the decision.

Ben: So, once you have that list of issues, what’s next?

Ken: I’ll then ask the respondent for an importance rating for each of those criteria, and then for vendor ratings on each one. If there is a big difference in the ratings on some criterion, I will drill down and ask why, how they came to that conclusion, what they saw, what the weaker vendor said or did when confronted with that challenge, etc. These comments are generally the most important part of the interview because they reveal exactly why the client won or lost and what actions might counter a weakness or sustain a strength.

Ben: So, it’s still a structured conversation, but this is how you avoid reading off a list of questions all the way through.

Ken: Yes. The key thing really is that you listen carefully to what is being said and adjust the conversation as you go, so that you don’t ask for a detailed rationale on a topic that has already been discussed. It can be very hard to listen carefully and remember what you’ve covered while you are thinking about your next question, so this takes some effort and practice. Sometimes, respondents get things mixed up, misspeak, or make seemingly contradictory statements. You have to catch these and ask for clarification.

Ben: How about questions about the sales teams? How do you cover those?

Ken: Two ways. First, I will always ask for an importance rating of the sales engagement and check whether there were significant differences in the responsiveness, professionalism, or expertise of the teams. Answers can vary widely, and sometimes there are important differences that impact the sale. Other times, the respondent will say that both teams were equally good. If one team does something that really annoys the customer--rare, but it does happen--then it will usually come out here. Some respondents don’t feel comfortable saying bad things about people, so I can flip this and ask what stood out as especially strong for one sales team or another, getting them to talk about what they liked rather than what they didn’t like. Of course, a blind interview, where the respondent doesn’t know who my client is, makes this more reliable.

Ben: And the second way?

Ken: Well, after I go through all the criteria and get the vendor ratings and all that done, I ask some concluding questions--the third part of the interview. More importantly, though, this is where I like to further explore issues where I sensed a reluctance to open up earlier in the discussion. These are typically product or support issues, sales tactics, pricing, etc. Probing sensitive issues is important, but there is a risk that it causes your respondent to clam up or even want to end the interview early. So, it’s best to save them for the end. Still, you can word these questions to minimize that possibility. For example, “Would you take bids from these vendors again?” is a non-threatening invitation for the respondent to open up about things that bothered them. And “What could the sales teams have done to make your decision-making process easier?” is a positive way to get them to complain about things the salespeople did or didn’t do. I always try to get them to contrast one vendor with another so that we get concrete examples of what is good and what is bad. The more concrete the examples, the more valuable the report will be to the client as a basis for action.

Ben: Thanks, Ken. Looking forward to our next chat!
