“Let’s talk about how I can use GPT to help me with competitive intelligence.” That is how I started the conversation, the way every OpenAI ChatGPT session begins.
If you haven’t tried ChatGPT yourself, stop what you are doing right now, go to www.openai.com, and sign up for a free account for this generative AI tool. This is the breakthrough chatbot you’ve read about in the news, the one that has been trained on just about everything that can be learned from the web.
It’s exciting and a little scary to have this powerful new technology at your fingertips. AI this smart was science fiction just a few months ago. It can outscore most humans on legal bar exams, write and debug computer programs, compose poetry, even pen a blog or two (although not this one). So, why not go to the source and find out from ChatGPT itself what can be done?
According to ChatGPT, there’s very little it can’t handle: data collection and monitoring, sentiment analysis, competitor profiling, trend analysis, scenario planning, competitive benchmarking, content analysis. It’s got all the bases covered.
But how about win/loss, my specialty? “Could GPT actually conduct a win/loss interview,” I asked, somewhat incredulously. “How would you do it?”
Oh, yes, it could be done: “GPT could potentially conduct a win/loss interview using its advanced natural language understanding and generation capabilities, although it would require careful setup and integration with appropriate communication channels.”
It went on to list out the steps it would follow.
ChatGPT cautioned me, though, that “while GPT can provide valuable insights through conducting win/loss interviews, it may not be able to fully replicate the nuances of human conversation or pick up on certain non-verbal cues. Nevertheless, using GPT for win/loss interviews can still offer a scalable and efficient way to collect valuable feedback and insights from customers.”
I asked: What strategies could ChatGPT deploy to keep the respondent from losing interest, to move quickly through mundane matters, and to spend more time drilling down into interesting and important issues?
Always a willing servant, ChatGPT came up with lots of suggestions: it would personalize the conversation, adapt the line of questioning to the interviewee’s responses, use empathetic language, acknowledge the respondent’s feelings or concerns, express “genuine interest” in their experience, and deploy all the standard interviewing techniques of open-ended questions, active listening, and so on.
So, I asked it to go ahead and use all of these strategies and to please compose a win/loss interview transcript to find out why a customer bought an Audi A6. I gave it instructions to conduct the interview following exactly the same process I use for my win/loss interviews.
Out popped a finished transcript, with an entirely plausible set of competitive car models evaluated, competitive criteria considered, importance and vendor ratings, rationales for the ratings, and so on. I then asked for and received briefs for BMW, Mercedes-Benz, and Lexus with recommendations on actions each could take based on the interview.
It was absolutely jaw-dropping. It took me only 20 minutes to get this far, without writing a single line of computer code.
So, is it time to hang up my gloves and call it a day? What’s left for the human practitioner?
Well, as of April 2023, anyway, ChatGPT can’t interview human subjects with the spoken word, so “conversations” are all text chats in a window. Right away, I think that disqualifies it as a technology for win/loss interviewing in the way that I do it today. It’s already hard enough to get respondents to agree to a win/loss interview on the phone.
Asking them to do it through a chat window with an AI bot might have some novelty appeal, but it sounds like a recipe for disaster: low response rates, low completion rates, and, at best, short, pat answers to questions rather than the detailed, emotionally rich responses you should expect in a win/loss interview.
Someday, perhaps not too far off in the future, there will be very humanlike AI avatars on the front end, but for now, we are stuck with text.
In the meanwhile, I have been investigating how well ChatGPT could help with the day-to-day work of win/loss. Since it’s so good at summarizing, setting it to work on analyzing a win/loss interview transcript was a natural test. So, I fed it material drawn from PSP’s sample win/loss interview transcript.
Its capabilities here were once again astonishing: it (mostly) accurately stated what the interview was about and summarized the reasons why the winner won and the loser lost. However, here, too, I soon ran into real-world limitations that you need to know about:
Unfortunately, the versions of ChatGPT you can access on OpenAI’s website limit the length of the document you can paste into the chat window for summarization to about 2,700 words, roughly 20 minutes of conversation.
My real-world interviews typically run two to three times that length, depending on how long and how fast the respondent talks.
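One practical workaround for that paste limit is to split a long transcript into chunks that each fit under the ceiling, summarize each chunk separately, and then combine the partial summaries in a final pass. Here is a minimal sketch in Python; the 2,700-word default and the paragraph-based splitting are my own illustrative assumptions, not anything ChatGPT prescribes:

```python
def chunk_transcript(text, max_words=2700):
    """Split a transcript into chunks of at most max_words words,
    breaking only on blank-line paragraph boundaries so that no
    question-and-answer exchange is cut off mid-thought."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        # Flush the running chunk before this paragraph would overflow it.
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each chunk can then be pasted into the chat window (or sent through the API) with a prompt along the lines of “Summarize this portion of a win/loss interview,” keeping in mind that a summary of summaries can compound the accuracy problems described below.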
“Confabulation” is the not-so-small problem of the GPT software making things up. Things seem to be going fine, and then all of a sudden it serves up a whopper.
In my win/loss interview summarization tests, this showed up as ChatGPT including in its list of issues things that were never said, or that were mentioned but ruled out elsewhere in the discussion as significant.
The problem is that these errors sound completely plausible and are delivered in the same confident tone as the correct information. So you must check its work and not let yourself be bamboozled!
As smart as ChatGPT seems to be, it can miss points that require more inferencing than it can handle. In my testing, for example, ChatGPT stated that the customer did not evaluate references, but this was not true: they had assessed customer references as part of a Gartner consultation.
This logical connection was simply missed. And when I asked what mistakes one of the vendors made, it couldn’t answer concretely, so instead it gave me a list of generic mistakes that vendors could make.
Note that ChatGPT 4.0, OpenAI’s latest version, is supposed to be smarter than ChatGPT 3.5 in this regard, and my testing showed this to be true. (ChatGPT 3.5 is free; you have to pay $20/month for limited access to ChatGPT 4.0.) ChatGPT 4.0 did a better job of finding mistakes that vendors made, for example. But it also seemed much more aggressive and possibly more prone to confabulation.
OpenAI warns you that conversations with ChatGPT are not private, so you can’t use it with confidential materials such as raw win/loss interview transcripts. Redacted and anonymized transcripts should be safer, but I have not used it with real client materials.
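A first pass at that kind of redaction can be automated by replacing known identifiers with a neutral placeholder before a transcript goes anywhere near the chat window. The sketch below assumes you maintain your own list of names and companies to scrub; real redaction still needs a human review pass, since a word list will miss indirect identifiers:

```python
import re

def redact(text, terms, placeholder="[REDACTED]"):
    """Replace each listed name or company with a placeholder,
    matching case-insensitively and on whole words only.
    Longer terms are handled first so that multi-word names
    (e.g. a person's full name) are caught intact."""
    for term in sorted(terms, key=len, reverse=True):
        pattern = r"\b" + re.escape(term) + r"\b"
        text = re.sub(pattern, placeholder, text, flags=re.IGNORECASE)
    return text
```

For example, `redact("Acme beat BetaCorp.", ["Acme", "BetaCorp"])` returns `"[REDACTED] beat [REDACTED]."` — a blunt instrument, but enough to keep obvious vendor and respondent names out of a third-party chat log.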
You have to marvel at the power of these AI technologies and their accelerated pace of improvement. It’s hard to deny that they will have a transformative impact on all knowledge workers, win/loss practitioners included.
But big-ticket commercial win/loss, especially because it depends on asking a busy business manager for time to discuss potentially sensitive matters, is probably one of the least appropriate targets for complete automation.
Robotic or overly scripted interviews are a well-known pitfall for human interviewers already, and while ChatGPT isn’t the stereotypical robot that talks like a robot, it still comes across as a facsimile, with an odd yet pervasive numbness, a lack of feeling, that will not engage people the way a good interview must.
It makes much more sense to use AI to super-charge conventional surveys and push them beyond their current limits as confirmatory research tools. But make no mistake, something big has happened and there is no going back.