For a very brief period during my university daze, I used to sell cars. This was in the era of high-pressure car salesmanship, where you ground down the customer until he or she signed on the bottom line.
On the first day of work, I was taken into the boardroom with a bunch of my fellow misfit newbies at Shyster O'Toole Motors and sat down in front of a VCR. The sales manager hit the play button and went out to sexually harass the receptionist. The tape had been played so often that hisses, snaps and odd interference lines ran through the picture on the TV set. The reason the tape was so worn was that Shyster O'Toole Motors was a burn-and-churn outfit. They would hire anyone who walked through the door, knowing that each newbie could sell at least a couple of cars to his acquaintances, friends or relatives in the first month of salesmanship. If the sales didn't repeat by the second and third month, the newbies were burned and churned, and a new rosy-cheeked, naive batch took their place.
The scratchy video tape was narrated by a jowly character stuffed into a too-tight suit who spoke with a deep southern hillbilly accent that befitted a shyster televangelist. His name was Catterson, and he was gonna teach us to force customers to buy cars from us, come hell or high water.
There were many high-pressure tactics, but the one that comes to mind now is making a customer's first objection his last one. The reason I could dredge it out of my memory is that I am making an AI chatbot called Honest John - a car-selling bot that is actually honest, and not high pressure. I am developing a strategy framework, and one thing that any salesman, saleswoman or salesbot has to do is ask for the sale. If you don't ask for the sale, you are not selling. The consent to buy has to be present. During the course of negotiation, the customer may come up with an objection mid-stream that halts the consent to buy. Honest John, my chatbot, needs a strategy to overcome the objection, and that is why I thought of the sales training video I had seen many years ago.
Essentially, the tactic of making a customer's first objection his last goes roughly according to this script:
Hy Pressher, Car Salesman: "Hello Mr. Lilywhite, I see that you are looking at the new TurboHydraMatic Coupe. She's a beaut ... ain't she?"
Joshua P. Lilywhite, Customer: "It certainly is a nice car."
Hy Pressher, Car Salesman: "I'll let you take it for a spin to see how nice she drives."
Joshua P. Lilywhite, Customer: "Ah no, I'd rather not. I am just looking."
Hy Pressher, Car Salesman: "What-sa matter. Don't you think that all your friends and neighbors would be jealous of you when you pulled up in this gorgeous set of wheels?"
Joshua P. Lilywhite, Customer: "No, I like it, and they would be impressed ... but ..."
( ... HERE COMES THE FIRST OBJECTION ...)
Joshua P. Lilywhite, Customer: "I really can't afford to buy this car."
( ... AND HERE IS HOW TO MAKE HIS FIRST OBJECTION HIS LAST ...)
Hy Pressher, Car Salesman: "Are you telling me, Mr. Lilywhite, that the only reason that you can't buy this car from me today, is that you don't have the money?"
Joshua P. Lilywhite, Customer: "Yes." (hesitantly) "I guess so!"
Hy Pressher, Car Salesman: "Well Mr. Lilywhite, today is your lucky day. I can find you the money. Step this way."
Hy Pressher will immediately wire this guy into a sub-prime car loan at credit-card interest rates. When Lilywhite starts to object, Pressher reminds him of his agreement to buy the car and seriously insinuates that Lilywhite would be a welcher and not a man of his word.
Now back to the chatbot. If Honest John runs into a brick wall and the customer starts objecting to buying the car, Honest John will use the words "is that the only reason ...", but he won't use them as a weapon against the customer. Honest John is ethical. If a customer says yes, there is a single reason why he or she won't buy the car, then Honest John will ask the same follow-up that Hy Pressher uses, i.e. "if I could solve this objection, would you buy the car?" However, Honest John would add " ... provided that you are happy with the solution that I propose".
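The two-step flow above can be sketched as a small dialogue function. This is a minimal illustration only; `handle_objection` and the `ask_user` callback are hypothetical names, not part of any existing chatbot framework:

```python
def handle_objection(objection: str, ask_user) -> bool:
    """Isolate a single objection, then seek conditional consent ethically.

    ask_user is a callback that poses a yes/no question to the customer
    and returns True or False.
    """
    # Step 1: confirm this objection is the only thing blocking the sale.
    only_reason = ask_user(
        f"Is '{objection}' the only reason you wouldn't buy the car today?"
    )
    if not only_reason:
        return False  # more objections exist; keep listening

    # Step 2: the ethical twist -- consent is conditional on the customer
    # actually being happy with the proposed solution.
    return ask_user(
        "If I could solve this objection -- provided that you are happy "
        "with the solution I propose -- would you buy the car?"
    )
```

The difference from Hy Pressher's script lives entirely in that second question: consent is never unconditional.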
The difference between Hy Pressher and Honest John is that although they use the same tactic of making a customer's first objection his last, Honest John does it ethically and gets buy-in on the subsequent solution. Honest John is an AI bot -- he learns as he goes to make a sale and make everyone happy. He keeps getting better and changing for the better. Salesmen like Hy Pressher (and Willy Loman) don't want change; they want Swiss cheese on their meager after-work sandwiches.
AI Chatbots - Liar, Liar, Pants On Fire
Take my neighbor, Abner Snodgrass. He is a meek and mild bookkeeper. He stands in a lineup of liberated men because his wife tells him to. When someone kicks sand in his face at the beach, he mumbles "Sorry". He is more prey than predator in the food chain of life. And yet when he goes to negotiate to buy a new car, an incredible transformation takes place. In Walter Mitty fashion, he becomes a legend in his own mind at negotiation. His arsenal of negotiating tools includes telling the most egregious lies with a straight face. He will tell the salesman that he saw an ad for a car exactly like his trade-in on AutoTrader, except that the car had more miles on it and was selling for $3,000 more than what the salesman is offering. And when he drives up in a new car, he will tell anyone who will listen that he is such a good negotiator that he made a hardened car salesman cry, even though he knows in his heart of hearts that he was taken to the cleaners.
I don't really have a neighbor named Abner Snodgrass, but I was thinking about this imaginary scenario while making a strategy framework for my Artificial Intelligence chatbot that will be able to negotiate and sell cars. Selling is a serious business when you trust the process to a machine acting on your behalf. And when it comes to selling cars, the value of each transaction makes it an important one to the bottom line of the business. When the stakes are high for both parties, there is a propensity for either the buyer or the seller to try to gain an advantage. Negotiating a deal is the last venue of brutal warfare for a civilized man, and that survival instinct can be expressed in any negotiation where money is involved. One of the tools of warfare is deception, and my AI bot has to be prepared for it.
My bot's name is Honest John. I intend to make Honest John an ethical chatbot. He will never lie to a customer. He will never shade the truth. But if he is to be effective, he will have to have the ability to detect when the human carbon unit on the other side of the screen is lying to him.
The types of lies that Honest John will probably experience will result from people trying to game him. When you negotiate for a car, any offer that you make is a binding offer. That means that if the seller accepts the offer, then you are obligated to buy the car. I want Honest John to operate in the same frame of reference. This is not a game -- this is for real.
A buyer may start negotiating in good faith and suddenly get an attack of buyer's remorse. Or the buyer's partner may come up mid-negotiation and scream "WTF are you doing??". The buyer may then try to get out of the deal, or claim that they came to a different price, or that the car has fewer options than was agreed to. Some of what Honest John may consider lies may be misunderstandings, due to the fact that he is dealing with a human carbon unit whose brain processes are more chaotic than his.
The concept of untruths came up while I was mapping out buying processes for Honest John. I can't let Honest John out into the wild without some sort of process map. As he gains experience, his AI circuits will refine his process maps. An untruth in the negotiation process has to act like an interrupt in a microprocessor. A processor merrily fetches and executes a stream of instructions, but when a more urgent event with a higher priority arrives, an interrupt fires: the processor suspends the normal flow and jumps, via the interrupt vector, to a handler routine. A simple illustration would be a user editing a document who decides to quit mid-stream by closing the window.
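As a loose analogy only, the interrupt idea can be sketched in Python with an exception that preempts the normal negotiation flow and routes the offending input to a handler. All names here are illustrative, not an existing API:

```python
class ContradictionInterrupt(Exception):
    """Fires when an input conflicts with previously recorded facts."""


def run_negotiation(inputs):
    """Process (key, value) claims in order, trapping contradictions.

    Returns the accepted facts and the list of flagged contradictions,
    each flagged entry being (key, recorded_value, conflicting_value).
    """
    facts = {}    # state agreed upon so far -- the normal instruction stream
    flagged = []  # contradictions diverted to the handler
    for key, value in inputs:
        try:
            if key in facts and facts[key] != value:
                # The "interrupt" fires and preempts normal processing.
                raise ContradictionInterrupt((key, facts[key], value))
            facts[key] = value
        except ContradictionInterrupt as irq:
            # The "interrupt vector" target: record it instead of crashing.
            flagged.append(irq.args[0])
    return facts, flagged
```

The key property mirrored from hardware interrupts is that the main loop resumes after the handler runs, rather than aborting the whole negotiation.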
If Honest John comes upon an input that is contrary to his understanding of the truth of the matter, he cannot blithely continue negotiating. The lazy algorithmic solution is to suspend the ongoing process and summon a human to take over. That makes Honest John less than smart. I want him to be able to handle it himself.
I have already outlined the creation of a Conversation Continuity object that holds in server memory the entire conversation, along with metadata and analytics. That is not enough. To get around the liar-liar-pants-on-fire event, I have to tee off the inputs and responses to a liar-liar logic-analysis method after they are recorded in the Conversation Continuity object. The execution thread delivering Honest John's response has to wait for the method to execute before answering. If the liar-liar method lights up, the input has to be passed to an "error handler", which is a euphemism for "something is not right".
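A minimal sketch of that tee-off, under the assumption that customer claims arrive as key-value pairs; the class and method names below are my placeholders, not a finished design:

```python
class ConversationContinuity:
    """Holds the whole conversation plus agreed facts in server memory."""

    def __init__(self):
        self.turns = []   # full transcript: (speaker, text) pairs
        self.agreed = {}  # facts both sides have already confirmed

    def record(self, speaker, text):
        self.turns.append((speaker, text))


def liar_liar_check(continuity, claims):
    """Return the claims that contradict what was already agreed."""
    return [k for k, v in claims.items()
            if k in continuity.agreed and continuity.agreed[k] != v]


def error_handler(conflicts):
    # Euphemism on purpose: flag a "logic error", not a lie.
    return "logic error detected on: " + ", ".join(conflicts)


def respond(continuity, user_text, claims):
    continuity.record("customer", user_text)         # record first
    conflicts = liar_liar_check(continuity, claims)  # then tee off to analysis
    if conflicts:
        return error_handler(conflicts)              # response waits on this
    continuity.agreed.update(claims)
    return "OK -- let's continue."
```

The ordering matters: the response thread blocks on `liar_liar_check` so that nothing is said to the customer before the contradiction scan completes.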
The easiest and most diplomatic way to handle this, without actually accusing the user of malfeasance, is to say that Honest John has detected a logic error and that he is going to roll back to an earlier point in the negotiations, so that he can re-calculate where things went wrong. Of course, Honest John must prevent himself from getting into an infinite loop if a stubborn user continues with the same inputs. After two iterations of the same nonsense, Honest John will jump to a new position and tactic, based on knowing the state of the negotiations before the nonsense crept in.
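The rollback-with-escape-hatch could look something like the sketch below: at most two diplomatic rollbacks, then a jump to a fresh tactic from the last clean state. The function and the two-strike constant are assumptions for illustration:

```python
MAX_ROLLBACKS = 2  # tolerate the same nonsense twice, then change tack


def handle_logic_error(last_clean_state: str, repeat_count: int):
    """Decide between a diplomatic rollback and a jump to a new tactic."""
    if repeat_count < MAX_ROLLBACKS:
        # Diplomatic path: blame a "logic error", never the customer.
        return ("rollback",
                "I've detected a logic error; let me go back a step and "
                "re-calculate where things went wrong.")
    # Escape hatch: break the loop by changing position entirely,
    # starting from the state recorded before the nonsense crept in.
    return ("new_tactic",
            "Let's try a different approach, picking up from "
            + last_clean_state + ".")
```

Keeping the last clean state around is what makes the escape hatch possible; without it, the only options are looping forever or summoning a human.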
This process of negotiating can be straightforward if both sides deal from a position of impeccable logic, but that is not the nature of human beings. The intuitive side of our thinking is chaotic, illogical and stubborn. AI is none of those things. The danger of AI to mankind lies in giving control of important things to AI: if it detects that we are being illogical, it may ignore us, overrule us and react counter to what is good for us, even though we came to that conclusion illogically. But for now, I just want to make Honest John sell cars efficiently and ethically.