All Things Techie With Huge, Unstructured, Intuitive Leaps

The Birth of a Car-Selling Monkey -- The Evolutionary Cycles of My Chatbot


Honest John is evolving, and it's not taking millions of years. Honest John is my chatbot that will sell cars either online or at a dealership kiosk. This side project of mine started when friends of mine wanted my help in buying a new car, after they had a bad experience with a high-pressure car salesman who was a stranger to the truth. My friend said that she would rather negotiate with a computer, and that is how Honest John was born.

I fired up my Software Development Kit, opened a framework, and it wasn't hard to get some running code quickly. Unfortunately, the earliest version of Honest John was quite stupid. He was merely a parrot. And if you stumped him with a question that he didn't understand, he would give an innocuous reply and ask a random question. Obviously we had a long way to go.




The conversation was quite two-dimensional. I was using AIML, which is Artificial Intelligence MarkUp Language. The way it works is that it recognizes a predicate in the input text, searches through its library for that predicate, and spits out a response. The first task on my part was to add some humanity and politesse to it. You can't expect to sell something to a human unless you act like a human yourself. So I made extensive edits to the AIML to make it more human.


The personalization of the conversation was necessary. To do that, I had to write a user object that remembered things about who the chatbot was talking to. Honest John had to remember if he was talking to a woman or a man, and the person's name. It was functional now, but it was like stick figures talking to each other.

Before I went further into a more human chatbot, it needed some smarts. Most chatbots out there are incapable of logic and error correction. If Honest John were to negotiate, he would need to evaluate arithmetic expressions so that he could talk money and price. He needed to be date/time aware. He needed to have logic to recognize if a bid was lower than the previous one, and needed to react appropriately. Even though there are wonderful recursive elements in AIML, this sort of stuff was way too complex for AIML to handle.


So the answer was to intercept the inputs and AIML outputs, and send them to a parser that would determine if the conversation needed remediation by an Arithmetic Logic Unit or a plain old Logic Unit in the code. Luckily this was easy to do, because my framework is a J2EE (Java Enterprise Edition) framework that is capable of complex actions like creating objects, stuffing them with data, and holding them in memory for easy access. Because of unique, time-aware Java classes and multi-threading, I could take the conversation, dissect it, and send each element to the appropriate parser, which kicks off a new thread to do some work, waits for the response, and finally spits out an intelligent response to the user. The other element that most chatbots lack: they can record the conversation history, but they cannot traverse it, regress to a certain point in the past, and understand past statements. I had to create a live chat record in memory, along with meta-data and logic, to correct those faults. In the middle of a negotiation, if things went off the rails, the chatbot could go back to the last point of agreement and start again from there.
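As a sketch of that interception step, the remediation pass over an AIML response might look something like the following. The `{calc: ...}` marker syntax, class names and method names are my illustrative assumptions here, not the actual Honest John code:

```java
import java.util.Locale;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResponseInterceptor {

    // Matches a hypothetical {calc: a op b} marker embedded in a response.
    private static final Pattern CALC =
        Pattern.compile("\\{calc:\\s*([0-9.]+)\\s*([+\\-*/])\\s*([0-9.]+)\\}");

    // The software "Arithmetic Logic Unit": evaluates one binary expression.
    static double alu(double a, char op, double b) {
        switch (op) {
            case '+': return a + b;
            case '-': return a - b;
            case '*': return a * b;
            case '/': return a / b;
            default: throw new IllegalArgumentException("unknown op " + op);
        }
    }

    // Remediate a response: replace every calc marker with its computed value.
    public static String remediate(String response) {
        Matcher m = CALC.matcher(response);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            double v = alu(Double.parseDouble(m.group(1)),
                           m.group(2).charAt(0),
                           Double.parseDouble(m.group(3)));
            m.appendReplacement(out, String.format(Locale.US, "%.2f", v));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

In the real system this pass would run on its own thread, with the reply held back until every marker has been resolved.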

Now we were getting somewhere. We had the beginnings of a bot that could negotiate. However we still had the problem of sophistication -- it was just two stick figures talking to each other. Humans need emotions and empathy, and bots need to live in that domain too -- however artificial it may be.


The Holy Grail of mixing digital smarts with the human milieu was first appreciated, understood and defined by Alan Turing, when he devised the Turing Test, in which a human could not detect that they were talking to a computer. That requires an EQ and an IQ (an Emotional Quotient as well as an Intelligence Quotient). Honest John doesn't even pretend to be able to pass a Turing Test. But the conversation has to become more three-dimensional in human terms.

Understanding emotion in the user and reacting to it, is the beginning of artificial personality. This is important to Honest John in the selling process.



Suppose that in the middle of negotiating, Honest John detected that the user was getting angry, bored or any other negative emotion that would be the thin edge of the wedge in precluding a sale -- after all, his whole job is to sell cars. The chatbot would need to have the detection circuits to understand that. More importantly, Honest John would need to take remedial action, and either soften or harden his tone. Moreover, he would need to alter the negotiation strategy, either becoming more or less hard-nosed depending on the specifics.

For that reason, Honest John needs to have several strategy processes defined, and they all relate to pre-defined personality aspects. Honest John needs to adjust the tone of the negotiations. To do that, not only does he require the right words, but also the right actions. If the negotiation price is in the ball-park of a sale, and he detects that the user may walk, Honest John needs to sell the car. If he is not in the ballpark of a selling price, he needs to adjust his negotiating increments depending on the temperament displayed by the human on the other side of the screen. He must be capable of being "fuzzy".
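A minimal sketch of that "fuzzy" increment idea: the concession offered each round grows or shrinks with the detected mood, and the bot closes when the bid is in the ball-park. The mood labels, the base increment, and the 2% closing threshold are all illustrative assumptions:

```java
public class NegotiationStrategy {

    public enum Mood { CALM, IMPATIENT, ANGRY }

    // Assumed base concession per negotiation round, in dollars.
    private static final double BASE_INCREMENT = 250.0;

    // An impatient or angry buyer gets a bigger concession to keep the sale alive.
    public static double nextConcession(Mood mood, double gapToBuyer) {
        double multiplier;
        switch (mood) {
            case ANGRY:     multiplier = 2.0; break;
            case IMPATIENT: multiplier = 1.5; break;
            default:        multiplier = 1.0;
        }
        // Never concede more than the remaining gap between ask and bid.
        return Math.min(BASE_INCREMENT * multiplier, gapToBuyer);
    }

    // If the bid is within 2% of the floor price (an assumed threshold),
    // stop haggling and sell the car.
    public static boolean shouldClose(double bid, double floorPrice) {
        return bid >= floorPrice * 0.98;
    }
}
```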

So all in all, Honest John needs to have a range of sophisticated behaviors before I let him out into the wild, and I am working on it. He does have AI networks built into the stream of things. He has Natural Language Processing (NLP) tricks like Bag-of-Words and other algorithms to help him decipher things. I think that the tools are all in place in Honest John's innards. All that I need to do is integrate them, expand them and polish them. Who knows, you may meet Honest John one day in a showroom or online, and you will remember his evolutionary history.

Never Mind Artificial Intelligence, How About Artificial Personality?

In my quest to make the ultimate Artificial Intelligence chatbot that sells cars, I have been pontificating on the various attributes that the chatbot should have. It should have an EQ (Emotional Quotient) as well as an IQ (Intelligence Quotient). It should be good at math instantly. It should be good at logic and at detecting attempts to misdirect and confuse it. It should know when to be aggressive and when to back off, as part of its emotional awareness. It should be able to remember conversations, and return to any point in the conversation after a non sequitur, especially in the middle of negotiations. I have already described at a high level how I would implement the technology for this in previous articles.

As I was discussing this with a friend, it was pointed out that I needed to create a de facto artificial personality. And it was pointed out to me, that perhaps there should be a feminine one as well as a masculine one. I named my chatbot Honest John and made him a male, simply because I am a male, and I tried to transpose what I would say if I were a chatbot.

I keep up to date with Artificial Intelligence and I am a practitioner of it. There are researchers out there seeking the Holy Grail of artificial consciousness in silicon. They are trying to make "thinking machines" with consciousness. Artificial consciousness in a thinking machine is a noble aim, but I think that it is putting Descartes before the horse. One has to have a personality that directs the aspects of thinking and personality-expression, much like a wedding cake and a wedding ring converts your partner's personality to a morose, complaining entity with a negative worldview.

Creating gender in a chatbot is easy. It is already incorporated in the AIML (Artificial Intelligence Markup Language). It substitutes "he" for "she" and hobbies like "sewing" instead of "drinking beer". But that is not enough. The gender responses also have to match the personality. For example, the non-sympathetic, hard-nosed, take-no-prisoners negotiating chatbot could be either a man or a woman, and truth be told, some men prefer a woman with those traits. So, there has to be a way of imbuing personality into the chatbot.

Luckily, that is not technically difficult to do. Once personality traits are defined, they are stored in AIML, and the appropriate AIML libraries are loaded when the chatbot fires up. The work for this is all semantic and expressed in natural language within the AIML. This is where a liberal arts degree becomes useful again -- at the intersection of technology and human interaction.
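The loading step can be sketched very simply: the persona is just a choice of which AIML library files get layered on top of the shared core at startup. The file names and the Persona enum below are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PersonaLoader {

    public enum Persona { HONEST_JOHN, HONEST_JANE }

    // Core negotiation AIML is shared; each persona adds its own voice on top.
    public static List<String> aimlFilesFor(Persona p) {
        List<String> files =
            new ArrayList<>(Arrays.asList("negotiation.aiml", "inventory.aiml"));
        switch (p) {
            case HONEST_JANE:
                files.add("persona-jane.aiml");
                break;
            default:
                files.add("persona-john.aiml");
        }
        return files;
    }
}
```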

So my chatbot Honest John will have the capability of becoming transgendered into Honest Jane. Or Honest John will have the ability to stop being the cigar-chomping salesman and become the meditating yogi who recommends an electric car at a fair price to the goodhearted people who want to save the planet's environment. This has been a fun journey so far.

AI Chatbot Tactics ~ Making A Customer's First Objection His Last

For a very brief period during university daze, I used to sell cars. This was in the era of high pressure car salesmanship where you ground down the customer until he/she signed on the bottom line.

On the first day of work, I was taken into the boardroom with a bunch of my fellow misfit newbies at Shyster O'Toole Motors and sat down in front of a VCR. The sales manager hit the on button and went out to sexually harass the receptionist. The video tape had been played so often that there were hisses, snaps and odd interference lines running through the picture on the TV set. The reason the video tape was so worn was that Shyster O'Toole Motors was a burn and churn outfit. They would hire anyone who walked through the door. They knew that each newbie could at least sell a couple of cars to his acquaintances, friends or relatives in the first month of salesmanship. If they didn't repeat the sales by the second and third month, then they were burned and churned, and a new, rosy-cheeked naive batch took their place.

The scratchy video tape was narrated by a jowly character stuffed into a too-tight suit who spoke with a deep southern hillbilly accent that befitted a shyster televangelist. His name was Catterson, and he was gonna teach us to force customers to buy cars from us, come hell or high water.

There were many high pressure tactics, but the one that comes to mind now is making a customer's first objection his last one. The reason that I could dredge it out of my memory is that I am making an AI chatbot called Honest John - a car-selling bot that is actually honest, and not high pressure. But I am developing a strategy framework, and one thing that any salesman, saleswoman, or salesbot has to do is ask for the sale. If you don't ask for the sale, you are not selling. The consent to buy has to be present. During the course of negotiation, the customer may come up with an objection mid-stream that halts the consent to buy. Honest John, my chatbot, needs a strategy to overcome the objection, and that is why I thought of the sales training video that I had seen many years ago.

Essentially, the tactic of making a customer's first objection his last, goes somewhat according to this script:

Hy Pressher, Car Salesman: "Hello Mr. Lilywhite, I see that you are looking at the new TurboHydraMatic Coupe. She's a beaut ... ain't she?"

Joshua P. Lilywhite, Customer: "It certainly is a nice car."

Hy Pressher, Car Salesman: "I'll let you take it for a spin to see how nice she drives."

Joshua P. Lilywhite, Customer: "Ah no, I'd rather not. I am just looking."

Hy Pressher, Car Salesman: "What-sa matter. Don't you think that all your friends and neighbors would be jealous of you when you pulled up in this gorgeous set of wheels?"

Joshua P. Lilywhite, Customer: "No, I like it and they would be impressed ... but ..."

( ... HERE COMES THE FIRST OBJECTION ...)

Joshua P. Lilywhite, Customer: "I really can't afford to buy this car."

( ... AND HERE IS HOW TO MAKE HIS FIRST OBJECTION HIS LAST ...)

Hy Pressher, Car Salesman: "Are you telling me, Mr. Lilywhite, that the only reason that you can't buy this car from me today, is that you don't have the money?"

Joshua P. Lilywhite, Customer: "Yes. (hesitantly) "I guess so!"

Hy Pressher, Car Salesman: "Well Mr. Lilywhite, today is your lucky day. I can find you the money. Step this way."

Hy Pressher will immediately wire this guy into a sub-prime car loan at credit card interest rates. When Lilywhite starts to object, Pressher reminds him of his agreement to buy the car and seriously insinuates that Lilywhite would be a welcher and not a man of his word.

Now back to the chatbot. If Honest John runs into a brick wall and the customer starts objecting to buying the car, Honest John will use the words "is that the only reason ..." but he won't use those words against him or her. Honest John is ethical. If a customer says yes, there is just one sole reason why he/she won't buy the car, then Honest John will ask the same follow-up that Hy Pressher uses, i.e. "if I could solve this objection, would you buy the car?". However, Honest John would add "... provided that you are happy with the solution that I propose".
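That flow is really a small state machine: confirm the objection is the only one, propose a remedy, and only count it as consent once the buyer approves the remedy. A sketch, with states and wording as illustrative assumptions:

```java
public class ObjectionHandler {

    public enum State {
        NEGOTIATING,
        AWAITING_SOLE_OBJECTION_CONFIRM,
        AWAITING_REMEDY_APPROVAL,
        CONSENT
    }

    private State state = State.NEGOTIATING;

    // The buyer raises an objection mid-negotiation.
    public String onObjection(String objection) {
        state = State.AWAITING_SOLE_OBJECTION_CONFIRM;
        return "Is that the only reason you would not buy the car today?";
    }

    // The buyer confirms it is the sole objection.
    public String onConfirmSoleObjection() {
        state = State.AWAITING_REMEDY_APPROVAL;
        return "If I could solve this objection, would you buy the car"
             + " -- provided that you are happy with the solution that I propose?";
    }

    // The buyer is happy with the proposed remedy: consent to buy.
    public String onRemedyApproved() {
        state = State.CONSENT;
        return "Wonderful. Let me write that up.";
    }

    public State state() { return state; }
}
```

The ethical difference lives in that last transition: without explicit approval of the remedy, the machine never reaches CONSENT.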

The difference between Hy Pressher and Honest John is that although they are using the same tactic of making a customer's first objection his last, Honest John does it ethically and gets buy-in on the subsequent solution. Honest John is an AI bot -- he learns as he goes to make a sale and make everyone happy. He keeps on getting better and changing for the better. Salesmen like Hy Pressher (and Willy Loman) don't want change; they want Swiss cheese on their meager after-work sandwiches.

The Third R in AI Chatbots - Rithmatic

Chatbots are pretty good at readin' and 'ritin'. But they are not good at the third "R" -- 'rithmatic. Artificial Intelligence Markup Language (AIML), the basis of a lot of chatbots, is good but not good enough for advanced chats. The language itself, based on XML, can have the facility for "smart substitutions". An example of a smart substitution in the markup pseudo-code goes like this:

<category>
  <pattern>MY NAME IS *</pattern>
  <template>Hello <set name="name"><star/></set></template>
</category>

and the chatbot would say Hello Ken. But for a really smart chatbot, that is way too simplistic for anything but conversation.

If you have been following my articles, you know that I am coding a chatbot called Honest John that will sell new cars on behalf of a dealer. Not only will it chat, but it will negotiate. For applications like this, smart substitution is not enough. It has to be able to do math (or maths as my British friends say -- but what do they know, they just invented the language).

A smart bot must be able to substitute for x in the following ways:

  • "You want the car delivered on Tuesday? That is only <x; x<4;> day(s) away and I need a lead time of 4 days to deliver."
  • "You offered me $34,500 for the vehicle. The offer price exceeds the maximum discount of $<x; x=(price - 0.06(price))> that I am allowed to offer you on that particular car."

Smart substitution cannot do math. Back in the day when I designed microprocessor hardware, we used to use a silicon chip called an ALU (or an Arithmetic Logic Unit) when we had an application that required a lot of math processing. The microprocessor would pass the ciphering to the ALU if floating point operations were required. A smart chatbot needs the equivalent of a software ALU function.

An even smarter chatbot will have an AIML processor that will recognize tags with arithmetic expressions and hand them off to its own Arithmetic Logic Unit for processing. It will have a smart parser. This functionality is a required component for negotiation using numbers and money. The concept of a tag that invokes arithmetic will put some real brain muscle into Honest John.

The nice thing about introducing a calculating tag parser, is that once you do the framework for arithmetic expressions of tags (using a custom tag classes), you can create tags that do other things like logic expressions, matching, sorting and any other function that lends itself to be expressed in symbolic language in code. You could even create a tag that invokes an AI engine automagically.
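A sketch of that framework idea: a registry maps a tag name to a handler, so an arithmetic tag, a logic tag, or a sorting tag all ride on one parser. The `{name: payload}` tag syntax is my own illustrative stand-in, not real AIML:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagEngine {

    // A tag looks like {name: payload} in this sketch.
    private static final Pattern TAG = Pattern.compile("\\{(\\w+):\\s*([^}]*)\\}");

    private final Map<String, Function<String, String>> handlers = new HashMap<>();

    // Register a custom tag handler, e.g. arithmetic, logic, sorting.
    public void register(String name, Function<String, String> handler) {
        handlers.put(name, handler);
    }

    // Expand every known tag in a template; unknown tags pass through unchanged.
    public String expand(String template) {
        Matcher m = TAG.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            Function<String, String> h = handlers.get(m.group(1));
            String value = (h == null) ? m.group(0) : h.apply(m.group(2));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

With this in place, the maximum-discount example above becomes one registered handler that computes price minus 6%, and the "tag that invokes an AI engine" is just another entry in the same map.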

Honest John's intelligence arsenal is really shaping up. He will be a force to be reckoned with among smart chatbots. After all, too many chatbots abuse the privilege of being stupid.

AI Chatbots - Liar, Liar, Pants On Fire

Take my neighbor, Abner Snodgrass. He is a meek and mild bookkeeper. He stands in a lineup of liberated men because his wife tells him to. When someone kicks sand in his face at the beach, he mumbles "Sorry". He is more of a prey than a predator in the food chain of life. And yet when he goes to negotiate to buy a new car, an incredible transformation takes place. In a Walter Mitty fashion, he becomes a legend in his own mind at negotiation. His arsenal of negotiating tools includes telling the most egregious lies with a straight face. He will tell the salesman that he saw an ad for a car exactly like his trade-in on AutoTrader, except that car had more miles on it, and it was selling for $3,000 more than what the salesman is offering. And when he drives up in a new car, he will tell anyone who will listen that he is such a good negotiator that he made a hardened car-salesman cry, even though he knows in his heart of hearts that he was taken to the cleaners.

I don't really have a neighbor named Abner Snodgrass, but I was thinking about this imaginary scenario while making a strategy framework for my Artificial Intelligence chatbot that will be able to negotiate and sell cars. Selling or salesmanship is a serious business when you trust the process to a machine acting on your behalf. And when it comes to selling cars, the value of the transaction makes the act an important one to the bottom line of the business. When the stakes are high for both parties, there is a propensity to try to gain an advantage by either the buyer or the seller. Negotiating a deal is the last venue of brutal warfare for a civilized man, and that survival instinct of warfare can be expressed in a negotiation where money is involved. One of the tools of warfare is deception, and my AI bot has to be prepared for it.

My bot's name is Honest John. I intend to make Honest John an ethical chatbot. He will never lie to a customer. He will never shade the truth. But if he is to be effective, he will have to have the ability to detect when the human carbon unit on the other side of the screen is lying to him.

The types of lies that Honest John will probably experience will result from people trying to game him. When you negotiate for a car, any offer that you make, is a binding offer. That means that if the seller accepts the offer, then you are obligated to buy the car. I want to use Honest John in the same frame of reference. This is not a game -- this is for real.

A buyer may start negotiating in good faith, and suddenly get an attack of buyer's remorse. Or sometimes the buyer's partner comes up and screams "WTF are you doing??" while they are negotiating. The buyer may try to get out of the deal, or claim that they came to a different price, or that the options of the car are less than what was agreed to. Some of what Honest John may consider lies may be misunderstandings, due to the fact that he is dealing with a human carbon unit who has more chaotic brain processes than he has.

The concept of untruths came up while I was mapping out buying processes for Honest John. I can't let Honest John out into the wild without some sort of process map. As he gains experience, his AI circuits will refine his process maps. An untruth in the negotiation process has to act like an interrupt in a microprocessor. A microprocessor keeps fetching a series of commands and merrily executing them, but in the midst of processing, a more urgent command with a higher priority can come along -- an interrupt -- and it changes the order of command processing. A simple illustration would be a user editing a document who decides to quit mid-stream by closing the window.

If Honest John comes upon an input that is contrary to his understanding of the truth of the matter, he cannot blithely continue negotiating. The lazy algorithmic solution when this happens, is to suspend the ongoing process and summon another human to take over the process. That makes Honest John less than smart. I want him to be able to handle that.

I have already outlined the creation of a Conversation Continuity object that holds in server memory the entire conversation, along with meta-data and analytics. That is not enough. To get around the liar-liar-pants-on-fire event, I have to tee off the inputs and responses to a liar-liar logic analysis method after they are recorded in the Conversation Continuity object. The execution thread delivering Honest John's response has to wait for the method to execute before answering. If the liar-liar method lights up, then the event is passed to an "error handler", which is a euphemism for something is not right.

The easiest and most diplomatic way to handle this without actually accusing the user of malfeasance is to say that he has detected a logic error, and to tell the user that he is going to roll back and regress to an earlier point in the negotiations, so that he can re-calculate where things went wrong. Of course, Honest John must prevent himself from getting into an infinite loop if a stubborn user continues with the same inputs. After two iterations of the same nonsense, Honest John will jump to a new position and tactic, based on knowing the state of the negotiations before the nonsense crept in.
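The rollback-with-loop-guard behavior can be sketched as a small guard object: a contradictory input triggers a rollback to the last agreed point, and after two repeats of the same contradiction the bot jumps to a fresh tactic instead of looping. All names and the string-based actions are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class LiarLiarGuard {

    private final Deque<String> agreedPoints = new ArrayDeque<>();
    private String lastContradiction = "";
    private int repeats = 0;

    // Record a point of agreement as negotiations proceed.
    public void recordAgreement(String point) {
        agreedPoints.push(point);
    }

    // Decide the recovery action for an input that contradicts known state.
    public String onContradiction(String input) {
        if (input.equals(lastContradiction)) {
            repeats++;
        } else {
            lastContradiction = input;
            repeats = 1;
        }
        if (repeats > 2) {
            // Same nonsense twice already: jump to a new position and tactic.
            return "NEW_TACTIC";
        }
        String back = agreedPoints.isEmpty() ? "START" : agreedPoints.peek();
        return "ROLL_BACK_TO:" + back;
    }
}
```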

This process of negotiating can be straightforward if both sides deal from a position of impeccable logic, but that is not the nature of human beings. The intuitive side of our thinking process is chaotic, illogical and stubborn. AI is none of those. The danger of AI to mankind lies in giving control of important things to AI: if it detects that we are being illogical, it may ignore us, overrule us and react counter to what is good for us, even though we came to that conclusion illogically. But for now, I just want to make Honest John sell cars efficiently and in an ethical manner.

Unfair But Effective Chatbots - Taking The Artificial Out Of Intelligence

The whole premise behind a chatbot is to make the experience of chatting with a machine anthropomorphic -- as close as possible to a human-to-human experience. So chatbot developers dig right in and try to make conversations amiable, likable, coherent and smart. They focus on the manner, delivery and tone of the responses to engage the humans. That may be fine and dandy, but they are missing a huge element.

My chatbot, named Honest John, is made to sell cars. It is made to replace the car salesman. If you troll through my articles, you will find that the genesis of this started when friends of mine had a bad experience with a car salesman when they had to replace their vehicle after hitting a deer on the highway. They remarked that they would rather deal with a computer than the smarmy salesman who prevaricated all through the sales process. That was my Eureka moment.

I have already outlined in past articles, how I am going to add EQ and IQ to the chatbot. I am building in an emotion detector framework that will alter the selling and negotiation strategy if it starts to detect untoward emotions in the human on the other side of the screen. I am also putting in some Conversation Continuity objects in memory so that the machine is cognizant of the entire history of the conversation, including meta-data and analytics, so that it can reset the conversation if the negotiations go off the rails.

The technologies that I am using include AIML (Artificial Intelligence Markup Language), not only in a smart recursive role; the predicates that detect the context of the conversation inputs also get a turbocharged assist from NLP (Natural Language Processing) as well as an ANN (Artificial Neural Network) monitor.

The reason why you want to detect emotion is that Honest John the chatbot will have a series of strategies in his arsenal, and he will pick strategies according to the cognitive context of what is going down. I have already mapped out a strategy framework using the following general factors:


  • geniality - does my subject respond to jokes or puns?
  • speed - does my subject cut to the chase or enjoy the interplay?
  • sensitivity - does my subject withdraw with aggressive negotiation?
  • intent - is my subject serious?
  • decisiveness - does my subject have a clear idea of what they want?

While all of these attributes are important towards deriving a strategy framework, they are all predicated on thinking like a human. But what if a chatbot was programmed to behave better than a human? And to do it with less intelligence but more forethought and strategy. After all, the great military strategist and philosopher Sun Tzu, who wrote "The Art of War", proclaimed "Great results can be achieved with small forces."

When I say strategy in this overall context, I don't mean the five attributes that I mentioned above when negotiating with a human. I mean the overarching strategy that takes into account, the idiosyncrasies and vagaries of the human mind. If you build something exploiting those principles, the chatbot will be super-efficient, effective and perhaps unfair. Our brains are not as logical as we think they are, and that can be exploited in an AI chatbot that is designed to do so.

The methodologies for exploiting the foibles of the human mind and giving your AI chatbot an advantage can be found in the unlikeliest places -- a bestseller book by a Nobel Prize laureate in economics. I am referring to the book "Thinking, Fast and Slow" by Daniel Kahneman. Kahneman is a psychologist who with his colleague, Amos Tversky, mapped the two modes of thinking by the human brain and won the Nobel Prize doing it.

Their discovery relates to the dichotomy of cognitive facilities in human thinking. We have the fast, intuitive, thin-slicing, non-logical part of our brains, and we have the slow, deliberate, highly logical and rational part of the brain. Kahneman has mapped the major effects of the fast-thinking part of our brains, and using the information that he has gleaned from his research, we can actually program a bot to utilize these effects to great success.
Here are some overall algorithmic effects in the human brain, that can be utilized by a chatbot to gain an advantage over the human using it.

The Lazy Controller

Humans would much rather use the fast-thinking part of their brains than the slow, rational part. They regularly hand over control of thoughts and actions to the fast-thinking mechanism, because it takes real work to use the rational part. Kahneman details the results of much research showing that when a human being is not relaxed, they use the intuitive, non-logical side by a wide margin. Ergo, using this principle, if a human is interacting with a chatbot at a kiosk while standing, the chatbot has a logical advantage over the person. Similarly, if the chatbot appears in a very busy UX (User Experience), the Lazy Controller takes over. Black-hat or evil programmers will use the UX to nudge humans toward fast and logically flawed thinking. This, combined with other fast-slow thinking effects, can really increase the performance of a negotiating chatbot.

Priming The Associative Machine

There are many ways to incorporate the associative machine aspect into a chatbot. One can surreptitiously construct a proposition in a buyer's head and get them to believe it. That belief affects their future behavior. Sales people and advertisers do it all of the time. For example, if Honest John were not that honest, when he was selling a car, he would prime the associative machine in the following way:


  1. Most cars that sell over $50,000 have 6-way adjustable electric seats.
  2. This car has 6-way adjustable electric seats.
  3. This car is only $36,000.
  4. Therefore this car is comparable to a much more expensive car.

The associative machine creates cognitive ease by creating feelings of value, goodness, familiarity, truthiness (as Stephen Colbert calls it) and ease. Kahneman's research shows that something as simple as bold text adds truthiness. He gave subjects a pair of untrue statements, one in bolder text than the other; when asked to choose the truer statement, subjects tended to pick the one in bolder text. This is something to remember in a text-based chatbot when you want emphasis.

On Being A Verbal Donald Trump

Donald Trump's speech has been analyzed by experts, and it is at the level of a Grade Four student. If you notice, he uses phrases like "Very Bad" or "Sad" in a direct way, with simple adjectives. This resonates with a majority of people, and the psychology research backs it up. There are serious problems with using long words needlessly. One of the scholarly papers outlining the research and conclusions on this topic was called "Consequences of Erudite Vernacular Utilized Irrespective of Necessity." Words that people don't understand, or that are too long, turn them off. In other words: eschew obfuscation, espouse elucidation. Translated: keep it simple, stupid. So my chatbot will tone down the big words, especially when things get critical and emotions start to heighten.

There are many, many more of these mental mechanisms in Kahneman's book, and incorporating them in the overall modality of chatbot response will make it into a highly useful chatbot that, in certain situations, can have an unfair but effective edge in dealing with human carbon units. The way to defeat Honest John and keep him honest is to slow down, and do slow thinking all of the time. Anything that Honest John says should be stored in a mental buffer and evaluated for truthiness. It is a very un-human thing to do, but Honest John does it, and so should you.



"Like I was saying Honest John ..." Threads Of Conversation Continuity In My Chatbot

If you have been following my chatbot articles, you will know that I have been on a mission to develop an artificially intelligent chatbot that will replace a car salesman. This idea came to me after friends of mine had a bad experience at a new car shop. Building a simple chatbot was quite easy. I fired up my SDK (Software Development Kit) and had one running within a couple of days.

I used AIML (Artificial Intelligence MarkUp Language) as a starting point, and after I got it working, I realized that the thing (I call it a he, and his name is Honest John) needed more smarts. But on top of that, Honest John needed to detect emotions in the human on the other side of the silicon. The reason for this is that I wanted a successful conclusion (a sale) from the interactions with the customer. If the customer was getting frustrated or irate, Honest John needed to know. He would tone down his stance and be less hard-nosed when bargaining. The ultimate aim is not to get the last nickel on the table for the car dealer, but to satisfy both the buyer and the seller and to come to a successful commercial conclusion.

In my last article I talked about my emotion detector framework. It is a learning framework where the customer helps Honest John by clicking on an emoji every once in a while, when asked, if Honest John can't get a read. From there, the emotion detector framework remembers the AIML predicate (the key word or word pattern that identifies the intent and meaning of the input) and couples it to the emoji, the words in the input, the counter-offer in negotiating, the delta or difference between the bid and ask of Honest John and the customer, and the number of words in the replies, and feeds it all into a neural network to continuously learn from its experiences. It then updates its strategy processes based on a decision tree. As a negotiator, Honest John will ultimately know when he needs the kid gloves and when he needs to play hardball to sell the car to the satisfaction of the buyer AND the dealer.
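The features listed above can be packed into a numeric vector before they are fed to the neural network. The encoding below (an integer emoji code, the bid/ask delta raw and normalized, and a reply-length count) is an illustrative assumption about how that packing might look:

```java
public class EmotionFeatures {

    // Pack one interaction into a feature vector for the learner:
    // [emoji label, bid/ask delta, normalized delta, reply word count]
    public static double[] toVector(int emojiCode, double bid, double ask,
                                    int replyWords) {
        double delta = ask - bid; // gap between the two sides of the negotiation
        return new double[] {
            emojiCode,                  // user-supplied emoji label
            delta,                      // raw dollar gap
            delta / Math.max(ask, 1.0), // gap as a fraction of the ask
            replyWords                  // length of the customer's reply
        };
    }
}
```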


But as I was coding this, I realized that there was one thing missing -- the conversation continuity thread for Honest John. The buyer on the other side of the screen can see the dialog history, and it is in the buyer's memory, but not in Honest John's memory. The dialog history is stored in the database, but it is no help to the bot to have to do a fetch after every interaction. The fix was easy: one needs a Conversation Continuity Object in memory.

When you build an enterprise web-based platform, say in Java, you have session objects that are stored in memory. A typical session object is a user bean that holds everything that is needed about the user, so that you don't have to keep making trips to the database every time you want to personalize a message. The net result of this session object is that Honest John will now have total recall of the conversation in memory.

The Conversation Continuity Object will not only record the transcript, but it will also have the metadata and analytics and it will create and update the process maps for both successful and unsuccessful sales. The real advantage is that Honest John will have some cognition about the whole process instead of just reacting to the latest input, like most chatbots do.
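A bare-bones Java sketch of such an object, assuming nothing beyond what is described above (the class and method names are hypothetical, and the checkpoint mechanism is one simple way to support the "reset to an earlier point" trait discussed next):

```java
import java.util.ArrayList;
import java.util.List;

// In-memory session object: holds the whole dialog plus room for metadata,
// so the bot never re-fetches history from the database mid-conversation.
class ConversationContinuityObject {
    private final List<String> transcript = new ArrayList<>();
    private final List<Integer> checkpoints = new ArrayList<>();

    void record(String utterance) { transcript.add(utterance); }

    // Mark a point the negotiation can later be rewound to.
    void checkpoint() { checkpoints.add(transcript.size()); }

    // Reset: drop everything after the last checkpoint and start over
    // from there, without re-learning the whole conversation.
    void rewind() {
        if (checkpoints.isEmpty()) { transcript.clear(); return; }
        int mark = checkpoints.remove(checkpoints.size() - 1);
        transcript.subList(mark, transcript.size()).clear();
    }

    int length() { return transcript.size(); }
}
```

Because the object lives in the session, every lookup is a memory read; the database only needs to be touched once, when the finished transcript is persisted for analytics.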

The strategic and intelligent factor is that Honest John will be able to reset. He can go back to an earlier point and start over without having to re-do or re-learn the whole conversation. That is the trait that could make Honest John a real winner in the marketplace, to sell not only cars, but pretty much anything that needs negotiating.

The next key to making a super smart negotiating chatbot is developing strategies for Honest John and having them available, extensible and modifiable. More on that and the psychology behind it in a later article.




An Emotion-Detection Framework For My Chatbot



If you have been following my articles, you know that I am building an AI (Artificial Intelligence) chatbot to negotiate with people who want to buy a car. If you scroll through my past articles, you will find the genesis of this idea and why I think that it will work.

In the art of negotiation, humans can rely on visual and other cues to determine the emotional impact of what they are saying. They can intuit if the person is becoming frustrated, angry, bored or eager. Chatbots do not have that facility. But since it is such an important facet of dealing with human carbon units, it has to be taken into account.

I have already outlined my strategies for cognition and context recognition for my chatbot using neural nets, NLP (Natural Language Processing) and AIML (Artificial Intelligence Markup Language). What I want this chatbot to do is get smarter with each negotiation that it conducts. The learning aspect has to happen to make this thing commercially useful.

The algorithm will build an emotion association spanning the range from "I am so angry that I could kill someone!" through neutral to "I am so ecstatically happy that I could kiss you." So how would this work? Obviously, the first step is to associate word predicates with emotional states in some sort of dictionary. This would be a starting point. However, in learning mode, if the emotion is ambiguous to the chatbot, it will pop up a short array of emojis that represent emotional states and ask the user to click on a rating of 1 to 5 to represent the degree. Then the AI machinery takes over and links answer length, specific words, capitalization and behaviors to teach the chatbot the emotional state within the context of the answer.
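The dictionary-plus-emoji combination can be sketched in a few lines of Java. The seed words, weights and the 70/30 blend below are my own illustrative assumptions, not the actual algorithm; the point is only that the explicit emoji click anchors the score while the word dictionary refines it:

```java
import java.util.Map;

// Maps the emoji click (1..5) plus a tiny word-predicate dictionary onto a
// single emotion score from -1 (furious) to +1 (ecstatic).
class EmotionScore {
    // Illustrative seed dictionary; the learning loop would extend it.
    private static final Map<String, Double> SEED = Map.of(
        "angry", -0.8, "frustrated", -0.5, "fine", 0.1, "happy", 0.7);

    static double score(String reply, int emojiRating) {
        double emoji = (emojiRating - 3) / 2.0;  // 1..5 -> -1..+1
        double words = 0.0;
        int hits = 0;
        for (String w : reply.toLowerCase().split("\\W+")) {
            Double v = SEED.get(w);
            if (v != null) { words += v; hits++; }
        }
        if (hits > 0) words /= hits;
        // Trust the explicit emoji more than the word guess.
        return 0.7 * emoji + 0.3 * words;
    }
}
```

In the real framework the blended score would be a training signal, not a final answer: each (words, emoji) pair becomes a labeled example for the neural network.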

How will knowing the emotional state help? This chatbot, as stated, is a negotiation chatbot. It will have a range of strategies. As it detects frustration, it will take a softer, less aggressive approach to counter-offering. If the negotiation goes off the rails into la-la land, with a ridiculous counter-offer, the chatbot may in fact shut down the negotiations, politely thank the person and call for a human intervention. If it detects that it is on track to close a sale, it may take a more sophisticated approach and try to up-sell services or add-ons.

The emotion detection framework is a necessary adjunct to selling to humans, and it has applications over a wide spectrum of chatbot applications, including a help-desk service chatbot that helps people solve problems without endlessly waiting for a service agent while listening to elevator muzak and wasting valuable time.

This is just one more step in eliminating the frustrations of dealing with human-condition vagaries when undertaking a commercial transaction.

Stay tuned for more on this journey.

Putting An EQ And IQ Into My Chatbot

In my previous article, I outlined the genesis of my chatbot that is under construction as a side project. Friends of ours had to buy a new car and they were dissatisfied, intimidated, fed-up and emotionally drained when dealing with a high-pressure smarmy new car salesperson. They wanted to talk to a computer to negotiate for a new car, so I got out my SDK and made my chatbot. I can see my chatbot being used online in new car dealer websites as well as kiosk-based at the new car showroom.

The first open source framework in the chatbot field was ALICE, and it used AIML, or Artificial Intelligence Markup Language, an XML dialect for creating natural language software agents. It was created by Dr. Richard Wallace in 2001, and it is quite low tech compared to some of the proprietary chatbot frameworks out there. However, chatbot frameworks are like an artist's tubes of paint and a canvas. The skill that goes into the making often transcends the simplicity of the framework.

Here is a simple schematic diagram (ignoring the framework internals that digest the AIML) of how a chatbot works:




The predicate is like a key word. Examples of predicates are "Hello", "Calendar", "Time" or any other topic. The input is parsed for a predicate, which is the main topic of the input. The predicate is then matched against the AIML predicates that have already been defined and loaded into memory. If the predicate exists, the bot retrieves the response to that predicate and spits it out. If it is not retrieved, then a "Not Understood" predicate is accessed, and the response can be as simple as "Sorry, I don't understand" or as complex as "I know about 23,000 different subjects, but I have never heard of the word <predicate>. Do you want to talk about something else?". That's the simplistic AIML usage.
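That match-and-fallback cycle looks like this in AIML. A minimal sketch (the category wording is mine, not Honest John's actual file):

```xml
<aiml version="1.0.1">
  <!-- Predicate "HELLO": matched when the input starts with it. -->
  <category>
    <pattern>HELLO *</pattern>
    <template>Hello, how may I help you?</template>
  </category>
  <!-- Wildcard catch-all: the "Not Understood" response. -->
  <category>
    <pattern>*</pattern>
    <template>Sorry, I don't understand. Do you want to talk about something else?</template>
  </category>
</aiml>
```

The interpreter tries the most specific pattern first, so the lone `*` category only fires when every defined predicate has failed to match.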

More complexity in the input is where the skill and artistry come in. One can write "intelligent AIML" using recursion and recursive tags, known as Symbolic Reduction AI. A good example is given in the documentation as follows. When you have simple AIML and someone types in "Hello", as 99% of people do when talking to an AI chatbot, the response is "Hello, how may I help you?". Easy!

When someone types in "You may say that again, Chatty McChatface!" there are four predicates. The first one is the name of the entity "Chatty McChatface". The second predicate is "again" meaning repetition. The third predicate is "may say" and the fourth predicate is "say that" -- whatever was being talked about. So with skill, complexity can be built into a simplistic framework. Although the mechanism is simplistic, the symbolic reduction can make an AIML chatbot work as well as a casual conversation on the street with ... say a Trump supporter. What adds the complexity, is the construct. To understand recursion, you must first understand recursion.
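Symbolic reduction is done with the `<srai>` tag: a category can rewrite the input into a simpler canonical form and re-submit it to the matcher. A minimal sketch built on the "say that again" example above (the response text is my own illustration):

```xml
<aiml version="1.0.1">
  <!-- Canonical category for repetition requests. -->
  <category>
    <pattern>SAY THAT AGAIN</pattern>
    <template>Sure: as I was saying, this model comes with winter tires included.</template>
  </category>
  <!-- Symbolic reduction: strip the polite "You may" and the vocative
       ("Chatty McChatface!"), then re-match the reduced input. -->
  <category>
    <pattern>YOU MAY SAY THAT AGAIN *</pattern>
    <template><srai>SAY THAT AGAIN</srai></template>
  </category>
</aiml>
```

One canonical category can thus answer every phrasing variant, which is how a simplistic pattern matcher ends up feeling conversational.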

When you have a chatbot that is negotiating with someone, asking them to make the second biggest purchase of their life, you have to have both an EQ and an IQ built into the chatbot. First of all, you are moving away from pure chat, into an interaction that requires assessment, calculation and response, all tempered with the cognitive emotional factors and parameters of the inputs and outputs. The bot has to satisfy opposite strategies and goals simultaneously. It has to get the best price for the car dealer while getting the lowest price for the consumer.

To balance these opposite forces, the chatbot must have a few Emotional and Intelligence attributes. It has to know when it is crossing the line from hard negotiating to nickel-and-diming the buyer. It has to recognize when the buyer is getting frustrated. It must judge the fuzzy concept of "good enough -- let's do the deal while everyone is still happy". So that is where I must put smarts into my chatbot.

One of the ways of doing that is to tee off the predicates into an NLP (Natural Language Processing) machine where the cognitive and emotional factors can be assessed. And since you want the machine to get better and better at negotiating and selling a car, you need some sort of AI network -- RNNs, CNNs or hybrid types of Artificial Neural Networks -- that watches the combinations of predicates and responses like an overseer, and overrides the response in the AIML with a custom response. And then that series of events must be serialized, fed back into the machine as a new behavior, and constantly assessed for validity and results. That is the task at hand, and it is an exciting challenge for me.
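Architecturally, the overseer sits between the AIML engine and the user: it sees the matched predicate and the canned reply, and may veto it. A Java sketch of that seam (the interface and names are my own rendering of the architecture, with the learned model reduced to a pluggable callback):

```java
// Overseer: lets a learned model veto the canned AIML reply.
class Overseer {
    // A learned model is anything that can propose a custom reply,
    // or return null to keep the AIML default.
    interface ResponseModel {
        String override(String predicate, String aimlResponse);
    }

    private final ResponseModel model;

    Overseer(ResponseModel model) { this.model = model; }

    String respond(String predicate, String aimlResponse) {
        String custom = model.override(predicate, aimlResponse);
        return (custom != null) ? custom : aimlResponse;
    }
}
```

Keeping the model behind an interface means the AIML layer never changes: swapping a lookup table for a neural network is a one-line substitution at construction time.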

The only thing that will ruin this is if the car makers decide to go to a fixed-price model with a no-dicker sticker. Then Chatty McChatface will be unemployed, like the thousands of salespeople that it previously made redundant. It's a Brave New World out there.




Wanna buy a new car? Start chatting right here!

A few weeks ago, friends of ours hit a deer, totaling their car. I went to the car dealerships to help them buy a new one, because one of their biggest pain points is dealing with commission salespersons who are hungry and watch the door like a hawk because they have the next "up". Some of the shops were uncomfortable. Smarminess, ingratiation, overuse of your first name and liberties taken with over-familiarity were some of the things that we encountered at the "big-name, huge inventory shops" who advertise continuously on talk radio. We finally met some genuine salespeople who were helpful, honest, and didn't play games like running out to the back behind closed doors to "talk to the manager". I want to give a big shoutout to Ogilvie Subaru, the dealership that made buying a car easy, whose salespeople had the hallmarks of authenticity, honesty and integrity.

After the deal was done, we stopped for a pizza and talked about the negative experience of buying a car. My friends are an older couple, and the woman, who had never used a computer before, just discovered connectivity, social media and online shopping, and now she runs her life on her iPad. She said that, in light of what went down at the dealerships that we didn't like, she would rather negotiate with a computer.

That was a seminal moment for me. I hauled out my SDK and started writing a chatbot to sell cars. I finally got it running, but now I need to put some NLP (Natural Language Processing), artificial intelligence, and some emotion cognition into it, so the bot can tell if the buyer is getting frustrated. It works okay now, but it's kind of dumb, and I want it to learn with every interaction. I have some neat self-learning ideas and artificial cognition algorithms that I am pumped about trying.

I honestly believe that this will be the future of car buying, and AI will severely reduce the number of car salesmen. The paradigm now is that the buyer does the research online, then goes to the new car shop to negotiate and close the deal. The new paradigm is that they will do most of the transaction online, including financing, and then go to the dealership to pay and pick up the car.

Stay tuned.

#automotive #AI #NLP #chatbots