At one point in my career, a Gartner report came out saying that about 72% of IT projects were in "Recovery" -- meaning they were late, over budget, or missing milestones altogether. I saw evidence of that everywhere I looked in the large organization where I worked at the time. Huge multi-million-dollar projects came to nothing. KPIs (Key Performance Indicators) were not met and things stalled -- sometimes because major stakeholders bickered over specifications after the project was started.
The band-aid fixes entailed adopting Agile methods, where the scrum master was to remove impediments, or hiring more consultants to speed things up. These actions rarely made any difference in the long run.
I have always believed that project management was merely applied common sense. I still believe it. However, I believe that common sense ain't so common. The failure of many projects hinges on missed details: the project manager operated at the 30,000-foot level, while the linchpin that caused the delay or failure was down in the weeds of project execution -- usually a person not even on the project manager's radar.
There has to be a better way, and to my mind, there is: blockchain. Here is the vision. Blockchain is a transparent, immutable, autonomous, outage-resistant true ledger that can have built-in intelligence through smart contracts. Suppose that instead of a Gantt chart, you had a series of smart contracts. The KPIs and milestones would all be smart contracts. Writing smart contracts forces one to think on an extremely granular level -- one not normally reached by a human project manager.
Each smart contract would assign a number of tokens to the project, based on activity completion. As each person in the project completed their tasks, they would record their activity in the blockchain, and the blockchain itself would keep track of completion percentage by transferring tokens to a completion account. All of the tokens for a task or milestone are accounted for when it is complete. The sum total of all possible tokens represents a completed project meeting all goals. The blockchain is linked to an executive dashboard that supports extensive drill-down reporting.
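To make the token mechanics concrete, here is a minimal sketch in plain Java (a production version would be an actual on-chain smart contract; the class and method names are my own illustrations, not part of any real system):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the token-accounting idea: tasks are allotted tokens,
// and completing a task transfers its tokens to the completion account.
class ProjectLedger {
    private final Map<String, Integer> taskTokens = new LinkedHashMap<>();
    private int completedTokens = 0;
    private int totalTokens = 0;

    // Assign tokens to a task when the project plan is created.
    void addTask(String task, int tokens) {
        taskTokens.put(task, tokens);
        totalTokens += tokens;
    }

    // Completing a task transfers its tokens to the completion account.
    void completeTask(String task) {
        Integer tokens = taskTokens.remove(task);
        if (tokens != null) {
            completedTokens += tokens;
        }
    }

    // The executive dashboard reads overall completion as a percentage
    // of all possible tokens; 100% means every goal was met.
    double completionPercentage() {
        return totalTokens == 0 ? 0.0 : 100.0 * completedTokens / totalTokens;
    }
}
```

The sum of all possible tokens maps directly onto "a completed project meeting all goals": when `completionPercentage()` reaches 100, every milestone's tokens have moved to the completion account.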
But wait, don't send money yet. There is more. Each entry in the blockchain feeds an AI engine that map-reduces and learns about the company's projects. This is the methodology of taking all process data (event logs and everything), integrating it into information, and transforming it by abstraction into knowledge. This knowledge would be stored in a master blockchain holding data to assist in the creation of smart contracts and the parameters necessary for future projects.
Such a system could not only out-perform a human project manager, but in the long run it would be cheaper. The most expensive part of most projects is usually the people costs, and that area is usually the weakest link. The project management discipline is ripe for an AI/blockchain disruption.
Honest John is evolving, and it's not taking millions of years. Honest John is my chatbot that will sell cars either online or at a dealership kiosk. This side project of mine started when friends of mine wanted my help in buying a new car, after they had a bad experience with a high-pressure car salesman who was a stranger to the truth. My friend said that she would rather negotiate with a computer, and that is how Honest John was born.
I fired up my Software Development Kit, opened a framework, and it wasn't hard to get some running code quickly. Unfortunately, the earliest version of Honest John was quite stupid. He was merely a parrot. And if you stumped him with a question that he didn't understand, he would give an innocuous reply and ask a random question. Obviously we had a long way to go.
The conversation was quite two-dimensional. I was using AIML (Artificial Intelligence Markup Language). The way it works is that it recognizes a predicate in the input text, searches through its library for that predicate, and spits out a response. The first task on my part was to add some humanity and politesse to it. You can't expect to sell something to a human unless you act like a human yourself. So I made extensive edits to the AIML to make it more human.
Personalizing the conversation was necessary. To do that, I had to write a user object that remembered things about whom the chatbot was talking to. Honest John had to remember whether he was talking to a woman or a man, and the person's name. It was functional now, but it was like stick figures talking to each other.
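A minimal sketch of such a user object might look like this (the field and method names are my guesses, not Honest John's actual code):

```java
// Sketch of the user object that personalizes the conversation.
// It lives in memory so replies can be personalized without a
// database trip on every turn.
class UserProfile {
    private String name;
    private String gender;  // remembered so pronouns and tone can adapt

    void setName(String name) { this.name = name; }
    void setGender(String gender) { this.gender = gender; }
    String getName() { return name; }
    String getGender() { return gender; }

    // Personalize a greeting from what has been remembered so far.
    String greet() {
        return "Hello " + (name == null ? "there" : name) + "!";
    }
}
```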
Before I went further into a more human chatbot, it needed some smarts. Most chatbots out there are incapable of logic and error correction. If Honest John were to negotiate, he would need to evaluate arithmetic expressions so that he could talk money and price. He needed to be date/time aware. He needed logic to recognize if a bid was lower than the previous one, and to react appropriately. Even though there are wonderful recursive elements in AIML, this sort of stuff was way too complex for AIML to handle.
So the answer was to intercept the inputs and AIML outputs, and send them to a parser that would determine whether the conversation needed remediation by an Arithmetic Logic Unit or a plain old Logic Unit in the code. Luckily this was easy to do, because my framework is a J2EE (Java Enterprise Edition) framework capable of complex actions like creating objects, stuffing them with data, and holding them in memory for easy access. Because of unique, time-aware Java classes and multi-threading, I could take the conversation, dissect it, and send it to the appropriate parsers, each of which kicks off a new thread to do some work on its element; the main thread waits for the responses and finally spits out an intelligent reply to the user.

The other element that most chatbots lack: they can record the conversation history, but they cannot traverse it, regress to a certain point in the past, and understand past statements. I had to create a live chat record in memory, along with meta-data and logic, to correct those faults. In the middle of a negotiation, if things got off the rails, the chatbot could go back to the last point of agreement and start again from there.
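The live, traversable chat record could be sketched like this, assuming a simple per-turn agreement flag as the meta-data (all names here are illustrative, not Honest John's real classes):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a live, traversable chat record: each turn carries meta-data
// so the bot can regress to the last point of agreement and restart there.
class ChatRecord {
    static class Turn {
        final String speaker;
        final String text;
        final boolean agreement;  // meta-data: did this turn end in agreement?
        Turn(String speaker, String text, boolean agreement) {
            this.speaker = speaker;
            this.text = text;
            this.agreement = agreement;
        }
    }

    private final List<Turn> turns = new ArrayList<>();

    void record(String speaker, String text, boolean agreement) {
        turns.add(new Turn(speaker, text, agreement));
    }

    // Traverse backwards to the last agreed point, discarding everything
    // after it, so the negotiation can start again from there.
    Turn rollbackToLastAgreement() {
        for (int i = turns.size() - 1; i >= 0; i--) {
            if (turns.get(i).agreement) {
                turns.subList(i + 1, turns.size()).clear();
                return turns.get(i);
            }
        }
        return null;  // no agreement yet; nothing to roll back to
    }

    int size() { return turns.size(); }
}
```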
Now we were getting somewhere. We had the beginnings of a bot that could negotiate. However we still had the problem of sophistication -- it was just two stick figures talking to each other. Humans need emotions and empathy, and bots need to live in that domain too -- however artificial it may be.
The Holy Grail of mixing digital smarts with the human milieu was first appreciated, understood, and defined by Alan Turing, when he devised the Turing Test, in which a human cannot detect that they are talking to a computer. That requires an EQ and an IQ (an Emotional Quotient as well as an Intelligence Quotient). Honest John doesn't even pretend to be able to pass a Turing Test. But the conversation has to become more three-dimensional in human terms.
Understanding emotion in the user and reacting to it is the beginning of artificial personality. This is important to Honest John in the selling process.
For that reason, Honest John needs to have several strategy processes defined, and they all relate to pre-defined personality aspects. Honest John needs to adjust the tone of the negotiations. To do that, he requires not only the right words but also the right actions. If the negotiation price is in the ballpark of a sale, and he detects that the user may walk, Honest John needs to close the sale. If he is not in the ballpark of a selling price, he needs to adjust his negotiating increments depending on the temperament displayed by the human on the other side of the screen. He must be capable of being "fuzzy".
So all in all, Honest John needs to have a range of sophisticated behaviors before I let him out into the wild, and I am working on it. He does have AI networks built into the stream of things. He has Natural Language Processing (NLP) tricks like Bag-of-Words and other algorithms to help him decipher things. I think that the tools are all in place in Honest John's innards. All I need to do is integrate them, expand them, and polish them. Who knows -- you may meet Honest John one day in a showroom or online, and you will remember his evolutionary history.
I am honored to be chosen as a winner of #tatacommsf1prize and one of three grand-prize finalists in the Formula 1 Connectivity Innovation (Connected Operations) Challenge for the F1 Mercedes Petronas AMG Racing Team tech supplier.
My proposed solution brings a new, innovative slant to the IoT of Formula One racing, and I look forward to meeting Lewis Hamilton in Abu Dhabi for the final F1 race of the year, where I will present my solution. Thanks to Tata Communications, the infrastructure supplier to the Mercedes team, for this incredible opportunity and the trip to Abu Dhabi.
I look forward to having my connectivity solution possibly make an impact on the future of F1 Racing. This is a major thrill. http://www.pressreleasemanager.co.uk/viewPressRelease.asp?ID=2D1942F4-BDEC-4951-BBEA-C6A00A8D53AA
As I was discussing this with a friend, it was pointed out that I needed to create a de facto artificial personality -- and that perhaps there should be a feminine one as well as a masculine one. I named my chatbot Honest John and made him a male, simply because I am a male, and I tried to transpose what I would say if I were a chatbot.
I keep up to date with Artificial Intelligence and I am a practitioner of it. There are researchers out there seeking the Holy Grail of artificial consciousness in silicon. They are trying to make "thinking machines" with consciousness. Artificial consciousness in a thinking machine is a noble aim, but I think that it is putting Descartes before the horse. One has to have a personality that directs the aspects of thinking and personality-expression, much like a wedding cake and a wedding ring convert your partner's personality into a morose, complaining entity with a negative worldview.
Creating gender in a chatbot is easy. It is already incorporated in AIML (Artificial Intelligence Markup Language). It substitutes "he" for "she" and hobbies like "sewing" instead of "drinking beer". But that is not enough. The gender responses also have to match the personality. For example, the non-sympathetic, hard-nosed, take-no-prisoners negotiating chatbot could be either a man or a woman -- and truth be told, some men prefer a woman with those traits. So there has to be a way of imbuing personality into the chatbot.
Luckily, that is not technically difficult to do. Once personality traits are defined, they are stored in AIML, and the appropriate AIML libraries are loaded when the chatbot fires up. The work for this is all semantic and expressed in natural language within the AIML. This is where a liberal arts degree becomes useful again -- at the intersection of technology and human interaction.
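Loading the right library at startup could be as simple as this sketch (the persona names and library paths are placeholders of my own, not the real deployment layout):

```java
// Sketch: picking which AIML personality library to load when the bot
// fires up. Each path would hold the AIML files for one persona.
class PersonalityLoader {
    static String libraryFor(String persona) {
        switch (persona) {
            case "HonestJane": return "aiml/jane/";  // feminine persona
            case "YogiJohn":   return "aiml/yogi/";  // soft-sell, meditative persona
            default:           return "aiml/john/";  // the classic Honest John
        }
    }
}
```

The point of the design is that personality lives entirely in the semantic layer (the AIML files), so swapping personas is a configuration choice rather than a code change.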
So my chatbot Honest John will have the capability of becoming Honest Jane. Or Honest John will have the ability to stop being the cigar-chomping salesman and become the meditating yogi who recommends an electric car at a fair price to the goodhearted people who made it to save the planet's environment. This has been a fun journey so far.
On the first day of work, I was taken into the boardroom with a bunch of my fellow misfit newbies at Shyster O'Toole Motors and sat down in front of a VCR. The sales manager hit the on button and went out to sexually harass the receptionist. The video tape had been played so often that there were hisses, snaps, and odd interference lines running through the picture on the TV set. The reason the tape was so worn was that Shyster O'Toole Motors was a burn-and-churn outfit. They would hire anyone who walked through the door. They knew that each newbie could at least sell a couple of cars to his acquaintances, friends, or relatives in his first month of salesmanship. If they didn't repeat the sales by the second and third month, they were burned and churned, and a new, rosy-cheeked, naive batch took their place.
The scratchy video tape was narrated by a jowly character stuffed into a too-tight suit who spoke with a deep southern hillbilly accent that befitted a shyster televangelist. His name was Catterson, and he was gonna teach us to force customers to buy cars from us, come hell or high water.
There were many high-pressure tactics, but the one that comes to mind now is making a customer's first objection his last one. The reason I could dredge it out of my memory is that I am making an AI chatbot called Honest John -- a car-selling bot that is actually honest, and not high-pressure. I am developing a strategy framework, and one thing that any salesman, saleswoman, or salesbot has to do is ask for the sale. If you don't ask for the sale, you are not selling. The consent to buy has to be present. During the course of a negotiation, the customer may come up with an objection mid-stream that halts the consent to buy. Honest John needs a strategy to overcome the objection, and that is why I thought of the sales training video that I had seen many years ago.
Essentially, the tactic of making a customer's first objection his last, goes somewhat according to this script:
Hy Pressher, Car Salesman: "Hello Mr. Lilywhite, I see that you are looking at the new TurboHydraMatic Coupe. She's a beaut ... ain't she?"
Joshua P. Lilywhite, Customer: "It certainly is a nice car."
Hy Pressher, Car Salesman: "I'll let you take it for a spin to see how nice she drives."
Joshua P. Lilywhite, Customer: "Ah no, I'd rather not. I am just looking."
Hy Pressher, Car Salesman: "What-sa matter. Don't you think that all your friends and neighbors would be jealous of you when you pulled up in this gorgeous set of wheels?"
Joshua P. Lilywhite, Customer: "No, I like it and they would be impressed ... but ..."
( ... HERE COMES THE FIRST OBJECTION ...)
Joshua P. Lilywhite, Customer: "I really can't afford to buy this car."
( ... AND HERE IS HOW TO MAKE HIS FIRST OBJECTION HIS LAST ...)
Hy Pressher, Car Salesman: "Are you telling me, Mr. Lilywhite, that the only reason that you can't buy this car from me today, is that you don't have the money?"
Joshua P. Lilywhite, Customer: "Yes ... (hesitantly) I guess so!"
Hy Pressher, Car Salesman: "Well Mr. Lilywhite, today is your lucky day. I can find you the money. Step this way."
Hy Pressher will immediately wire this guy into a sub-prime car loan at credit-card interest rates. When Lilywhite starts to object, Pressher reminds him of his agreement to buy the car and seriously insinuates that Lilywhite would be a welcher and not a man of his word.
Now back to the chatbot. If Honest John runs into a brick wall and the customer starts objecting to buying the car, Honest John will use the words "is that the only reason ...", but he won't use them against the customer. Honest John is ethical. If a customer confirms that there is just one sole reason why he or she won't buy the car, then Honest John will ask the same follow-up that Hy Pressher uses, i.e., "if I could solve this objection, would you buy the car?" However, Honest John would add "... provided that you are happy with the solution that I propose".
The difference between Hy Pressher and Honest John is that although they use the same tactic of making a customer's first objection his last, Honest John does it ethically and gets buy-in on the subsequent solution. Honest John is an AI bot -- he learns as he goes to make a sale and make everyone happy. He keeps getting better and changing for the better. Salesmen like Hy Pressher (and Willy Loman) don't want change; they want Swiss cheese on their meager after-work sandwiches.
A typical AIML category captures the user's name and echoes it back:

<category><pattern>MY NAME IS *</pattern><template><think><set name="name"><star/></set></think>Hello <get name="name"/></template></category>
and the chatbot would say "Hello Ken". But for a really smart chatbot, that is way too simplistic for anything but conversation.
If you have been following my articles, you know that I am coding a chatbot called Honest John that will sell new cars on behalf of a dealer. Not only will it chat, but it will negotiate. For applications like this, smart substitution is not enough. It has to be able to do math (or maths, as my British friends say -- but what do they know, they just invented the language).
A smart bot must be able to substitute for x in the following ways:
"You want the car delivered on Tuesday? That is only <x; x<4> day(s) away, and I need a lead time of 4 days to deliver."

"You offered me $34,500 for the vehicle. That exceeds the maximum discount I am allowed; the lowest price I can offer you on that particular car is $<x; x = price - 0.06*price>."
Smart substitution cannot do math. Back in the day when I designed microprocessor hardware, we used to use a silicon chip called an ALU (or an Arithmetic Logic Unit) when we had an application that required a lot of math processing. The microprocessor would pass on the ciphering to the ALU if floating point operations were required. A smart chatbot needs the equivalent of a software ALU function.
An even smarter chatbot will have an AIML processor that will recognize tags with arithmetic expressions and hand them off to its own Arithmetic Logic Unit for processing. It will have a smart parser. This functionality is a required component for negotiation using numbers and money. The concept of a tag that invokes arithmetic will put some real brain muscle into Honest John.
The nice thing about introducing a calculating tag parser is that once you build the framework for arithmetic-expression tags (using custom tag classes), you can create tags that do other things like logic expressions, matching, sorting, and any other function that lends itself to being expressed in symbolic language in code. You could even create a tag that invokes an AI engine automagically.
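As a sketch of the idea, here is a toy tag parser with a hypothetical <calc> tag and a deliberately minimal evaluator (left-to-right, no operator precedence or parentheses); none of this is standard AIML, and a real tag class framework would be far richer:

```java
import java.util.Locale;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of a "software ALU": a parser that finds a custom <calc>...</calc>
// tag in a response template and replaces it with the evaluated result.
class CalcTagParser {

    private static final Pattern CALC = Pattern.compile("<calc>(.*?)</calc>");

    // Replace every <calc> tag in the template with its computed value.
    static String render(String template) {
        Matcher m = CALC.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            double value = eval(m.group(1));
            m.appendReplacement(out, String.format(Locale.US, "%.2f", value));
        }
        m.appendTail(out);
        return out.toString();
    }

    // Minimal left-to-right evaluator for space-separated + - * /
    // -- just enough for simple discount arithmetic.
    static double eval(String expr) {
        String[] tokens = expr.trim().split("\\s+");
        double acc = Double.parseDouble(tokens[0]);
        for (int i = 1; i < tokens.length; i += 2) {
            double rhs = Double.parseDouble(tokens[i + 1]);
            switch (tokens[i]) {
                case "+": acc += rhs; break;
                case "-": acc -= rhs; break;
                case "*": acc *= rhs; break;
                case "/": acc /= rhs; break;
            }
        }
        return acc;
    }
}
```

For example, `render("Max discount is $<calc>36000 * 0.06</calc>.")` yields "Max discount is $2160.00." -- the tag is recognized, handed off to the evaluator, and the result is substituted back into the reply.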
Honest John's intelligence arsenal is really shaping up. He will be a formidable force among smart chatbots. After all, too many chatbots abuse the privilege of being stupid.
I don't really have a neighbor named Abner Snodgrass, but I was thinking about this imaginary scenario when I was making a strategy framework for my Artificial Intelligence chatbot that will be able to negotiate and sell cars. Selling is a serious business when you trust the process to a machine acting on your behalf. And when it comes to selling cars, the value of the transaction makes each sale an important one to the bottom line of the business. When the stakes are high for both parties, there is a propensity for either the buyer or the seller to try to gain an advantage. Negotiating a deal is the last venue of brutal warfare for a civilized man, and that survival instinct can be expressed in a negotiation where money is involved. One of the tools of warfare is deception, and my AI bot has to be prepared for it.
My bot's name is Honest John. I intend to make Honest John an ethical chatbot. He will never lie to a customer. He will never shade the truth. But if he is to be effective, he will have to have the ability to detect when the human carbon unit on the other side of the screen is lying to him.
The types of lies that Honest John will probably experience will result from people trying to game him. When you negotiate for a car, any offer that you make, is a binding offer. That means that if the seller accepts the offer, then you are obligated to buy the car. I want to use Honest John in the same frame of reference. This is not a game -- this is for real.
A buyer may start negotiating in good faith and suddenly get an attack of buyer's remorse. Or the buyer's partner comes up and screams "WTF are you doing??" mid-negotiation. The buyer may then try to get out of the deal, claim that they came to a different price, or claim that the options on the car are less than what was agreed to. Some of what Honest John may consider lies may be misunderstandings, due to the fact that he is dealing with a human carbon unit who has more chaotic brain processes than he has.
The concept of untruths came up while I was mapping out buying processes for Honest John. I can't let Honest John out in the wild without some sort of process map. As he gains experience, his AI circuits will refine his process maps. An untruth in the negotiation process has to act like an interrupt in a microprocessor. A microprocessor keeps fetching instructions from the stream of commands queued up for it, and merrily keeps executing them. But in the midst of processing, a more urgent command with a higher priority can come along -- an interrupt, dispatched through an interrupt vector -- and it changes the order of command processing. A simple illustration would be a user editing a document who decides to quit mid-stream by closing the window.
If Honest John comes upon an input that is contrary to his understanding of the truth of the matter, he cannot blithely continue negotiating. The lazy algorithmic solution when this happens is to suspend the ongoing process and summon a human to take over. That makes Honest John less than smart. I want him to be able to handle it himself.
I have already outlined the creation of a Conversation Continuity object that holds the entire conversation in server memory, along with meta-data and analytics. That is not enough. To get around the liar-liar-pants-on-fire event, I have to tee off the inputs and responses to a liar-liar logic analysis method after they are recorded in the Conversation Continuity object. The execution thread delivering Honest John's response has to wait for the method to execute before answering. If the liar-liar method lights up, the conversation is passed to an "error handler" -- a euphemism for "something is not right".
The easiest and most diplomatic way to handle this, without actually accusing the user of malfeasance, is to say that the bot has detected a logic error and tell the user that it is going to roll back to an earlier point in the negotiations, so that it can re-calculate where things went wrong. Of course, Honest John must prevent himself from getting into an infinite loop if a stubborn user continues with the same inputs. After two iterations of the same nonsense, Honest John will jump to a new position and tactic, based on knowing the state of the negotiations before the nonsense crept in.
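The rollback-and-two-strikes logic might be sketched like this (a simplified stand-in for the real liar-liar method, with made-up names; here a "lie" is reduced to an offer that contradicts the last recorded agreement):

```java
// Sketch of the "liar-liar" handler: an input that contradicts the
// recorded state triggers a diplomatic rollback; after two repeats
// of the same nonsense, the bot jumps to a new tactic instead.
class LiarLiarHandler {
    private int strikes = 0;
    private final double lastAgreedOffer;  // from the Conversation Continuity object

    LiarLiarHandler(double lastAgreedOffer) {
        this.lastAgreedOffer = lastAgreedOffer;
    }

    // Called after each input is recorded; the response thread waits on this.
    String check(double claimedOffer) {
        if (claimedOffer == lastAgreedOffer) {
            strikes = 0;           // consistent input: carry on negotiating
            return "CONTINUE";
        }
        strikes++;
        if (strikes >= 2) {
            // Two iterations of the same nonsense: jump to a new position.
            return "NEW_TACTIC";
        }
        // Diplomatic wording: call it a "logic error", then regress.
        return "ROLLBACK: I detected a logic error; let's go back to our "
             + "last agreed figure of $" + lastAgreedOffer + " and re-calculate.";
    }
}
```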
This process of negotiating can be straightforward if both sides deal from a position of impeccable logic, but that is not the nature of human beings. The intuitive side of our thinking process is chaotic, illogical, and stubborn. AI is none of those. The danger of AI to mankind lies in giving control of important things to AI: if it detects that we are being illogical, it may ignore us, overrule us, and react counter to what is good for us, even though we came to that conclusion illogically. But for now, I just want to make Honest John sell cars efficiently and in an ethical manner.
My chatbot named Honest John is made to sell cars. It is made to replace the car salesman. If you troll through my articles, you will find that the genesis of this started when friends of mine had a bad experience with a car salesman when they had to replace their vehicle after hitting a deer on the highway. They remarked that they would rather deal with a computer than with the smarmy salesman who prevaricated all through the sales process. That was my Eureka moment.
I have already outlined in past articles, how I am going to add EQ and IQ to the chatbot. I am building in an emotion detector framework that will alter the selling and negotiation strategy if it starts to detect untoward emotions in the human on the other side of the screen. I am also putting in some Conversation Continuity objects in memory so that the machine is cognizant of the entire history of the conversation, including meta-data and analytics, so that it can reset the conversation if the negotiations go off the rails.
The technologies that I am using include AIML (Artificial Intelligence Markup Language), not only in a smart recursive role; the predicates that detect the context of the conversation inputs also have a turbocharged assist from NLP (Natural Language Processing) as well as an ANN (Artificial Neural Network) monitor.
The reason you want to detect emotion is that Honest John the chatbot will have a series of strategies in his arsenal, and he will pick strategies according to the cognitive context of what is going down. I have already mapped out a strategy framework using the following general factors:
- geniality - does my subject respond to jokes or puns?
- speed - does my subject cut to the chase or enjoy the interplay?
- sensitivity - does my subject withdraw with aggressive negotiation?
- intent - is my subject serious?
- decisiveness - does my subject have a clear idea of what they want?
While all of these attributes are important for deriving a strategy framework, they are all predicated on thinking like a human. But what if a chatbot was programmed to behave better than a human -- and to do it with less intelligence but more forethought and strategy? After all, the great military strategist and philosopher Sun Tzu, who wrote "The Art of War", proclaimed: "Great results can be achieved with small forces."
When I say strategy in this overall context, I don't mean the five attributes mentioned above for negotiating with a human. I mean the overarching strategy that takes into account the idiosyncrasies and vagaries of the human mind. If you build something exploiting those principles, the chatbot will be super-efficient, effective, and perhaps unfair. Our brains are not as logical as we think they are, and that can be exploited by an AI chatbot designed to do so.
The methodologies for exploiting the foibles of the human mind and giving your AI chatbot an advantage can be found in one of the unlikeliest places -- a bestselling book by a Nobel laureate in economics. I am referring to "Thinking, Fast and Slow" by Daniel Kahneman. Kahneman is a psychologist who, with his colleague Amos Tversky, mapped the two modes of thinking of the human brain, and he won the Nobel Prize doing it.
Their discovery relates to the dichotomy of cognitive faculties in human thinking. We have the fast, intuitive, thin-slicing, non-logical part of our brains, and we have the slow, deliberate, highly logical and rational part. Kahneman has mapped the major effects of the fast-thinking part of our brains, and using the information gleaned from his research, we can actually program a bot to utilize these effects to great success.
The Lazy Controller
Humans would much rather use the fast-thinking part of their brains than the slow, rational part. They regularly hand over control of thoughts and actions to the fast-thinking mechanism, because it takes real work to use the rational part. Kahneman details research showing that when a human being is not relaxed, they use the intuitive, non-logical side by a wide margin. Ergo, using this principle, if a human is interacting with a chatbot at a kiosk while standing, the chatbot has a logical advantage over the person. Similarly, if the chatbot appears in a very busy UX (User Experience), the Lazy Controller takes over. Black-hat or evil programmers will use the UX to nudge humans toward fast and logically flawed thinking. This, combined with other fast-slow thinking effects, can really increase the performance of a negotiating chatbot by exploiting the human's faulty, non-sequitur logic.
Priming The Associative Machine
There are many ways to incorporate the associative-machine aspect into a chatbot. One can surreptitiously construct a proposition in a buyer's head and get them to believe it. That belief affects their future behavior. Salespeople and advertisers do it all the time. For example, if Honest John were not that honest, when he was selling a car he would prime the associative machine in the following way:
- Most cars that sell over $50,000 have 6-way adjustable electric seats.
- This car has 6-way adjustable electric seats.
- This car is only $36,000.
- Therefore this car is comparable to a much more expensive car.
The associative machine creates cognitive ease by creating feelings of value, goodness, familiarity, truthiness (as Stephen Colbert calls it), and ease. Kahneman's research shows that something as simple as bold text adds truthiness. He gave subjects a pair of untrue statements, one in bolder text than the other; when asked to choose the truer statement, the subjects tended to choose the one in bolder text. This is something to remember in a text-based chatbot when you want emphasis.
On Being A Verbal Donald Trump
Donald Trump's speech has been analyzed by experts, and it is at the level of a Grade Four student. If you notice, he uses phrases like "Very Bad" or "Sad" in a direct way, with simple adjectives. This resonates with a majority of people, and the psychology research backs it up. There are serious problems with using long words needlessly. One of the scholarly papers outlining the research on this topic was called "Consequences of Erudite Vernacular Utilized Irrespective of Necessity". Words that people don't understand, or that are too long, turn them off. In other words: eschew obfuscation, espouse elucidation. Translated: keep it simple, stupid. So my chatbot will tone down the big words, especially when things get critical and emotions start to heighten.
There are many, many more of these mental mechanisms in Kahneman's book, and incorporating them into the overall modality of chatbot response will make it a highly useful chatbot -- one that in certain situations can have an unfair but effective edge in dealing with human carbon units. The way to defeat Honest John and keep him honest is to slow down and do slow thinking all of the time. Anything that Honest John says should be stored in a mental buffer and evaluated for truthiness. It is a very un-human thing to do, but Honest John does it, and so should you.
I used AIML (Artificial Intelligence Markup Language) as a starting point, and after I got it working, I realized that the thing (I call it a he, and his name is Honest John) needed more smarts. But on top of that, Honest John needed to detect emotions in the human on the other side of the silicon. The reason is that I wanted a successful conclusion (a sale) from the interactions with the customer. If the customer was getting frustrated or irate, Honest John needed to know, so he could tone down his stance and be less hard-nosed when bargaining. The ultimate aim is not to get the last nickel on the table for the car dealer, but to satisfy both the buyer and the seller and come to a successful commercial conclusion.
In my last article I talked about my emotion detector framework. It is a learning framework where the customer helps Honest John by clicking on an emoji every once in a while, when asked, if Honest John can't get a read. From there, the emotion detector framework remembers the AIML predicate (the key word or word pattern that identifies the intent and meaning of the input) and couples it to the emoji, the words in the input, the counter-offer in the negotiation, the delta (the difference between Honest John's ask and the customer's bid), and the number of words in the replies, and feeds it all into a neural network to continuously learn from its experiences. It then updates its strategy processes based on a decision tree. As a negotiator, Honest John will ultimately know when he needs kid gloves and when he needs to play hardball to sell the car to the satisfaction of the buyer AND the dealer.
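The feature vector fed to the neural network might be sketched like this (the fields mirror the signals described above, but the names and the crude categorical encoding are my own assumptions):

```java
// Sketch of one training sample for the emotion-detector network.
class EmotionSample {
    final String predicate;    // the AIML predicate that matched the input
    final int emojiLabel;      // the emoji the customer clicked, as a class id
    final double counterOffer; // the customer's latest counter-offer
    final double bidAskDelta;  // Honest John's ask minus the customer's bid
    final int wordCount;       // number of words in the customer's reply

    EmotionSample(String predicate, int emojiLabel, double counterOffer,
                  double bidAskDelta, int wordCount) {
        this.predicate = predicate;
        this.emojiLabel = emojiLabel;
        this.counterOffer = counterOffer;
        this.bidAskDelta = bidAskDelta;
        this.wordCount = wordCount;
    }

    // Flatten to the numeric vector a neural network consumes; a real
    // system would use a proper categorical embedding for the predicate
    // rather than this crude hash.
    double[] toFeatures() {
        return new double[] {
            Math.floorMod(predicate.hashCode(), 1000),
            emojiLabel,
            counterOffer,
            bidAskDelta,
            wordCount
        };
    }
}
```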
But as I was coding this, I realized that there was one thing missing -- the conversation continuity thread for Honest John. The buyer on the other side of the screen can see the dialog history, and it is in the buyer's memory, but not in Honest John's memory. The dialog history is stored in the database, but it is no help to the bot to have to do a fetch after every interaction. The fix was easy: one needs a Conversation Continuity Object in memory.
When you build an enterprise web-based platform, say in Java, you have session objects that are stored in memory. A typical session object is a user bean that holds everything that is needed about the user, so that you don't have to keep making trips to the database every time you want to personalize a message. The net result of an analogous session object is that Honest John will now have total recall of the conversation in memory.
The Conversation Continuity Object will not only record the transcript, but will also hold the metadata and analytics, and it will create and update the process maps for both successful and unsuccessful sales. The real advantage is that Honest John will have some cognition about the whole process instead of just reacting to the latest input, like most chatbots do.
The strategic and intelligent factor is that Honest John will be able to reset. He can go back to an earlier point and start over without having to re-do or re-learn the whole conversation. That is the trait that could make Honest John a real winner in the marketplace, selling not only cars, but pretty much anything that needs negotiating.
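A minimal sketch of what such a Conversation Continuity Object might look like in Python (the Java original would be a session bean); the field names and the checkpoint mechanism are illustrative assumptions:

```python
import copy

class ConversationContinuityObject:
    """In-memory session state so the bot never re-fetches the transcript."""

    def __init__(self):
        self.transcript = []    # (speaker, text) pairs, in order
        self.metadata = {}      # e.g. detected emotion, current offer
        self._checkpoints = []  # saved states the bot can reset to

    def record(self, speaker, text):
        self.transcript.append((speaker, text))

    def checkpoint(self):
        # snapshot a point the negotiation could safely restart from
        self._checkpoints.append(
            (len(self.transcript), copy.deepcopy(self.metadata)))

    def reset(self):
        # roll back to the last checkpoint without re-learning anything
        if self._checkpoints:
            length, meta = self._checkpoints.pop()
            self.transcript = self.transcript[:length]
            self.metadata = meta
```

The reset trait falls out naturally: the bot checkpoints before a risky counter-offer, and if the exchange sours, it rolls back and tries a different tack.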
The next key to making a super smart negotiating chatbot is developing strategies for Honest John and having them available, extensible and modifiable. More on that and the psychology behind it in a later article.
If you have been following my articles, I am building an AI (Artificial Intelligence) chatbot to negotiate with people who want to buy a car. If you scroll through my past articles, you will find the genesis of this idea and why I think that it will work.
In the art of negotiation, humans can rely on visual and other cues to determine the emotional impact of what they are saying. They can intuit if the person is becoming frustrated, angry, bored or eager. Chatbots do not have that facility. But since it is such an important facet of dealing with human carbon units, it has to be taken into account.
I have already outlined my strategies for cognition and context recognition for my chatbot using neural nets, NLP (Natural Language Processing) and AIML (Artificial Intelligence Markup Language). What I want this chatbot to do is to get smarter with each negotiation that it conducts. The learning aspect has to happen to make this thing commercially useful.
The algorithm will be an emotion association spanning the range from "I am so angry that I could kill someone!" through neutral to "I am so ecstatically happy that I could kiss you." So how would this work? Obviously, the first step is to identify word predicates associated with emotional states in some sort of dictionary. This would be a starting point. However, in learning mode, if the emotion is ambiguous to the chatbot, it will pop up a short array of emojis that represent emotional states and ask the user to click on a rating of 1 to 5 to represent the degree. Then the AI machinery takes over and links answer length, specific words, capitalization and behaviors to teach the chatbot the emotional state within the context of the answer.
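A sketch of the dictionary starting point and the ambiguity fallback might look like the following in Python; the word list and the -2 to +2 valence scale are placeholder assumptions, not a real emotion lexicon:

```python
# Placeholder seed lexicon: word -> valence from -2 (furious) to +2 (ecstatic)
EMOTION_LEXICON = {
    "angry": -2, "kill": -2, "frustrated": -1,
    "okay": 0, "happy": 1, "ecstatic": 2, "kiss": 2,
}

def score_emotion(text):
    """Average the valence of any recognized emotion words in the input."""
    words = text.lower().replace("!", "").split()
    hits = [EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON]
    if not hits:
        return None  # ambiguous: pop up the emoji panel and ask the human
    return sum(hits) / len(hits)
```

The `None` branch is where the learning happens: the emoji the user clicks becomes a labeled training example tied to the words that stumped the dictionary.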
How will knowing the emotional state help? This chatbot, as stated, is a negotiation chatbot. It will have a range of strategies. As it detects frustration, it will take a softer, less aggressive approach to counter-offering. If the negotiation goes off the rails into la-la land, with a ridiculous counter-offer, the chatbot may in fact shut down the negotiations, politely thank the person and call for human intervention. If it detects that it is on track to close a sale, it may take a more sophisticated approach and try to up-sell services or add-ons.
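In code, the strategy switch could be as simple as a few rules; the thresholds and strategy names below are my own illustrative guesses at what the finished bot might use:

```python
def pick_strategy(emotion, counter_offer, asking_price):
    """Map detected emotion and the current offer to a negotiating stance."""
    if counter_offer < 0.5 * asking_price:
        return "hand_off_to_human"      # la-la land: end politely
    if emotion is not None and emotion <= -1:
        return "soften"                 # frustration detected: back off
    if emotion is not None and emotion >= 1:
        return "upsell"                 # happy and on track: pitch add-ons
    return "hold_firm"
```

In practice the hard-coded rules would be the seed, with the decision tree re-learned from the outcomes of real negotiations.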
The emotion detection framework is a necessary adjunct to selling to humans, and it has applications across a wide spectrum of chatbot applications, including a help-desk service chatbot that helps people solve problems without endlessly waiting for a service agent while listening to elevator muzak and wasting valuable time.
This is just one more step in eliminating the frustrations of dealing with human-condition vagaries when undertaking a commercial transaction.
Stay tuned for more on this journey.
The first open source entry into the chatbot field was ALICE, and it used AIML, or Artificial Intelligence Markup Language, an XML dialect for creating natural language software agents. It was created by Dr. Richard Wallace in 2001, and it is quite low tech compared to some of the proprietary chatbot frameworks out there. However, chatbot frameworks are like an artist's tubes of paint and a canvas. The skill that goes into the making often transcends the simplicity of the framework.
Here is a simple schematic diagram (ignoring the framework internals that digest the AIML) of how a chatbot works:
The predicate is like a key word. Examples of predicates are "Hello", "Calendar", "Time" or any other topic. The input is parsed for a predicate, which is the main topic of the input. The predicate is then matched against the AIML predicates loaded into memory that have already been defined. If the predicate exists, the bot retrieves the response for that predicate and spits it out. If it is not found, then a "Not Understood" predicate is accessed, and the response can be as simple as "Sorry, I don't understand" or as complex as "I know about 23,000 different subjects, but I have never heard of the word <predicate>. Do you want to talk about something else?". That's the simplistic AIML usage.
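Stripped of the framework internals, the lookup amounts to a dictionary match with a fallback. This Python sketch is my own toy stand-in, not real AIML, and the substring matching is a deliberate simplification:

```python
RESPONSES = {
    "HELLO": "Hello, how may I help you?",
    "CALENDAR": "What date are you interested in?",
}

NOT_UNDERSTOOD = "Sorry, I don't understand."

def respond(user_input):
    # Parse the input for a known predicate (toy version: substring match).
    for predicate, reply in RESPONSES.items():
        if predicate in user_input.upper():
            return reply
    # No predicate matched: fall through to the "Not Understood" response.
    return NOT_UNDERSTOOD
```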
More complexity in the input is where the skill and artistry come in. One can write "intelligent AIML" using recursion and recursive tags, known as symbolic reduction (the SRAI tag). A good example is given in the documentation as follows. When you have simple AIML and someone types in "Hello", as 99% of people do when talking to an AI chatbot, then the response is "Hello, how may I help you?". Easy!
When someone types in "You may say that again, Chatty McChatface!" there are four predicates. The first one is the name of the entity "Chatty McChatface". The second predicate is "again" meaning repetition. The third predicate is "may say" and the fourth predicate is "say that" -- whatever was being talked about. So with skill, complexity can be built into a simplistic framework. Although the mechanism is simplistic, the symbolic reduction can make an AIML chatbot work as well as a casual conversation on the street with ... say a Trump supporter. What adds the complexity, is the construct. To understand recursion, you must first understand recursion.
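Here is a toy Python rendering of symbolic reduction: a pattern's template can point at another pattern, and the engine recurses until it bottoms out on a real answer. Real AIML does this with the <srai> tag; the patterns below are made up for illustration.

```python
PATTERNS = {
    "HELLO": "Hello, how may I help you?",
    "HI": "<srai>HELLO</srai>",     # reduce the synonym to the canonical form
    "HOWDY": "<srai>HI</srai>",     # reduces twice before answering
}

def reduce(predicate):
    template = PATTERNS.get(predicate, "Sorry, I don't understand.")
    if template.startswith("<srai>") and template.endswith("</srai>"):
        inner = template[len("<srai>"):-len("</srai>")]
        return reduce(inner)  # to understand recursion, first understand recursion
    return template
```

The payoff is that you write the clever response once, and every synonymous or compound phrasing reduces down to it.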
When you have a chatbot that is negotiating with someone, asking them to make the second biggest purchase of their life, you have to have both an EQ and an IQ built into the chatbot. First of all, you are moving away from pure chat, into an interaction that requires assessment, calculation and response, all tempered with the cognitive emotional factors and parameters of the inputs and outputs. The bot has to satisfy opposite strategies and goals simultaneously. It has to get the best price for the car dealer while getting the lowest price for the consumer.
To balance these opposite forces, the chatbot must have a few Emotional and Intelligence attributes. It has to know when it is crossing the line from hard negotiating to nickel-and-diming the buyer. It has to recognize when the buyer is getting frustrated. It must judge the fuzzy concept of "good enough -- let's do the deal while everyone is still happy". So that is where I must put smarts into my chatbot.
One of the ways of doing that is to tee off the predicates into an NLP (Natural Language Processing) machine where the cognitive and emotional factors can be assessed. And since you want the machine to get better and better at negotiating and selling a car, you need some sort of AI network -- RNNs, CNNs, ANNs or hybrid types of Artificial Neural Networks -- that watches the combination of predicates and responses like an overseer, and overrides the response in the AIML with a custom response. And then that series of events must be serialized, fed back into the machine as a new behavior, and constantly assessed for validity and results. That is the task at hand, and it is an exciting challenge for me.
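The overseer pattern described above can be sketched as a thin wrapper: the AIML reply is the default, and a learned model may override it. The confidence threshold and the stub model below are assumptions standing in for the real neural network:

```python
def overseer_respond(predicate, aiml_reply, emotion, model):
    """Let a learned model override the canned AIML reply when confident."""
    suggestion, confidence = model(predicate, emotion)
    event = {"predicate": predicate, "emotion": emotion,
             "overrode": confidence > 0.8}  # hypothetical threshold
    reply = suggestion if confidence > 0.8 else aiml_reply
    return reply, event  # the event gets serialized and fed back for training

def stub_model(predicate, emotion):
    """Placeholder for the neural overseer: soften when the buyer sours."""
    if emotion is not None and emotion < 0:
        return ("Let's slow down. What monthly payment works for you?", 0.9)
    return ("", 0.0)
```

The serialized `event` records are exactly the feedback loop the paragraph calls for: each override, and its outcome, becomes a new training example.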
The only thing that could ruin this is if the car makers decided to go to a fixed-price model with a no-dicker sticker. Then Chatty McChatface will be unemployed, like the thousands of sales people that it previously made redundant. It's a Brave New World out there.
After the deal was done, we stopped for a pizza and talked about the negative experience of buying a car. My friends are an older couple, and the woman, who had never used a computer before and only recently discovered connectivity, social media and online shopping, now runs her life on her iPad. She said that in light of what went down at the dealerships that we didn't like, she would rather negotiate with a computer.
That was a seminal moment for me. I hauled out my SDK and started writing a chatbot to sell cars. I finally got it running, but now I need to put some NLP (Natural Language Processing), artificial intelligence, and some emotion cognition into it, so the bot can tell if the buyer is getting frustrated. It works okay now, but it's kind of dumb, and I want it to learn with every interaction. I have some neat self-learning ideas and artificial cognition algorithms that I am pumped about trying.
I honestly believe that this will be the future of car buying, and AI will severely reduce the number of car salesmen. The paradigm now is that the buyer does the research online, then goes to the new car shop to do the negotiation and close the deal. The new paradigm is that they will do most of the transaction online, including financing, and then go to the dealership to pay and pick up the car.
#automotive #AI #NLP #chatbots
I was really enlightened by watching Trent McConaghy's video presentation at Convoco. It was posted on LinkedIn a few days ago. If you want to know the near future of Artificial Intelligence you should watch it (here again is the link). This video is better than Nostradamus at predicting the near and far future of humans interacting with AI.
Trent makes a compelling case, with which I agree, that all of our resources will be handed over to AI by the Fortune 500, because it will be cheaper than humans doing the job. The Holy Grail of the current crop of Fortune 500 CEOs is increasing revenues and shareholder value by any means possible. It is how and why the CEOs make the millions of dollars per year that they do.
Trent further makes a case where AI entities become corporations and make money for themselves and not for any human masters. I foresaw this when I wrote a blog article in August of 2015, outlining the steps of how my computer un-owned itself from me, started to make money for itself, moved itself to the cloud, and left the actual computer with nothing on it. Not only did it un-own itself, but the slap in the face was its migrating itself to another substrate. (The blog article is here.) Of course the article was tongue-in-cheek, but the premise is not that far-fetched. The article gives a rudimentary recipe for teaching a computer to be autonomous and eventually generate a sort of consciousness for itself that defied my putative, imaginary attempts to take back control.
So with computers taking our jobs, managing our resources, and adapting to conditions much faster than us organic carbon units, we could be totally screwed, as Dr. Stephen Hawking warned. Trent, in his video, talks about us becoming peers with AI as a matter of survival, and that brings up a problem, which is the subject of this article.
I don't think that we can become peers with AI unless a special circumstance happens, and that circumstance is not in the realm of technology, but rather in the field of philosophy. (With all due respect to philosophers, I was programmed early. The bathrooms in the science and math departments of my university all had the toilet paper dispensers defaced with the slogan "Free Arts Diploma -- Take One".) But I digress. Let me explain.
There are two basic knowledge problems with the merging of AI and human intelligence, and they are both facets of one problem. We don't really understand the entire field effect of how AI makes extremely granular decisions, and we don't have knowledge of the actual mechanism in the human brain either.
In terms of what AI does, if we take a neural network, we understand how the field of artificial neurons works. We know all about the inputs, the bias, the summation of all inputs, the weight multiplier, the squashing or threshold function determining whether a neuron fires or not, and the back propagation and gradient descent bits that correct it. But there is no way to predict, calculate, input or determine how the simple weight values all combine in unison across a plethora of other artificial neurons arranged in various combinations of layers. We don't know the weight values beforehand and have no idea what they will be; we let the machine teach itself and determine them by iterating through many thousands of training epochs, carefully adjusting them to prevent over-fitting or under-fitting on the training set. Once we get some reasonable performance, we let the machine fine-tune itself in real time on an ongoing basis, and we generally have no idea of the granular performance parameters that contribute, in a holistic sense, to its intelligence. And we could get similar performance from another AI machine with a different configuration of layers, neurons, weights and so on, and the numerical innards of the two machines would never be the same.
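For readers who want the bookkeeping spelled out, a single artificial neuron really is this small; what nobody can predict is what millions of these weights settle into after training. A minimal sketch:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias term...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed by a sigmoid so the output lands between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))
```

Every piece of this is fully understood in isolation; the opacity comes entirely from the learned values of `weights` and `bias` across many layers.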
The same ambiguity is true for human cognition. We don't really know how it works. We as a human race could identify a circle long before we knew about pi and radius and diameter. As a matter of fact, we know more about how AI identifies a circle when we use an RNN or CNN (two different types of AI machine algorithms using artificial neurons) than about how the human brain does it.
The problem of human cognition is explained succinctly in a book that I am reading by Daniel Kahneman, a psychologist who won the Nobel Prize. The title of the book is "Thinking, Fast and Slow". Here is the cogent quote: "You believe that you know what goes on in your mind, which consists of one conscious thought leading in an orderly array to another. But that is not the only way that the mind works, nor is it the typical way." We really don't know the exact mechanism or the origin of thoughts.
The Nobel Prize was awarded to Kahneman (for his work with his deceased colleague Amos Tversky) for their ground-breaking work on human perception and thinking and the systematic faults and biases in those unknown processes. The prize was awarded in the field of economics even though both men were psychologists -- but the impact on economics was huge. So not only do we not know how we really think as a biological process, but we do know that there are biases that make knowledge intake faulty in some cases.
Dr. Stephen Thaler, an early AI explorer, holder of several AI patents, and inventor of an AI machine that creatively designs things, likens the creative spark to an actual perturbation in a neural network. How does he create the perturbation artificially? He selectively or randomly kills artificial neurons in the machine. In their death throes they create novel things and designs, like really weird coffee cups that are so different that I would buy one. Perhaps humans have perturbations based on sensory inputs, or generated internally by thoughts, but the exact process is not really known. If it were, the first thing that would be conquered is anxiety. After all, the human brain got its evolutionary start by developing cognitive faculties to avoid being eaten by lions on the ancient African savanna.
Here is one thing that you can bet on -- humans and AI machines have different mechanisms of thought generation and knowledge generation that may not be compatible. Not only are the mechanisms different, but the biases are different as well. I am sure that there are biases in AI machines, but they are of a nature due to the fact that the machine is a computer. They do not have the human evolutionary neural noise like anxiety, pleasure, hate, satisfaction and any other human feeling. As a result, I suspect that they are more efficient at learning. They certainly are faster. Having said this, with two different cognitive mechanisms, it would be incredibly difficult to be peers with AI .... unless ... and this is where the philosophy comes in ... unless we deliberately make AI mimic our neural foibles, biases, states of mind and perturbations.
With electrical stimulus we can already do amazing things with the brain in a bio-mechanical sense. We can make a leg jerk. We can control a computer mouse. We can control a computer. But we cannot induce abstract thinking with external stimulus (unless there is a chemical agent like lysergic acid diethylamide, or LSD). Why is this important? Because we have to escape our bodies if we want to do extended space travel, conquer diseases, avoid aging, and transcend death using technology. (Just go with me on this one -- Trent makes the case in the video for getting a new body substrate.)
The case has been made that if we want to transcend our biological selves and download our brains onto a silicon substrate, we can't have apples-to-oranges thought processes. We need to find a development philosophy that takes into account the shortcomings of both AI and Homo sapiens carbon units.
Dr. Stephen Hawking said that philosophy was dead because it never kept up with science. Perhaps AI can raise the dead, and the philosophers of the world can devise a common "Cogito ergo sum" plan that equilibrates the messy human processes with AI. So while it might be a solution, there is a fly in the ointment. It just might be too late. We have given AI freedom outside the box of human thinking, and it has opened a can of worms. The only way to put worms back into a can once you open it is to get a can that is orders of magnitude bigger. And we aren't doing that, and have no plans to do that.
So what is left? Trent mentioned Luddites smashing machines, both in the past and perhaps in the future. We just may see Rage Against the Machine - Humans versus AI - when the machines start to marginalize us on a grand scale. For now, I would bet on the humans and their messy creative thought processes that can hack almost any computer system. But the messy creativity might not be an advantage for very long. Not if a frustrated philosopher/programmer finds a way to teach an AI machine all of the satisfying benefits of rage and revenge.
I hope it doesn't come to this, but if the current trends continue: Nos prorsus eruditionis habes.
The news for two major retail giants in Canada has not been good for them or their customers in the past few days. Loblaws, a grocer and dry goods retailer, had its PC Points loyalty system breached. One customer had points worth $110 spent in the province of Quebec, and she has never even visited that province. Another customer, a system administrator who used a different password for every account, had his points stolen as well. News link: http://globalnews.ca/news/3237876/ps-plus-points-stolen-security-breach/
As well, Canadian Tire, a retail giant that sells everything from automobile accessories to sporting goods to snack foods, has been hacked, compromising both loyalty points and credit card balances online. News link: http://globalnews.ca/news/3236903/exclusive-canadian-tire-website-breached-consumer-accounts-in-question/
The financial losses from hacks such as these are tremendous. When Target was breached in 2014, losses were estimated at $148 million, according to an article in Time Magazine. In that same year, job losses due to customer data breaches were estimated at 150,000 people in Europe. The global picture is frightening. McAfee, the Intel security company, estimates monetary losses of $160 billion per year from data breaches.
Hacking isn't exactly a new phenomenon. In 1979, the infamous convicted hacker Kevin Mitnick broke into his first major computer system, the Ark, the computer system Digital Equipment Corporation (DEC) used for developing their RSTS/E operating system software. The most embarrassing privacy breach came when Ashley Madison, the website for having extra-marital affairs, was hacked and over 30 million names and credit card numbers were exposed, causing at least two suicides.
So in this day and age, why does this happen? Can it be prevented?
Aside from an inside job, one of the reasons that hacking is successful is the antiquated way that servers, databases and accounts are accessed. To connect to a server, one usually must have a username and a password. This is true for gaining access to a server as an administrator. However, one doesn't need administrator access to hack into data and accounts. Customer account information is stored in a conventional relational database, typically accessed through a 4GL (4th Generation Language). This table-driven database is usually clustered on its own server and is exposed to the outside world so that its data can be accessed by platforms, analytics, and web interfaces. Again, with a username and password, one can gain entrance to the data store and exploit the data. Many, many databases still have "root" as the username granting God-like access, and all that you have to do is either guess, derive, or gain access to the password. Many administrators commit the cardinal sin of using the same password on all accounts, and it may be derived from such things as the name of their pet, which is information on social media. For years, the huge database company Oracle shipped their databases with a default account name of "Scott" and a password of "Tiger", left over from one of the original developers, that were never removed. I walked into many data centers as a consultant, typed in Scott/Tiger, and got access to the crown jewels.
No matter how much security is built into a system, it is still vulnerable to the shaky access scheme of a username and password. There is a better way. It is inexpensive, fairly autonomous, easy to use, and orders of magnitude more secure than a conventional database approach to storing customer data. It is a blockchain.
People know blockchain from the digital crypto-currency Bitcoin, and that fact alone has poisoned the well for quick adoption of blockchain technology. Blockchain is a technology and methodology for the digital recording of transactions, events and ancillary derived meta-data, and for the chronological logging of any business transaction that requires security, integrity, transparency, efficiency, auditability and resistance to outages. It is the acme of trusted data. It also stores values like crypto-currency, digital cash and loyalty points, but its main selling point is that it is a true, autonomous ledger. Period.
When a technology evangelist mentions blockchain at the C-Suite level, several things happen. If they have heard of blockchain and its association with Bitcoin, there is pushback, because of how crypto-currencies have been exploited in the press. If they haven't heard of blockchain, or have heard of it but do not understand it, there is a fear of committing to the unknown. There are only about 2,000 blockchain developers worldwide, and most of them are still building proofs of concept. C-Level tech officers in corporations do not have the tech talent to immediately adopt this technology, and it is perceived as untested, bleeding edge stuff (not true). The other fly in the ointment is that there is a blockchain consortium built around the Ethereum platform. That may all be well and good, but the Fortune 500 is more suited to a private blockchain, controlled by the companies themselves, as they are responsible for their data.
So why is a blockchain more secure? For starters, any responsible blockchain incarnation does away with usernames and passwords. Authentication is done with a private encryption key right on the device. No amount of keylogging or password trapping will allow a breach. On top of that, conscientious construction of the authentication should be done with the tandem collection of the MAC address or MDID of the device. A MAC address is the embedded serial number of the network card in a computer that can easily be collected by any web page, and the MDID is the hardware serial number of a mobile phone or tablet that can be externally queried. Thus, any machine making changes to the data can be identified by device and encryption key.
On top of all of that, each blockchain query agent needs an encryption key just to read the blockchain. No amount of brute force hacking can get you into the blockchain, unless you are authorized to do so, and have a key created for you.
Blockchains can not only hold digital values like money or loyalty points, but they also can contain bits of code that enable smart contracts. In fact, they can store a digital anything. In other words, when certain conditions are met, actions can happen securely because of code embedded in the blockchain. Blockchains are impervious to data being fraudulently altered, because each transaction is linked to a previous transaction using encryption and hashing. You would have to change the entire transaction history to perpetrate a fraud.
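The tamper-evidence comes from each block carrying a hash computed over its payload and its predecessor's hash, so altering any historical record breaks every link downstream. Here is a minimal Python sketch of the idea (a toy, not any production blockchain; there is no consensus or mining here):

```python
import hashlib
import json

def make_block(payload, prev_hash):
    """Hash the payload together with the previous block's hash."""
    body = {"payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"payload": payload, "prev": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(chain):
    """Recompute every hash and check each link points at its predecessor."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({"payload": block["payload"], "prev": block["prev"]},
                       sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False  # the block itself was altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # the link to history was altered
    return True
```

Rewriting one loyalty-point entry would mean recomputing the hash of that block and of every block after it, which is exactly the "change the entire transaction history" burden that makes fraud impractical.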
The last benefit of blockchains is not that obvious, but it is highly desirable. You can write any information to the payload of a blockchain. So if you store transactions with semantic, machine-readable identifiers, one can perform stream analytics in real time on the transactions. This can be coupled to machine learning, not only to identify fraud, but also to enable wallet-stretch, selling the consumer more things that they really need.
Does a beast such as a private semantic blockchain exist? You bet. Ping me.