All Things Techie With Huge, Unstructured, Intuitive Leaps

Dawkins, Wasps, Artificial Intelligence, Evolution, Memorability and Artificial Consciousness

Two items of interest came to my attention in the past few weeks, both of which have relevance to artificial intelligence. The first was a Twitter feed announcing that MIT had created a deep learning neural net tool to predict the memorability of a photo at near-human levels. The article was accompanied by some GIFs illustrating the predictions.


The ramification of this algorithm is that a whole bunch of feed-forward neural networks can accurately judge a human emotional response to an image.  These neural networks consist of artificial neurons that essentially sift through mounds of pics that have been annotated by humans for areas of memorability.

The ironic part of this whole artificial neural network business is that mathematicians cannot adequately describe, in integrated empirical detail, how these things operate in concert to do incredibly complex stuff.  Each little artificial neuron sums its inputs multiplied by their weights and uses something like a sigmoid function as an activation function to determine whether it fires or not.  In supervised learning, after the entire mousetrap game of neural nets has finished its job, the result is compared to the "correct" answer and the difference is noted. Then each of the weights in the network is adjusted to be a little more correct (back-propagation), using stochastic gradient descent, the methodology that determines how much each weight should be adjusted by.  Eventually the whole claptrap evolves into a machine that gets better and better until it can function at near-human, human, or supra-human levels, depending on the training set, the learning algorithms, and even the type of AI machine chosen to do the task.
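
To make that loop concrete, here is a minimal sketch (not MIT's model, just an illustration of the sum-the-weighted-inputs, squash, compare, and back-propagate cycle described above): a single sigmoid neuron trained with stochastic gradient descent on a toy AND function. The training data, learning rate, and epoch count are arbitrary choices for illustration.

```python
import random
import math

# Toy training set: the logical AND of two inputs, labeled by a "human".
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Random starting weights and bias.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)
lr = 0.5  # learning rate: how much to nudge the weights each time

for epoch in range(5000):          # one epoch = one pass over the examples
    for (x1, x2), target in data:
        # Forward pass: sum the inputs times the weights, squash with sigmoid.
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Compare to the "correct" answer and note the difference.
        error = out - target
        # Backward pass: nudge each weight a little in the direction that
        # reduces the error (gradient of the squared error w.r.t. the weight).
        grad = error * out * (1 - out)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

# After training, the neuron's outputs sit close to the human labels.
for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w[0] * x1 + w[1] * x2 + b), 3), "expected", target)
```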

This process actually reminds me of my early days in electrical engineering, using transistors to create logic gates, and then building up the logic gates into things like flip-flops or data latches on a data bus. Once you had latches and memories and such, you could go on to build a whole computer from first principles.  The only difference is that with transistors, one must map out the logic and scrupulously follow the Boolean formulas to get the functionality.  In artificial neural networks, these things sort themselves out solely by back-propagating a weight adjustment.  One of the tenets of this universe that we inhabit is that you can get incredible complexity from a few simple building blocks.
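
As a small illustration of that first-principles style (my own choice of primitive, not anything from those engineering days): every classic gate can be wired up explicitly from NAND by following the Boolean identities, with nothing learned and nothing back-propagated.

```python
# Building up logic from a single primitive, the way you would with transistors:
# every gate below is explicitly wired from NAND by following Boolean identities.

def NAND(a, b):
    return 1 - (a & b)

def NOT(a):
    return NAND(a, a)              # NAND(a, a) = NOT a

def AND(a, b):
    return NOT(NAND(a, b))         # invert the NAND

def OR(a, b):
    return NAND(NOT(a), NOT(b))    # De Morgan: a OR b = NOT(NOT a AND NOT b)

def XOR(a, b):
    return AND(OR(a, b), NAND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND", AND(a, b), "OR", OR(a, b), "XOR", XOR(a, b))
```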

So the upshot for AI is that this dumb machine has evolved its neural networks to be able to judge the human emotional impact of an image.

In the old days before all of this machine learning hoopla, if I had told you that a machine would be able to judge human emotional response, and then asked you to theoretically describe how the machine did it, a natural assumption would be that one would have to teach the machine a limited range of emotions, and then catalog responses to those emotions. One would then have to teach the machine the drivers of those emotional responses in humans (i.e. cute kittens, etc.).  It would be a fairly vast undertaking with the input of many cognitive scientists, programmers, linguists, natural language processing freaks, etc.  In other words, today's machine learning, deep learning, recurrent neural nets, and convolutional neural nets do an end run around first-principles learning with underlying knowledge. It's sort of a cheat sheet around the generally accepted terms of human cognition.

In the same way, with a more familiar example, take Google search.  If, twenty years ago, I had laid out the requirements for a smart system with a text box that, as you typed letters, guessed what you wanted to search for with incredible accuracy, one would think that there would be an incredible AI machine behind it with vast powers of natural language processing.   Instead, it is a blazing fast SQL database lookup coupled to probabilities.  Artificial intelligence is not the anthropomorphic entity that was portrayed in the sci-fi movies.  It ain't no HAL like in 2001: A Space Odyssey. There is no person-like thing that is smart.  It's all a virtual mousetrap game coming up with the right answers, most of the time.
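
A hedged sketch of that idea (my own toy version, not Google's actual implementation): a prefix lookup over previously seen queries, ranked by how often each query was typed.

```python
from collections import Counter

# A toy query log standing in for what users have typed before (made-up data).
query_log = [
    "weather today", "weather tomorrow", "weather today",
    "wasp sting", "wasps and katydids", "weather radar",
]
counts = Counter(query_log)

def suggest(prefix, k=3):
    """Return the k most frequent past queries that start with the prefix."""
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in matches[:k]]

print(suggest("wea"))   # e.g. ['weather today', 'weather tomorrow', 'weather radar']
```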

So this brings me to Richard Dawkins and wasps.  I am reading his new book called "Brief Candle In The Dark".  I never knew that he was a biologist first and foremost; the atheism is just a recent sideline.  In recounting his adventures in science, he talks about waspmanship and game theory.  It goes like this.  A certain species of wasp digs a hole in the ground.  It then fetches insects known as katydids and paralyzes them with venom. It hauls the living, paralyzed katydids into the burrow.  It lays in a supply of paralyzed katydids and then lays its eggs.  When the eggs hatch, the katydids will be fresh food for the larvae. The katydids will not rot, because they are not dead, just paralyzed.

Occasionally, instead of digging a new burrow, a wasp will find a burrow dug by another wasp. Sometimes the found burrow will already have katydids in it from a previous tenant.  The wasp will go on catching more katydids and filling the burrow.  This works out quite well, as the find is valuable from an energy-budget and reproduction-economics point of view.  The wasp has saved herself the task of fetching a pile of katydids.

However, if the burrow-originating wasp comes back, a fight ensues.  One would think that the winner would be random between the two, but the wasp who brought the fewest katydids to the burrow is the one that gives up fighting first.  The one who has invested the most is the most prepared to conduct all-out warfare and keep fighting.  In decision theory and economics, this is called the sunk cost fallacy.  Instead of giving up and building a new "Katydid Koma" burrow, the wasps will fight and perhaps risk dying because of their previous investment.

So why have wasps evolved this way?  Further analysis and research has shown that counting the overall katydid number in a burrow is computationally expensive from a biological point of view.  Running food-counting mathematics in that tiny brain takes more resources than simply counting the number of katydids that one personally has dragged back to the burrow.  One can be based on, say, memory, while the other requires mathematical abstraction. It is like generic brands of personal computers leaving out the fancy math co-processing chips that the more expensive computers have.

To quote Dawkins, animal design is not perfect, and sometimes a good-enough answer fills the overall bill better than having the ability to accurately and empirically give an accounting of the situation. Each wasp knows what she put into it, and she has a fighting-time threshold based on her own investment only.  Even if the hole were full of katydids, and was a true egg-feeding goldmine, if a certain wasp was the junior contributor to that stash, that lesser number is all that goes into her war-effort computation.
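
To make the heuristic concrete, here is a toy sketch (my own illustrative numbers, not anything from Dawkins): each wasp's willingness to keep fighting depends only on her personal contribution, so the wasp with the larger personal investment outlasts the other even when the burrow's true total would "rationally" be worth more to both.

```python
# Toy model of the wasp contest: each wasp's fighting-time threshold depends
# only on the katydids she personally dragged into the burrow, not on the
# total number actually sitting in it.

def fight(own_contributions, cost_per_katydid=1.0):
    """Each wasp gives up once the fight has cost more than her own investment.
    Returns the index of the wasp that keeps fighting the longest."""
    thresholds = [n * cost_per_katydid for n in own_contributions]
    return max(range(len(thresholds)), key=lambda i: thresholds[i])

burrow_total = 10        # katydids actually in the burrow
contributions = [3, 7]   # wasp 0 added 3 of them, wasp 1 added 7

winner = fight(contributions)
print(f"Burrow holds {burrow_total} katydids; wasp {winner} fights longest "
      f"because she personally invested {contributions[winner]}.")
```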

That good-enough corollary has applications that we are seeing in AI.  Google didn't go and teach its search engine the entire dictionary, semantics, and natural language processing.  They do quick word look-ups based on probability.  MIT didn't teach their machine all about emotions.  They let the machine learn patterns of how humans tag emotions.  It is the dumbest intelligence that anyone would want to see, because it doesn't understand the bottom-up first principles. It just apes what humans do without the inherent understanding.  You cannot ask this kind of intelligence "Why?" and get a coherent answer.

In essence, it is the easy way out for making intelligent machines. It is picking the low-hanging fruit. It is like teaching monkeys to type Shakespeare by making them do it a million times until they get it right.  It may be Shakespeare once they finish the final correct training epoch (a training epoch in AI is letting the machine run through an example, calculate what it thinks is the right answer, and get corrected using back-propagation), but to the monkeys, it is just quitting time at the typewriter, and the end result is no different to them than the first time that they typed garbage.

So, the bottom line is that artificial intelligence, with today's machines, is truly artificial intelligence. It can do some human things a lot better than humans can, but it doesn't know why.  Artificial neural networks at this stage of development do not have the ability to abstract. They do not have the ability to derive models from their functioning neural nets.  In other words, they do not yet have consciousness, which is a prerequisite for abstraction and for self-learning from abstract first principles.

But suppose that the development model for artificial consciousness resembles the same model that the wasps have, and the same model that the current crop of machines doing human-like things possess.  Suppose that you faked artificial consciousness the way machines now fake intelligence and are able to do human-like tasks very well within a human frame of reference. Suppose that you developed artificial consciousness to cut the corners of cognition and be parsimonious with compute resources.  For example, suppose you taught a computer to be worried when the CPU was almost plugged up with tasks, and its computation ability and bandwidth were stunted. It wouldn't know why the silicon constipation was a bad idea, and it couldn't, in its primitive state, reason it out.  It would just know that it was bad. These are the first baby steps to artificial consciousness.
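
A minimal sketch of that baby step, assuming the third-party psutil library is available (and a threshold I picked arbitrarily): the program flags a "worried" state whenever CPU load crosses the threshold, with no model at all of why high load is bad.

```python
import time
import psutil  # third-party library: pip install psutil

WORRY_THRESHOLD = 90.0  # percent CPU load; an arbitrary illustrative choice

def mood():
    """Return a 'feeling' based only on current CPU load -- no reasoning about why."""
    load = psutil.cpu_percent(interval=1)  # sample CPU usage over one second
    return ("worried" if load > WORRY_THRESHOLD else "calm"), load

if __name__ == "__main__":
    for _ in range(5):
        state, load = mood()
        print(f"CPU at {load:.0f}% -> feeling {state}")
        time.sleep(1)
```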

The wasp can't count the total number of katydids in the burrow. It cannot make a rational fighting choice based on overall resources.  Its consciousness circuits have evolved using shortcuts, and they are not perfect.  Yet the wasp has evolved into a superb living creature capable of incredibly complex behaviors.  In a similar fashion, we will have artificial consciousness.  Sure as shooting, it will come sooner than you think. It will be primitive at first, but it will do amazing things.

So when you look for artificial consciousness, the seeds of it will be incredibly stupid.  However, you can bet that it too will evolve, and at some point, it will not matter whether it knows what it knows from human first principles.  It really won't matter.

And this is why Stephen Hawking says that Artificial Intelligence is dangerous.  Suppose that your AI self-driving car decides to kill you instead of four other people when an accident is inevitable.  Suppose a war drone, meant to replace attack infantry, has a circuit malfunction and goes on a rampage killing civilians.  Suppose that a fire suppression system decides that an explosion is the best way to extinguish a raging refinery fire, but can't detect whether humans are in proximity or not.  Don't forget, these things can't abstract. They will be taught to reason, but like all other AI developments, both in the biological and silicon worlds, we will take huge shortcuts in the evolution of artificial intelligence and artificial consciousness. Those shortcuts may be dangerous. Or those shortcuts, like the wasp lacking the ability to do total counts, may be absurd; however, they will be adequate for a functioning system.

The big lesson from Dawkins' book is that, if I read between the lines, I can get some incredible insights into AI and into using biomimicry to create it.  As an AI developer, what Dawkins' example has taught me is that, like Mother Nature, it is sometimes okay to skimp on computational resources when evolving complex AI.  This is the ultimate example of the end justifying the means.
