All Things Techie With Huge, Unstructured, Intuitive Leaps

How To Build Free Will Into Artificial Neural Networks Using A Worm Brain As A Model

In March of 2015, a fascinating study conducted at Rockefeller University was published in Cell.  The study was a brain analysis of a worm, specifically of how a single stimulus can trigger different responses.  This may have huge ramifications for artificial intelligence and thinking machines.

A worm is not burdened with a whole lot of neural nets. This particular specimen (Caenorhabditis elegans) has 302 neurons and about 7,000 synapses, or connections between the neurons.  This microscopic worm was the first animal to have its entire connectome, or neural wiring diagram, completely mapped.  The researchers found that if a worm is offered an enticing food smell, it usually stops to investigate.  However, it doesn't stop all of the time.

There are three neurons in the worm brain that signal the body to make a food detour.  The collective state of these neurons determines the likelihood of the worm doing a fast-food drive-through. By stimulating the various permutations and combinations of the three neurons, the researchers could figure out the truth table of meal motivation.

It's being touted as worm free will.  The three neurons are called AIB, RIM, and AVA. When the odor sensors pick up the smell of isoamyl alcohol, the dinner bell for this worm, the stimulus is first presented to AIB.  These neurons have only three persistent states: all off, all on, and AIB alone on.  AVA is the neuron that sends the signal to the worm's muscles to chart a new course for the food.  When all three neurons transition from on to off, the worm heads for the buffet table.

If the worm had no free will, then every time it got a whiff of isoamyl alcohol, it would head for the feeding trough. But it doesn't.  AIB is the context monitor.  It checks the state of the network and determines whether RIM and AVA will play. If they won't, AIB won't play either, and the food is ignored.
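The circuit described above can be sketched as a toy model.  The neuron roles follow the article's description, but the gating probability is purely an assumption for illustration -- the real study measured states, not a single bias number.

```python
import random

# Toy model of the AIB -> RIM/AVA circuit described above.
# The structure follows the article; p_context_go is invented.

def worm_decision(smells_food, p_context_go=0.8, rng=random):
    """Return True if the worm detours toward the food."""
    if not smells_food:
        return False
    # AIB receives the stimulus first and checks network context.
    # Here that "context" is reduced to a biased coin flip.
    aib_says_go = rng.random() < p_context_go
    if not aib_says_go:
        return False  # AIB opts out, so RIM and AVA stay quiet too
    # All three neurons switch on, then transition off together,
    # and AVA signals the muscles: head for the food.
    return True

# Over many trials the worm usually -- but not always -- responds.
trials = 10_000
responses = sum(worm_decision(True) for _ in range(trials))
print(f"responded on {responses}/{trials} trials")
```

Running this shows the same qualitative behavior as the worm: the stimulus usually triggers the detour, but not every time.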

The human analogy that the researchers gave was a hunger pang that requires crossing the street to get food at a restaurant.  If the AIB equivalent fired when it was unpleasantly cold outside, and you didn't want to suffer the discomfort, you would ignore the hunger pang.

This is really interesting in many ways for machine learning applications.  In an earlier blog posting, which you can read here, I outlined how Dr. Stephen Thaler, an early pioneer of machine intelligence in design, used perturbations in neural nets to make them design creative things. His example was a coffee mug.  Thaler used death as a perturbation -- he would randomly kill neurons, and the crippled neural network produced the perturbations that created non-linear, creative outputs.  In my blog posting, I posited that instead of killing neurons, one method was synaptic pruning -- killing just some of the connections between the neurons.

In another blog posting, which you can read here, I postulated other forms of perturbation and confabulation as methods for machine thinking and creativity.  They include substitution, insertion, deletion, and frameshift of neurons in the network.
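The synaptic-pruning idea can be sketched in a few lines.  This is a minimal illustration, not Thaler's actual system: the network shape, weights, and pruning rate are all made up, and the point is only that severing random connections perturbs the output without removing whole neurons.

```python
import random

def make_weights(n_in, n_out, rng):
    # One layer of random connection strengths.
    return [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(weights, x):
    # Plain weighted sums, no nonlinearity, to keep the sketch short.
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def prune(weights, rate, rng):
    # "Synaptic pruning": sever a random fraction of connections.
    return [[0.0 if rng.random() < rate else w for w in row]
            for row in weights]

rng = random.Random(42)
weights = make_weights(4, 3, rng)
x = [1.0, 0.5, -0.5, 2.0]

print("intact :", forward(weights, x))
print("pruned :", forward(prune(weights, 0.3, rng), x))
```

Each run of the pruned network drifts away from the intact one, which is the non-linear perturbation the creativity argument relies on.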

Thaler's genius, I think, lies in the supervisory circuits of his neural networks.  He used them to funnel the outputs of perturbed and confabulated networks into a coherent design.  Not only can such circuits do creative work, but extrapolating from what was shown with the worm neurons, they can also add free will -- a degree of randomness in behavior that precludes hardwired responses.

The bottom line is that the AIB neuron in the worm evaluates the context of the neural stimulations.  But what if, instead of just a contextual neuron, you plugged in a Thaler-like supervisory network?  You could add a pseudo-wave function of endless eigenstates, and the output of the supervisory context-evaluator network would collapse that function into a single eigenstate, or action.
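One way to read the "collapse" idea is as weighted sampling: a supervisory evaluator scores several candidate actions (the "eigenstates"), and a single weighted draw collapses them into one concrete action.  The candidate actions and scores below are invented for illustration; the softmax-plus-sampling scheme is my own sketch, not something from the study or from Thaler.

```python
import math
import random

def collapse(actions, scores, rng=random):
    # Softmax turns raw evaluator scores into probabilities...
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    # ...and one weighted draw "collapses" them into a single action.
    r = rng.random()
    cumulative = 0.0
    for action, p in zip(actions, probs):
        cumulative += p
        if r < cumulative:
            return action
    return actions[-1]

actions = ["approach food", "keep wandering", "reverse course"]
scores = [2.0, 0.5, -1.0]  # assumed output of the supervisory network
print(collapse(actions, scores))
```

The favored action wins most of the time, but not always -- which is exactly the hedged, non-hardwired behavior the worm showed.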

This is all fascinating stuff.  But wait, don't send money yet -- there's more.  And it gets even weirder yet.  And the possibilities of artificial intelligence get more fantastic with simpler constructs.

Going back to the worm studies, the connectome is all mapped.  The researchers found that for the first state in the connectome diagram, when all of the neurons were activated, they transitioned to the low state and the worm got to follow its nose to eat (so to speak).  But this was not a 100% guaranteed event. It usually happened, but there were a small number of times when it didn't.  This makes it a probability function.  Knowing the number of neurons, their states, and having a map of the connections, one can create a complex Bayesian calculation model.  (A very simplified explanation of a Bayesian calculation is that the conditional probability of an event can be calculated knowing the probabilities of the previous event(s).)
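Here is a toy Bayesian calculation in the spirit of that parenthetical.  Every number is invented for illustration, not measured from the worm study.

```python
# Suppose the circuit lands in the "all on" state 60% of the time
# when food is smelled, and the worm detours 90% of the time from
# that state but only 20% of the time otherwise.
p_all_on = 0.6
p_detour_given_all_on = 0.9
p_detour_given_other = 0.2

# Total probability of a detour (law of total probability):
p_detour = (p_detour_given_all_on * p_all_on
            + p_detour_given_other * (1 - p_all_on))

# Bayes' rule: given that the worm detoured, how likely is it that
# the circuit was in the "all on" state?
p_all_on_given_detour = p_detour_given_all_on * p_all_on / p_detour

print(f"P(detour) = {p_detour:.2f}")
print(f"P(all on | detour) = {p_all_on_given_detour:.3f}")
```

That second number is the connectome-as-probability-model idea in miniature: knowing the state probabilities lets you run the circuit's behavior forwards or backwards as arithmetic.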

So what if you created a neural network with supervisory circuits, and modeled the permutations and combination of states?  If you got good enough at it, and your model was sufficiently accurate for some sort of use, then you wouldn't actually need the neural networks.  You could string together a whole pile Bayesian calculators built on the probabilities of neural networks, without all of the necessary hardware and software to calculate the inputs and outputs of massive amounts of artificial neural network layers.  You would be faking intelligence with a bunch of equations rather than the bother of neurons and such.  A simple small device with rudimentary computation could be fairly intelligent.  In this Brave New World, the richest data scientist will be the one with the best Bayesian calculator.
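Stringing the calculators together can be sketched as chained conditional probability tables: one table per stage, with the intermediate states summed out.  The states and numbers below are invented for illustration -- the point is that no neurons are simulated at all, just table lookups and arithmetic.

```python
# Stage 1: stimulus -> circuit state
P_STATE = {
    "food smell": {"all on": 0.6, "all off": 0.3, "AIB only": 0.1},
    "no smell":   {"all on": 0.05, "all off": 0.9, "AIB only": 0.05},
}

# Stage 2: circuit state -> action
P_ACTION = {
    "all on":   {"detour": 0.9, "ignore": 0.1},
    "all off":  {"detour": 0.1, "ignore": 0.9},
    "AIB only": {"detour": 0.3, "ignore": 0.7},
}

def chain(stimulus):
    """Marginal action probabilities: sum over intermediate states."""
    out = {}
    for state, p_state in P_STATE[stimulus].items():
        for action, p_action in P_ACTION[state].items():
            out[action] = out.get(action, 0.0) + p_state * p_action
    return out

print(chain("food smell"))
```

Two dictionaries and a double loop stand in for an entire network's worth of forward passes, which is the "rudimentary computation" the paragraph above is pointing at.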

But there is even more, so one more parting thought.  The worm's neural nets could be a very rudimentary model of the way that we as humans work.  The difference is that our neural networks are massively scaled up.  The human brain has 86 billion neurons and 100 trillion synapses -- give or take a few billion depending on the level of alcohol imbibition of the person. If the model holds, and there is a possibility that the brain could potentially be modeled as one humongous Bayesian calculator, what does that say about Life? To me, it says lots, and that a machine, one day, could have the basis of cognition, and some sort of consciousness.
