All Things Techie With Huge, Unstructured, Intuitive Leaps
Showing posts with label Computational Creativity Confabulation Using Artificial Neural Nets. Show all posts

How To Build Free Will Into Artificial Neural Networks Using A Worm Brain As A Model


In March of 2015, researchers at Rockefeller University published a fascinating study in Cell. The study was a brain analysis of a worm, specifically of how a single stimulus can trigger different responses. This may have huge ramifications for artificial intelligence and thinking machines.

A worm is not burdened with a whole lot of neural nets. This particular specimen (Caenorhabditis elegans) has 302 neurons and about 7,000 synapses, or connections between the neurons.  This microscopic worm was the first animal to have its entire connectome, or neural wiring diagram, completely detailed.  The researchers found that if a worm is offered an enticing food smell, it usually stops to investigate.  However, it doesn't stop all of the time.

There are three neurons in the worm brain that signal the body to take a food detour.  The collective state of these neurons determines the likelihood of the worm doing a fast-food drive-through. By stimulating the various permutations and combinations of states of the three neurons, the researchers could figure out the truth table of meal motivation.

It's being touted as worm free will.  The three neurons in the worm are called AIB, RIM, and AVA. When the odor sensors pick up the smell of isoamyl alcohol, the dinner bell for this worm, the stimulus is first presented to AIB.  There are only three persistent states of these neurons: the first is when they are all off, the second is when they are all on, and the third is when only AIB is on.  AVA is the neuron that sends the signal to the worm's muscles to chart a new course for the food.  When all three neurons transition from on to off, the worm heads for the buffet table.

If the worm had no free will, then every time it got a whiff of isoamyl alcohol, it would head for the feeding trough. But it doesn't.  AIB is the context monitor.  It checks the state of the network and determines whether RIM and AVA will play. If they won't, AIB won't play either, and the food is ignored.
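The AIB gating described above can be sketched in a few lines of code. This is a toy model, not the study's actual circuit dynamics: the function name, the string outcomes, and the idea of modeling the context check as a biased coin flip are all my own simplifications.

```python
import random

def worm_food_response(smell_detected, p_context_ok=0.7, rng=random):
    """Toy model of the AIB/RIM/AVA circuit: a food smell only triggers
    a detour when the AIB context check lets RIM and AVA play."""
    if not smell_detected:
        return "keep wandering"
    # AIB evaluates the state of the network; modeled here as a coin flip
    if rng.random() < p_context_ok:
        return "head for the food"  # all three neurons transition together
    return "ignore the smell"       # AIB vetoes; RIM and AVA stay quiet
```

Run it many times with the smell present and you get both outcomes, which is exactly the "usually stops, but not always" behavior the researchers observed.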

The human analogy that the researchers gave is a hunger pang that requires crossing the street to get food at a restaurant.  If the AIB equivalent fires because it is unpleasantly cold out and you don't want to suffer the discomfort, you ignore the hunger pang.

This is really interesting in many ways for machine learning applications.  In an earlier blog posting, which you can read here, I outlined how Dr. Stephen Thaler, an early pioneer of machine intelligence in design, used perturbations in neural nets to cause them to design creative things. His example was a coffee mug.  Thaler used death as a perturbation -- he would randomly kill neurons, and the crippled neural network produced the perturbations that created non-linear, creative outputs.  In my blog posting, I posited that instead of killing neurons, one method was to do synaptic pruning -- just killing some connections between the neurons.

In another blog posting, which you can read here, I postulated other forms of perturbations and confabulations as a method for machine thinking and creativity.  They include substitution, insertion, deletion and frameshift of neurons in the network.

Thaler's genius, I think, is the supervisory circuits of the neural networks.  He used them to funnel the outputs of perturbed and confabulated networks into a coherent design.  Not only can they do creative work, but extrapolating with what was shown with the worm neurons, they can also add free will -- a degree of randomness in behavior that precludes hardwired behavior.

The bottom line is that the AIB neuron in the worm evaluates the context of the neural stimulations. But what if, instead of just a contextual neuron, you plugged in a Thaler-like supervisory network?  You could add a pseudo-wave function of endless eigenstates, and the resultant outcome would be the collapse of the function into a single eigenstate, or action, due to the output of the supervisory context-evaluator network.


This is all fascinating stuff.  But wait, don't send money yet -- there's more.  And it gets even weirder yet.  And the possibilities of artificial intelligence get more fantastic with simpler constructs.

Going back to the worm studies, the connectome is all mapped.  The researchers found that for the first state in the connectome diagram, when all of the neurons were activated and then transitioned to the low state, the worm got to follow its nose and eat (so to speak).  But this was not a 100% guaranteed event. It usually happened, but there were a small number of times when it didn't.  This makes it a probability function.  Knowing the number of neurons, their states, and a map of the connections, one can create a complex Bayesian calculation model.  (A very simplified explanation of a Bayesian calculation is that the conditional probability of an event can be calculated knowing the probabilities of the previous event(s).)
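The simplified Bayesian calculation mentioned in the parenthetical is just Bayes' rule. Here is a minimal sketch; the worm probabilities are made-up numbers for illustration, not figures from the study.

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical numbers: the worm turns toward food half the time overall,
# a smell is present 60% of the time, and 90% of turns follow a smell.
# The conditional probability of a turn, given a smell:
p_turn_given_smell = bayes(0.9, 0.5, 0.6)  # 0.75
```

Chaining calculations like this one per connection is the sense in which a mapped connectome could be reduced to a web of conditional probabilities.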

So what if you created a neural network with supervisory circuits and modeled the permutations and combinations of states?  If you got good enough at it, and your model was sufficiently accurate for some sort of use, then you wouldn't actually need the neural networks.  You could string together a whole pile of Bayesian calculators built on the probabilities of neural networks, without all of the hardware and software necessary to calculate the inputs and outputs of massive numbers of artificial neural network layers.  You would be faking intelligence with a bunch of equations rather than bothering with neurons and such.  A simple, small device with rudimentary computation could be fairly intelligent.  In this Brave New World, the richest data scientist will be the one with the best Bayesian calculator.

But there is even more, so one more parting thought.  The worm's neural nets could be a very rudimentary model of the way that we as humans work.  The difference is that our neural networks are massively scaled up.  The human brain has 86 billion neurons and 100 trillion synapses -- give or take a few billion, depending on the level of alcohol imbibition of the person. If the model holds, and the brain could potentially be modeled as one humongous Bayesian calculator, what does that say about Life? To me, it says lots: that a machine, one day, could have the basis of cognition, and some sort of consciousness.

How To Create Computational Creativity Without Killing Anything


In a previous blog post, I outlined some ideas on Computational Creativity, and the seminal work of Dr. Stephen Thaler. You can read it HERE.  What Dr. Thaler did was create neural nets, train them to do things like recognize coffee cups, and then create a layer of supervisory neural nets to watch the nets.  Then he would bring them to near death by killing a pile of the neurons in the layers.  In very anthropomorphic terms, the neural network, in paroxysms of Near Death, would create unique designs of other coffee cups. He called this process the Creativity Machine, and it was one of the first steps in Computational Creativity using Artificial Neural Nets.

What Thaler was doing by formulating a mechanism for the Eureka moment was creating the impetus, spark and ignition of thoughts from a machine that was programmed not to think outside the box, but to slavishly follow a compiled set of instructions in its register stack.  His unique algorithm was to produce a perturbation in the execution of the neural network process, creating a confabulation, or false idea, that would be new and unique.  For the time (and it may still be a valid algorithm), it was quite revolutionary.  The problem to solve was finding some way to spark new information synthesis out of pre-programmed silicon transistor pathways. After all, ideas can't just pop into a computer's circuits.

Our brains have massively parallel neural nets, and just thinking about anything sparks new thoughts.  Our thinking processes undergo perturbations, essentially interrupt vectors in staid ways of thinking. That is what Thaler was looking for inside the computer when he started the practice of committing neuroncide and killing neurons.

In another blog article, where I try to link synaptic pruning as a method of creating perturbations in Artificial Neural Networks ( HERE ), I came up with the idea of crippling instead of killing the neurons by pruning some random inputs in the layers. I haven't tested it yet. I don't think that the resultant "ideas" or designs would be as far-out or as revolutionary as Thaler's killings, but it might prove useful.

Then it struck me that perhaps brain damage isn't a viable algorithm in the long term.  Even though creativity can be brainlessly expressed when monkeys finger-paint and elephants do the Picasso thing with their trunks, one would want brains, even artificial ones, to have all of their faculties for serious creative thought.  So there has to be a better way than Thaler's, without killing anything.

If you want to avoid killing and near-death experiences just to create something, you still need perturbations in the regularized logic activity of artificial neural networks.  Otherwise you would just get the outputs that the neural nets were trained for.  However, to Thaler's credit, he did introduce another mechanism that can be useful in creating these perturbations and producing unique thoughts: the supervisory network atop the thinking network.

In a future blog post, I will outline how I think that supervisory networks can contribute to Machine Consciousness, but for now, they can be integrated for non-death perturbations and idea creation in a new breed of Creativity Machines.

First let's look at a simple artificial neuron:
(I stole this gif from Thaler's website:  http://imagination-engines.com/ )

By adjusting the weights and thresholds, the simple neuron becomes one of the Boolean gates that computers are made of. It can be an AND gate or an OR gate. In this case the weights are decimals and the inputs and outputs are integers.
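To make that concrete, here is a minimal sketch of a threshold neuron, with my own choice of weights and thresholds showing how the same neuron acts as an AND gate or an OR gate:

```python
def neuron(inputs, weights, threshold):
    """A simple artificial neuron: fires (1) when the weighted sum of its
    inputs meets the threshold, otherwise stays quiet (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

truth_table = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Weights of 0.6 with a threshold of 1.0 need both inputs high: an AND gate.
and_gate = [neuron([a, b], [0.6, 0.6], 1.0) for a, b in truth_table]
# Drop the threshold to 0.5 and either input alone fires it: an OR gate.
or_gate = [neuron([a, b], [0.6, 0.6], 0.5) for a, b in truth_table]
```

Only the threshold changed between the two gates, which is the whole point: the knowledge lives in the weights and thresholds, not in any explicit logic.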

There is no activation function shown.  Usually an activation function is something like a sigmoid function.  It takes the sum of the inputs multiplied by their weights, and after calculating the function, the output values fall between 0 and 1, with the neuron firing when the value of the function exceeds some threshold.


 If the threshold value for the neuron firing is set at, say, 0.9, or almost one, then anything below that is ignored and the neuron doesn't fire.  But that doesn't mean that the activation function is quiescent.  It still calculates and spits out numbers between 0 and 1.  So if the activation threshold is 0.9 and the result of the sigmoid function is, say, 0.6, it will not activate the neuron. But we could say that the neuron is in an "excited" state when the output value of the sigmoid function is near the firing threshold.  It is just on the cusp of firing. This excited state could be used as a perturbation to excite unique thoughts.  This is where the supervisory network comes in.

A supervisory circuit can be a lot more powerful than Thaler envisioned.  First of all, supervisory circuits overlaid on top of artificial neural networks placed in an n-tier of recursive monitoring are the first steps to machine consciousness.  More on that in future blog posts.

But suppose that an independently trained ANN is monitoring other circuits for semi-excited layers or neurons, and reaches out, creating a synaptic link to these excited neurons.  This may or may not cause the supervisory circuit to breach its firing thresholds and get an output where none was expected.  And the discovery of unique ideation is predicated on the model by Mother Nature, where she plays dice and creates millions of redundant things in the hope of one surviving and making something wonderful.  In like fashion, the outputs of all networks could be ANDed or ORed, with another supervisory network monitoring for unique things, and the stimulation and simultaneous firing would cause perturbations and new ideas from two unrelated neural networks.

That would be the mechanism for a perturbation and confabulation of two fixed networks coming up with a new idea, without having to kill any connections or neurons.  There would be no near-death creativity, just a flash in the pan triggered by something that just might turn out to be useful.  A pseudo-schematic is shown below:


Our human brains operate on a massively parallel neural network.  This concept is a bit of bio-mimicry that extends that.
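The supervisory cross-linking step can be sketched as follows. This is my own extrapolation, not Thaler's design: semi-excited neurons from two unrelated nets are paired up, and candidate "ideas" only appear when both nets are excited at once, which behaves like an AND of the two networks' states.

```python
from itertools import product

def cross_links(excited_a, excited_b):
    """Pair every semi-excited neuron of net A with every one of net B.
    Each pairing is a candidate synaptic link for the supervisory net to
    try; an empty list from either net yields no candidates (the AND)."""
    return list(product(excited_a, excited_b))
```

For example, `cross_links([1, 3], [7])` proposes linking neurons 1 and 3 of the first net to neuron 7 of the second, while `cross_links([], [7])` proposes nothing at all.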

The concept of killing brain cells in the name of creativity is not exactly new in the biological world either.  We apparently kill thousands of brain cells with an alcoholic drink or a few puffs on a joint. Many people say that this is the key to creativity.  After all, Hemingway wrote his opus half-drunk and won the Nobel Prize for Literature.  However, there are millions who drink and don't turn out anything creative except a bum's life on the Nickel, sleeping between parked cars in the rain. But I digress.

So all in all, I think that this could be an alternative method for machines to dream up new ideas in the realm of Computational Creativity.  It may not be as much fun as watching things gasp out creativity in their death throes, but it could be more reliable and ultimately less destructive to some perfectly good artificial neural nets.

Burning Ants With Magnifying Glasses, Computational Creativity and Other Artificial Intelligence Inspirations




I was going to call this article Computational Creativity Confabulation Using Artificial Neural Nets, but the immature little boy in me made me do otherwise.

I've been fascinated by the works of Dr. Stephen Thaler and his work on Imagination Engines and a Unified Model of Computational Creativity. In the Artificial Intelligence domain, the ultimate Turing Test would be a computer that rivals a human at creativity, or at consciously designing creative things.  There isn't much on his work in the literature, other than in the body of patents that Thaler has been granted, but I suppose that is because he is trying to monetize them and they are competition-sensitive algorithms and applications.

When I started Googling around his work, I landed on the Wikipedia page for Computational Creativity.  Thaler has a very small section there on a unifying model of creativity based on traumatized artificial neural networks. I have had a lot of experience coding and playing with my own brand of artificial neural networks, specifically multilayer perceptron models, and let me tell you, it is both fascinating and frustrating work.

Seemingly, all it takes for a dumb piece of software to make some startling, human-like decisions is knowledge stored in the overall collection of biases and weights, learned from examples alone, with no background theory in the art of whatever you are trying to make it learn.

It is quite mindblowing.  For me, the Eureka moment came when I saw an artificial neural network autonomously shift output patterns, without any programming other than learning cycles, based on what it was seeing. It was a profound moment for me to see a software program on a computer recognize a complexity and reduce it to a series of biases, weights and activations to make a fundamental decision based on inputs.  It was almost a life-changing event for me.  It made my profession significant to me.  A trivial analogy would be a watch, trained to tell the time by the position of the sun in the sky, making an adjustment for daylight savings time based on the angle of the sun hitting it at a specific moment.

Thaler goes further than I would in describing behaviors of artificial neural networks in cognitive terms based on anthropomorphic characteristics like waking, dreaming, and near death.  His seminal work however, deals with training artificial neural networks to do something, and then perturbing them (a fancy term for throwing a spanner in the works) to see what happens to the outputs.  In some cases, the perturbations include external and/or internal ones like messing with the inputs, weights, biases and such, and then having supervisory circuits to throw out the junk and keep the good stuff.  For example, in his examples listed in the patent application, he has a diagram of a coffee mug being designed by perturbing an artificial neural network. His perturbations cause confabulation or confabulatory patterns.

A confabulation is a memory disturbance caused by disease or injury and the person makes up or synthesizes memories to fill in the gaps. In a psychological sense, these memories are fabricated, distorted or misinterpreted and can be caused in humans by even such things as alcoholism.

Thaler does to neural nets the equivalent of what every rascally little boy does to earthworms or frogs: putting a burning match or the focus of a magnifying glass on various parts of the frog, worm, or even ants, and then observing how the organism reacts. It brings to mind the rock song by The Who called "I'm a Boy".


Creativity in humans is a funny business. Perturbations are the key.  You need to perturb your usual thought patterns and introduce new ones to come up with innovative concepts. We all know how Kekule couldn't figure out the chemical structure of benzene until he had a dream about a snake eating its tail, and he twigged onto the idea of cyclical hydrocarbons, and with them organic chemistry. College students today still fail by the thousands in introductory courses to organic chemistry, the field of science uncovered by that perturbation.

Essentially, creativity involves putting together diverse concepts to synthesize new ideas.  Computational creativity involves buggering up perfectly good artificial neural networks to see what they come up with. You have to introduce perturbations into "conventional thought" somehow.  Thaler believes that this paradigm beats genetic algorithms.  I was particularly impressed by a genetic algorithm crunching away to design an antenna for a satellite that would work in any orientation.  Radio engineers tried and tried, and came up with several designs, but all had particular shortcomings. The problem was loaded into a computer with a genetic algorithm: it would start with a basic antenna structure, add random bits and pieces, and then run some programs to simulate and test the antenna. If its performance was better than the last iteration, the change was kept and altered randomly again. If not, the alteration was thrown out and a new random change was tried. The final antenna looked like a weird stick conglomeration, but it worked beautifully and is flying in space. Thaler says that his computational creativity models are faster and better than genetic algorithms.
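The keep-if-better loop in the antenna anecdote is easy to sketch. This is a bare-bones hill-climbing caricature of a genetic algorithm (no population or crossover), and the fitness function below is a toy stand-in for the antenna simulator: the "design" is a single number and the best possible design is 3.0.

```python
import random

def evolve(fitness, genome, mutate, steps=1000, rng=random):
    """Alter the design at random; keep the alteration only when the
    simulated score improves, otherwise throw it out and try again."""
    best, best_score = genome, fitness(genome)
    for _ in range(steps):
        trial = mutate(best, rng)
        score = fitness(trial)
        if score > best_score:  # improvement: keep it and mutate from here
            best, best_score = trial, score
    return best

# Toy stand-in for the antenna simulator.
fitness = lambda g: -(g - 3.0) ** 2
mutate = lambda g, rng: g + rng.uniform(-0.5, 0.5)
result = evolve(fitness, 0.0, mutate, steps=2000, rng=random.Random(1))
```

After a couple of thousand random alterations, `result` lands very close to the optimum, the same blind-but-effective search that produced the stick-conglomeration antenna.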

I was wondering what kind of perturbations Thaler did to his neural nets.  The only clues I got came from reading the patent summaries, and here is a quote: "System perturbations would produce the desired output. Such system perturbations may be external to the ANN (e.g., changes to the ANN data inputs) or internal to the ANN (e.g., changes to weights, biases, etc.). By utilizing neural network-based technology, such identification of required perturbations can be achieved easily, quickly, and, if desired, autonomously."

I briefly touched on another type of perturbation of artificial neural nets when I talked about synaptic pruning. Essentially, a baby creates all sorts of connections in the biological neural networks of its brain, and as it approaches puberty, it prunes the inappropriate ones. The plethora of "inappropriate" synapses, or connections to diverse concepts, is what makes a child's imagination so rich.  In my proposed method of artificial neural net perturbation, I suggested that synaptic pruning could take place by killing some inputs into the various layers of the multilayer perceptron, and then letting the network run to see what comes out.
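Killing some inputs could look like the sketch below. This is my untested proposal from the earlier post, reduced to code; the function name and the idea of representing a layer's incoming connections as a weight matrix are assumptions for illustration:

```python
import random

def prune_inputs(weights, fraction=0.5, rng=random):
    """Synaptic pruning as a perturbation: zero out a random fraction of a
    layer's incoming weights, crippling connections instead of killing
    whole neurons. Returns a pruned copy; the original net is untouched."""
    pruned = [row[:] for row in weights]  # copy the weight matrix
    for row in pruned:
        for j in range(len(row)):
            if rng.random() < fraction:
                row[j] = 0.0              # sever this synapse
    return pruned
```

Running the network forward with the pruned copy, and comparing its outputs to the intact original, is the experiment: the differences are the "ideas".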

I came upon a few more methods of creating perturbations in neural networks while reading about genetic mutations. An article that I was reading described mutation methods that included substitution, insertion, deletion and frameshift. The thought struck me that these would be other ideal ways to perturb artificial neural nets. In substitution, you could swap neurons from one layer to another. Using the insertion algorithm derived from genetics, you could add another neuron, or even a layer, to an already-trained network. Deletion could be implemented by dropping an entire neuron out of a layer.  Frameshift is an intriguing possibility as well. What that means is that if a specific series of outputs fed a series of perceptrons in an adjacent layer, you would shift the order. So, for example, if Layer 3 fed a series of four perceptrons in Layer 4, instead of feeding them in order, with inputs going to L4P1, L4P2, L4P3 and L4P4, you would frameshift by one and feed them into L4P2, L4P3, L4P4 and L4P1 to create these perturbations.
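Two of those mutation-style perturbations, frameshift and deletion, can be sketched directly. These function names are mine, and the connections are represented as a plain list where position i holds whatever feeds perceptron i+1 of the downstream layer:

```python
def frameshift(connections, shift=1):
    """Frameshift perturbation: rotate which downstream perceptron each
    upstream output feeds. With shift=1, the input bound for L4P1 lands
    on L4P2, L4P2's lands on L4P3, and so on, wrapping around."""
    n = len(connections)
    return [connections[(i - shift) % n] for i in range(n)]

def delete_neuron(layer, index):
    """Deletion perturbation: drop one whole neuron out of a layer."""
    return layer[:index] + layer[index + 1:]
```

So `frameshift(["a", "b", "c", "d"])` returns `["d", "a", "b", "c"]`: input "a", formerly feeding the first perceptron, now feeds the second, exactly the L4P1-to-L4P2 shift described above.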

This entire field is utterly fascinating and may hold some of the answers to the implementation of Computational Creativity. Machines may not have the same cognitive understanding of things that humans do, but that doesn't mean that they can't be creative.

An example of differing cognitive understanding of a problem is given by this anecdote:

A businessman was talking with his barber, when they both noticed a goofy-looking fellow bouncing down the sidewalk. The barber whispered, "That's Tommy, one of the stupidest kids you'll ever meet. Here, I'll show you."

"Hey Tommy! Come here!" yelled the barber. 

Tommy came bouncing over. "Hi Mr. Williams!" 

The barber pulled out a rusty dime and a shiny quarter and told Tommy he could keep the one of his choice. Tommy looked long and hard at the dime and quarter and then quickly snapped the dime from the barber's hand. The barber looked at the businessman and said, "See, I told you."

After his haircut, the businessman caught up with Tommy and asked him why he chose the dime.

Tommy looked him in the eye and said, "If I take the quarter, the game is over."

In a real-life setting, I would like to quote this anecdote about an actual result of a perturbation of an artificial neural network, taken from Wikipedia:

In 1989, in one of the most controversial reductions to practice of this general theory of creativity, one neural net termed the "grim reaper," governed the synaptic damage (i.e., rule-changes) applied to another net that had learned a series of traditional Christmas carol lyrics. The former net, on the lookout for both novel and grammatical lyrics, seized upon the chilling sentence, "In the end all men go to good earth in one eternal silent night," thereafter ceasing the synaptic degradation process. In subsequent projects, these systems produced more useful results across many fields of human endeavor, oftentimes bootstrapping their learning from a blank slate based upon the success or failure of self-conceived concepts and strategies seeded upon such internal network damage. ( http://en.wikipedia.org/wiki/Computational_creativity )

And there you have it, so much to do, so little time to do it, and so little funding to do it. But it will get done, and it will bring us into a brave new world.