All Things Techie With Huge, Unstructured, Intuitive Leaps

How To Create Computational Creativity Without Killing Anything

In a previous blog post, I outlined some ideas on Computational Creativity and the seminal work of Dr. Stephen Thaler. You can read it HERE.  What Dr. Thaler did was create neural nets, train them to do things like recognize coffee cups, and then create a layer of supervisory neural nets to watch those nets.  Then he would bring them to near death by killing a pile of the neurons in the layers.  In very anthropomorphic terms, the neural network, in paroxysms of near death, would create unique designs for other coffee cups. He called this process the Creativity Machine, and it represented some of the first steps in Computational Creativity using artificial neural nets.

What Thaler was doing by formulating a mechanism for the Eureka moment was creating the impetus, spark and ignition of thoughts from a machine that was programmed not to think outside the box, but to slavishly follow a compiled set of instructions in its register stack.  His unique algorithm was to produce a perturbation in the execution of the neural network process to create a confabulation, or false idea, that would be new and unique.  For the time (and it still may be a valid algorithm), it was quite revolutionary.  The problem to solve was to find some way to spark new information synthesis out of pre-programmed siliconized transistor pathways. After all, ideas can't just pop into a computer's circuits.

Our brains have massively parallel neural nets, and just thinking about anything sparks new thoughts.  Our thinking processes undergo a perturbation of, essentially, interrupt vectors in staid ways of thinking. That is what Thaler was looking for inside the computer when he started the practice of committing neuroncide and killing neurons.

In another blog article, where I try to link synaptic pruning to a method of creating perturbations in Artificial Neural Networks ( HERE ), I came up with the idea of crippling the neurons instead of killing them, by pruning some random inputs in the layers. I haven't tested it yet. I don't think that the resultant "ideas" or designs would be as far-out or as revolutionary as Thaler's killings, but it might prove useful.

Then it struck me that perhaps brain damage isn't a viable algorithm in the long term.  Even though creativity can be brainlessly expressed when monkeys finger-paint and elephants do the Picasso thing with their trunks, one would want brains, even artificial ones, with all of their faculties for serious creative thought.  So there has to be a better way than Thaler's, one without killing anything.

If you want to avoid killing and near-death experiences just to create something, you still need perturbations in the regularized logic activity of artificial neural networks.  Otherwise you would just get the outputs that the neural nets were trained for.  However, to Thaler's credit, he did introduce another mechanism that can be useful in creating these perturbations and producing unique thoughts, and that is the supervisory network atop the thinking network.

In a future blog post, I will outline how I think that supervisory networks can contribute to Machine Consciousness, but for now, they can be integrated for non-death perturbations and idea creation in a new breed of Creativity Machines.

First let's look at a simple artificial neuron:
(I stole this gif from Thaler's website: )

By adjusting the weights and thresholds, the simple neuron becomes one of the Boolean gates that computers are made of. It can be an AND gate or an OR gate. In this case the weights are decimals and the inputs and outputs are integers.
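To make this concrete, here is a minimal sketch of such a neuron: integer inputs and outputs, decimal weights, and a threshold that turns the same weights into either an AND gate or an OR gate. The particular weight and threshold values are my own illustrative choices, not values from the original post.

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

weights = [0.6, 0.6]

# With a high threshold, both inputs must be on: an AND gate.
and_gate = [neuron([a, b], weights, 1.0) for a in (0, 1) for b in (0, 1)]
# With a low threshold, either input suffices: an OR gate.
or_gate = [neuron([a, b], weights, 0.5) for a in (0, 1) for b in (0, 1)]

print(and_gate)  # [0, 0, 0, 1]
print(or_gate)   # [0, 1, 1, 1]
```

The same hardware, with nothing changed but a number, computes two different logical functions, which is the point of the paragraph above.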

There is no activation function here.  Usually an activation function is something like a sigmoid function.  It takes the sum of the inputs multiplied by their weights, and after calculating the function, the output values fall between 0 and 1.  The neuron activates when the output of the function crosses some threshold value.

If the threshold value for the neuron firing is set at, say, 0.9, or almost one, then anything below that is ignored and the neuron doesn't fire.  But that doesn't mean that the activation function is quiescent.  It still calculates and spits out numbers across its range.  So if the activation threshold is 0.9 and the result of the sigmoid function is, say, 0.6, it will not activate the neuron. But we could say that the neuron is in an "excited" state, because the output value of the sigmoid function is near the firing threshold.  It is just on the cusp of firing. This excited state could be used as a perturbation to excite unique thoughts.  This is where the supervisory network comes in.
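The "excited but not firing" state described above can be sketched as follows. The firing threshold of 0.9 and the sigmoid output of roughly 0.6 come from the example in the text; the width of the "excited" band below the threshold is my own assumption.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron_state(inputs, weights, threshold=0.9, excited_margin=0.3):
    """Classify a neuron as fired, excited (on the cusp), or quiet."""
    activation = sigmoid(sum(i * w for i, w in zip(inputs, weights)))
    if activation >= threshold:
        return "fired", activation
    if activation >= threshold - excited_margin:
        return "excited", activation  # below threshold but near it
    return "quiet", activation

# A weighted sum of 0.5 gives sigmoid(0.5) ~ 0.62: not firing, but excited.
print(neuron_state([1, 1], [0.25, 0.25]))
```

Nothing fires here, yet the sub-threshold activity is measurable, which is exactly what a supervisory network could watch for.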

A supervisory circuit can be a lot more powerful than Thaler envisioned.  First of all, supervisory circuits overlaid on top of artificial neural networks, placed in an n-tier of recursive monitoring, are the first steps to machine consciousness.  More on that in future blog posts.

But suppose that an independently trained ANN is monitoring other circuits for semi-excited layers or neurons, and reaches out, creating a synaptic link to these excited neurons.  This may or may not cause the supervisory circuit to breach its firing thresholds and produce an output where none was expected.  The discovery of the unique ideation is predicated on Mother Nature's model, where she plays dice and creates millions of redundant things in the hope of one surviving and making something wonderful.  In like fashion, the outputs of all the networks could be ANDed or ORed, with another supervisory network monitoring for unique things, and the stimulation and simultaneous firing would cause perturbations and new ideas from two unrelated neural networks.

That would be the mechanism for a perturbation and confabulation of two fixed networks coming up with a new idea, without having to kill anything like connections or neurons.  There would be no near-death creativity, just a flash in the pan triggered by something that just might turn out to be useful.  A pseudo-schematic is shown below:
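In code, the mechanism might look something like this rough sketch: two fixed neurons, neither of which fires on its own, but whose sub-threshold excitement is ANDed by a supervisory circuit that fires where neither did. All the weights, inputs and band values below are invented for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

THRESHOLD = 0.9   # firing threshold from the example in the text
EXCITED = 0.6     # assumed lower bound of the "excited" band

def activation(inputs, weights):
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)))

# Two independently "trained" fixed networks (weights are made up).
net_a = activation([1, 0, 1], [0.3, 0.5, 0.2])  # ~0.62: excited, not firing
net_b = activation([0, 1, 1], [0.4, 0.3, 0.3])  # ~0.65: excited, not firing

# The supervisory circuit ANDs the two excited states: if both monitored
# neurons are on the cusp, it produces an output where none was expected.
supervisor_fires = (EXCITED <= net_a < THRESHOLD) and (EXCITED <= net_b < THRESHOLD)
print(supervisor_fires)  # True
```

Neither network breaches its own threshold, yet the combination triggers the supervisor: a confabulation from two fixed networks, with nothing killed.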

Our human brains operate on a massively parallel neural network.  This concept is a bit of bio-mimicry that extends that.

The concept of killing brain cells in the name of creativity is not exactly new in the biological world either.  We apparently kill thousands of brain cells with an alcoholic drink or a few puffs on a joint. Many people say that this is the key to creativity.  After all, Hemingway wrote his opus half-drunk and won the Nobel Prize for Literature.  However, there are millions who drink and don't turn out anything creative except for a bum's life on the Nickel, sleeping between parked cars in the rain. But I digress.

So all in all, I think that this could be an alternative method for machines to dream up new ideas in the realm of Computational Creativity.  It may not be as much fun as watching things gasp out creativity in their death throes, but it could be more reliable and ultimately less destructive to some perfectly good artificial neural nets.

1 comment:

  1. Neural death simulations were just the start. You might want to pick up (or steal) some patents and papers and have a read. What you will find is that there are milder, reversible forms of damage/death within neural nets that lend themselves to creativity. That was his point and not the straw man created here.