All Things Techie With Huge, Unstructured, Intuitive Leaps

Burning Ants With Magnifying Glasses, Computational Creativity and Other Artificial Intelligence Inspirations




I was going to call this article Computational Creativity Confabulation Using Artificial Neural Nets, but the immature little boy in me made me do otherwise.

I've been fascinated by Dr. Stephen Thaler's work on Imagination Engines and a Unified Model of Computational Creativity. In the artificial intelligence domain, the ultimate Turing Test would be a computer that rivals a human at creativity, or at consciously designing creative things.  There isn't much on his work in the literature, other than the body of patents that Thaler has been granted, but I suppose that is because he is trying to monetize them, and they are competition-sensitive algorithms and applications.

When I started Googling around his work, I landed on the Wikipedia page for Computational Creativity.  Thaler has a very small section there on a unifying model of creativity based on traumatized artificial neural networks. I have had a lot of experience coding and playing with my own brand of artificial neural networks, specifically multilayer perceptron models, and let me tell you, it is both fascinating and frustrating work.

Seemingly, knowledge is stored in the overall collection of biases and weights, and that is enough for a dumb piece of software to make some startling, human-like decisions from examples alone, with no background theory in the art of whatever you are trying to make it learn.

It is quite mind-blowing.  For me, the Eureka moment came when I saw an artificial neural network autonomously shift output patterns, based on what it was seeing, without any programming other than learning cycles. It was a profound moment to see a software program on a computer recognize a complexity and reduce it to a series of biases, weights and activations to make a fundamental decision based on inputs.  It was almost a life-changing event for me.  It made my profession significant to me.  A trivial analogy would be a watch that was trained to tell time by the position of the sun in the sky, adjusting itself for daylight saving time based on the angle of the sun hitting it at a specific hour.
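To make that concrete, here is a minimal sketch in Python with NumPy, entirely my own illustration: the weight and bias values below are made-up stand-ins for what training would actually produce. The point is that a multilayer perceptron's "knowledge" is nothing more than these arrays, applied to inputs:

import numpy as np

def sigmoid(z):
    # Squashing activation: maps any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights and biases that training would have produced.
# All of the network's "knowledge" lives in these arrays.
W1 = np.array([[0.8, -0.4], [0.3, 0.9]])   # input layer -> hidden layer
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.2], [-0.7]])             # hidden layer -> output layer
b2 = np.array([0.05])

def forward(x):
    # A "decision" is just matrix multiplies, bias adds and activations.
    hidden = sigmoid(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

print(forward(np.array([0.5, 1.0])))  # a decision score in (0, 1)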

Thaler goes further than I would in describing the behaviors of artificial neural networks in cognitive terms, using anthropomorphic characteristics like waking, dreaming, and near death.  His seminal work, however, deals with training artificial neural networks to do something, and then perturbing them (a fancy term for throwing a spanner in the works) to see what happens to the outputs.  The perturbations can be external and/or internal, like messing with the inputs, weights, biases and such, with supervisory circuits to throw out the junk and keep the good stuff.  For example, in his patent application, he has a diagram of a coffee mug being designed by perturbing an artificial neural network. His perturbations cause confabulation, or confabulatory patterns.
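Here is a minimal sketch, in Python, of that perturb-and-filter arrangement as I read it. Everything in it is a toy stand-in of my own: the "network" is a single weight matrix, and the novelty thresholds are numbers I picked, not Thaler's actual supervisory circuit:

import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# A toy "trained" network: one weight matrix standing in for the whole net.
W = rng.normal(size=(4, 3))
x = rng.normal(size=4)                # a fixed probe input
baseline = sigmoid(x @ W)             # the unperturbed output pattern

def perturb(weights, sigma=0.2):
    # Internal perturbation: a noisy copy of the learned weights.
    return weights + rng.normal(0.0, sigma, size=weights.shape)

keepers = []
for _ in range(200):
    candidate = sigmoid(x @ perturb(W))
    novelty = np.linalg.norm(candidate - baseline)
    # Supervisory filter: novel enough to be interesting, but close
    # enough to the training to still be plausible -- keep the good
    # stuff, throw out the junk.
    if 0.05 < novelty < 0.5:
        keepers.append(candidate)

print(f"kept {len(keepers)} of 200 confabulations")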

A confabulation is a memory disturbance, caused by disease or injury, in which the person makes up or synthesizes memories to fill in the gaps. In a psychological sense, these memories are fabricated, distorted or misinterpreted, and in humans they can be caused by things as common as alcoholism.

Thaler does to neural nets the equivalent of what every rascally little boy does to earthworms and frogs: hold a burning match or focus a magnifying glass on various parts of the frog, the worm, or even ants, and then observe how the organism reacts. It brings to mind the rock song by The Who called "I'm a Boy".


Creativity in humans is a funny business. Perturbations are the key.  You need to perturb your usual thought patterns and introduce new ones to come up with innovative concepts. We all know how Kekulé couldn't figure out the chemical structure of benzene until he had a dream about a snake eating its tail, and he twigged onto the idea of cyclic hydrocarbons, and with them organic chemistry. College students today still fail by the thousands in introductory courses on organic chemistry, the field of science uncovered by that perturbation.

Essentially, creativity involves putting together diverse concepts to synthesize new ideas.  Computational creativity involves buggering up perfectly good artificial neural networks to see what they come up with. You have to introduce perturbations in "conventional thought" somehow.  Thaler believes that this paradigm beats genetic algorithms.  I was particularly impressed by a genetic algorithm crunching away to design an antenna for a satellite that would work in any orientation.  Radio engineers tried and tried, and came up with several designs, but all had particular shortcomings. The problem was loaded into a computer with a genetic algorithm, which started with a basic antenna structure, added random bits and pieces, and then ran simulations to test the antenna. If its performance was better than the last iteration, the change was kept and altered randomly again. If not, the alteration was thrown out and a new random one was tried, as sketched below. The final antenna looked like a weird stick conglomeration, but it worked beautifully and is flying in space. Thaler says that his computational creativity models are faster and better than genetic algorithms.
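That keep-if-better loop is easy to sketch. In the toy Python version below, the "antenna" is just a vector of parameters, and the fitness function is a stand-in of my own invention; the real project scored candidates with antenna simulations, which I obviously can't reproduce here:

import numpy as np

rng = np.random.default_rng(0)

def fitness(design):
    # Stand-in for the antenna simulator: any scoring function works.
    # Higher is better; the target shape here is arbitrary.
    target = np.linspace(-1.0, 1.0, design.size)
    return -np.sum((design - target) ** 2)

design = rng.normal(size=16)          # the basic starting structure
best = fitness(design)

for step in range(10_000):
    # Add a random bit or piece: mutate one parameter.
    candidate = design.copy()
    candidate[rng.integers(design.size)] += rng.normal(0.0, 0.1)
    score = fitness(candidate)
    if score > best:                  # keep improvements...
        design, best = candidate, score
    # ...and silently throw out any alteration that tests worse.

print(f"final fitness: {best:.4f}")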

I was wondering what kind of perturbations Thaler applied to his neural nets.  The only clues I got came from reading the patent summaries, and here is a quote: "System perturbations would produce the desired output. Such system perturbations may be external to the ANN (e.g., changes to the ANN data inputs) or internal to the ANN (e.g., changes to weights, biases, etc.). By utilizing neural network-based technology, such identification of required perturbations can be achieved easily, quickly, and, if desired, autonomously."
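In code terms, the two kinds of perturbation the patent distinguishes are simple to express. This is only my reading of it, and the Gaussian noise model is my own assumption; the patent doesn't specify one:

import numpy as np

rng = np.random.default_rng(3)

def perturb_external(x, sigma=0.1):
    # External perturbation: change the data going into the ANN.
    return x + rng.normal(0.0, sigma, size=x.shape)

def perturb_internal(weights, sigma=0.1):
    # Internal perturbation: change the learned weights themselves
    # (the same trick works on the biases).
    return weights + rng.normal(0.0, sigma, size=weights.shape)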

I briefly touched on another type of perturbation of artificial neural nets when I talked about synaptic pruning. Essentially, a baby creates all sorts of connections in the biological neural networks of its brain, and as it approaches puberty, it prunes the inappropriate ones. The plethora of "inappropriate" synapses, or connections between diverse concepts, is what makes a child's imagination so rich.  In my proposed method of artificial neural net perturbation, I suggested that synaptic pruning could take place by killing some of the inputs into the various layers of the multilayer perceptron, and then letting the network run to see what comes out.
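A Python sketch of that pruning idea, using a random mask to kill a fraction of the connections feeding the hidden layer; the network sizes and pruning rate are my own made-up illustration:

import numpy as np

rng = np.random.default_rng(7)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(8, 6))   # input -> hidden weights of a "trained" net
W2 = rng.normal(size=(6, 2))   # hidden -> output weights

def forward(x, prune_rate=0.0):
    # Synaptic pruning: zero out a random fraction of the connections
    # into the hidden layer, then let the network run and see what
    # comes out the other end.
    mask = rng.random(W1.shape) >= prune_rate
    hidden = sigmoid(x @ (W1 * mask))
    return sigmoid(hidden @ W2)

x = rng.normal(size=8)
print("intact:", forward(x))
print("pruned:", forward(x, prune_rate=0.3))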

I came upon a few more methods of creating perturbations in neural networks while reading about genetic mutations. An article that I was reading described mutation methods that included substitution, insertion, deletion and frameshift. The thought struck me that these would be other ideal ways to perturb artificial neural nets. In substitution, you could swap neurons from one layer to another. Using the insertion mechanism derived from genetics, you could add another neuron, or even a layer, to an already-trained network. Deletion could be implemented by dropping an entire neuron out of a layer.  Frameshift is an intriguing possibility as well. It means that if a layer fed a series of perceptrons in an adjacent layer, you would shift the order. So, for example, if Layer 3 fed a series of four perceptrons in Layer 4, instead of feeding them in order, with inputs going to L4P1, L4P2, L4P3 and L4P4, you would frameshift by one and feed them into L4P2, L4P3, L4P4 and L4P1 to create the perturbation.
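Each of those mutations maps neatly onto array operations on a layer's weight matrix. Below is a toy Python illustration of my own, not an established algorithm; the frameshift is just a rotation of the destination perceptrons, as described above:

import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W = rng.normal(size=(4, 4))    # "trained" Layer 3 -> Layer 4 weights
x = rng.normal(size=4)         # Layer 3's outputs

# Frameshift: Layer 3's outputs feed L4P2, L4P3, L4P4, L4P1 instead of
# L4P1..L4P4 -- i.e. rotate the destination columns by one.
frameshifted = np.roll(W, shift=1, axis=1)

# Deletion: drop perceptron 3 out of Layer 4 entirely.
deleted = np.delete(W, 2, axis=1)

# Substitution: swap the connections of two perceptrons in Layer 4.
substituted = W.copy()
substituted[:, [0, 2]] = substituted[:, [2, 0]]

# Insertion: graft a brand-new perceptron, with random incoming
# weights, into the already-trained Layer 4.
inserted = np.hstack([W, rng.normal(size=(4, 1))])

print("original:    ", sigmoid(x @ W))
print("frameshifted:", sigmoid(x @ frameshifted))
print("deleted:     ", sigmoid(x @ deleted))
print("substituted: ", sigmoid(x @ substituted))
print("inserted:    ", sigmoid(x @ inserted))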

This entire field is utterly fascinating and may hold some of the answers to the implementation of Computational Creativity. Machines may not understand things the same way that humans do, but that doesn't mean that they can't be creative.

An example of differing cognitive understanding of a problem is given by this anecdote:

A businessman was talking with his barber, when they both noticed a goofy-looking fellow bouncing down the sidewalk. The barber whispered, "That's Tommy, one of the stupidest kids you'll ever meet. Here, I'll show you."

"Hey Tommy! Come here!" yelled the barber. 

Tommy came bouncing over. "Hi, Mr. Williams!" 

The barber pulled out a rusty dime and a shiny quarter and told Tommy he could keep the one of his choice. Tommy looked long and hard at the dime and quarter and then quickly snapped the dime from the barber's hand. The barber looked at the businessman and said, "See, I told you."

After his haircut, the businessman caught up with Tommy and asked him why he chose the dime.

Tommy looked him in the eye and said, "If I take the quarter, the game is over."

In a real-life setting, I would like to quote this anecdote from Wikipedia about an actual result of a perturbation of an artificial neural network:

In 1989, in one of the most controversial reductions to practice of this general theory of creativity, one neural net termed the "grim reaper," governed the synaptic damage (i.e., rule-changes) applied to another net that had learned a series of traditional Christmas carol lyrics. The former net, on the lookout for both novel and grammatical lyrics, seized upon the chilling sentence, "In the end all men go to good earth in one eternal silent night," thereafter ceasing the synaptic degradation process. In subsequent projects, these systems produced more useful results across many fields of human endeavor, oftentimes bootstrapping their learning from a blank slate based upon the success or failure of self-conceived concepts and strategies seeded upon such internal network damage. ( http://en.wikipedia.org/wiki/Computational_creativity )

And there you have it, so much to do, so little time to do it, and so little funding to do it. But it will get done, and it will bring us into a brave new world.

1 comment:

  1. It's interesting to hear people's reactions to the "eerie Christmas carol anecdote" from Wikipedia. Most say it is pure computational coincidence. Others point out that it was generated at the level of characters, making the chance of it forming from random perturbations astronomically low. That's the power of ANNs to glean repeating features and constraint relations from data, and, in Creativity Machines, to slightly bend rules to create new and meaningful content. That's why they beat GAs, especially when the problem dimensions soar into the thousands or millions! Thanks, Stephen Thaler and those who brought us the MLP.
