Showing posts with label Computational Creativity. Show all posts
Will Computers Be Able To Have Children?
Dr. Stephen Hawking says that we should be afraid of creating Artificial Intelligence that could become a threat to man. My contention is that we are already on that path. That Pandora's Box or can of worms is already open. The only way to close a can of worms is with a bigger can, and nobody has one when it comes to the progress of technology.
When Ray Kurzweil's book, "The Age of Spiritual Machines," came out, I thought that it was a bunch of bosh -- until I got to a part of the book that was seminal for me. It was a small appendix of a few pages about building an intelligent machine in three easy paradigms. That book changed my life. One of my daughters gave me the book for Christmas, and it was the book that started me on the path to programming artificial intelligence and playing with machine learning. I never once thought that I would use Machine Learning in my job, and I was wrong.
Machine Learning has a long way to go, as does Artificial Intelligence, but we are making great headway. In previous blog entries, I make the case for every Operating System, or OS, to have an artificial neural network embedded in it. I also make the case for standardized neural network notation, so that I can transfer, or sell, what my machine has learned to your machine. And I make the case in this blog post that we can evolve smarter and smarter machines if, every time that we need to load a new operating system, we let an existing operating system impart its neural nets to the new machine. One of the differences between humans and other animals is that our knowledge is passed from generation to generation. If we do that with computers, we are well on the way to making scary intelligent machines.
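As a minimal sketch of what that standardized, transferable notation might look like, here is a toy version in Python using JSON. The format name and function names are my own invention, not an existing standard:

```python
import json

def export_network(weights, biases):
    """Serialize a trained network's parameters into a portable JSON string."""
    return json.dumps({"format": "ann-v1", "weights": weights, "biases": biases})

def import_network(blob):
    """Rebuild the parameters on another machine from the portable notation."""
    data = json.loads(blob)
    return data["weights"], data["biases"]

# A toy single-neuron "network": two input weights and one bias.
blob = export_network([[0.7, 0.7]], [-1.0])
weights, biases = import_network(blob)   # the receiving machine's copy
```

Any machine that agrees on the notation could buy, sell, or inherit the trained nets, which is the whole point of standardizing it.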
So if a computer can pass on knowledge to a new generation of computers, by passing down knowledge embedded in Artificial Neural Networks, can one say that the new computer is a child of the old computer?
I have opined on how to create Artificial Consciousness (more in a later blog topic on how I can make a computer have the worry emotion). I have also talked about Computational Creativity and Dr. Stephen Thaler's work. So if we evolve computer intelligence to the point that it can seed other computers with that intelligence, then we are on the way to computers having virtual children.
The way that I see Artificial Intelligence evolving is that no computer can be an expert on everything. As computers become more and more intelligent, there will be specialization among the ranks of computers, as there is in human endeavor. Some computers will trade securities. Some will diagnose illness. Others will run power plants. There will be a hierarchy of computer intelligence, as there is in humans now. And the progeny of each computer will be a mirror of its parents. It's hard to imagine, but if computers do acquire consciousness, intelligence, personality and creativity, then the internet will become a computer society mirroring human society. And that is when we will have to fear it.
Alan Turing never knew what he was getting into when he proposed his machines and the capability of passing a Turing Test. We are on the cusp of something mind-boggling, but at the moment, I would be content to create an Artificial Neural Network that makes money for me while I ruminate about Artificial Intelligence.
How To Make Scary Smart Computers
If you have ever visited this blog before, you will know that I am heavily into Artificial Intelligence. I have been playing with artificial neural networks for about ten years. Recently buoyed by the Alan Turing movie "The Imitation Game" and "The Innovators" by Walter Isaacson, I have decided to start mapping out what it would take to make the truly intelligent, conscious, creative computer that would easily pass a Turing Test.
In previous blog entries, like the one immediately below, I outline the need for an autonomous master controller that can stop and start programs based on what an embedded set of artificial neural networks comes up with. I have also started thinking about a standardized artificial neural net that can be fed, already trained, into any machine, so there can be a market for trained artificial neural nets. I have outlined a possible algorithm for perturbations in artificial neural nets that can create computational creativity. The list goes on and on.
But here is another essential element: if computers have embedded artificial neural networks in them, then for the machine to become scary smart, it has to be able to pass on what it has learned to the next generation. So how do you accomplish that? Easy.
Every time that an Operating System or OS is upgraded, it is upgraded by a predecessor machine that passes on the trained neural nets that it has built up in its artificial lifetime. It is the equivalent of a parent teaching a child.
In the biological world, what makes humans different from the animals is that we can pass on wisdom, knowledge and observation. In the animal kingdom, each new generation starts from where its parents did -- near zero. Animals learn from their parents, but not much more. I, on the other hand, can go and read a book, say one written by the Reverend Thomas Bayes, who wrote in the 1700's on what became Bayesian theory, and I can read last week's journals. I can pick and choose to learn whatever I want from the human body of knowledge. But first and foremost, I get my first instruction from my parents.
So if a new Operating System is loaded into a computer from an existing one with artificial intelligence, then it won't have to start from scratch. And if you embed in the artificial neural networks the ability to read and learn by crawling the internet, soon you will have a scary smart computer. The key is that each machine and each server is capable of passing on what it has learned to new computers.
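The parent-to-child handoff can be sketched in a few lines. This is an illustrative toy, not a real OS mechanism: the "child" network simply starts from a copy of its parent's learned weights instead of random ones, and then keeps training from there:

```python
import random

def new_network(n_inputs, parent_weights=None):
    """A 'child' network inherits its parent's weights instead of starting
    from random ones -- the equivalent of a parent teaching a child."""
    if parent_weights is not None:
        return list(parent_weights)          # inherit the parent's learning
    return [random.uniform(-1, 1) for _ in range(n_inputs)]  # start from scratch

parent = [0.9, -0.4, 0.2]   # weights learned over the parent's "lifetime"
child = new_network(3, parent_weights=parent)
# The child can now continue training from where the parent left off.
```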
I do believe Dr. Stephen Hawking when he says that one of the threats to mankind will be some of the artificial intelligence that we create. But like nuclear fission, we have to build it for the sake of knowledge and progress, even if it has the potential to do the human species grave harm.
We have already opened the can of worms of artificial intelligence. Once opened, there is no way to close it unless you have a bigger can. Unfortunately, the contents of that can of worms are expanding faster than we can keep up with them. The best way to control artificial intelligence is to have a hand in inventing a safe species of it.
How To Create Computational Creativity Without Killing Anything
In a previous blog post, I outlined some ideas on Computational Creativity and the seminal work of Dr. Stephen Thaler. You can read it HERE. What Dr. Thaler did was create neural nets, train them to do things like recognize coffee cups, and then create a layer of supervisory neural nets to watch those nets. Then he would bring them to near death by killing a pile of the neurons in the layers. In very anthropomorphic terms, the neural network, in the paroxysms of near death, would create unique designs for other coffee cups. He called this process the Creativity Machine, and it was some of the first steps in Computational Creativity using Artificial Neural Nets.
What Thaler was doing by formulating a mechanism for the Eureka moment was creating the impetus, spark and ignition of thoughts from a machine that was programmed not to think outside the box, but to slavishly follow a compiled set of instructions in its register stack. His unique algorithm was to produce a perturbation in the execution of the neural network process to create a confabulation, or false idea, that would be new and unique. For the time (and it still may be a valid algorithm), it was quite revolutionary. The problem to solve was to find some way to spark new information synthesis out of pre-programmed silicon transistor pathways. After all, ideas can't just pop into a computer's circuits.
Our brains have massively parallel neural nets, and just thinking about anything sparks new thoughts. Our thinking processes undergo perturbations, essentially interrupt vectors in staid ways of thinking. That is what Thaler was looking for inside the computer when he started the practice of committing neuroncide and killing neurons.
In another blog article, where I try to link synaptic pruning to a method of creating perturbations in Artificial Neural Networks ( HERE ), I came up with the idea of crippling instead of killing the neurons by pruning some random inputs in the layers. I haven't tested it yet. I don't think that the resultant "ideas" or designs would be as far-out or as revolutionary as Thaler's killings, but it might prove useful.
Then it struck me that perhaps brain damage isn't a viable algorithm in the long term. Even though creativity can be brainlessly expressed when monkeys finger-paint and elephants do the Picasso thing with their trunks, one would want brains, even artificial ones, with all of their faculties for serious creative thought. So there has to be a better way than Thaler's, without killing anything.
If you want to avoid killing and near-death experiences just to create something, you still need the perturbations in the regular logic activity of artificial neural networks. Otherwise you would just get the outputs that the neural nets were trained for. However, to Thaler's credit, he did introduce another mechanism that can be useful in creating these perturbations and producing unique thoughts, and that is the supervisory network atop the thinking network.
In a future blog post, I will outline how I think that supervisory networks can contribute to Machine Consciousness, but for now, they can be integrated for non-death perturbations and idea creation in a new breed of Creativity Machines.
First let's look at a simple artificial neuron:
(I stole this gif from Thaler's website: http://imagination-engines.com/ )
By adjusting the weights and thresholds, the simple neuron becomes one of the Boolean logic gates that computers are made of. It can be an AND gate or an OR gate. In this case the weights are decimals and the inputs and outputs are integers.
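A minimal sketch of that idea in Python: the same neuron, with decimal weights and integer inputs and outputs, becomes an AND gate or an OR gate depending only on where the threshold sits. The specific weight and threshold values are my own illustrative choices:

```python
def neuron(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of the inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

cases = [(0, 0), (0, 1), (1, 0), (1, 1)]
# With weights of 0.6 each, a threshold of 1.0 requires both inputs: an AND gate.
and_gate = [neuron([a, b], [0.6, 0.6], 1.0) for a, b in cases]
# Lower the threshold to 0.5 and either input suffices: an OR gate.
or_gate = [neuron([a, b], [0.6, 0.6], 0.5) for a, b in cases]
```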
There is no activation function here. Usually an activation function is something like a sigmoid function. It takes the sum of the inputs multiplied by their weights, and after calculating the function, the output values fall in a small range -- between 0 and 1 for a sigmoid -- and the neuron activates when the value of the function crosses some threshold > 0.
If the threshold value for the neuron firing is set at, say, 0.9, or almost one, then anything below that is ignored and the neuron doesn't fire. But that doesn't mean that the activation function is quiescent. It still calculates and spits out numbers in its range. So if the activation threshold is 0.9 and the result of the sigmoid function is, say, 0.6, it will not activate the neuron. But we could say that the neuron is in an "excited" state, because the output value of the sigmoid function is near the firing threshold. It is just on the cusp of firing. This excited state could be used as a perturbation to excite unique thoughts. This is where the supervisory network comes in.
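Here is a small sketch of that excited state in code. The firing threshold of 0.9 comes from the example above; the 0.6 floor for "excited" is my own arbitrary choice for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

FIRE_AT = 0.9     # activation threshold from the example above
EXCITED_AT = 0.6  # below firing, but close enough to call "excited" (my choice)

def neuron_state(inputs, weights):
    """Classify a neuron as fired, excited (on the cusp), or quiet."""
    out = sigmoid(sum(i * w for i, w in zip(inputs, weights)))
    if out >= FIRE_AT:
        return "fired"
    if out >= EXCITED_AT:
        return "excited"   # fodder for a watching supervisory network
    return "quiet"

state = neuron_state([1.0, 1.0], [0.3, 0.3])  # sigmoid(0.6) is about 0.65
```

A supervisory network scanning for neurons in the "excited" state is what the next paragraphs build on.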
A supervisory circuit can be a lot more powerful than Thaler envisioned. First of all, supervisory circuits overlaid on top of artificial neural networks, placed in an n-tier of recursive monitoring, are the first steps to machine consciousness. More on that in future blog posts.
But suppose that an independently trained ANN is monitoring other circuits for semi-excited layers or neurons, and reaches out, creating a synaptic link to these excited neurons. This may or may not cause the supervisory circuit to breach its firing thresholds and get an output where none was expected. The discovery of the unique ideation is predicated on the model of Mother Nature, where she plays dice and creates millions of redundant things in the hope of one surviving and making something wonderful. In like fashion, the outputs of all networks could be ANDed or ORed with another supervisory network monitoring for unique things, and the stimulation and simultaneous firing would cause perturbations and new ideas from two unrelated neural networks.
That would be the mechanism for a perturbation and confabulation: two fixed networks coming up with a new idea without having to kill anything, like connections or neurons. There would be no near-death creativity, just a flash in the pan triggered by something that just might turn out to be useful. A pseudo-schematic is shown below:
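The coincidence-detection idea can also be sketched in code: a supervisory function watches the raw outputs of two unrelated networks and flags a candidate "idea" only when both are excited at the same moment. The band values and names here are my own illustrative guesses, not Thaler's:

```python
def supervisor(output_a, output_b, excited_band=(0.6, 0.9)):
    """Flag a candidate 'idea' when two unrelated networks are excited at once.

    A network is 'excited' when its raw output sits just under its firing
    threshold. ANDing the two excited states is the coincidence detector;
    no neurons are killed, only watched.
    """
    lo, hi = excited_band
    excited_a = lo <= output_a < hi
    excited_b = lo <= output_b < hi
    return excited_a and excited_b   # simultaneous excitation -> perturbation

idea = supervisor(0.72, 0.85)     # both networks on the cusp of firing
no_idea = supervisor(0.72, 0.20)  # only one excited, nothing to report
```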
Our human brains operate on a massively parallel neural network. This concept is a bit of bio-mimicry that extends that.
The concept of killing brain cells in the name of creativity is not exactly new in the biological world either. We apparently kill thousands of brain cells with an alcoholic drink or a few puffs on a joint. Many people say that this is the key to creativity. After all, Hemingway wrote his opus half-drunk and won the Nobel Prize for Literature. However, there are millions who drink and don't turn out anything creative except for a bum's life on the Nickel, sleeping between parked cars in the rain. But je digress.
So all in all, I think that this could be an alternative method for machines to dream up new ideas in the realm of Computational Creativity. It may not be as much fun as watching things gasp out creativity in their death throes, but it could be more reliable and ultimately less destructive to some perfectly good artificial neural nets.
Burning Ants With Magnifying Glasses, Computational Creativity and Other Artificial Intelligence Inspirations
I was going to call this article Computational Creativity Confabulation Using Artificial Neural Nets, but the immature little boy in me made me do otherwise.
I've been fascinated by the works of Dr. Stephen Thaler and his work on Imagination Engines and a unified model of Computational Creativity. In the Artificial Intelligence domain, the ultimate Turing Test would be a computer that rivals a human at creativity, or at consciously designing creative things. There isn't much on his work in the literature other than in the body of patents that Thaler has been granted, but I suppose that is because he is trying to monetize them and they are competition-sensitive algorithms and applications.
When I started Googling around his work, I landed on the Wikipedia page for Computational Creativity. Thaler has a very small section there on a unifying model of creativity based on traumatized artificial neural networks. I have had a lot of experience coding and playing with my own brand of artificial neural networks, specifically the multilayer perceptron models, and let me tell you that it is both fascinating and frustrating work.
Seemingly, knowledge is stored in the overall collection of biases and weights, enough for a dumb piece of software to make some startling, human-like decisions with just examples and no background theory in the art of whatever you are trying to make it learn.
It is quite mindblowing. For me, the Eureka moment came when I saw an artificial neural network autonomously shift output patterns, without any programming other than learning cycles, based on what it was seeing. It was a profound moment for me to see a software program on a computer recognize a complexity and reduce it to a series of biases, weights and activations to make a fundamental decision based on inputs. It was almost a life-changing event for me. It made my profession significant to me. A trivial analogy would be a watch making an adjustment for daylight savings time based on the angle of the sun hitting it at a specific time, if the watch were trained to tell the time by the position of the sun in the sky.
Thaler goes further than I would in describing behaviors of artificial neural networks in cognitive terms based on anthropomorphic characteristics like waking, dreaming, and near death. His seminal work, however, deals with training artificial neural networks to do something, and then perturbing them (a fancy term for throwing a spanner in the works) to see what happens to the outputs. The perturbations can be external and/or internal ones, like messing with the inputs, weights, biases and such, with supervisory circuits to throw out the junk and keep the good stuff. For example, in his patent application, he has a diagram of a coffee mug being designed by perturbing an artificial neural network. His perturbations cause confabulation, or confabulatory patterns.
A confabulation is a memory disturbance caused by disease or injury, where the person makes up or synthesizes memories to fill in the gaps. In a psychological sense, these memories are fabricated, distorted or misinterpreted, and can be caused in humans by even such things as alcoholism.
Thaler does to neural nets the equivalent of what every rascally little boy does to earthworms or frogs. They put a burning match to, or focus a magnifying glass on, various parts of the frog, worm or even ant, and then observe how the organism reacts. It brings to mind the rock song by The Who called "I'm a Boy".
Creativity in humans is a funny business. Perturbations are the key. You need to perturb your usual thought patterns and introduce new ones to come up with innovative concepts. We all know how Kekule couldn't figure out the chemical structure of benzene until he had a dream about a snake eating its tail, and he twigged onto the idea of cyclical hydrocarbons, and with them organic chemistry. College students today still fail by the thousands in introductory courses on organic chemistry and the field of science uncovered by that perturbation.
Essentially, creativity involves putting together diverse concepts to synthesize new ideas. Computational creativity involves buggering up perfectly good artificial neural networks to see what they come up with. You have to introduce perturbations in "conventional thought" somehow. Thaler believes that this paradigm beats genetic algorithms. I was particularly impressed by a genetic algorithm crunching away to design an antenna for a satellite that would work in any orientation. Radio engineers tried and tried, and came up with several designs, but all had particular shortcomings. The problem was loaded into a computer with a genetic algorithm, which would start with a basic antenna structure, add random bits and pieces, and then run some programs to simulate and test the antenna. If its performance was better than the last iteration's, the change was kept and altered randomly again. If not, the alteration was thrown out, and a new random thing was tried. The final antenna looked like a weird stick conglomeration, but it worked beautifully and is flying in space. Thaler says that his computational creativity models are faster and better than genetic algorithms.
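The keep-if-better loop in the antenna story can be sketched in a few lines. This is not the NASA code, just a toy with a made-up fitness function standing in for the antenna simulator:

```python
import random

random.seed(42)  # deterministic toy run

def fitness(design):
    """Toy stand-in for the antenna simulator: score a list of element lengths
    by how close they come to an arbitrary target shape (higher is better)."""
    target = [0.25, 0.5, 0.75, 1.0]
    return -sum((d - t) ** 2 for d, t in zip(design, target))

design = [0.5, 0.5, 0.5, 0.5]          # the basic starting structure
best = fitness(design)
for _ in range(2000):
    candidate = [d + random.gauss(0, 0.05) for d in design]  # random alteration
    score = fitness(candidate)
    if score > best:                   # keep improvements, discard the rest
        design, best = candidate, score
```

The final design, like the real antenna, need not resemble anything an engineer would draw; it only has to score well.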
I was wondering what kind of perturbations Thaler did to his neural nets. The only clues that I got came from reading the patent summaries, and here is a quote: "System perturbations would produce the desired output. Such system perturbations may be external to the ANN (e.g., changes to the ANN data inputs) or internal to the ANN (e.g., changes to Weights, biases, etc.) By utilizing neural network-based technology, such identification of required perturbations can be achieved easily, quickly, and, if desired, autonomously."
I briefly touched on another type of perturbation of artificial neural nets when I talked about synaptic pruning. Essentially, a baby creates all sorts of connections in the biological neural networks of its brain, and as it approaches puberty, it prunes the inappropriate ones. The plethora of "inappropriate" synapses, or connections to diverse concepts, is what makes a child's imagination so rich. In my proposed method of artificial neural net perturbation, I suggested that synaptic pruning could take place by killing some inputs into the various layers of the multilayer perceptron, and then letting the network run to see what comes out.
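A minimal sketch of that pruning perturbation, with names and the pruning fraction chosen for illustration: zero out a random fraction of a layer's input weights, leaving the neurons themselves alive, and see what the crippled network produces:

```python
import random

def prune_inputs(weight_matrix, fraction=0.2, rng=random):
    """Cripple rather than kill: zero a random fraction of a layer's input
    weights. The neuron survives; some of its connections go silent."""
    pruned = [row[:] for row in weight_matrix]   # leave the original intact
    for row in pruned:
        for j in range(len(row)):
            if rng.random() < fraction:
                row[j] = 0.0
    return pruned

# One layer of a toy multilayer perceptron: two neurons, three inputs each.
layer = [[0.4, -0.7, 0.1], [0.9, 0.2, -0.3]]
perturbed = prune_inputs(layer, fraction=0.5, rng=random.Random(1))
```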
I came upon a few more methods of creating perturbations in neural networks while reading about genetic mutations. An article that I was reading described some mutation methods that included substitution, insertion, deletion and frameshift. The thought struck me that these would be other ideal ways to perturb artificial neural nets. In substitution, you could swap neurons from one layer to another. Using the insertion algorithm derived from genetics, you could add another neuron, or even a layer, to an already-trained network. Deletion could be implemented by dropping an entire neuron out of a layer. Frameshift is an intriguing possibility as well. What that means is that if a specific series of perceptron/layer pairs fed a series to an adjacent layer, you would shift the order. So for example, if Layer 3 fed a series of four perceptrons in Layer 4, instead of feeding them in order, with inputs going to L4P1, L4P2, L4P3 and L4P4, you would frameshift by one and feed them into L4P2, L4P3, L4P4 and L4P1 to create these perturbations.
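Two of those mutation-inspired perturbations, frameshift and deletion, are simple enough to sketch directly. This is an illustrative toy using labels rather than real weights:

```python
def frameshift(feeds, shift=1):
    """Frameshift perturbation: rotate the order in which one layer's outputs
    feed the next layer's perceptrons, as in a genetic frameshift mutation."""
    return feeds[shift:] + feeds[:shift]

def deletion(feeds, index):
    """Deletion perturbation: drop one connection out of the series entirely."""
    return feeds[:index] + feeds[index + 1:]

# Layer 3's four outputs fed Layer 4's perceptrons in order...
layer3_to_layer4 = ["L4P1", "L4P2", "L4P3", "L4P4"]
shifted = frameshift(layer3_to_layer4)   # ...now each arrives one position over
pruned = deletion(layer3_to_layer4, 2)   # ...or the feed to L4P3 disappears
```

Substitution and insertion would follow the same pattern, swapping or adding entries in the series before the layer runs.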
This entire field is utterly fascinating and may hold some of the answers to the implementation of Computational Creativity. Machines may not have the same cognitive understanding of things that humans do, but that doesn't mean that they can't be creative.
An example of differing cognitive understanding of a problem is given by this anecdote:
A businessman was talking with his barber, when they both noticed a goofy-looking fellow bouncing down the sidewalk. The barber whispered, "That's Tommy, one of the stupidest kids you'll ever meet. Here, I'll show you."
"Hey Tommy! Come here!" yelled the barber.
Tommy came bouncing over. "Hi, Mr. Williams!"
The barber pulled out a rusty dime and a shiny quarter and told Tommy he could keep the one of his choice. Tommy looked long and hard at the dime and quarter and then quickly snapped the dime from the barber's hand. The barber looked at the businessman and said, "See, I told you."
After his haircut, the businessman caught up with Tommy and asked him why he chose the dime.
Tommy looked him in the eye and said, "If I take the quarter, the game is over."
In a real-life setting, I would like to quote this anecdote about an actual result of a perturbation of an artificial neural network, taken from Wikipedia:
In 1989, in one of the most controversial reductions to practice of this general theory of creativity, one neural net termed the "grim reaper," governed the synaptic damage (i.e., rule-changes) applied to another net that had learned a series of traditional Christmas carol lyrics. The former net, on the lookout for both novel and grammatical lyrics, seized upon the chilling sentence, "In the end all men go to good earth in one eternal silent night," thereafter ceasing the synaptic degradation process. In subsequent projects, these systems produced more useful results across many fields of human endeavor, oftentimes bootstrapping their learning from a blank slate based upon the success or failure of self-conceived concepts and strategies seeded upon such internal network damage. ( http://en.wikipedia.org/wiki/Computational_creativity )
And there you have it, so much to do, so little time to do it, and so little funding to do it. But it will get done, and it will bring us into a brave new world.