Creating A Technology Singularity In Three Easy Paradigms
If you are only vaguely aware of what a technological singularity is, let me give you a short refresher and quote Wikipedia. "A computer, network, or robot would theoretically be capable of recursive self-improvement (redesigning itself), or of designing and building computers or robots better than itself on its own. Repetitions of this cycle would likely result in a runaway effect – an intelligence explosion – where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence."
Quoting Wikipedia again on recursive self-improvement: "Recursive self-improvement is the speculative ability of a strong artificial intelligence computer program to program its own software, recursively. This is sometimes also referred to as Seed AI because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve the design of its constituent software and hardware. Having undergone these improvements, it would then be better able to find ways of optimizing its structure and improving its abilities further. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities."
So the bottom line is that a Seed AI would start either smartening itself up, or replicating itself and learning everything that it comes in contact with. And its library is the entire world if that Seed AI is connected to the internet.
We do have a biological model for replicating intelligence, and it is the gene. Richard Dawkins has pointed out that in the case of an animal, for example an elephant, a gene has to build an entire elephant to propagate itself. A human gene has to build another human so that it has a brain capable of intelligence. Not intelligence itself, mind you, but a capability (sometimes, for some people). Here is the difference. Suppose that we have children, and our children are born with the intelligence that we, the parents, have gleaned over our lifetimes. And suppose it repeated itself cumulatively: their children would have their accumulated intelligence plus ours, and so on. In just a few generations, the great-grandparents would seem like primitives with a limited range of knowledge of how the world, the universe and everything operates. Unfortunately, intelligence is not transmitted genetically -- only the capability for intelligence (in some cases).
There is, however, some intelligence or know-how that is genetically transmitted. Animals and bugs have some sort of software or firmware built into them. Insects know how to fly perfectly after hatching and know enough to avoid a swat. Mosquitoes know how to track humans for blood. Leafcutter ants know enough to chop leaves and ferment them to grow fungus for food without being taught. Animals are prepared with all the BIOS firmware that they need to operate their lives, and they pass it on. I suspect that if humans were to continuously evolve according to Darwinian evolution, we just might pass on intelligence, but that information has been left out of our genetic code. Instead we have the ability to learn, and that's it. If we cracked the genetic code of how embedded behaviors are passed on in, say, insects, and applied that to man, we could in fact win the Intelligence Arms Race between humans and computers. The limitation would be the size of the human genome in terms of carrying coded information.
But Darwinian evolution is dead in the cyber world. In the biological world, the penalty for being wrong is death. If you evolved towards a dead-end failure, you too would die, and so would your potential progeny. In the virtual world, since cycles and generations are arbitrary, failure has no consequences. I myself have generated thousands, if not millions, of neural nets only to throw them out when the dumb things were too thick to learn (wrong number of layers, wrong starting values for weights and biases, incomplete or dirty data for training sets, etc., etc.). But there are lessons to be learned from mimicking Nature. One of those lessons is how biological brains work. More on this a bit later.
So, if a technological singularity is possible, I would like to start building the Seed AI necessary for it. Why? Why do people climb Everest? I am incorrigibly curious. As a child, I stuck a knife into an electrical socket to see what would happen. I could not believe the total unpleasantness of the electric shock that I got. Curiosity killed the cat, but Satisfaction brought him back. I did go on to electrical engineering, proving that Freudian theory is bunk and aversion therapy doesn't work. So without further ado, I would like to present the creation of a Technological Singularity in Three Easy Paradigms.
Paradigm One ~ Perturbation: Create supervisory neural nets to observe or create a perturbation in an existing Artificial Neural Network. The perturbation could be anything. It could be a prime number buried within the digit string of the resultant calculation of gradient descent. (Artificial neurons generate long strings of digits after the decimal point.) It could be an input that throws a null-type exception. It could be a perfect sigmoid value of 1.000 from an activation function. It could be anything, but you need a perturbation seed that the supervisory circuit recognizes.
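To make this concrete, here is a minimal Python sketch of such a supervisory check, assuming two illustrative perturbation seeds from the paragraph above: a sigmoid saturating at 1.000, and a prime number hiding in the decimal digits of an activation. The function names, the tolerance, and the five-digit window are my own assumptions, not anyone's production code.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def is_prime(n):
    """Trial-division primality test, fine for small numbers."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def perturbation_seed(activation, digits=5):
    """Supervisory check: flag a near-saturated sigmoid, or a prime
    lurking in the first few decimal digits of the activation."""
    if activation > 1.0 - 1e-9:          # effectively a "perfect" 1.000
        return "saturated"
    fractional = int(abs(activation) * 10 ** digits) % 10 ** digits
    if is_prime(fractional):
        return "prime-in-decimals"
    return None                          # nothing unusual: carry on
```

A supervisory net would of course learn its own trigger conditions; this hand-coded version just shows where such a trigger sits in the loop.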
Paradigm Two ~ Instantiation & Linking: Once the supervisory circuit recognizes the perturbation, it goes into Instantiation & Linking mode. When I homebrewed my own AI, I went counter to conventional ANN (Artificial Neural Net) programming. I went totally object-oriented. Each neuron and/or perceptron was an object. Each layer was an object containing its neurons. The axons, storing the outputs of the previous layers, were objects. Then I made controllers with many modes and methods of configuration, and I could instantiate a brand-new AI machine, or a part of another one, just by calling the constructor of the controller with some configuration parameters.
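Here is a hedged Python sketch of that object-oriented layout (the class and method names are mine, not the author's actual code): each neuron is an object, each layer holds neuron objects, and a controller's constructor instantiates a whole new network from configuration parameters.

```python
import math
import random

class Neuron:
    """One artificial neuron as a stand-alone object."""
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.bias = random.uniform(-1, 1)

    def fire(self, inputs):
        total = self.bias + sum(w * x for w, x in zip(self.weights, inputs))
        return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

class Layer:
    """A layer is itself an object holding neuron objects."""
    def __init__(self, n_neurons, n_inputs):
        self.neurons = [Neuron(n_inputs) for _ in range(n_neurons)]

    def fire(self, inputs):
        return [n.fire(inputs) for n in self.neurons]

class Controller:
    """Calling the constructor with configuration parameters
    instantiates a brand-new network on demand."""
    def __init__(self, layer_sizes):
        self.layers = [Layer(n_out, n_in)
                       for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

    def forward(self, inputs):
        for layer in self.layers:
            inputs = layer.fire(inputs)  # the "axon": previous layer's outputs
        return inputs

net = Controller([3, 4, 2])  # 3 inputs, one hidden layer of 4, 2 outputs
print(net.forward([0.1, 0.5, 0.9]))
```

The payoff of this design is exactly the one the paragraph describes: spinning up a fresh machine is one constructor call away.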
So, once a perturbation was recognized, I would throw up a new ANN. Its inputs would be either random or chosen according to a fuzzy rules engine. The fuzzy rules engine would be akin to the hypothalamus in the brain, which creates chemo-receptors and hormones and such. Not everything is a neural net, but everything is activated by neural nets. With current research in ANNs, you could even have a Long Short-Term Memory (LSTM) network (google the academic papers) remembering inputs that just don't fit into what the machine knows, and feed those into the newly made network. Or you could save those stimuli as inputs for a new neural network to come up with a completely different function.
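One way to sketch that "save the stimuli for a new network" idea is a misfit buffer: remember inputs the current machine can't handle confidently, and when enough pile up, signal that a fresh net should be spawned and trained on them. The class name, threshold, and capacity below are my illustrative assumptions; a real system might use an LSTM for the remembering, as the text suggests.

```python
class MisfitBuffer:
    """Collect inputs the current machine can't classify confidently;
    once enough accumulate, signal that a new network should be
    instantiated and trained on them."""
    def __init__(self, confidence_threshold=0.6, capacity=100):
        self.confidence_threshold = confidence_threshold
        self.capacity = capacity
        self.misfits = []

    def observe(self, inputs, confidence):
        """Return True when it's time to spawn a new net for the misfits."""
        if confidence < self.confidence_threshold:
            self.misfits.append(inputs)
        return len(self.misfits) >= self.capacity

buf = MisfitBuffer(confidence_threshold=0.6, capacity=2)
buf.observe([0.9, 0.1], 0.95)          # well understood: ignored
buf.observe([0.4, 0.4], 0.20)          # misfit: remembered
spawn = buf.observe([0.5, 0.5], 0.10)  # second misfit: buffer is full
print(spawn)  # True -- hand buf.misfits to a freshly constructed network
```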
Paradigm Three ~ Learning: So now that you have perturbed the network to throw up some new neural nets and you have done the linkages, you can train them by back propagation to learn, calculate or recognize stuff. It could be stuff from the Long Short-Term Memory net. It could be stuff gleaned from crawling the internet, or by plumbing the depths of data in the machine's own file system. It could be anything. Here's the good part. Suppose that the new neural nets are dumber than doorposts. Suppose they can't learn beyond a reasonable threshold of exactitude. So what do you do? They could be re-purposed, or simply thrown out. Death of neurons means nothing to a machine. A human supposedly kills 10,000 neurons with every drink of alcohol, and still has a pile left. The fact that heavy drinkers turn into slobbering idiots in their old age won't happen with machines, because unlike humans, they can make more neurons.
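A toy illustration of that train-or-discard loop: a single sigmoid neuron learning the OR function by gradient descent, thrown out and replaced whenever it can't beat the error threshold. All the hyperparameters here are illustrative guesses, not the author's.

```python
import math
import random

def train_or_discard(dataset, epochs=3000, lr=0.5, threshold=0.2, tries=5):
    """Train a fresh single-neuron net; if it stays dumber than a
    doorpost (worst-case error above threshold), discard it and
    instantiate another. Give up after a fixed number of tries."""
    for _ in range(tries):
        w = [random.uniform(-1, 1), random.uniform(-1, 1)]
        b = random.uniform(-1, 1)
        for _ in range(epochs):
            for (x1, x2), target in dataset:
                out = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
                delta = (out - target) * out * (1 - out)  # backprop gradient
                w[0] -= lr * delta * x1
                w[1] -= lr * delta * x2
                b -= lr * delta
        worst = max(abs(t - 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b))))
                    for (x1, x2), t in dataset)
        if worst < threshold:
            return w, b          # this net learned: keep it
    return None                  # every attempt failed: all discarded

OR_GATE = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_or_discard(OR_GATE))
```

Because discarded nets cost nothing, the loop can afford to be ruthless, which is exactly the point of the paragraph above.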
So once you have an intelligent machine, and it has access to the cloud, it itself becomes a virus. Nearly every object-oriented language gives you a way to clone an object. If the neural networks are objects in memory that can be serialized, then the whole shebang can be cloned. If you are not into programming, what the previous sentence means is that with Object Oriented Programming, the objects are held in memory. For example, you could have an object called "Bike". Bike has several methods like Bike.getColor() or Bike.getNumberOfGears(). Even though these objects are in dynamic memory, they can be frozen into a file on the disk and resurrected in their previous state. So if you had a blue bike, did Bike.setColor("red"), and serialized your bike, it would remember it was red when you brought it back to life in memory.
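In Python, that whole freeze-and-resurrect trick looks like this, assuming `copy.deepcopy` for the in-memory clone and `pickle` for serialization (the Bike example follows the article; the method names are adapted to Python style).

```python
import copy
import pickle

class Bike:
    """The article's example object, sketched in Python."""
    def __init__(self, color, gears):
        self.color = color
        self.gears = gears

    def set_color(self, color):
        self.color = color

bike = Bike("blue", 21)
bike.set_color("red")

# Clone the live object in memory...
twin = copy.deepcopy(bike)

# ...or freeze it to bytes (or a file on disk) and resurrect it later.
frozen = pickle.dumps(bike)
resurrected = pickle.loads(frozen)
print(resurrected.color)  # prints "red" -- it remembers being repainted
```

If a whole network of neuron, layer, and controller objects is serializable the same way, then copying the machine is no harder than copying the bike.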
Having a neural network machine that could clone itself and pass on its intelligence -- well, that's the thin edge of the wedge to getting to a Technology Singularity. That's where you start to get a Frankencomputer. That is where you don't know what your machine knows. And with almost every object connected to the Internet of Things and the Internet of Everything, the intelligence in the machine could figure out a way to act and do things in the real world. It could smoke you out of your house by sending a command to your NEST thermostat to boil the air in the house. That's when things get exciting, and that's what scares Dr. Stephen Hawking. Remember, he relies on a machine to speak for him. The last thing that he wants is for that machine to have a mind of its own.
I'd write more on the subject, but I have to go to my software development kit and start writing a perturbation model for my object oriented neural nets. I ain't afraid -- YET!