All Things Techie With Huge, Unstructured, Intuitive Leaps
Showing posts with label technology singularity. Show all posts

A New Branch of Artificial Intelligence Mathematics Is Needed For Artificial Neural Networks


Isaac Newton described himself as a "natural philosopher".  He did experiments in motion and gravity by rolling balls down inclines, measuring and extrapolating the results. He was acquainted with Archimedes of Syracuse's method of exhaustion for calculating the area inside a circle. He studied the work of the French mathematician Pierre de Fermat on tangents. He was willing to work with infinite series and alternate forms of expressing them.  He gravitated towards the idea (sorry for the pun, but it is fitting, so to speak) of the infinitesimal.  He determined the area under a curve from its rate of change, and invented calculus.  Leibniz also invented calculus independently of Newton, and he too introduced notation that made the concepts understandable and calculable.

It was Thomas Bayes who defined Bayesian inference and rules of probability. Although he developed some of the ideas in relation to proving the existence of a divine being, his formulas have been co-opted as the basis of machine learning.

The process repeated itself again with George Boole, who created a representational language and branch of mathematical logic called Boolean Algebra that is the basis of the information age.

Mathematics is a universal language with its own abstract symbology that describes concepts, observations, events, predictions and values based on numbers.  The whole shebang is independent of spoken cultural communications and human experience. It is the purest form of description in the most concise manner possible.

Those who are bound by the strictures of conventional thinking have the idea that almost everything that we know about mathematics has been described or discovered.  I beg to differ.

The recently developed mathematics of modular elliptic curves had to be used to solve Fermat's Last Theorem (https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem). There are new discoveries almost every day in mathematics, and the velocity of discovery will increase.

It is my feeling that the field of mathematics, as used to describe how things work on Earth and in the universe, is evolving into a more precise way of stating imprecise events -- which happens to be the macro human experience on Earth, and may also be the static of quantum dynamics.

The reason that I bring this up is that I just read a piece about DeepMind, Google's new AI acquisition.  It just beat the European Go champion five times in a row. Go is an ancient Chinese board game.  If you are thinking "So what?", I'd like to take this opportunity to point out that the number of possible permutations and combinations in the game of Go outnumbers the atoms in the universe.  Obviously this was not a brute-force solution.

The way that DeepMind created this machine was to combine search trees with deep neural networks and reinforcement learning.  The program would play games across 50 computers and would learn with each game.  This is how journalist John Naughton of the Guardian described what was happening:

The really significant thing about AlphaGo is that it (and its creators) cannot explain its moves. And yet it plays a very difficult game expertly. So it’s displaying a capability eerily similar to what we call intuition – “knowledge obtained without conscious reasoning”. Up to now, we have regarded that as an exclusively human prerogative. It’s what Newton was on about when he wrote “Hypotheses non fingo” in the second edition of his Principia: “I don’t make hypotheses,” he’s saying, “I just know.”

But if AlphaGo really is a demonstration that machines could be intuitive, then we have definitely crossed a Rubicon of some kind. For intuition is a slippery idea and until now we have thought about it exclusively in human terms. Because Newton was a genius, we’re prepared to take him at his word, just as we are inclined to trust the intuition of a mother who thinks there’s something wrong with her child or the suspicion one has that a particular individual is not telling the truth.  Link: http://www.theguardian.com/commentisfree/2016/jan/31/google-alphago-deepmind-artificial-intelligence-intuititive

There is something deep in me that rebels at the thought of intuition at this stage of progress in Artificial Intelligence.  I think that intuition, as well as artificial consciousness, may eventually be built with massively parallel neural networks, but we are not there yet. We haven't crossed the Rubicon as far as that is concerned.

What we really need is a new branch of mathematics to describe what goes on in an artificial neural network.  We need a new Fermat, Leibniz, Boole or Newton to start playing with the concepts and logic states of neural networks and synthesize the universal language, theories and concepts that will describe the inner workings of a massively parallel artificial neural network.

This new field could start anywhere.  The math of an individual neuron is completely understood. We know the basic arithmetic that multiplies weights with inputs and sums them. We understand how to use a sigmoid activation function or a rectified linear function to fire the activation of an artificial neuron. We understand how to use back propagation, gradient descent and a variety of other tools to adjust the weights and thresholds towards a more correct overall response.  We even understand how to put the neurons into a Cartesian coordinate matrix and apply transformation matrices. But that is as far as it goes.
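To show just how completely understood the single-neuron math is, here is a minimal sketch in Python (the function names are mine, for illustration): a weighted sum plus bias, a sigmoid activation, and one gradient-descent step on a squared-error loss.

```python
import math

def sigmoid(z):
    # squashing activation: maps any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(inputs, weights, bias):
    # the basic arithmetic: multiply weights with inputs, sum, then activate
    z = sum(w * x for w, x in zip(inputs, weights)) + bias
    return sigmoid(z)

def train_step(inputs, weights, bias, target, lr=0.5):
    # one gradient-descent step nudging the weights toward the target
    out = neuron_output(inputs, weights, bias)
    # chain rule: dLoss/dw = (out - target) * sigmoid'(z) * x
    delta = (out - target) * out * (1.0 - out)
    new_weights = [w - lr * delta * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * delta
    return new_weights, new_bias
```

That is the whole micro-level story; the macro-level behavior of millions of these, wired together, is what we have no notation for.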

We cannot define an overall function of the entire field in mathematical terms. We cannot define the field effects coupled to the inner workings and relationships of the neural networks to specific values and outcomes.  We cannot representationally take the macro functions that neural networks solve and couple them to other macro functions to get useful computational ability from a theoretical or a high-level point of view.

The time has come. We need some serious work on the mathematics of Artificial Intelligence.  We need to be able to describe what the nets have learned and be able to replicate that with new nets. We need a new language, a new understanding and a new framework using this branch of mathematics.  It is this field that will prevent a technology singularity.





Creating A Technology Singularity In Three Easy Paradigms


If you are only vaguely aware of what a technological singularity is, let me give you a short refresher and quote Wikipedia.   "A computer, network, or robot would theoretically be capable of recursive self-improvement (redesigning itself), or of designing and building computers or robots better than itself on its own. Repetitions of this cycle would likely result in a runaway effect – an intelligence explosion – where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence."

Quoting Wikipedia again on recursive self improvement:  "Recursive self-improvement is the speculative ability of a strong artificial intelligence computer program to program its own software, recursively. This is sometimes also referred to as Seed AI because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve the design of its constituent software and hardware. Having undergone these improvements, it would then be better able to find ways of optimizing its structure and improving its abilities further. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities."

So the bottom line is that a Seed AI would start either smartening itself up, or replicating itself and learning everything that it comes in contact with.  And its library is the entire world if that Seed AI is connected to the internet.

We do have a biological model for replicating intelligence, and it is the gene.  Richard Dawkins has stated that in the case of animals, for example an elephant, a gene has to build an entire elephant to propagate itself.  A human has to build another human so that it has a brain capable of intelligence.  Not intelligence itself, mind you, but a capability (sometimes, for some people).  Here is the difference.  Suppose that we have children, and our children are born with the intelligence that we, the parents, have gleaned over our lifetimes.  And suppose it repeated itself cumulatively. Their children would have their accumulated intelligence plus ours. And so on. In just a few generations, the great-grandparents would seem like primitives with a limited range of knowledge of how the world, the universe and everything operates. Unfortunately, intelligence is not transmitted genetically -- only the capability for intelligence (in some cases).

There is, however, some intelligence or know-how that is genetically transmitted.  Animals and bugs have some sort of software or firmware built into them. Insects know how to fly perfectly after hatching and know enough to avoid a swat.  Mosquitoes know how to track humans for blood.  Leafcutter ants know enough to chop leaves and ferment them to grow fungus for food without being taught.  Animals are prepared with all the BIOS firmware that they need to operate their lives, and they pass it on.  I suspect that if humans were to continuously evolve according to Darwinian evolution, we just might pass on intelligence, but that information has been left out of our genetic code.  Instead we have the ability to learn, and that's it.  If we cracked the genetic code of how embedded behaviors are passed on in, say, insects, and applied that to man, we could in fact win the Intelligence Arms Race between humans and computers.  The limitation would be the size of the human genome in terms of carrying coded information.

But Darwinian evolution is dead in the cyber world. In the biological world, the penalty for being wrong is death.  If you evolved towards a dead end failure, you too would die and so would your potential progeny. In the virtual world, since cycles and generations are arbitrary, failure has no consequences.  I myself have generated thousands, if not millions of neural nets only to throw them out when the dumb things were too thick to learn (wrong number of layers, wrong starting values for weights and biases, incomplete or dirty data for training sets, etc, etc).  But there are lessons to be learned from mimicking Nature. One of the lessons to be learned, is how biological brains work. More on this a bit later.

So, if a technological singularity is possible, I would like to start building the Seed AI necessary for it. Why? Why do people climb Everest? I am incorrigibly curious. As a child, I stuck a knife into an electrical socket to see what would happen.  I could not believe the total unpleasantness of the electric shock that I got.  Curiosity killed the cat, but satisfaction brought him back.  I did go on to electrical engineering, proving that Freudian theory is bunk and aversion therapy doesn't work. So without further ado, I would like to present the creation of a Technological Singularity in Three Easy Paradigms.

Paradigm One ~ Perturbation: Create supervisory neural nets to observe or create a perturbation in an existing Artificial Neural Network.  The perturbation could be anything.  It could be a prime number buried within the string of digits resulting from a gradient-descent calculation. (Artificial neurons generate long strings of numbers behind the decimal point.)  It could be an input that throws a null-type exception. It could be a perfect sigmoid value of 1.000 from an activation function. It could be anything, but you need a perturbation seed recognized by the supervisory circuit.
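A supervisory check of this kind is easy to sketch. The following Python is a hypothetical detector (the trigger conditions and names are my own, following the examples above): it fires on a perfect sigmoid output of 1.000, or on a prime number buried in the fractional digits of an activation.

```python
def is_prime(n):
    # simple trial division; fine for six-digit candidates
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def detect_perturbation(activation, digits=6):
    # Trigger 1: a "perfect" sigmoid output of 1.000...
    if abs(activation - 1.0) < 1e-9:
        return True
    # Trigger 2: a prime hiding in the first few fractional digits
    frac = int(abs(activation) * 10**digits) % 10**digits
    return is_prime(frac)
```

In a real system this function would sit inside the supervisory net's event loop, watching the activations of the network under observation.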

Paradigm Two ~ Instantiation & Linking: Once the supervisory circuit recognizes the perturbation, it goes into Instantiation & Linking mode.  When I homebrewed my own AI, I went counter to conventional ANN (Artificial Neural Net) programming.  I went totally object oriented.  Each neuron and/or perceptron was an object. Each layer was an object containing its neurons. The axons, storing the outputs of the previous layers, were objects.  Then I made controllers with many modes and methods of configuration, and I could instantiate a brand new AI machine, or a part of another one, just by calling the constructor of the controller with some configuration parameters.
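A minimal sketch of that object-oriented layout, with a shape list standing in for the configuration parameters (class names and the shape convention are my own illustration, not the original code):

```python
import random

class Neuron:
    # each neuron is an object owning its weights and bias
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.bias = random.uniform(-1, 1)

class Layer:
    # each layer is an object containing its neurons
    def __init__(self, n_neurons, n_inputs):
        self.neurons = [Neuron(n_inputs) for _ in range(n_neurons)]

class Controller:
    # one constructor call instantiates a whole net from a shape like [3, 5, 2]
    def __init__(self, shape):
        self.layers = [Layer(n, prev) for prev, n in zip(shape, shape[1:])]

# throw up a brand new net: 3 inputs, a hidden layer of 5, 2 outputs
net = Controller([3, 5, 2])
```

The payoff of this layout is exactly what the paragraph above claims: spawning a new network, or grafting a piece onto an existing one, is a single constructor call.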

So, once a perturbation was recognized, I would throw up a new ANN.  Its inputs would be either random or chosen by a fuzzy rules engine.  The fuzzy rules engine would be akin to the hypothalamus in the brain, which creates chemo-receptors and hormones and such. Not everything is a neural net, but everything is activated by neural nets.  With current research in ANNs, you could even have a Long Short-Term Memory neural network (google the academic papers) remembering inputs that just don't fit into what the machine knows, and feed those into the newly made network.  Or you could save those stimuli as inputs for a new neural network to come up with a completely different function.
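The memory net's role here -- banking inputs that don't fit -- can be sketched as a simple novelty buffer (the threshold and routing rule are my own assumptions, standing in for a real LSTM):

```python
# inputs the machine can't explain are banked for a future network
novelty_buffer = []

def route_input(x, prediction_error, threshold=0.9):
    # high prediction error means the existing nets don't know this input
    if prediction_error > threshold:
        novelty_buffer.append(x)
        return "banked for new net"
    return "handled by existing net"
```

When the buffer fills up, its contents become the training set for the freshly instantiated network.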

You also create links from other networks. Creative knowledge is synthesized by linking two seemingly separate things to come up with a third.  A good example is Kekule's dream of a snake eating its own tail, which became the basis for cyclic hydrocarbons and organic chemistry.  Biological brains work and learn by linking.  An artificial neural network can simulate linkage by inputting and exciting layers of different neural networks with a layer of something completely different, as in the diagram below:


Paradigm Three ~ Learning:  So now that you have perturbed the network to throw up some new neural nets and you have done the linkages, you can train it by back propagation to learn, calculate or recognize stuff. It could be stuff from the Long Short-Term Memory net. It could be stuff gleaned from crawling the internet, or by plumbing the depths of data in its own file system. It could be anything.  Here's the good part.  Suppose that the new neural nets are dumber than doorposts. Suppose they can't learn beyond a reasonable threshold of exactitude. So what do you do?  They could be re-purposed, or simply thrown out.  Death of neurons means nothing to a machine. To a human, we supposedly kill 10,000 neurons with every drink of alcohol, and still have a pile left.  The slide into slobbering idiocy that befalls heavy drinkers in their old age won't happen with machines, because unlike humans, they can make more neurons.
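The train-then-cull step can be sketched with toy candidates. Here each "net" is just a single linear unit learning y = 2x (a deliberate simplification), and candidates that can't get under an error threshold are simply thrown out -- death of neurons means nothing. All names and thresholds are illustrative:

```python
import random

def train_candidate(steps=200, lr=0.1):
    # a toy "net": one linear unit y = w*x + b, random starting genes
    w, b = random.uniform(-5, 5), random.uniform(-5, 5)
    data = [(x, 2.0 * x) for x in [0.0, 1.0, 2.0, 3.0]]
    for _ in range(steps):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x   # gradient descent on the weight
            b -= lr * err       # ... and on the bias
    mse = sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)
    return (w, b), mse

# spawn many candidates; keep only the ones that actually learned
survivors = [cand for cand, mse in (train_candidate() for _ in range(20))
             if mse < 1e-3]
```

The ones that don't make the cut cost nothing to discard; there are no consequences for failure in the virtual world.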

So once you have an intelligent machine, and it has access to the cloud, it itself becomes a virus.  Most object-oriented languages have a clone or serialization facility.  If the neural networks are objects in memory that can be serialized, then the whole shebang can be cloned.  If you are not into programming, what the previous sentence means is that with Object Oriented Programming, the objects are held in memory.  For example, you could have an object called "Bike".  Bike has several methods like Bike.getColor() or Bike.getNumberOfGears().  Even though these objects are in dynamic memory, they can be frozen into a file on the disk and resurrected in their previous state.  So if you had a blue bike, did Bike.setColor("red") and serialized your bike, it would remember it was red when you brought it back to life in memory.
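In Python terms (the example above is Java-style pseudocode), the bike looks like this, with pickle as the freeze-and-resurrect mechanism:

```python
import pickle

class Bike:
    def __init__(self, color, gears):
        self.color = color
        self.gears = gears
    def set_color(self, color):
        self.color = color

bike = Bike("blue", 21)
bike.set_color("red")

frozen = pickle.dumps(bike)     # freeze the live object into bytes
revived = pickle.loads(frozen)  # resurrect it in its previous state
```

The revived object is a distinct copy that still remembers it was red -- and a network of serializable neuron objects can be cloned the same way.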

Having a neural network machine that could clone itself and pass on its intelligence -- well, that's the thin edge of the wedge to getting to a Technology Singularity.  That's where you start to get a Frankencomputer.  That is where you don't know what your machine knows.  And with almost every object connected to the Internet of Things and the Internet of Everything, the intelligence in the machine could figure out a way to act and do things in the real world.  It could smoke you out of your house by sending a command to your NEST thermostat to boil the air in the house.   That's when things get exciting, and that's what scares Dr. Stephen Hawking.  Remember, he relies on a machine to speak for him.  The last thing that he wants is for that machine to have a mind of its own.

I'd write more on the subject, but I have to go to my software development kit and start writing a perturbation model for my object oriented neural nets.  I ain't afraid -- YET!