Asynchronous Neural Nets Are Primitive Cave Man Neural Nets
The first electronic circuit I ever designed was a logic fall-through. (The circuit was for a quiz-show style team game, and it determined which member of which team pressed their button first.) It was asynchronous. That means that when a signal arrived at the inputs to the silicon chip, it was processed right away. It wasn't held for a state change of the chip, the way signals are in modern digital systems.
Modern digital systems have a bus architecture. That means that every chip on the circuit board is connected to a central set of traces or wires called a bus, and the chips share the traffic or signals on the bus. The way this happens is that a regular clock signal generator controls timing. So if an input receives a signal, it in turn tells the bus controller that it needs to put its data on the bus, and that the data needs to go to a certain scratchpad register or memory unit.
The bus controller signals all of the other chips that it is going to commandeer the bus. All of the other chips finish up what they are doing and clear the decks. The bus controller then signals the necessary registers to receive the data, and signals the originating input to load its data onto the bus. Once the data is loaded, it signals the register to process the data, and frees the bus for the next operation. All operations are controlled by a nice orderly clock signal, a square wave that rises and falls. That square wave maps onto the computer's binary language of zeros and ones, so a clock signal in computer talk always looks like this: 01010101010101010101 and so on. The important thing is that there is a frequency, a fixed timing between the state changes of zero and one, the up and down of the waveform.
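The difference between fall-through and clocked logic can be sketched in miniature. This is a toy model, not real hardware: the asynchronous gate passes its input through the instant it changes, while the clocked register only copies its input on a rising edge of the clock (the 0-to-1 transition). The class and signal names are invented for illustration.

```python
# Toy contrast between asynchronous (fall-through) and synchronous
# (clocked) logic. Not real hardware; names are illustrative.

class AsyncGate:
    """Fall-through: output tracks the input immediately."""
    def __init__(self):
        self.output = 0
    def update(self, value):
        self.output = value          # propagates the moment it arrives

class ClockedRegister:
    """Synchronous: input is only latched on a rising clock edge."""
    def __init__(self):
        self.d = 0                   # data input
        self.q = 0                   # latched output
        self._clk = 0                # previous clock level
    def tick(self, clk):
        if clk == 1 and self._clk == 0:   # rising edge: 0 -> 1
            self.q = self.d
        self._clk = clk

reg = ClockedRegister()
reg.d = 1                            # data arrives at the input...
reg.tick(0)
assert reg.q == 0                    # ...but the output hasn't moved yet
reg.tick(1)
assert reg.q == 1                    # rising edge latches the data
```

The assertion pair is the whole point: the data sits at the register's input and goes nowhere until the clock says so, which is exactly what the bus controller's orderly 0101... signal buys you.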
This frequency is important. If you remember back to the days of dial-up internet (if you are old enough), you will remember the distinctive sound of the modem connecting to the internet over the phone line. It was an oscillating sound. The binary signals were converted to tones at set frequencies that the modem at the other end understood to be a zero or a one. This was called frequency shift keying, and it was a way to turn binary computer language into a sound that could traverse a telephone line. It preserved coherent computer data by using a set frequency, and a set timing for that frequency. It was all rather ingenious, and it was a baby step toward where we are today with high-speed internet.
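Frequency shift keying is simple enough to demonstrate in a few lines. This is a toy sketch, not any real modem standard: the sample rate, baud rate, and the two tone frequencies are arbitrary choices, and the decoder simply asks which of the two tones each bit-slot correlates with more strongly.

```python
import numpy as np

# Binary FSK sketch: each bit becomes a short tone, one frequency
# for 0 and another for 1. All numbers are illustrative choices.
RATE = 8000           # samples per second
BAUD = 100            # bits per second
F0, F1 = 1000, 2000   # tone frequencies (Hz) for bit 0 and bit 1

def fsk_encode(bits):
    """Turn a bit string like '0101' into one long waveform."""
    n = RATE // BAUD                      # samples per bit
    t = np.arange(n) / RATE
    tones = {'0': np.sin(2 * np.pi * F0 * t),
             '1': np.sin(2 * np.pi * F1 * t)}
    return np.concatenate([tones[b] for b in bits])

def fsk_decode(signal):
    """Recover bits by comparing each bit-slot's energy at F0 vs F1."""
    n = RATE // BAUD
    t = np.arange(n) / RATE
    ref0 = np.exp(-2j * np.pi * F0 * t)   # correlate against each tone
    ref1 = np.exp(-2j * np.pi * F1 * t)
    bits = []
    for i in range(0, len(signal), n):
        chunk = signal[i:i + n]
        e0 = abs(np.dot(chunk, ref0))
        e1 = abs(np.dot(chunk, ref1))
        bits.append('0' if e0 > e1 else '1')
    return ''.join(bits)

msg = '01010101'
assert fsk_decode(fsk_encode(msg)) == msg
```

The data survives the trip through a "sound" because both ends agree on the two frequencies and on the timing of each bit-slot, which is the coherence the paragraph above describes.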
Well, a good idea can always be reused. FSK, or frequency shift keying, took us from asynchronous to synchronous systems, and it could be used to make Artificial Neural Networks a lot smarter.
I was just reading some of the latest brain science research on the real neural networks in our heads. They are fall-through asynchronous in general, meaning that when a nerve sends a signal, it is immediately fed into a massively parallel network of neurons. However, it has been discovered that frequency also plays a role in the neural network.
The stimulation is almost like the palimpsest that Carl Sagan wrote about in his book "Contact". In its truest sense a palimpsest is an ancient manuscript, from the days before paper, that had another book written over it. The old words were scraped off and new words were written, but you could still read the old words, so the page carried double the information. In the book and movie "Contact", the information to build the machine was a palimpsest: additional information was encoded in the polarization of the waveform.
Apparently the human brain uses frequency as an additional information encoder. This has been measured in studies of emotional response in the brain, where frequency plays a huge part. That component is entirely missing in computer Artificial Neural Networks: all computer neurons are asynchronous fall-through.
I am by no means suggesting that artificial neurons become synchronous in the sense of a computer's clock system (although that is one possible paradigm), but that frequency somehow be incorporated as an additional tool, paradigm, algorithm or species for the neurons. A good start would be to incorporate Frequency Shift Keying into Artificial Neural Networks. I don't have an exact methodology for how to do this yet, but you can be sure that I will devote some of my internal brain cycles to trying to figure this thing out.
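One speculative way to picture it, borrowing the FSK idea: let a neuron's activation set the amplitude of its output as usual, while a second channel (call it a tag) selects the frequency at which that output oscillates, so one wire carries two signals at once. Everything here, the names, the frequencies, the crude zero-crossing detector, is invented for illustration; this is a thought experiment in code, not a proposal for a real architecture.

```python
import math

# Speculative "FSK neuron": activation sets amplitude, while a
# second channel (the tag) sets the carrier frequency, so one
# output line carries two pieces of information. Illustrative only.
F_LOW, F_HIGH = 5.0, 40.0   # carrier frequencies (Hz) for tag 0 / tag 1

def fsk_neuron(inputs, weights, tag, t):
    """Weighted-sum neuron whose output oscillates at a tag-chosen rate."""
    drive = sum(w * x for w, x in zip(weights, inputs))
    amplitude = max(0.0, drive)               # ReLU-style activation
    freq = F_HIGH if tag else F_LOW
    return amplitude * math.sin(2 * math.pi * freq * t)

def read_tag(samples, dt):
    """Recover the tag by counting zero crossings (a crude frequency read)."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    duration = dt * (len(samples) - 1)
    est_freq = crossings / (2 * duration)     # two crossings per cycle
    return 1 if abs(est_freq - F_HIGH) < abs(est_freq - F_LOW) else 0

dt = 0.001
ts = [i * dt for i in range(1000)]            # one second of output
out = [fsk_neuron([1.0], [0.8], tag=1, t=t) for t in ts]
assert read_tag(out, dt) == 1                 # the tag rode along for free
```

A downstream unit reading this wire gets the usual activation strength from the envelope and a second, palimpsest-like layer of meaning from the carrier frequency.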
As a matter of fact, it is a fascinating thought experiment to contemplate how a Von Neumann machine might behave if it were frequency-aware.
Obviously a lot more research needs to happen, but here is a venue worth exploring for Machine Learning, Deep Learning, Artificial Neural Networks, and Artificial Consciousness. More ruminations on this topic to come.
Making Your Computer Into a Parallel You

Where gramlets will come to shine is in the evolution of the personal computer. Personal computers have come a long way, and soon your phone will be your personal computer. Or it might be in the fabric of your jacket, or stuck in your ear -- it doesn't matter. The personal computer will continue to evolve.
It will become smaller and smaller, and can even off-load large processing tasks to the cloud. Douglas Adams may not have been far off the mark, when he said in the Hitchhiker's Guide to the Galaxy that the Earth was one huge computer.
The operating system for the personal computer must evolve as well. Gone will be the days of the dumb desktop. Your personal computer will be a learning machine that will learn all about you, anticipate your needs and handle most of the digital aspects of your life autonomously.
To become a parallel you, the operating system of the personal computer will have to ditch the ancient Microsoft paradigm and turn into a massively parallel system, just like the human brain.
Input will be fed into your personal computer via any means (web, data entry by scanner, camera etc) and the computer will take care of it. The input will be fed into all of the gramlets. They will fire if they recognize that the data is meant for them and process it.
For example, an image is fed in. The word processor and speech processor gramlets ignore it. The Filing Gramlet puts it away with the rest of the pictures, but first it passes it to the ident unit, which analyses the picture to see if it is a picture of you, friends, family or whatever. Then the image is passed to the ImageQualityGramlet, which adjusts the white balance, removes red eye and passes it back for filing in the appropriate place. In the process, every gramlet saw this input, and if its threshold was met, the gramlet recognized the data and did its job on it.
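The dispatch pattern in that example is easy to sketch: every gramlet is offered every input in parallel, and only the ones whose recognition test fires actually process it. The class names follow the post's own examples; the recognition test here is a stand-in for whatever learned threshold a real system would use.

```python
# Minimal sketch of gramlet dispatch: all gramlets see all inputs,
# and only the ones that recognize the data act on it. Names follow
# the post's examples; the recognition tests are stand-ins.

class Gramlet:
    def recognizes(self, item):      # does this input meet my threshold?
        raise NotImplementedError
    def process(self, item):
        raise NotImplementedError

class FilingGramlet(Gramlet):
    def __init__(self):
        self.filed = []
    def recognizes(self, item):
        return item["kind"] == "image"
    def process(self, item):
        self.filed.append(item["name"])
        return f"filed {item['name']}"

class WordProcessorGramlet(Gramlet):
    def recognizes(self, item):
        return item["kind"] == "text"
    def process(self, item):
        return f"edited {item['name']}"

def dispatch(gramlets, item):
    """Offer the input to every gramlet; collect results from those that fire."""
    return [g.process(item) for g in gramlets if g.recognizes(item)]

gramlets = [FilingGramlet(), WordProcessorGramlet()]
results = dispatch(gramlets, {"kind": "image", "name": "beach.jpg"})
assert results == ["filed beach.jpg"]   # only the filing gramlet fired
```

A real implementation would run the gramlets concurrently and let each one learn its own threshold, but the shape of the idea, broadcast everything and let recognition decide, is all here.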
A parallel system of gramlets that learn by watching you will be the hallmark of the new personal computer operating system.
Anyone dealing with you will not be able to tell if it is really you, or the parallel you embedded in your personal computer. This will be like the graphic at the top of this post. Are the lines parallel or not? The answer will surprise you.