All Things Techie With Huge, Unstructured, Intuitive Leaps

What Meditation Has Taught Me About Artificial Consciousness & Intelligence - The Making of Cognitive Computing


Neural nets and multi-layer perceptrons are amazing. Sure, they have their limitations, but advances in deep learning and big, fast GPUs have given them new life. However large the networks get, artificial neural networks will remain nothing but virtual calculating machines until they gain some complexity in the form of abstraction, ideation, equating and association. None of these cognitive functions can happen without multi-dimensional memory. Before an artificial neural network can gain any consciousness at all, it needs a memory machine. A memory machine alone is not enough, though; to complete the picture, one also needs massive parallelism and a few other things outlined below.

I was just reading about Hebbian memory creation, where if you see a dog, and that dog bites you and you feel massive fear and pain, then you will develop neural nets of dog fear and dog aversion. The parallel discovery or learning experience in the same time domain links the two neural networks and creates a memory that is triggered by the dog input.
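Here is a rough, hypothetical sketch in Python of that "fire together, wire together" idea: weights between co-active groups of units are strengthened by an outer-product update. The dog and fear vectors are made up purely for illustration.

```python
import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Strengthen connections between co-active units ("fire together, wire together")."""
    # Outer product: each weight grows in proportion to the joint activity
    # of its pre- and post-synaptic units.
    return weights + learning_rate * np.outer(post, pre)

# Hypothetical example: a "dog" percept and a "pain/fear" response occur together.
dog_percept = np.array([1.0, 0.0, 1.0])   # activity in a sensory group
fear_signal = np.array([0.0, 1.0])        # activity in an affect group

w = np.zeros((2, 3))                      # initially no association
w = hebbian_update(w, dog_percept, fear_signal)

# Later, seeing the dog alone now partially re-activates the fear group.
print(w @ dog_percept)
```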

This gives one huge insight into the eventual construction of a cognitive, conscious artificial intelligence. One must have a temporal time domain controller that creates links between separate, unrelated events that happen simultaneously, or immediately before or after, another memory-forming event. In artificial intelligence parlance, this means that when a link like this is created in the time domain, the back propagation or learning is not a mere 10% or 5% like in the AI machines of today. It is 100%, and those circuits are almost never altered again unless we go through a rigorous unlearning process.
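To make that concrete, here is a hypothetical sketch of such a time domain controller: any two events that land inside the same time window get a link written at full strength (1.0), once, and ordinary incremental learning never touches it again. The Event and TemporalLinker names are invented for the example, not an existing library.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Event:
    label: str
    timestamp: float

@dataclass
class TemporalLinker:
    """Hypothetical 'time domain controller': events that occur within
    the same window are linked at full strength, once, permanently."""
    window: float = 2.0                      # seconds considered "simultaneous"
    links: dict = field(default_factory=dict)
    recent: list = field(default_factory=list)

    def observe(self, event: Event):
        # Drop events that have fallen out of the time window.
        self.recent = [e for e in self.recent if event.timestamp - e.timestamp <= self.window]
        for other in self.recent:
            key = tuple(sorted((event.label, other.label)))
            # Unlike a small back-propagation step, the link is written at 1.0
            # and never weakened unless an explicit "unlearning" step removes it.
            self.links.setdefault(key, 1.0)
        self.recent.append(event)

linker = TemporalLinker()
t = time.time()
linker.observe(Event("dog", t))
linker.observe(Event("pain", t + 0.5))
print(linker.links)   # {('dog', 'pain'): 1.0}
```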

Crucial to the artificial neural network is the need for straight, non-neural-net memory. However, neural networks must link to this memory. In other words, we do not re-create a memory every time we need it; we access it through neural net ideation. For example, if we cannot remember the name of a childhood neighbor, we can visualize images and recall our memories of his or her house, and eventually we will bootstrap a neural net connected to the memory address and we will think of the name.
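A rough sketch of what that bootstrapping might look like, with plain key-value storage standing in for the non-neural-net memory and cues standing in for ideation; the records, addresses, and names are invented for the example.

```python
# Hypothetical sketch: "ideation" as cues voting for an address in ordinary
# key-value memory, rather than re-creating the memory itself.
memory = {
    "addr_17": {"name": "Mrs. Kowalski", "cues": {"red house", "apple tree", "old dog"}},
    "addr_42": {"name": "Tommy", "cues": {"blue bike", "next door", "treehouse"}},
}

def recall(cues):
    # Score each stored record by how many cues it shares with the query.
    scores = {addr: len(rec["cues"] & cues) for addr, rec in memory.items()}
    best = max(scores, key=scores.get)
    return memory[best]["name"] if scores[best] > 0 else None

# Visualizing the house and yard supplies cues until the name surfaces.
print(recall({"red house", "apple tree"}))   # Mrs. Kowalski
```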

This temporal controller that links time domain events is important, because we get context from a timeline, and our brains are timeline aware. We know that we didn't know something before we gained knowledge of it. In effect, this is metadata about the metadata of an event, connected to a timeline.
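A tiny, hypothetical example of what timeline-aware metadata buys you: given a "learned at" timestamp for each fact, the system can answer whether it knew something before a given event. The dates and facts are made up.

```python
from datetime import datetime

# Hypothetical sketch: metadata about when knowledge was acquired,
# pinned to the same timeline as the events themselves.
knowledge = {"dogs can bite": datetime(2015, 6, 1)}
event_log = [("first dog encounter", datetime(2015, 5, 20))]

def knew_at(fact, when):
    learned = knowledge.get(fact)
    return learned is not None and learned <= when

for label, when in event_log:
    print(label, "-> knew 'dogs can bite'?", knew_at("dogs can bite", when))  # False
```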

Time awareness, and how time fits into the context of knowledge, gives us the ability to abstract. Like Yogi Berra's "déjà vu all over again", once we realize that we are in a recognized sequence, we can begin to abstract about that knowledge and figure out the wheres and whys. The ability to abstract is the true mark of intelligence. To get the necessary brain MIPS (millions of instructions per second) or flops for abstraction, we need an ideation tool. In other words, the main difference between the artificial intelligence of today and true cognitive computing is that the machine must keep on thinking even when it has no inputs to its layers of neurons. Ideation must be self-generated.
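One hypothetical way to picture self-generated ideation in code: a toy recurrent network that, when no external input arrives, keeps feeding its own state (plus a little internal noise) back into itself instead of going silent. The network size and weights here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(8, 8))    # recurrent weights of a toy network
state = rng.normal(size=8)

def step(state, external_input=None):
    # With no input, the network keeps "ideating" from its own state
    # plus a little internal noise, instead of falling silent.
    drive = external_input if external_input is not None else rng.normal(scale=0.1, size=8)
    return np.tanh(W @ state + drive)

for _ in range(5):
    state = step(state)          # no external input; activity is self-generated
    print(np.round(state, 2))
```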

And strangely enough, it is the practice of meditation that was the germ of an idea for machine ideation. In meditation, one tries to give the brain a rest by not thinking of anything. Usually one just keeps the thought process on breathing, or an inner visual cue, or on repeating a meaningless mantra with no cognitive load. This is an incredibly difficult thing to do. The stream of consciousness keeps popping up random thoughts in your head, and people just starting the practice of meditation have a very difficult time with them. However, the key is not to sweat them. You just observe them and let them go without further investing in them.

I began to analyse the ideation that intruded on my meditation, and it gave me some powerful insights that have application to artificial intelligence. The first was the time controller, or time domain awareness. After sitting for a while, my mind would begin to wonder how long I had sat. Then it would try to get me to open my eyes to sneak a peek at my watch. Once I let those thoughts go as an observer only, I would start to think that the meditation was quite pleasant, and it would take me off to a time and place where I had felt pleasant before. Here was self-generated, internal idea generation. Again, the time domain played a big factor, as well as memory. However, it was the opposite of abstraction: a pleasant abstract feeling triggered a concrete memory. This is the knowledge integration cycle in reverse.

Again, this is something that doesn't happen in artificial neural nets. They can abstract into higher context, but they don't usually go backwards. This is another necessary key to cognitive computing. It is almost like Le Châtelier's principle of dynamic equilibrium in chemistry, where a reaction that has reached equilibrium keeps proceeding both forward and backward. This element would be huge in artificial intelligence and a key to random ideation.
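A minimal sketch of that two-way traffic, assuming nothing more than a shared association table: concrete episodes generalize up to an abstract tag, and the same tag can pull concrete episodes back down. The episodes and tags are invented for the example.

```python
from collections import defaultdict

# Hypothetical two-way index between concrete episodes and an abstraction.
abstract_of = {}                         # episode -> abstraction
episodes_of = defaultdict(set)           # abstraction -> episodes

def associate(episode, abstraction):
    abstract_of[episode] = abstraction
    episodes_of[abstraction].add(episode)

associate("lying in the sun at the lake, 1998", "pleasant calm")
associate("sitting on the porch after the rain", "pleasant calm")

print(abstract_of["sitting on the porch after the rain"])  # abstraction from a memory
print(episodes_of["pleasant calm"])                        # memories from an abstract feeling
```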

The last key to random ideation is built on biomimicry. We humans have five universal senses, or sensory apparatus, and they are always on (hearing, smell, sight, touch, and taste). These sensors generate an interrupt vector in my meditation to tell me that my nose is itchy, and I had better quit this mindfulness and scratch it. If I disobey the sensor signal processor, it belligerently intensifies the itch until I can no longer ignore it, and then sits back with the smugness of a job well done in interrupting my ability to quiet the brain.
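That escalating itch behaves a lot like an interrupt whose priority keeps climbing until it is serviced. A toy sketch, with an invented Sense class standing in for the always-on sensors:

```python
import heapq

class Sense:
    """Hypothetical always-on sensor that raises an interrupt whose
    priority keeps climbing until it is serviced."""
    def __init__(self, name, priority=1):
        self.name, self.priority = name, priority

    def escalate(self):
        self.priority += 1        # the ignored itch gets louder

senses = [Sense("itchy nose"), Sense("distant noise")]
for _ in range(3):
    senses[0].escalate()          # keep ignoring the itch

# The highest-priority interrupt wins the brain's attention.
queue = [(-s.priority, s.name) for s in senses]
heapq.heapify(queue)
print(heapq.heappop(queue)[1])    # 'itchy nose'
```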

So, to this point in our virtual AI thought machine, we have the need for neural nets directly linked to non-volatile memory. Then we have a time domain controller linking contextually unrelated events to the time domain. That aids the ability of abstraction and puts artificial consciousness into the real domain of the arrow of time, which is the chief feature of the universe. Then we have the ability to go from abstraction to concrete and back again. Finally, we have core sensors that are always on to provide input to the neural network. This is how a cognitive machine will be built.

This sounds like a lot of effort and theory, but I hearken back to my days of electronic digital circuits. You start with three or four Boolean logic gates built out of transistors. Once you have the gates, you start combining them, and you get a flip-flop, or a latch that can hold transient data. You have the beginnings of a compute machine. You gang them together. You are still using the same basic, simple building blocks, but as you step and repeat and combine, and grow the number of transistors, you get incredibly complex behavior that lets you go to the moon or visit Pluto with a binary machine.
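That first step from gates to memory is concrete enough to simulate. A small sketch, assuming ideal gates: two cross-coupled NAND gates form a latch that keeps holding a bit after the inputs go back to idle.

```python
def nand(a, b):
    return int(not (a and b))

def sr_latch(s, r, q=1, q_bar=0):
    """Two cross-coupled NAND gates form an active-low SR latch:
    pulse s=0 to set, r=0 to reset, and (1, 1) holds the stored bit."""
    for _ in range(4):                     # let the feedback loop settle
        q, q_bar = nand(s, q_bar), nand(r, q)
    return q, q_bar

q, q_bar = sr_latch(s=0, r=1)                      # pulse "set"
q, q_bar = sr_latch(s=1, r=1, q=q, q_bar=q_bar)    # inputs idle: state is held
print(q, q_bar)                                    # 1 0 -- the latch remembers
```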

This all exemplifies Shuster's Law: if you can think of something, it will eventually become inventable -- without exception.

It is highly ironic that trying not to think has taught me things about teaching machines to think.



