All Things Techie With Huge, Unstructured, Intuitive Leaps

Synaptic Pruning in Artificial Neural Networks and Multilayer Perceptrons

What happens in a baby's mind is fascinating.  While the baby is sleeping, it processes all of the information that its senses took in, and puts it through a huge Mixmaster, creating all sorts of connections to memory, storage, logic and emotions.  I love the way that Mother Nature plays dice.  The baby's brain makes synaptic connections between bits of data even when those connections are inappropriate.  This is hugely beneficial, because once the connections are made, the logic circuits can evaluate whether they are sound and reflect the outside world.  A baby's brain grows to roughly five times its birth size by adulthood, largely from the creation of synapses, or links between neurons (plus other biological infrastructure functions).  This is why a child's imagination is so fertile.

Then we have synaptic pruning near the onset of puberty.  Once we start thinking about sex, we start pruning the synapses that we deem inappropriate.  The cartoon below gives a very simplistic diagram of pruning inappropriate synapses.  I use the word inappropriate in the sense of what is considered inappropriate by adults, be they keen rationalists or fairy-tale dogmatics.

How did I get onto this?  I saw a tweet by a hard-core religious fundamentalist who stated that neuroplasticity was the deity's way of fixing a brain.  (In that context, I think that he was implying neural re-wiring to fix apostasy, homosexuality, atheism, and everything else that he didn't approve of.)  I had heard of neuroplasticity, but I googled it to ascertain the current scientific thinking on it.  Simply put, neuroplasticity is the rewiring or creation of synapses to take over functions of the brain that have been destroyed by trauma, injury and/or accident.  For example, it has been reported that in an accident victim, brain function controlling, say, motor activity was discovered in a portion of the brain not known for that activity.  The term synaptic pruning was in this article, and I had to investigate it.

Once I googled it, it reminded me of the works of Dr. Stephen L. Thaler, PhD.  He has a raft of scientific discoveries and patents, and he was an early adopter of artificial neural networks.  In a nutshell, he did some work in Cognition, Consciousness and Creativity in artificial neural networks, for which he holds patents.  He discovered that if you randomly destroyed neurons in a massive artificial neural network, then as the network was expiring, it came up with creative outputs or solutions.  As a result, he added another layer of neural nets to observe this.  In essence, by killing off neurons randomly, he was doing synaptic pruning of a sort.

Let me quote from Dr. Thaler's website:

After witnessing some really great ideas emerge from the near-death experience of artificial neural networks, Thaler decided to add additional nets to automatically observe and filter for any emerging brainstorms. From this network architecture was born the Creativity Machine (US Patent 5,659,666). Thaler has proposed such neural cascade as a canonical model of consciousness in which the former net manifests what can only be called a stream of consciousness while the second net develops an attitude about the cognitive turnover within the first net (i.e., the subjective feel of consciousness). In this theory, all aspects of both human and animal cognition are modeled in terms of confabulation generation. Thaler is therefore both the founder and architect of confabulation theory and the patent holder for all neural systems that contemplate, invent, and discover via such confabulations.

The idea then struck me that perhaps it wasn't necessary to destroy the neurons in the network to achieve what Dr. Thaler saw, but rather just to do the synaptic pruning, by randomly severing connections (and as a result their weights) in the hidden layers of multilayer perceptrons.

After the connections were severed, you would keep running the AI machine, back propagation included, and see what comes out.  What a fascinating concept, and I am itching to try this once I find the time.
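A minimal sketch of the idea, under my own assumptions (this is not Dr. Thaler's implementation, and the network size, task, and pruning fraction are all illustrative choices): a tiny numpy multilayer perceptron is trained on XOR, then a random 25% of the input-to-hidden connections are severed with a binary mask, and training continues with back propagation. The mask is applied to both the forward pass and the gradient, so pruned synapses stay dead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP: 2 inputs -> 8 hidden sigmoid units -> 1 output, trained on XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(steps, mask, lr=0.5):
    """Gradient-descent training; `mask` zeroes pruned input->hidden weights."""
    global W1, b1, W2, b2
    for _ in range(steps):
        h = sigmoid(X @ (W1 * mask) + b1)          # hidden layer
        out = sigmoid(h @ W2 + b2)                 # output layer
        # Backprop of squared error; masked weights receive no gradient.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h) * mask; b1 -= lr * d_h.sum(axis=0)
    return out

full = np.ones_like(W1)
train(5000, full)                                  # ordinary training first

# "Synaptic pruning": randomly sever 25% of input->hidden connections,
# then keep training the survivors and see what comes out.
mask = (rng.random(W1.shape) > 0.25).astype(float)
W1 *= mask
pruned_out = train(2000, mask)
print(np.round(pruned_out, 2))
```

Because the gradient update is multiplied by the same mask used in the forward pass, a severed connection can never regrow during continued training, which keeps the experiment honest: any recovery in the outputs comes from the surviving synapses rewiring their weights.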

I am sure that all sorts of people might think that Dr. Thaler is a nutbar, but those were the same people who thought that Benoit Mandelbrot's ideas on fractal geometry were child's play with no practical applications.  Or consider the Rev. Thomas Bayes, a relative unknown who published only two papers in his lifetime and died in 1761, yet postulated the Bayesian inference that is so important to Machine Learning today.

So Artificial Neural Networks come and go in popularity in the computing field.  I am sure that Dr. Thaler is onto something, and his theories may pan out to be seminal in the field of machine consciousness, the way that Alan Turing's ideas became pivotal in this modern age of technology.  And somewhere in there, synaptic pruning will take place, and it just may not be a footnote in the development of artificial consciousness.

If you are looking for ideas for a master's or doctoral thesis, you are welcome.

1 comment:

    Note the synaptic pruning at work in Figure 3!