Introduce yourself!

Hi all.

I am an embedded systems designer, both hardware and software.

Many years ago I wanted to learn about the soul and did a lot of self-study on neurology and the brain. At some point I noticed that I had accounted for everything I could think of that makes us human - memory, emotion, intentionality & volition, all of the senses. It was a letdown that nothing was left over to be a soul.

When I started to study electronics in the 1970s (and got an associate’s degree) I was drawn to the sexy new field of microprocessors. I work with those critters to this day and love tinkering with them; ARM processors are amazing!

Between reading science fiction and some interesting articles in Scientific American I drifted into neural networks and AI in general. My take on AI has always been viewed through the lens of my earlier studies in neurology. I have read a few dozen books on neural networks and AI and understand the technology pretty well. I have been forming a general model of the brain over these years and always thought that the various neural network models were missing at least two key elements - hierarchical organization, and what I have been calling “song memory” - sequential processing.

I read the On Intelligence book shortly after it came out and was impressed, but the reliance on lower brain structures for sequential memory did not match up with what I knew and put me off a bit.

Time passed.

With the big splash of deep learning and the easy availability of TensorFlow and the Microsoft Cognitive Toolkit (CNTK), I was excited to see that the technology was producing interesting results and moving into alignment with what I have been thinking about how the various maps in the cortex work together.

I remembered that On Intelligence had been one of the first places to really evangelize a hierarchical organization of the cortical maps, so I went back and read it again - this time prepared to receive it with an open mind. Digging into what Jeff has been doing since he wrote the book - BAM! SDR models match up with real dendrites better than anything else I have seen. Sequential memory is built in and biologically plausible; feed-forward, feed-back, pattern and sequential memory all in one package - what’s not to love!

I have been struggling with unlearning what I know about using traditional neural networks to build AIs and starting over with SDRs - it’s been tough sledding but I think it is totally worth the effort.

In the first month of reading several things have jumped out at me:

1: The topographical organization of the synapses is important. As the dendrites snake between the columns and pick up connections, they sample part of the pattern, shaped by the classic Mexican-hat profile that is a well-known property. Perhaps more importantly, as a dendrite stretches away from its cell body in a given direction, cell bodies in “that” direction may have dendrites extending back towards the original cell body. These reciprocal connections have the interesting property that they can reinforce a pattern the two cells share, while each cell is also influenced by the patterns on its dendrites extending in “other” directions away from that shared pattern. This leads to some interesting possibilities in pattern landscapes.

To support this idea I am proposing a modification of the SDR dendrite model: add a moderately sized table of canned dendrite patterns. These can be very large patterns without much computation or storage cost. In the storage structure of each individual dendrite, a pointer into this pattern table gives that dendrite’s connections as delta-position addresses relative to the parent cell body’s location - assigned once during map initialization and never changed after that. The dendrite’s table of synaptic connections then needs only one low-cost step of indirection through the pattern table to find which cell body a given synapse connects to during processing. It gives a permanent list of cell bodies to examine for activity when learning new connections, without the huge memory cost of recording unused connections.
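To make the bookkeeping concrete, here is a minimal Python sketch of how such a pattern table might be stored and dereferenced. Every name, dimension, and constant here is my own illustrative assumption; none of it comes from NuPIC or the HTM papers.

```python
import numpy as np

# Sketch of the proposed "dendrite pattern table" (all names/sizes are hypothetical).
NUM_PATTERNS = 64          # moderately sized table of canned dendrite shapes
SYNAPSES_PER_DENDRITE = 32
GRID = (128, 128)          # assumed 2-D map of cell bodies

rng = np.random.default_rng(42)

# Built once at map initialization and never changed afterwards. Each row is a set
# of (dx, dy) delta offsets that traces one possible dendrite path from a cell body.
pattern_table = rng.integers(-8, 9, size=(NUM_PATTERNS, SYNAPSES_PER_DENDRITE, 2))

class Dendrite:
    """Each dendrite stores only a pattern index plus its per-synapse permanences."""
    def __init__(self, parent_xy):
        self.parent_xy = np.array(parent_xy)
        self.pattern_id = rng.integers(NUM_PATTERNS)        # pointer into the shared table
        self.permanence = np.zeros(SYNAPSES_PER_DENDRITE)   # learned state, per synapse

    def presynaptic_cells(self):
        # One low-cost step of indirection: canned offsets plus the parent location
        # give the permanent list of cell bodies this dendrite can sample.
        cells = self.parent_xy + pattern_table[self.pattern_id]
        return np.clip(cells, [0, 0], [GRID[0] - 1, GRID[1] - 1])

d = Dendrite(parent_xy=(40, 70))
print(d.presynaptic_cells()[:5])   # candidate cells to examine when learning
```

The geometry lives once in the shared table; each dendrite carries only an index and its permanences, which is where the memory saving comes from.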

2: Learning - it looks like the standard HTM model uses straight Hebbian learning. We know that patient HM learned that way without a hippocampus. Most of us have good one-shot learning. What is it that the hippocampus brings to the party, and how do we bring that to the HTM model? I am spending a fair amount of time thinking about this. Good one-shot learning would go a long way towards silencing the HTM naysayers.
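As a toy illustration of the gap I mean, here is the contrast between a slow Hebbian permanence nudge and a hypothetical one-shot rule that jumps a synapse straight over the connected threshold after a single co-activation. The thresholds and increments are made-up numbers, and neither function is taken from any HTM implementation.

```python
# Toy contrast between incremental Hebbian learning and a hypothetical one-shot rule.
CONNECTED = 0.5   # permanence threshold above which a synapse counts as connected
INC, DEC = 0.05, 0.01

def hebbian_update(permanence, pre_active, post_active):
    """Slow rule: nudge permanence up when pre and post fire together, down otherwise."""
    if post_active:
        return min(1.0, permanence + INC) if pre_active else max(0.0, permanence - DEC)
    return permanence

def one_shot_update(permanence, pre_active, post_active):
    """Speculative hippocampus-like rule: one co-activation is enough to connect."""
    if post_active and pre_active:
        return max(permanence, CONNECTED)   # jump straight to 'connected'
    return hebbian_update(permanence, pre_active, post_active)
```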

Also - Jeff describes how the brain smoothly resonates with things it recognizes and somehow signals when it is having (neuro) cognitive dissonance. I propose that the reciprocal projections to the RAS (Reticular Activating System) are ideally placed to gate in more of whatever is causing the fuss in the first place - in essence, to “sip from the firehose” - and amp up learning and attention.
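Extending the toy rules above, one way to express that gating is a gain on the learning increment that grows with how poorly the current input was predicted. This is pure speculation written as code; SDRs are treated as plain sets of active cell indices.

```python
def surprise(input_sdr, prediction_sdr):
    """0.0 when the input was fully predicted, 1.0 when none of it was."""
    if not input_sdr:
        return 0.0
    return 1.0 - len(input_sdr & prediction_sdr) / len(input_sdr)

def gated_increment(base_inc, input_sdr, prediction_sdr, max_gain=10.0):
    """Hypothetical RAS-style gate: amp up learning when the dissonance is high."""
    gain = 1.0 + (max_gain - 1.0) * surprise(input_sdr, prediction_sdr)
    return base_inc * gain

# A half-predicted input learns 5.5x faster than a fully predicted one.
print(gated_increment(0.05, {1, 2, 3, 4}, {3, 4, 9}))   # 0.275
```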

3: The Executive Function - We talk about the visual, auditory, tactile, and other senses that project to various areas around the edge of the dinner napkin. I propose that the old brain projects to the forebrain in much the same way as the senses do. The old brain worked fine for lizards; these older structures were good decision makers and pattern drivers. The older brain has directed activity through much of the evolutionary path, and I don’t see any reason why it ever would have stopped. It senses the body’s needs and can project them as a sort of goal-directed sensory stream to the front edge of the napkin - the forebrain. One point to support this assertion: I go back to the proposal that the cortex is the same everywhere, and I don’t see anything that suggests the cortex does anything but remember and, through sequence memory, predict.

4: Local dendrite control of sparsity and synapse maintenance - There is no need to do this through a global function. In a dendrite maintenance phase, metabolism and chemical signaling should be enough to establish spacing, connection density, and pruning.
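Here is roughly what I mean, sketched in Python: each dendrite prunes its own weak synapses and enforces its own connection budget without ever consulting a map-wide sparsity function. The thresholds and the budget are stand-ins for whatever metabolism and chemical signaling would actually set.

```python
import numpy as np

def maintenance_phase(permanences, target_synapses=20, prune_below=0.1):
    """Run occasionally per dendrite, like a housekeeping/metabolic cycle."""
    p = np.asarray(permanences, dtype=float)

    # 1. Prune: weak synapses below the viability threshold are dropped locally.
    p = p[p >= prune_below]

    # 2. Density: if more synapses remain than the local budget supports,
    #    keep only the strongest ones (competition for limited resources).
    if p.size > target_synapses:
        p = np.sort(p)[-target_synapses:]

    return p

# Each dendrite maintains itself with no reference to map-wide statistics.
print(maintenance_phase([0.05, 0.3, 0.8, 0.02, 0.6]))   # [0.3 0.8 0.6]
```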

5: The H in hierarchy reconsidered: The perceptron was shown to have serious limitations in the book “Perceptrons.” Making it part of a larger system dramatically enhanced its function.

Layers have been the breakthrough that has given deep learning some of its spectacular successes.

In much the same way, when I read through some of the details of the current implementations, it seems to me that there is tweaking needed to make things work that would not be necessary if more consideration were given to layers of interacting maps. The “filling in” / auto-completion function that is a large part of the thinking of authors like Calvin and Marr is a natural thing if you have a functioning hierarchy.
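A toy example of what I mean by filling in, with SDRs reduced to plain Python sets: if the higher level recognizes a degraded pattern, its feedback can restore the missing bits. This is deliberately oversimplified and is not how any current implementation works.

```python
def fill_in(bottom_up, stored_patterns):
    """Higher level recognizes the degraded input, then feeds its pattern back down."""
    top_down = max(stored_patterns, key=lambda p: len(p & bottom_up))  # recognition
    return bottom_up | top_down                                        # completion

stored = [{1, 2, 3, 4, 5}, {10, 11, 12, 13}]
degraded = {1, 2, 4}                 # bits 3 and 5 were lost on the way up
print(fill_in(degraded, stored))     # -> {1, 2, 3, 4, 5}
```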

I have some more ideas but I am curious to see what people think of the items I have put out here.
