In an earlier post, I mentioned that I have been trying to work out how to form the cortical.io retina using HTMs.
This effort has led me to work through what operations HTMs perform and how to compute with those primitives. The basic task of converting a stream of data into a spatial distribution of semantically related SDRs requires that somewhere in the computation there is a sense of the locations of the cell bodies.
I also wanted to work within the restriction of biological plausibility when considering mechanisms; this is what I came up with.
Starting with the standard HTM cell assembly:
Changing to a straight-on view of the cortical sheet, with a cell body in blue and the related SDR connections in gray:
I have overlaid an activation field. Note that the cell body could be anywhere in the sheet - there is no way for the SDR to learn how the cell relates to other cells or to the applied activation field.
I propose that this could be addressed by making the SDRs for a given cell path-based. A reasonably sized collection of path shapes could be maintained in a small library. The cell structure would hold a pointer to its path shape(s), and that pointer would be a permanent part of the cell's definition. The path is relative to the cell body's position in the array. I further assume that each cell keeps a small table of synaptic connections, each entry an index into a position along the path. New synapses are added to the table as connections are learned. This roughly corresponds to Numenta's adding segments for learning.
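As a concrete sketch of this bookkeeping, here is a minimal Python version: a shared library of path shapes stored as offsets from the cell body, and a per-cell synapse table whose keys index positions along a path. All the names and the specific path shapes are my own illustrative assumptions, not a definitive implementation.

```python
from dataclasses import dataclass, field

# Hypothetical shared library of path shapes. Each path is a list of
# (dx, dy) offsets relative to the cell body's position in the sheet.
PATH_LIBRARY = {
    0: [(0, 1), (1, 2), (2, 2), (3, 3)],
    1: [(-1, 0), (-2, 1), (-3, 1), (-4, 2)],
    2: [(1, -1), (2, -2), (2, -3), (3, -4)],
}

@dataclass
class Cell:
    x: int            # cell body position in the sheet
    y: int
    path_ids: list    # permanent pointers into PATH_LIBRARY
    # synapses[path_id] maps an index along the path to a permanence;
    # entries are added as connections are learned (roughly Numenta's
    # "adding segments").
    synapses: dict = field(default_factory=dict)

    def learn_synapse(self, path_id, path_index, permanence=0.3):
        self.synapses.setdefault(path_id, {})[path_index] = permanence

    def synapse_positions(self, path_id):
        """Absolute sheet coordinates of the learned synapses on one path."""
        path = PATH_LIBRARY[path_id]
        return [(self.x + path[i][0], self.y + path[i][1])
                for i in self.synapses.get(path_id, {})]

cell = Cell(x=10, y=10, path_ids=[0, 2])
cell.learn_synapse(0, 1)
cell.learn_synapse(0, 3)
print(cell.synapse_positions(0))   # offsets (1,2) and (3,3) from (10,10)
```

Note that the path shape itself is shared and immutable; only the small per-cell synapse table changes during learning.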
What does this give us?
Consider this cell with three path-based SDRs:
The red and blue SDRs are sampling the activation pattern, with the blue having a much higher overlap.
When I say overlap, I am using the traditional Perceptron eigenvector-alignment language. This lets us draw on the very rich mathematical tools that have been developed for analyzing the Perceptron. Consider the receptive field of the green path-based SDR:
Working through the computations this makes available yields the possibility that two cells could reinforce each other in learning a pattern. In this example, the blue and orange cells share receptive fields, so they present a pair of semantically meaningful activations to the next higher layer. What this adds over Numenta's typical implementation is the possibility of preserving topology as you move up the H of HTM.
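The overlap measure described above can be sketched as a plain dot product: the cell's synapse weights against the activation field sampled at its path offsets. The field, paths, and weights below are made up purely for illustration.

```python
import random

# Hypothetical activation field over an 8x8 patch of the sheet.
random.seed(0)
SIZE = 8
field = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]

def overlap(field, cell_xy, path, weights):
    """Dot product of synapse weights with the activation sampled along
    the path; offsets falling outside the sheet contribute nothing."""
    cx, cy = cell_xy
    total = 0.0
    for (dx, dy), w in zip(path, weights):
        x, y = cx + dx, cy + dy
        if 0 <= x < len(field) and 0 <= y < len(field[0]):
            total += w * field[x][y]
    return total

# Two made-up paths sampling different parts of the field from the
# same cell body, standing in for the blue and red SDRs.
blue_path = [(0, 1), (1, 1), (2, 2)]
red_path = [(0, -1), (-1, -2), (-2, -2)]
weights = [1.0, 1.0, 1.0]
print(overlap(field, (4, 4), blue_path, weights))
print(overlap(field, (4, 4), red_path, weights))
```

Because this is just the Perceptron's weighted sum, the standard analysis tools (margins, linear separability, and so on) apply directly to a path-based SDR.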
BTW: There is no requirement that the path be continuous - I only draw it that way in this presentation because real dendrites are continuous and it is easier to visualize. Each potential synapse location is stored as an XY offset from the cell body; a path in the library is simply a list of XY offset pairs.
A possible extension is a list-length parameter - a busy dendrite could grow.
An interesting side note on path-based SDRs is a biologically plausible enforcement of scarcity: there could be the metabolic equivalent of a growth promoter shared over the length of each path SDR. An "empty" dendrite could grow a lot from whatever activation it receives; a "full" dendrite would starve older synapses to reinforce new synapse learning.
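One way to sketch that shared growth promoter is a fixed permanence budget per path: growth on an under-used path is free, while growth on a full path reclaims permanence from the oldest synapses first. The budget value and the oldest-first policy are my assumptions, chosen only to make the idea concrete.

```python
# Hypothetical total permanence budget shared along one path SDR.
BUDGET = 2.0

def grow_synapse(synapses, index, amount):
    """Add `amount` of permanence at `index`, starving the oldest
    synapses if the path's total would exceed the shared budget.
    `synapses` maps path-index -> permanence, oldest entries first."""
    used = sum(synapses.values())
    deficit = used + amount - BUDGET
    for old in list(synapses):          # insertion order = oldest first
        if deficit <= 0:
            break
        take = min(synapses[old], deficit)
        synapses[old] -= take
        deficit -= take
        if synapses[old] == 0:
            del synapses[old]           # fully starved synapse is lost
    synapses[index] = synapses.get(index, 0.0) + amount

syn = {0: 1.0, 3: 0.8}          # a nearly "full" dendrite
grow_synapse(syn, 5, 0.5)       # must reclaim 0.3 from synapse 0
print(syn)
```

An "empty" dendrite (an empty dict) grows without any starvation, matching the intuition that unused paths are the most plastic.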
Another tweak: the distance between two connections could be an influencing factor in learning, to enforce spacing of synapse growth.