Path based SDRs (Dendrite Trees)


In an earlier post, I mentioned that I have been trying to work out how to form the retina using HTMs.
This effort has led me to work through what operations HTMs actually perform and how to compute with those primitives. The basic task of converting a stream of data into a spatial distribution of semantically related SDRs requires that, somewhere in the computation, there is a sense of the location of the cell bodies.

I also wanted to work within the restriction of biological plausibility when considering mechanisms; this is what I came up with.

Starting with the standard HTM cell assembly:

Changing to a straight-on point of view to the cortical sheet with a cell body in blue and the related SDR connections in gray:

I have overlaid an activation field. Note that the cell body could be anywhere in the sheet - there is no way for the SDR to learn how the cell is spatially related to other cells or to the applied activation field.

I propose that this could be addressed by making the SDRs path based. A reasonably sized collection of path shapes could be maintained in a small library. The SDR structure would hold a pointer to its path shape, and that pointer is a permanent part of that SDR's definition. The path is relative to the cell body's position in the array.

What does this give us?
Consider this cell with three path based SDRs:

The red and blue SDRs are sampling the activation pattern, with the blue having a much higher overlap.

When I say overlap, I am using the traditional perceptron eigenvector-alignment language. This allows us to use the very rich mathematical tools that have been developed for analyzing the perceptron. Consider the receptive field of the green path-based SDR:
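In that perceptron language, the overlap of a path-based SDR with an activation field reduces to counting active cells at the locations the path visits. A minimal sketch of that computation (the function and parameter names here are illustrative, not from any existing HTM codebase):

```python
def overlap(activation, cell_xy, path, permanences=None, threshold=0.5):
    """Count path locations whose connected synapses land on active cells.

    activation  : 2D grid (list of lists) of 0/1 activity over the sheet
    cell_xy     : (x, y) position of the cell body
    path        : list of (dx, dy) offsets relative to the cell body
    permanences : optional per-synapse permanence values; a synapse only
                  counts as connected when its permanence exceeds threshold
    """
    cx, cy = cell_xy
    total = 0
    for i, (dx, dy) in enumerate(path):
        if permanences is not None and permanences[i] <= threshold:
            continue  # potential synapse, not yet connected
        x, y = cx + dx, cy + dy
        if 0 <= y < len(activation) and 0 <= x < len(activation[0]):
            total += activation[y][x]
    return total
```

The blue SDR in the figure having "a much higher overlap" simply means this count is larger along the blue path than along the red one.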

Working through the available computations, this yields the possibility that two cells could reinforce each other while learning a pattern. In this example, the blue and orange cells share receptive fields, so they present a pair of semantically meaningful activations to the next higher layer:

BTW: There is no requirement that the path is continuous - I am only drawing it that way in this presentation because real dendrites are continuous and it is easier to visualize. Each potential synapse location is stored as an XY offset from the cell body. The path in the library is a list of XY offset pairs.
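As a concrete sketch of that storage scheme (the library contents and names below are my own illustration, not part of any library): each SDR stores only its cell position and a path index, and the shared library resolves the offsets to absolute grid coordinates.

```python
# A path is a list of (dx, dy) offsets relative to the cell body.
# The library holds a small, fixed set of shared path shapes.
PATH_LIBRARY = [
    [(1, 0), (2, 1), (3, 1), (4, 2)],        # a drifting diagonal
    [(0, 1), (0, 2), (1, 3), (1, 4)],        # mostly vertical
    [(-1, 0), (-2, 0), (-2, -1), (-3, -1)],  # leftward hook
]

def synapse_locations(cell_xy, path_id):
    """Resolve a path's offsets to absolute (x, y) positions on the sheet."""
    cx, cy = cell_xy
    return [(cx + dx, cy + dy) for dx, dy in PATH_LIBRARY[path_id]]

# The SDR itself only needs to hold (cell_xy, path_id); the geometry
# comes for free from the shared library.
locs = synapse_locations((10, 10), 0)
```

Because the path is defined relative to the cell body, the same library entry gives a different receptive field for every cell position, which is exactly the spatial dependence the plain HTM cell assembly lacks.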

A possible extension is a list length parameter - a busy dendrite could grow.

An interesting side note on path-based SDRs is the biologically plausible enforcement of scarcity: there could also be the metabolic equivalent of a growth promoter shared over the length of each path SDR. An "empty" dendrite could grow a lot from whatever activation it receives; a "full" dendrite would starve older synapses to reinforce new synapse learning.
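One way to sketch that shared growth budget (purely illustrative - the budget and decay constants are assumptions of mine, not from the text): each dendrite divides a fixed metabolic budget over its currently active synapses, so a sparsely loaded path grows quickly while a full one drains older synapses to fund new learning.

```python
def apply_growth_budget(permanences, active, budget=1.0, decay=0.01):
    """Reinforce active synapses from a fixed per-dendrite budget.

    permanences : list of permanence values in [0, 1]
    active      : list of bools, True where the synapse saw activity
    budget      : total permanence increase the dendrite can fund per step
    decay       : amount drained from inactive (older) synapses
    """
    n_active = sum(active)
    if n_active == 0:
        return permanences
    share = budget / n_active  # few active synapses -> each gets a big share
    out = []
    for p, a in zip(permanences, active):
        if a:
            out.append(min(1.0, p + share))
        else:
            out.append(max(0.0, p - decay))  # older synapses slowly starve
    return out
```

With this rule an "empty" dendrite (one active synapse) hands that synapse the whole budget, while a "full" dendrite splits the same budget many ways and erodes its inactive synapses, which matches the growth-promoter intuition above.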

Another tweak is that the distance between two connections could be an influencing factor in learning to enforce spacing of synapse growth.

Trading byte size for noise
Thoughts about topology
What is the best way to represent a set of synapses per dendrite per cell?

This sounds similar to temporal memory and growing synapses between cells based on previous activation. Am I missing something?


What may be novel is the fact that the spatial relationship between the cell and its surroundings is captured - it is inherently part of the semantic meaning of what those synapses learn.

I felt uncomfortable calling each path a dendrite but maybe that would make the idea more approachable.

What I am researching today is how this modification allows HTMs to form SOMs.[1][2] The formation of SOMs seems to be an important step in the generation of the semantic folding map (retina).

Using sparse-based SOMs seems to be a thing in sparse theory. [3][4] Nobody wants to see their carefully handcrafted HTM code miss out on nifty research!

The next step will be to apply HTMs to do the temporal to spatial mapping needed to populate the retina structure. The basic HTM/SDR model clearly solves the temporal & pattern formation/matching part of the problem - physical location of the cell bodies & dendrite trees speaks to the spatial component of this transformation.



Were you ever able to work out how to form something like the retina using HTMs? The location factor of the semantic folding process I could never recreate efficiently myself (“snippets are distributed over a two dimensional grid in such a way that snippets with similar meaning are placed close to one another”). Because of this, the word SDRs that I am able to create encode semantics properly, but lack topology.

(BTW, reading the papers now)

Using attractors to distill topology from semantics

I have been working on the intersection of Calvin’s hex organization, grid cell encoding and sparse/HTMs as the data coding method to pull this off.

Rather than a particular bit I am thinking of a distribution more like a sparse grid where clusters of words each form their grids and are presented together to form the SOM landscape.

In particular - the training method described in the “three streams” paper looks like the key to squaring this circle. The downside is that it does not exactly map to the methods described above, so I am having to work out the common details that tie all this together.

The line I am working on moves from the vision model to a speech/word model to train the retina version of Wernicke’s and Broca’s areas.

There is a lot of heavy lifting to combine the lines and the work progresses slowly. So far lots of notes and sketches, many more wadded up papers and a growing small core of ideas that seem to be the right path.


Cool, hope you have some success.

Thanks for posting this thread, BTW. I think I have a way to use the concept of paths in my variation of semantic folding to capture topology (note that I am less concerned with biological plausibility in my case, but will share my results in case they are useful).


One of the possible “paths” is a circular distribution around the cell body.
This could also be constrained to a pie or cardioid shape to allow different “dendritic” samplings.
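A circular path (or a pie-slice restriction of it) can be generated directly as a set of offsets; this is just my own sketch of such a shape generator for the path library:

```python
import math

def circular_path(radius, n_points, start_angle=0.0, arc=2 * math.pi):
    """Return (dx, dy) offsets on a circle around the cell body.

    arc < 2*pi restricts the sampling to a wedge (a 'pie' shape),
    giving a different 'dendritic' sampling of the neighborhood.
    """
    path = []
    for i in range(n_points):
        theta = start_angle + arc * i / n_points
        dx = round(radius * math.cos(theta))
        dy = round(radius * math.sin(theta))
        path.append((dx, dy))
    return path
```

A cardioid variant would follow the same pattern with the radius made a function of the angle rather than a constant.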


Let’s try some numbers and see if this is important or not.
Start with an image processing application, using a modest 1K x 1K map.

Each cell may have 8 proximal and 16 distal dendrites. Each dendrite may have a large number of potential synapse locations, say 1024.

The data structure for each synapse is a connection location and a permanence value. For a large map (more than 65K neurons) that means a permanence byte plus a 32-bit (4-byte) address for each connection location - 5 bytes per potential synapse.
1,000,000 x 24 x 1024 x 5 bytes = 122,880,000,000

Using paths means a single path index per dendrite and just the list of permanence values, roughly a five-fold reduction in storage space. The path table should be able to fit in the L1 cache for a dramatic speedup in memory access.
1,000,000 x 24 x (1024 + 2) x 1 byte = 24,624,000,000

25 GB is a large memory footprint but not ridiculous for modern hardware.
You don’t see a lot of 123 GB main memory machines.
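The arithmetic can be checked directly, assuming (as described above) a 32-bit address plus a 1-byte permanence per synapse in the naive layout, and a 2-byte path index plus one permanence byte per potential synapse in the path-based layout:

```python
cells = 1_000_000      # 1K x 1K map
dendrites = 8 + 16     # proximal + distal per cell
synapses = 1024        # potential synapse locations per dendrite

# Naive layout: 4-byte connection address + 1 permanence byte per synapse.
naive = cells * dendrites * synapses * (4 + 1)

# Path-based layout: per dendrite, a 2-byte path index plus one
# permanence byte per potential synapse location.
path_based = cells * dendrites * (synapses + 2) * 1

print(naive)       # 122880000000  (~123 GB)
print(path_based)  # 24624000000   (~25 GB)
```

The per-cell geometry moves into the shared path table, which is why it can be small enough to sit in cache while the permanence arrays stream through memory.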


I thought this seemed obvious - my bad.
The paths are stored in an array just like any other data structure in the program.

I am using random walks with direction bias to generate the paths now.
I am thinking of adding a branch feature where after a certain size run I start a new run from a prior generated node selected at random. This would change the sampling density vs distance from the cell body.
You can get fancy with this.
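A direction-biased random walk with the optional branch step might look like this - a sketch under my own parameter choices (the bias and branching scheme are illustrative, not a fixed recipe):

```python
import random

def random_walk_path(length, bias=(1, 0), bias_strength=0.5,
                     branch_after=None, rng=None):
    """Generate a path of (dx, dy) offsets by a direction-biased random walk.

    bias          : preferred step direction
    bias_strength : probability of taking the biased step instead of a
                    uniformly random unit step
    branch_after  : if set, restart the walk from a randomly chosen earlier
                    node after this many steps, changing the sampling
                    density vs. distance from the cell body
    """
    rng = rng or random.Random()
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    path = []
    x, y = 0, 0
    for i in range(length):
        if branch_after and i > 0 and i % branch_after == 0:
            x, y = path[rng.randrange(len(path))]  # branch from a prior node
        dx, dy = bias if rng.random() < bias_strength else rng.choice(steps)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```

With `bias_strength=1.0` the walk degenerates into a straight run along the bias direction; lowering it trades directionality for coverage, and `branch_after` concentrates samples nearer the cell body.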
One Rule to Grow Them All: A General Theory of Neuronal Branching and Its Practical Application

Each dendrite has a random selection of the generated paths, fixed at the time of map creation.

The reason for multiple paths is that a single fixed pattern has aliasing issues - like a Moiré pattern.

There is a surprising amount of literature (to me anyway) that discusses dendrite shapes and how they get that way.

The single dendritic branch as a fundamental functional unit in the nervous system

Generation, description, and storage of dendritic morphology data

Assisted morphogenesis: glial control of dendrite shapes

Conserved properties of dendritic trees in four cortical interneuron subtypes

Check out the reference list on this link:
Modelling Dendrite Shape from Wiring Principles

There is a lot going on inside the dendrite - it’s not just a passive wire. I continue my studies to see if any of this aids in learning patterns. An example:

Dendritic geometry shapes neuronal cAMP signaling to the nucleus