Multiple proximal dendrites and neural model for SP



Doubts regarding proximal dendrites:

  • Why are there multiple proximal dendrites? Is the reason that a layer can get input from multiple sources of receptive fields, each proximal dendrite being connected to one source?
  • Does each proximal dendrite only receive a single input bit or a single electrochemical pulse as it were, per time-step?
  • Are multiple synapses on the proximal dendrites connected to the same input bit (pulse source)?
  • Do neurons actually linearly sum multiple input bits on proximal dendrites and then decide whether to fire or not?
  • Can a layer of individual neurons make a model structure for spatial pooling if we assume that each neuron can sum the inputs from proximal dendrites and fire according to a threshold?
    In this case, each proximal dendrite will be connected to a subset of the same input space. The lateral connection properties of neurons will remain the same in the model.

Trying to implement HTM theory using Julia

You mean in your brain? I think it is just natural because neurons grow many dendrites. Those segments of dendrites close to the cell body are proximal. It is probably more accurate to call them “proximal dendritic segments” than “dendrites”. I have a good visualization of them all here.

Yes, that is one of the major functions of spatial pooling: to map an input space to another space by using these separate receptive fields. The input source is the same, but each minicolumn has a different perspective of the input, and learns to respond to different patterns.

No, any dendritic segment has many synapses. Each segment can have none, one, or many active synapses. So a proximal segment will only cause the neuron to fire if a certain number of its synapses activate and bring the cell’s polarity high enough to cause an action potential (again, see the animation I made above about dendritic spikes).
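The threshold idea above can be sketched in a few lines of Python. This is a hedged illustration, not NuPIC's API; the names, sizes, and `connected_perm` constant are all hypothetical:

```python
import numpy as np

def proximal_overlap(input_bits, synapse_indices, permanences,
                     connected_perm=0.5):
    """Count active, connected synapses on one proximal segment."""
    connected = permanences >= connected_perm
    return int(np.sum(input_bits[synapse_indices] & connected))

# One segment with 4 potential synapses onto a 6-bit input.
input_bits = np.array([1, 0, 1, 1, 0, 1], dtype=bool)
synapse_indices = np.array([0, 2, 4, 5])      # which input bits it touches
permanences = np.array([0.6, 0.4, 0.7, 0.9])  # synapse strengths

overlap = proximal_overlap(input_bits, synapse_indices, permanences)
fires = overlap >= 2  # only enough coincident input triggers a spike
```

A single active synapse is not enough; the segment contributes to firing only when enough of its connected synapses coincide with active input.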

Pretty much. They also process distal input in the same way, which will cause cells to fire faster than their neighbors when they are “predictive”.

I think you are on the right track here, but does the rest of my post make sense to you?


Is there a need to model individual distal dendrites on neurons, or is it enough to add synaptic connections without dendritic grouping, just with a threshold? For SP and for TM. :thinking:


For SP, we use one proximal segment for each minicolumn. But for distal connections, a cell can have many segments, each segment having many synapses. You could also use more segments from the SP to the input space. In many simple cases, more than one distal segment won’t grow between cells in the TM, so how many segments a cell has depends on the randomness of the input data.


It seems that I am not clear on the application of proximal dendrites.
The confusion boils down to:

  1. What is the use of multiple proximal dendrites per neuron? Are they necessary?
  2. Every proximal dendrite only carries one electrical impulse at a time, right? Every cell in HTM receives only one input bit, right?

What does NuPIC use with respect to the above questions?


Remember there are minicolumns involved. So all neurons within a minicolumn (in NuPIC anyway) will share one proximal segment that gets a projection from the input space. I have drawings of this in this video.
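To make the shared-segment idea concrete, here is a small sketch (hypothetical sizes and a simple k-winners rule standing in for inhibition; this is not NuPIC code) of minicolumns that each own one proximal segment projecting from the input space:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns, syn_per_column, k_winners = 32, 8, 12, 2

# Each minicolumn owns ONE proximal segment: a random subset of the input.
potential = np.stack([rng.choice(n_inputs, syn_per_column, replace=False)
                      for _ in range(n_columns)])
permanence = rng.uniform(0.3, 0.7, potential.shape)

def active_columns(input_bits, connected_perm=0.5):
    connected = permanence >= connected_perm
    overlaps = (input_bits[potential] & connected).sum(axis=1)
    # Inhibition: only the k best-matching minicolumns become active,
    # and every cell in a winning minicolumn shares that activation.
    return set(np.argsort(overlaps)[-k_winners:])

input_bits = rng.random(n_inputs) < 0.3
winners = active_columns(input_bits)
```

Note that cells inside a minicolumn are not modeled separately here; they all inherit the feed-forward activation of their column's single segment.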


I was aware of the single proximal dendritic model in NuPIC. But since I assume that is for optimisation and ease of implementation, I thought there might be different configurations, and reasons for them, in the neocortical model.


We don’t use multiple proximal segments per minicolumn because we have not found them necessary to do prediction (theoretically). We do, however, find it necessary to model many distal segments between neurons for temporal memory.


Thank you for the elaborate reply and also for merging the questions.

I liked the visualisation. So does this mean that there is only one distinct feed-forward dendrite per neuron which divides into multiple segments, in the brain?

But is this true for each neuron’s proximal dendrite? I am not referring to the shared proximal dendrite, as is used in NuPIC.

Agreed, but does that mean that every segment’s multiple synapses activate due to different feedforward input bits, or are these excess synapses the result of synaptogenesis between the input bit source and the segment?

Yes. But, I still have some questions. :sweat_smile:
And is this neural model used before for spatial pooling?


No. In the brain, neurons have many dendrites. They usually branch 4-5 times. “Segments” are the stretches of dendrite between these branches. The dendritic segments between the cell body and the first branch are proximal. The other segments past that first branch are distal. So in the brain, there are many proximal dendritic segments (4-5 usually, I think).

In NuPIC, we do not model multiple proximal dendritic segments. We model only one, because we haven’t needed more than one for HTM theory to work, and we’re just trying to keep it simple. Distal segments are another story, however. We must have many distal segments.

There may come a time in the future, or a type of input, where we’ll need multiple proximal dendritic segments. Who knows?

The answer is we don’t know. HTM is a theory of intelligence based on what we do know, but this is theory, and theory is otherwise known as “our best possible guess”. It looks to me very possible that cortical layers are getting joined input / split output, and it not only makes sense but opens up lots of doors to postulate new ideas about how layers work together to represent reality. The idea that grid cells can represent complex locations using properties of SDRs allows us to think of our brains’ job differently, and gives us ideas about how perception turns into memory.

We try to model what we think pyramidal neurons in the brain do, which is grow dendritic segments and axons to meet each other. We model this by creating a randomly initialized network of cells that are somewhat connected. The SP has minicolumns that are randomly connected to the input. Some synapses are connected, others are not but might be soon. Once the SP starts running, I don’t think it will grow new segments or synapses, but existing synapses can become more or less permanent.

In the distal connections, things are much more fluid. I don’t think we start with anything connected to anything else; we just let input come in, and distal segments/synapses start growing between cells as spatial patterns are processed. I really don’t know if this is how it works in the brain (I am not a neuroscientist), but this is the best way I understand the logic of HTM.
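The "more or less permanent" part can be sketched as a Hebbian-style permanence update. This is only an illustration of the idea, not NuPIC's actual implementation; the increment/decrement constants are made up:

```python
import numpy as np

def learn_proximal(permanence, potential, input_bits, winners,
                   perm_inc=0.05, perm_dec=0.02):
    """Strengthen synapses on winning columns that saw active input,
    weaken the rest. No new segments are grown; only permanences move."""
    perm = permanence.copy()
    for c in winners:
        active = input_bits[potential[c]]
        perm[c, active] = np.minimum(perm[c, active] + perm_inc, 1.0)
        perm[c, ~active] = np.maximum(perm[c, ~active] - perm_dec, 0.0)
    return perm

potential = np.array([[0, 2, 3], [1, 4, 5]])  # 2 columns, 3 synapses each
permanence = np.full((2, 3), 0.5)
input_bits = np.array([1, 0, 1, 0, 0, 0], dtype=bool)

updated = learn_proximal(permanence, potential, input_bits, winners=[0])
# Column 0's synapses onto active bits strengthen; its synapse onto an
# inactive bit weakens; column 1 (not a winner) is untouched.
```

Only the winning columns learn, which is what makes different minicolumns specialize in different input patterns over time.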

I’m not sure what you mean with this question. Spatial pooling is what we think is happening in the brain, in at least two layers of cortex. I think it is important to process sensory input with spatial pooling, which allows all sensory input to be uploaded into a common format, one that can be comparable to others.


Thank you.
Your reply clears some of the doubts.

I meant to ask whether SP has been computed using a 2D array of HTM neurons, as I suggested, instead of a 2D array of minicolumns.


The dimensionality of the SP should match the dimensionality of the input. All NuPIC encoders encode 1-dimensional spaces (1d array), so the SPs we generally use are also 1D. But the SP is capable of handling any input dimensionality. Do you understand topology?


Yes, I understand that and also topographic maps.
Since every minicolumn neuron is connected to one bit from the input space and so the minicolumn is connected to N bits from the input space, the total number of input bits should be (no. of minicolumns * N).
Now, instead of this, we could have a layer of HTM neurons that each have N proximal dendrites (probably without segments), with each dendrite connected to one input bit from the input space. So each HTM neuron will be connected to N input bits. Activation would be defined by a proximal input threshold or by inhibition rules. Inhibition rules could apply to this layer just as they apply to layers of minicolumns, and neurons could form lateral connections to make predictions.
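A minimal sketch of the layer described above, assuming (hypothetically) N single-synapse proximal connections per neuron and a k-winners rule standing in for the inhibition step:

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_neurons, n_proximal, k_winners = 64, 16, 8, 4

# Each neuron gets N proximal connections, each onto ONE input bit.
proximal = np.stack([rng.choice(n_inputs, n_proximal, replace=False)
                     for _ in range(n_neurons)])

def layer_activation(input_bits):
    """Sum each neuron's proximal input, then let inhibition pick winners."""
    drive = input_bits[proximal].sum(axis=1)
    active = np.zeros(n_neurons, dtype=bool)
    active[np.argsort(drive)[-k_winners:]] = True
    return active

input_bits = rng.random(n_inputs) < 0.25
active = layer_activation(input_bits)
```

Here individual neurons, not minicolumns, compete under inhibition, which is the structural change being proposed.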


This same layer can be multidimensional and form a TM region, but, of course, with different inhibition and activation rules. Thus, essentially, we eliminate the need for minicolumns in SP. As an advantage in the implementation, we can reuse the same HTM neural layer model for both TM and SP. :lying_face: :thinking:

Feedback and lateral proximal signals can also be added to this model.


I happen to be working on an implementation of HTM where the minicolumns are not functionally relevant (using a similar algorithm for proximal learning as for distal learning in TM). It is computationally more expensive than the traditional SP algorithm (it must iterate over each cell connected to the input space, versus each minicolumn), but it does allow for a new pooling strategy that isn’t possible with minicolumns as they are currently defined. I am in the process of cleaning it up and drawing up some diagrams to explain it. I’ll post more info when it is finished.


That’s good. Are you using HTM neurons instead of minicolumns?


Yes, the same as HTM neurons, except that whereas a number of active synapses above the threshold puts a cell into the predictive state for distal connections, for proximal connections it puts the cell into the active state.
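That unified rule could be sketched like this (a hypothetical simplification for illustration, not the poster's actual code):

```python
def segment_effect(n_active_synapses, threshold, segment_type):
    """One activation rule for both segment kinds: crossing the synapse
    threshold makes the cell active (proximal) or predictive (distal)."""
    if n_active_synapses < threshold:
        return None
    return "active" if segment_type == "proximal" else "predictive"

# Same threshold logic either way; only the consequence differs.
proximal_result = segment_effect(5, 3, "proximal")
distal_result = segment_effect(5, 3, "distal")
below_result = segment_effect(2, 3, "distal")
```

The appeal is that proximal and distal learning can then share one code path, with the segment type deciding the cell's resulting state.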


Oh, that’s good! :+1:


Interesting. All dendrite segments are functionally similar when recognizing and learning patterns so I wouldn’t worry too much about having more proximal segments. My work on Simple Cortex shows this problem is extremely simple and easy to parallelize. However, I’m curious how you managed to get the neural activation rules to properly recognize coincident proximal-distal occurrences over just proximal occurrences and then select the appropriate learning dendrites without some form of minicolumn-like behavior. This is a really tough problem because there are many properties of HTM that, as far as I can tell, cannot happen without neurons that have localized shared receptive fields and mutual inhibition. The neural activation process is HTM’s biggest parallelization bottleneck and a source of my recent frustrations so if your idea works that’d be really amazing.


Here is a way to do that: