Why don't HTM neuron synapses model long term potentiation and depression?

If we talk about the basics of synaptic plasticity, a synapse's "strength" is measured by the degree of voltage change it produces in the post-synaptic neuron upon a successful synaptic transmission. Long-term potentiation (the "strengthening" of synapses) results in larger-magnitude voltage changes in the post-synaptic neuron, and thus greater influence toward causing an action potential. Long-term depression is, of course, the opposite case.

HTM neuron synapses grow and decay, but a synapse is ultimately only "on" or "off" with respect to contributing to an action potential. I get that HTM networks are based on large distributed patterns of activity, but has there ever been thorough testing to justify dismissing real-valued synapse strength as unnecessary detail? Or is there other evidence in computational neuroscience that I'm not aware of suggesting that fine-grained synapse strength is functionally irrelevant?

On the plus side - if you are using a GPU, floats are faster than ints anyway.

1 Like

IIRC the rationale for it is along these lines:
Either the presynaptic cell is reliably involved in the pattern the postsynaptic cell is sensing, in which case its synapse will tend toward full strength (= 1.0) in the long run; or it is not, and the synapse will tend toward zero.
On a functional view, a non-binary strength is thus only involved while the cell is developing this taste for a pattern... but HTM finds that the related "permanence" concept already captures the essentials of learning. Therefore strength can be kept binary.
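For concreteness, here is a minimal sketch of that permanence mechanism in Python (names and constants are illustrative, not Numenta's actual code): the permanence is a scalar that learning nudges up or down, but the synapse's contribution is strictly 0 or 1.

```python
# Minimal sketch of HTM-style permanence learning (hypothetical names
# and constants, not Numenta's implementation). Permanence is a scalar
# in [0, 1], but the synapse's effect is binary: connected or not.

CONNECTED_THRESHOLD = 0.5
PERM_INC = 0.05   # reward when the presynaptic cell was active
PERM_DEC = 0.02   # penalty when it was not

def update_permanence(permanence: float, presynaptic_active: bool) -> float:
    """Nudge permanence toward 1.0 or 0.0 depending on correlation."""
    if presynaptic_active:
        permanence += PERM_INC
    else:
        permanence -= PERM_DEC
    return min(1.0, max(0.0, permanence))

def effective_weight(permanence: float) -> int:
    """The synapse contributes either 0 or 1 -- never a fraction."""
    return 1 if permanence >= CONNECTED_THRESHOLD else 0
```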

I believe this question is raised quite often, so you'll be able to find Jeff's answer to it in one of the videos.

3 Likes

One way to think about it: most systems with weights have a weight for every possible connection (I think). But most neurons are not connected to most other neurons, so which neurons connect to which is the more important information.

2 Likes

Yes, people are lazy. They use whatever TensorFlow offers and don't want to do the work needed to use more biologically inspired connectivity.
See the recent paper on the value of local connectivity over all-to-all connectivity.

3 Likes

Regarding synaptic strength: there have been measurements in some areas of the brain, and the changes appear to be discrete and confined to a limited range. There appear to be about 26 discrete levels, or less than 5 bits of information per synapse.

Application of the findings to the whole synapse distribution resulted in 26 sizes of synapses. In computer terms, this value corresponds to about 4.7 “bits” of information per synapse.
https://cns.utexas.edu/news/scientists-estimate-memory-capacity-based-on-sizes-of-brain-synapses
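The 4.7 figure is simply the base-2 logarithm of the number of distinguishable levels:

```python
import math

# 26 distinguishable synapse sizes => log2(26) ≈ 4.7 bits per synapse
print(math.log2(26))  # 4.700439718141092
```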

3 Likes

Excellent, thank you for sharing. I wonder if storing 4.7 bits of information per synapse versus 1 bit would lead to more separable representations in the SP for dissimilar input activation patterns. I will test this myself when I find the time.

2 Likes

Does it, though? All synaptic transmissions in HTM neuron models are functionally equal, and we know this is inconsistent with biology. I'm also uncertain what the "essentials of learning" are, and how it was evaluated that they have been captured successfully.

1 Like

I don't know of any paper that supports the view that a synapse state, a voltage, a spiking rate, or an activation map is binary... but I'm not sure that was your question.
IMHO, Numenta starts from biological studies, plus some dose of insight, to establish first of all a functional-level view. Then one devises a computational model implementing that function.
What was evaluated is that the sequence-prediction functionality is indeed captured by the TM implementation.

I'll certainly let the people actually working on this answer that, though.

3 Likes

I think this is pretty much right. We also look for experimental evidence to confirm/deny assumptions.

2 Likes

I'm looking forward to what you find (the synaptic-strength idea is something I have thought about a few times but never got around to actually testing). It will be interesting to compare actual results with intuitions (there are always surprises from these types of tests).

My own intuition on this is that synaptic strengths essentially approximate a network consisting of more neurons (where a greater strength is essentially an approximation of connecting to a greater number of semantically similar cells in a larger network). For example, a minicolumn connecting with 25% strength to one cell and 75% strength to another cell would be roughly equivalent (with reduced error tolerance) to a minicolumn connecting to 1 cell with one semantic meaning and 3 cells with the other semantic meaning.
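A toy illustration of that equivalence (the cell names and numbers here are hypothetical):

```python
# Toy illustration (hypothetical setup): fractional strengths on a small
# network approximate integer connection counts on a larger one.

# Weighted network: one minicolumn, two cells with fractional strengths.
weighted = {"cell_A": 0.25, "cell_B": 0.75}

# Larger binary network: the same ratio expressed as connection counts,
# i.e. 1 binary synapse onto an "A-like" cell and 3 onto "B-like" cells.
binary_counts = {"cell_A": 1, "cell_B": 3}

total = sum(binary_counts.values())
for cell, strength in weighted.items():
    assert abs(strength - binary_counts[cell] / total) < 1e-9
print("25%/75% strengths == 1-of-4 / 3-of-4 binary connections")
```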


My thought is that this would allow somewhat more granular control over the ratio of semantics in the representation. But given the SDR property which says you only need to sample a small subset of the representation to be confident in recognizing it, I’m not sure how much benefit could be gained from a higher granularity.

The devil is in the details, of course; there could definitely be some beneficial findings. For example, we might see an impact on capacity, if it approximates a larger network well enough.

3 Likes

Yes, it's hard to say with certainty what will work better or worse, especially since we're not working with a precise mathematical theory. Thorough testing is kind of our only hope. I know biological synapses in the CNS can grow and decay in strength very rapidly, especially at dendritic spines. Continuous fine-grained adjustment of synapse strengths could be an important ingredient of learning, beyond static representational capacity.

Some other thoughts:

Even with 4.7 bits of information per synapse, the activation map at each iteration would still exist as a binary SDR. Biological neurons either fire or they don't, because of their chemistry; there's no such thing as a partial action potential. That much is known. What is not binary in neuroscience is the strength of synaptic transmissions, and that is where HTM is inconsistent with biology: all HTM synaptic transmissions are functionally equal regardless of the corresponding synapse strength. Using floating-point synapse strengths would yield a more detailed calculation for each neuron (or column, in the context of the SP) when deciding whether the input on its dendritic branches should cause an action potential. Instead of saying "a neuron (column) will fire if any 10 of its synapses are excited at the same time," which ignores per-synapse strength, you could set a realistic "voltage disturbance threshold" across each neuron (column) that is met or not met based on the sum of the voltage disturbances caused by the excited synapses, each contributing a voltage change directly proportional to that synapse's strength. That would be consistent with the biology.
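As a rough sketch of that difference (all names, thresholds, and sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_synapses = 40
active = rng.random(n_synapses) < 0.3     # which synapses are excited
strengths = rng.random(n_synapses)        # per-synapse strength in [0, 1]

# Current HTM-style rule: fire if at least K synapses are excited,
# ignoring individual strengths.
K = 10
fires_binary = active.sum() >= K

# Weighted alternative sketched above: each excited synapse disturbs the
# membrane in proportion to its strength; fire if the summed disturbance
# crosses a voltage threshold.
VOLTAGE_THRESHOLD = 5.0
fires_weighted = (strengths * active).sum() >= VOLTAGE_THRESHOLD

print(fires_binary, fires_weighted)
```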

This all still leads to a binary SDR of active and non-active neurons (columns) at each timestep, though. HTM networks (and brains) are based on large distributed patterns of independently acting neurons. The contribution of a single neuron (column) to an activation pattern in an HTM network is negligible; recall the experiments in the Numenta papers where a percentage of randomly chosen neurons were turned off in a trained network and the network still recovered and performed well. Thus, at a large enough scale, the precise activity of a single neuron (column) within an activation pattern is functionally irrelevant; we only need to know whether it fired. Fine-grained information about the "contribution" of each fired neuron (column) to a given pattern not only has no biological parallel but is also functionally irrelevant according to the theory. And from the perspective of number theory, binary numbers present no limitation in the amount of information they can theoretically represent compared to any other base.
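A quick illustration of that robustness (the sizes and drop rate are hypothetical, using a typical 2%-density SDR):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_active = 2048, 40

# A stored SDR: indices of the 40 active cells out of 2048.
pattern = set(rng.choice(n_cells, size=n_active, replace=False))

# Kill 30% of the active cells at random.
survivors = {c for c in pattern if rng.random() > 0.3}

# Recognition only needs a small subsample to overlap: with 40 active
# bits out of 2048 cells, even ~10 matching bits are overwhelmingly
# unlikely to occur by chance.
overlap = len(pattern & survivors)
print(f"{overlap} of {n_active} bits survive -- still far above chance")
```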

I don't know enough about the neuroscience/neurochemistry of NMDA spikes yet to comment on the relevance of floating-point synapse strengths to forming the predictive states of individual neurons in temporal memory. If it's consistent with biology for the predicted states of neurons to be non-binary (i.e., one predicted neuron could be more depolarized than another), which I suspect it is, it would be similarly interesting to see whether a floating-point voltage disturbance value, instead of a binary "predicted or not predicted" flag, leads to better performance or insight.
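If someone wanted to experiment with that, a minimal sketch might look like this (the function and thresholds are hypothetical, not part of current HTM):

```python
# Sketch of a graded predictive state (an assumption, not current HTM):
# instead of a binary "predicted" flag, each cell accumulates a scalar
# depolarization from its active distal segments.

def predictive_level(segment_overlaps: list[int],
                     activation_threshold: int = 13) -> float:
    """Return a depolarization level in [0, 1] rather than a bool.

    Each distal segment contributes in proportion to how far its overlap
    exceeds the NMDA-spike-like threshold.
    """
    level = 0.0
    for overlap in segment_overlaps:
        if overlap >= activation_threshold:
            level += overlap / (2 * activation_threshold)
    return min(1.0, level)

print(predictive_level([15]))        # weakly predicted
print(predictive_level([20, 18]))    # strongly predicted
```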

2 Likes

I have been thinking about this exact issue on my own models.

HTM theory as currently implemented just tallies the number of responding synapses.

Contrast this with the idea that a cluster of synapses along roughly a 40 µm span is enough to set the predictive state; the distance between and density of the responding synapses make a difference.

It seems that if you allow variable-strength synapse responses, a few strong responses close together would have the same effect as a larger number of weak responses spaced further apart. This introduces the complication of keeping track of both the strength of each response and the distance between the responding synapses.
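A minimal sketch of that bookkeeping (the window size and threshold are made-up parameters):

```python
# Sketch of the cluster idea (hypothetical parameters): responding
# synapses only set the predictive state when enough summed strength
# falls within a ~40 um window along the dendrite.

WINDOW_UM = 40.0
CLUSTER_THRESHOLD = 2.0   # summed strength needed inside one window

def cluster_fires(positions_um: list[float], strengths: list[float]) -> bool:
    """Slide a 40 um window over synapse positions; check summed strength."""
    pairs = sorted(zip(positions_um, strengths))
    for i, (start, _) in enumerate(pairs):
        total = sum(s for p, s in pairs[i:] if p - start <= WINDOW_UM)
        if total >= CLUSTER_THRESHOLD:
            return True
    return False

# A few strong synapses close together...
print(cluster_fires([10, 20, 35], [0.9, 0.8, 0.7]))             # True
# ...versus the same total strength spread over 200 um.
print(cluster_fires([10, 80, 150, 210], [0.6, 0.6, 0.6, 0.6]))  # False
```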

I am still trying to work out whether this offers any advantage over the current model. In practice, a small, strong pinpoint sensation would have the same effect as a softer, more diffuse touch.

2 Likes

The more diffuse effect of inhibitory neurons is likely to interfere with this... so in the end it may not be as clear-cut.

In light of this I’m viewing the binary HTM model as a quite on-target functional view of all those details.

2 Likes

Um, this is over the dendrite arbor, which traces back to a single neuron.

The inhibitory effect is not applied to the inputs of a neuron (its dendrites) but rather to the interaction between firing neurons.

Can we know that for sure?

1 Like

Yes.

Could you elaborate? I'm not sure what you mean. From Neural inhibition - Scholarpedia:

The importance of inhibition in the brain is aptly illustrated by the fact that in addition to excitatory principal cells, the brain contains diverse classes of specialized inhibitory interneurons that selectively innervate specific parts of the somatodendritic surfaces of principal cells and other interneurons. In the cortex, axon terminals of interneurons release gamma amino butyric acid (GABA) onto their synaptic targets, where the inhibitory action can compete with the excitatory forces brought about by the principal cells.

As an aside, as I understand it, the function of inhibitory neurons in HTM is explained as being captured by the local inhibitory process of choosing winning columns in the SP and by the selective activation of predicted neurons in the TM.

2 Likes

Ok, @gmirey, you are correct!

I was thinking in terms of straight HTM, not real neurons. With real neurons, the dendrite sums both excitatory and inhibitory signals.

In that case a strong positive signal would overcome a weak diffuse inhibitory field.

Would it be useful to add inhibitory synapses to the basic HTM model?
That would move from a straight binary model to something else.
Training could also be more complicated.
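For what it's worth, one minimal way to sketch that extension (purely hypothetical parameters, not standard HTM):

```python
# Sketch of one way inhibitory synapses could enter the overlap score
# (a hypothetical extension, not part of standard HTM): inhibitory
# inputs subtract from the excitatory tally before thresholding.

def net_overlap(excitatory_active: int, inhibitory_active: int,
                inhibition_weight: float = 1.0) -> float:
    """Excitatory count minus a weighted inhibitory count."""
    return excitatory_active - inhibition_weight * inhibitory_active

THRESHOLD = 10
print(net_overlap(14, 3) >= THRESHOLD)   # True: a strong signal wins out
print(net_overlap(11, 3) >= THRESHOLD)   # False: diffuse inhibition blocks
```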

1 Like

This is a recent example of that.

https://www.researchgate.net/publication/325792393_Spiking_neural_networks_for_computer_vision

Take a look at section 5.3: the HTM-like synaptic model performs best.

3 Likes