Right way to get output from an HTM system

With Hebbian learning rules (see: spike-timing-dependent plasticity), neurons only learn when they activate. So inhibiting a neuron will also prevent it from learning.

  • Note: I’m ignoring the HTM learning rule’s “predictedSegmentDecrement” because it is often zero, and always at least an order of magnitude smaller than the other parameters. *In general*: neurons only learn when they activate.

The brain uses Reinforcement Learning to control which motor neurons to inhibit, thus controlling motor behavior. For more see:


Thanks @dmac for clarifying that for me. I had been assuming (incorrectly it seems) that dendritic segments learned whenever they succeeded in putting a neuron into a predictive state, regardless of whether that neuron’s prediction was validated or not.

Basically, I was applying the spatial pooler’s learning algorithm to the dendritic segments: just as the proximal connections learn whenever a column activates, I was assuming the distal connections would learn whenever a segment caused its neuron to become predictive.

If a neuron’s dendritic connections don’t learn anything unless the neuron actually activates, I see how that solves the problem I mentioned earlier about “learning to ignore the input”. But it replaces it with a different problem: I don’t see how dendritic segments get “reined in” and stop over-predicting in incorrect contexts, given that they only learn when their neuron activates, and their neuron only activates when its column activates (either with a correct prediction or a burst).

So what causes a penalty / down-regulation for a segment that continually (and incorrectly) predicts the neuron will activate in the next time step?

(I’m specifically speaking about the software neurons modeled in HTM)

Thank you for your help and patience.


That’s the “predictedSegmentDecrement” which I mentioned earlier. When a distal segment predicts that a neuron will activate and the neuron then does not activate, the synapses on that segment are weakened by this decrement. This decrement should be much smaller than the regular learning increment/decrement.

The ratio between the regular “permanenceIncrement” and the “predictedSegmentDecrement” controls how often the prediction needs to come true in order for the TM to keep this segment intact.

  • For example: if permanenceIncrement / predictedSegmentDecrement = 10/1 then the prediction needs to come true at least 1/10th of the time, or else the synapses will get removed.
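A quick way to see that ratio at work is a toy simulation (a minimal sketch, not htm.core code; the parameter values and function name here are illustrative):

```python
import random

permanenceIncrement = 0.10        # applied when the prediction comes true
predictedSegmentDecrement = 0.01  # applied when the prediction fails

def final_permanence(p_correct, cycles=10_000, start=0.5):
    """Track one synapse on a segment that makes a prediction every cycle."""
    perm = start
    for _ in range(cycles):
        if random.random() < p_correct:
            perm = min(1.0, perm + permanenceIncrement)
        else:
            perm = max(0.0, perm - predictedSegmentDecrement)
    return perm

# Break-even: p * inc = (1 - p) * dec  =>  p = dec / (inc + dec), about 1/11.
for p in (0.02, 0.05, 0.09, 0.15, 0.30):
    print(f"correct {p:4.0%} of the time -> permanence {final_permanence(p):.2f}")
```

Segments whose predictions come true well below the break-even rate drift to zero permanence and get removed; those above it stay intact.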

Another factor which probably keeps real synapses from “freaking out” is that they only learn once when presented with a new input. HTM does not implement this, so bear with me here.

In an HTM synapse, when the presynaptic and postsynaptic cells are both active, the synapse is strengthened by “permanenceIncrement”. This learning rule is applied on every cycle.
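Roughly, in code (the names here are assumed for illustration, not htm.core’s actual API):

```python
def update_synapse(perm, presyn_active, postsyn_active, inc=0.10, dec=0.01):
    """Hebbian permanence update, run once per cycle on a learning segment."""
    if postsyn_active:
        if presyn_active:
            perm = min(1.0, perm + inc)  # both sides active: strengthen
        else:
            perm = max(0.0, perm - dec)  # postsynaptic only: weaken
    return perm
```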

Now imagine:

  • You spend 100 minutes staring at a single point on the wall.
  • From a practical training perspective: you’ve seen one image, not 100 minutes worth of different images.
  • HTM thinks you’ve seen 100 minutes of training data, and will learn about each and every input, even though they’re all the same input.
  • With the HTM learning rule: all of the synapses which are active would get incremented to 1 (saturation), and all of the synapses which are inactive would get decremented to 0 (the minimum).
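Driving the per-cycle rule sketched above with a single frozen input shows the saturation (same illustrative numbers, at roughly one learning cycle per second):

```python
# 100 minutes of the same image, one learning cycle per second.
perm_active, perm_inactive = 0.5, 0.5
for _ in range(6000):
    perm_active = min(1.0, perm_active + 0.10)      # presynapse active
    perm_inactive = max(0.0, perm_inactive - 0.01)  # presynapse inactive
print(perm_active, perm_inactive)  # -> 1.0 0.0
```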

The solution is a small modification to the learning rule: instead of directly applying permanence changes to the synapse, look for and filter out identical changes on consecutive cycles (sketched in code after the bullets below).

  • So instead of learning from all 100 minutes of staring at a still image, the synapses should only learn about the image when you first look at it. All further learning updates will be identical so our modified learning rule will filter them out.

  • Back to your problem with the motor cortex “freaking out” because the body is intentionally not moving: even if some synapses are slightly weakened by the unexpected behavior, they won’t be removed by a single “freak out”. It would take many repeated “freak out” sessions to kill a synapse.
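Here is a minimal sketch of that modified rule, assuming each synapse remembers the permanence change applied on the previous cycle (the class and names are mine for illustration, not from any HTM codebase):

```python
class FilteredSynapse:
    def __init__(self, perm=0.5):
        self.perm = perm
        self.last_delta = None  # permanence change applied last cycle

    def learn(self, delta):
        """Apply a permanence change, unless it repeats the previous cycle's."""
        if delta == self.last_delta:
            return  # consecutive duplicate update: filter it out
        self.perm = min(1.0, max(0.0, self.perm + delta))
        self.last_delta = delta

# Staring at the wall: only the first of 6000 identical cycles is learned.
syn = FilteredSynapse()
for _ in range(6000):
    syn.learn(+0.10)
print(syn.perm)  # -> 0.6, not saturated at 1.0
```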

Source of this learning rule:


Thank you @dmac! Your explanation is awesome!
