HTM constants confusion


  1. Are the PERMANENCE_THRESHOLD, PERMANENCE_INCREMENT, and PERMANENCE_DECREMENT values the same for synapses on both proximal and distal dendrites? And the same in both the spatial pooler and the temporal memory?

  2. When a proximal dendrite segment is initialized for a mini-column connecting to sensory cells, it is populated with synapses whose permanence values are randomly distributed close to a threshold, inside a range (for example, 0.1 to 0.3). When a distal synapse is initialized for a segment in the temporal memory, what is the initial permanence of that synapse? Maybe the maximum of that range (0.3)?

  3. What is a realistic value for SYNAPSE_SAMPLE_SIZE? What happens once SYNAPSE_SAMPLE_SIZE has been reached? Does the segment stop growing new synapses? If so, how does this affect learning?

If the segment has fewer than SYNAPSE_SAMPLE_SIZE active synapses, grow new synapses to a subset of the winner cells from the previous time step to make up the difference.


The desired number of active synapses on a segment. A learning segment will grow synapses to reach this number. The segment connects to a subset of an SDR, and this is the size of that subset.
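The growth rule quoted above can be sketched roughly as follows. This is an illustrative sketch, not code from any particular HTM library; the data layout (a plain dict of presynaptic cell to permanence) and the default initial permanence are my assumptions.

```python
import random

def grow_synapses(segment, prev_winner_cells, synapse_sample_size,
                  initial_permanence=0.21):
    """Grow new synapses on a learning segment until it has (up to)
    SYNAPSE_SAMPLE_SIZE active synapses.

    segment: dict with a "synapses" map of presynaptic cell -> permanence
    prev_winner_cells: set of winner cells from the previous time step
    """
    # Count synapses whose presynaptic cell was a winner last step.
    n_active = sum(1 for cell in segment["synapses"]
                   if cell in prev_winner_cells)
    n_to_grow = synapse_sample_size - n_active
    if n_to_grow <= 0:
        return  # sample size reached: no new synapses are grown
    # Only grow to winner cells the segment is not yet connected to.
    candidates = [c for c in prev_winner_cells
                  if c not in segment["synapses"]]
    for cell in random.sample(candidates, min(n_to_grow, len(candidates))):
        segment["synapses"][cell] = initial_permanence

# Example: 2 of the segment's synapses are to previous winner cells,
# so with a sample size of 4 it grows 2 more.
segment = {"synapses": {1: 0.5, 2: 0.5, 99: 0.5}}
grow_synapses(segment, prev_winner_cells={1, 2, 3, 4, 5},
              synapse_sample_size=4)
```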

  1. What happens when the maximum number of segments has been grown? Does the cell stop growing new segments? If so, how does this affect learning?

  2. What are some realistic values for LEARNING_THRESHOLD and ACTIVATION_THRESHOLD?


Learning threshold for a segment. If the number of active potential synapses in a segment is ≥ this value, the segment is “matching”, and it is qualified to grow and reinforce synapses to the previous active cells.


Activation threshold for a segment. If the number of active connected synapses in a segment is ≥ this value, the segment is “active”.
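Putting the two quoted definitions together, classifying a segment could look something like this sketch (function and parameter names are mine, for illustration only):

```python
def segment_state(synapses, active_cells, connected_threshold,
                  activation_threshold, learning_threshold):
    """Classify a segment as "active", "matching", or "inactive".

    synapses: map of presynaptic cell -> permanence
    active_cells: set of currently active cells
    """
    # Active potential synapses: any synapse to an active cell,
    # regardless of permanence.
    active_potential = sum(1 for cell in synapses if cell in active_cells)
    # Active connected synapses: permanence at or above the
    # connection threshold.
    active_connected = sum(1 for cell, perm in synapses.items()
                           if cell in active_cells
                           and perm >= connected_threshold)
    if active_connected >= activation_threshold:
        return "active"
    if active_potential >= learning_threshold:
        return "matching"
    return "inactive"

# Example: 4 potential synapses are active, but only 2 are connected,
# so the segment is matching (can learn) but not active (can't predict).
state = segment_state({1: 0.6, 2: 0.6, 3: 0.2, 4: 0.2},
                      active_cells={1, 2, 3, 4},
                      connected_threshold=0.5,
                      activation_threshold=3,
                      learning_threshold=3)
```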

As a general answer to your post:
For determining the default values for the hyperparameters, I always just copy off of The HTM Cheat Sheet.

Some answers that are a bit more specific:

You could set them to the same values, but it wouldn’t hurt to keep them as separate variables, just in case. :wink:

I don’t think you meant this, but still: there are no (random) initial synapses/segments in TM.
Assuming you didn’t mean that: when a new distal synapse grows from a dendritic segment, you would want it to have a permanence very close to zero, though it depends, of course. :grin:
The reasoning is that the input could have been noise, or a very short trend that might never happen again.
An initial permanence close to zero requires a few extra encounters of the same (temporal) pattern before the synapse connects, which makes sure the pattern was indeed not noise and is persistent.
But if you want your system to be more responsive and catch patterns that come and go rather rapidly (which isn’t ideal for HTM, I don’t think), set it to a high value close to the connection threshold.
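To make that trade-off concrete, here is a toy calculation (the specific numbers are illustrative, and noise-driven decrements are deliberately ignored):

```python
def encounters_to_connect(initial_permanence, connected_threshold, increment):
    """Count how many reinforcements a new synapse needs before its
    permanence crosses the connection threshold (ignoring decrements)."""
    permanence = initial_permanence
    encounters = 0
    while permanence < connected_threshold:
        permanence += increment
        encounters += 1
    return encounters

# Near-zero initial permanence: the pattern must repeat several times
# before the synapse counts as connected.
slow = encounters_to_connect(0.05, 0.5, 0.1)
# Initial permanence close to the threshold: one repeat is enough,
# so the system latches onto (possibly noisy) patterns quickly.
fast = encounters_to_connect(0.45, 0.5, 0.1)
```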

The hyperparameters depend a lot on what you want to do with what data. :slightly_smiling_face:

Yes, it doesn’t grow any new synapses if the number of active synapses equals or exceeds SYNAPSE_SAMPLE_SIZE, but that doesn’t directly cap the total number of synapses on a segment, since the segment could also be looking for one or two more distinct (spatial) patterns.
And no, it doesn’t affect learning much, because of the extreme sparsity. This is why HTM can do sub-sampling (keeping only the relevant synapses, up to SYNAPSE_SAMPLE_SIZE): the sub-samples are sufficient and robust for detecting the input (the context, in this case) that the segment is looking for.
Sparsity makes sure that two patterns don’t overlap in ways the system can’t tell apart, and that is what allows sub-sampling to be robust enough for practice. At a fraction of the memory and computation cost, you can safely assume a match references a unique pattern.
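The robustness of sub-sampling can be checked with a quick hypergeometric calculation: the chance that a random sparse SDR activates a segment that sub-samples only part of a pattern. The specific numbers below are illustrative, chosen to be in the ballpark of common HTM settings:

```python
from math import comb

def false_match_probability(n, w, sample, theta):
    """Probability that a random SDR with w active bits out of n
    overlaps a fixed sub-sample of `sample` bits in >= theta
    positions (a hypergeometric tail)."""
    total = comb(n, w)
    hits = sum(comb(sample, k) * comb(n - sample, w - k)
               for k in range(theta, sample + 1))
    return hits / total

# 2048 cells, 41 active (~2% sparsity), a segment sub-sampling 25
# cells of a pattern, activation threshold 15: the chance of a
# false match is astronomically small.
p = false_match_probability(2048, 41, 25, 15)
```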


Incredibly helpful.

HTM Implementation Parameters

  • Num Columns (N): 2048
  • Num Cells per Column (M): 32
  • Num of Active Bits (w): 41
  • Sparsity (w/N): 2%
  • Dendritic Segment Activation Threshold (θ): 15
  • Initial Synaptic Permanence: 0.21
  • Connection Threshold for Synaptic Permanence: 0.5
  • Synaptic Permanence Increment and Decrement: +/- 0.1
  • Synaptic Permanence Decrement for Predicted Inactive Segments: 0.01
  • Maximum Number of Segments per Cell: 128
  • Maximum Number of Synapses per Segment: 128
  • Maximum Number of New Synapses Added at each Step: 32
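For reference, the list above transcribes directly into a config block. The parameter names here are mine, not from any particular library; the values are taken verbatim from the list:

```python
# Hypothetical parameter names; values copied from the list above.
HTM_PARAMS = {
    "num_columns": 2048,             # N
    "cells_per_column": 32,          # M
    "num_active_bits": 41,           # w
    "sparsity": 41 / 2048,           # w/N, roughly 2%
    "activation_threshold": 15,      # theta
    "initial_permanence": 0.21,
    "connected_permanence": 0.5,     # connection threshold
    "permanence_increment": 0.1,
    "permanence_decrement": 0.1,
    "predicted_decrement": 0.01,     # predicted-but-inactive punishment
    "max_segments_per_cell": 128,
    "max_synapses_per_segment": 128,
    "max_new_synapses_per_step": 32,
}
```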

Still, I can’t find a value for LEARNING_THRESHOLD, which is the number of active potential synapses needed for a segment to be “matching”. Any idea what this could be?

I assume this refers to the SYNAPSE_SAMPLE_SIZE:

  • Maximum Number of New Synapses Added at each Step: 32