How are Inhibitory Neurons in the brain accounted for in HTM theory?


My question is: if the HTM system is modeled as non-spiking, whereas the real brain is a spiking system which also has inhibitory neurons, how is this accounted for in HTM?


HTM has several steps of winners-take-all competition. In the spatial pooler, a winners-take-all competition selects a sparse set of maximally activated columns, and in the temporal memory, a winners-take-all competition selects the cells that were predicted in each column.

These competitive steps are implemented in the brain by inhibitory neurons that inhibit weakly-activated cells in response to the firing of strongly activated cells.
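To make that concrete: here is a minimal Python/NumPy sketch of a k-winners-take-all step of the kind described above (the function name and toy overlap scores are mine, not from NuPIC):

```python
import numpy as np

def k_winners_take_all(overlaps, k):
    """Keep only the k most-activated columns; all others are
    'inhibited' (forced inactive), analogous to fast lateral
    inhibition by interneurons."""
    # argpartition finds the indices of the k largest overlaps
    # without a full sort
    winners = np.argpartition(overlaps, -k)[-k:]
    active = np.zeros(len(overlaps), dtype=bool)
    active[winners] = True
    return active

overlaps = np.array([3, 9, 1, 7, 5, 2])
print(k_winners_take_all(overlaps, k=2))  # columns 1 and 3 win
```

Note that no inhibitory cells appear anywhere in the code; the competition itself is the model of their effect.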


I don’t know if HTM fully takes into consideration the fact that there are multiple classes of inhibitory neurons in the cortex. Those interneurons appear to be quite different in distribution and characteristics [1]. In particular, L1-L2/3 seems to have a combination of two interneuron types (called VIP and non-VIP), L4 has mostly PV interneurons, and L5b-L6 has a mix of PV and SST (see fig 2 in [1]).

The SP seems to fit PV characteristics (large, low input resistance, targets pyramidal cells, etc.). In contrast, VIP interneurons target other interneurons, have a large input resistance, etc. Are those VIP cells useful for averaging activity across time to narrow down a stable “temporal” pooling? Perhaps this is important… perhaps not…

[1] R. Tremblay, S. Lee, and B. Rudy, “GABAergic Interneurons in the Neocortex: From Cellular Properties to Circuits,” Neuron, vol. 91, no. 2, pp. 260–292, 2016.


There are many types of inhibitory neurons. As a group they comprise about 20% of the neurons in the neocortex. None are known to send an axon outside of the local region/column, and therefore the general belief is that they are not part of the flow of information but instead play a regulatory role. We don’t believe it is that simple.

As Jake points out, HTM requires fast inhibition in pretty much everything it does. The spatial pooler also requires minicolumn-to-minicolumn inhibition, and it requires a special type of double inhibition to force all the cells in a minicolumn to learn the same basic receptive field. I have spent some time looking through the literature to see if inhibitory neurons can indeed do these things. For example, some inhibitory neurons that are believed to enforce minicolumns also have learned receptive fields similar to those of the excitatory cells in the minicolumn, which is what the SP needs. So far we have found what we need, but the literature on inhibitory neurons is not very complete.

We have not written up these findings anywhere. When creating software models we don’t need to model the inhibitory neurons per se; we achieve the equivalent result via various coded rules. For example, we can enforce sparsity with a few lines of code rather than model the inhibitory neurons that achieve this in the cortex.
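To illustrate the "coded rule" point, here is a hedged sketch of minicolumn-to-minicolumn inhibition as a few lines of code, assuming a 1-D column layout (the function name and neighborhood scheme are illustrative, not Numenta's implementation):

```python
import numpy as np

def local_inhibition(overlaps, radius, num_winners):
    """A column stays active only if its overlap ranks among the top
    num_winners within +-radius neighboring columns; every other
    column is 'inhibited' by the rule itself."""
    n = len(overlaps)
    active = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        # the num_winners-th largest overlap in the neighborhood
        # sets the local inhibition threshold
        kth = np.sort(overlaps[lo:hi])[-num_winners]
        active[i] = overlaps[i] >= kth
    return active
```

The rule produces the same sparse, locally competitive activity that inhibitory circuitry produces in the cortex, without any inhibitory cells in the model.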


There are multiple schools of thought about spiking.

  1. Individual spikes and the intervals between them matter; they carry information. The main argument for this is that it allows more information to be conveyed.
  2. The rate of spiking is what matters; this allows the neuron to send a scalar value.
  3. Neurons have different modes of spiking, such as minibursts vs. regular spiking.
  4. Neurons are binary: either they are relatively active or mostly inactive.
  5. Neurons have another depolarized state, which plays a key role in network activity.

#1 is not supported by biological observation.
There is biological support for #2 #3 #4 and #5.
HTM theory introduced #5 and also relies on #3 and #4.
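As a rough illustration of how #3, #4, and #5 combine in one HTM minicolumn (this is my own simplified sketch, not Numenta code; the names and cell count are made up):

```python
def update_column(predictive_cells, feedforward, num_cells=32):
    """One minicolumn of binary cells (#4) with an internal
    depolarized/'predictive' state (#5). If feedforward input
    arrives and some cells were depolarized, only those fire;
    with no prediction, every cell bursts (#3)."""
    if not feedforward:
        return set()                  # column stays silent
    if predictive_cells:
        return set(predictive_cells)  # predicted cells fire first
    return set(range(num_cells))     # unpredicted input: whole column bursts
```

The key point is that the output at each step is still binary per cell; the depolarized state is internal and only biases which cells fire next.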

Our approach is to introduce biological detail when the theory requires it. So far we have not found a need for #2. Even if the brain uses #2, SDRs can achieve the same result without rate encoding. However, just last week we found a theoretical need for rate encoding (#2) in how grid cells learn. It came from reading the SLAM paper Jake recommended. If we implement this learning feature we may still decide to model the feature using neurons without rate encoding, because…

There are many people interested in building HW for AI and machine intelligence. When creating HW there is a huge advantage to making neuron output binary, or multi-state, but not scalar.


@jhawkins I think HTM could also support spikes if we fed all spikes into an integrator that converts the spikes and the time intervals between them into scalar values, or else we would need a special spike encoder!
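Something like that integrator idea can be sketched as a leaky integrator that turns a spike train (and its inter-spike intervals) into a single scalar; this is just an illustration of the concept, not part of HTM, and the name and time constant are arbitrary:

```python
import math

def leaky_integrate(spike_times, t_end, tau=0.05):
    """Convert a spike train to a scalar: each spike adds 1 to a
    trace that decays exponentially with time constant tau, so
    closely spaced spikes yield a larger readout at t_end."""
    v, t_prev = 0.0, 0.0
    for t in sorted(spike_times):
        v = v * math.exp(-(t - t_prev) / tau) + 1.0
        t_prev = t
    # decay from the last spike to the readout time
    return v * math.exp(-(t_end - t_prev) / tau)
```

The scalar could then be fed to an ordinary scalar encoder to produce an SDR.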


Hey Deric, in response to your withdrawn question about active dendrite properties: check out Jeff and Subutai’s paper on exactly that:

Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex.

In short, local regions of the dendrite can act as individual nonlinear pattern detectors, allowing each cell to predict its own activity by learning hundreds of different sparse patterns.
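A toy sketch of that idea (my own simplified version, with made-up names and thresholds): each dendritic segment is an all-or-none detector of one sparse pattern, and the cell is depolarized if any of its segments matches:

```python
def segment_predicts(active_cells, segment_synapses, threshold=10):
    """A dendritic segment 'recognizes' a sparse pattern when enough
    of its synapses connect to currently active cells; the match is
    all-or-none once the threshold is crossed (an NMDA-spike-like
    nonlinearity)."""
    overlap = len(active_cells & segment_synapses)
    return overlap >= threshold

def cell_is_predicted(active_cells, segments, threshold=10):
    # A cell with hundreds of segments is depolarized (predictive)
    # if ANY one of its segments matches the current activity
    return any(segment_predicts(active_cells, s, threshold)
               for s in segments)
```

With hundreds of segments per cell, each cell can recognize hundreds of distinct sparse contexts, which is the mechanism behind the paper's title.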


Thanks Jake, that clears up most of my questions.