The HTM Spatial Pooler: a neocortical algorithm for online sparse distributed coding

A preprint of our new spatial pooler paper is available on bioRxiv. In this paper, we highlight various computational properties of the HTM spatial pooler via simulations.

The paper can be cited as
Cui Y, Ahmad S, Hawkins J (2016) The HTM Spatial Pooler: a neocortical algorithm for online sparse distributed coding. bioRxiv doi: http://dx.doi.org/10.1101/085035

Abstract

Each region in the cortex receives input through millions of axons from sensory organs and from other cortical regions. It remains a mystery how cortical neurons learn to form specific connections from this large number of unlabeled inputs in order to support further computations. Hierarchical temporal memory (HTM) provides a theoretical framework for understanding the computational principles in the neocortex. In this paper we describe an important component of HTM, the HTM spatial pooler, which models how neurons learn feedforward connections. The spatial pooler converts arbitrary binary input patterns into sparse distributed representations (SDRs) using competitive Hebbian learning rules and homeostatic excitability control mechanisms. Through a series of simulations, we demonstrate the key computational properties of the HTM spatial pooler, including preserving semantic similarity among inputs, fast adaptation to changing statistics of the inputs, improved noise robustness over learning, efficient use of all cells, and flexibility in the event of cell death or loss of input afferents. To quantify these properties, we developed a set of metrics that can be directly measured from the spatial pooler outputs. These metrics can be used as complementary performance indicators for any sparse coding algorithm. We discuss the relationship with neuroscience and previous studies of sparse coding and competitive learning. The HTM spatial pooler represents a neurally inspired algorithm for learning SDRs from noisy data streams online.

14 Likes

Thanks for the paper Yuwei.

Small question: what type of encoder and resolution are you using in Fig. 6? Could the encoder quality have a noticeable effect on those results? I was unaware of the massive effect that boosting can have.

The boost formula seems a bit demanding in terms of “communication” for biology. Wouldn’t a purely local formula be good enough? (i.e., generate the boost factor just from the deviation between the expected sparsity for the whole layer and the activation frequency of the column).

Thanks

Valentin

1 Like

@vpuente I used the scalar encoder for the passenger count, as well as the datetime encoder for the time-of-day and day-of-week information. This is the same as described in our previous paper on sequence memory. You can find the complete set of encoder parameters here.
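For anyone curious how a scalar encoder behaves, here is a minimal, simplified sketch (illustrative only; the actual encoders and their parameters are the ones in the linked configuration, not these made-up values):

```python
import numpy as np

def encode_scalar(value, min_val, max_val, n=400, w=21):
    """Simplified scalar encoder sketch: map a value in [min_val, max_val]
    to an n-bit vector with a contiguous block of w active bits.
    (Illustrative only; not the exact encoder parameters used in the paper.)"""
    value = max(min_val, min(value, max_val))
    # index of the first active bit, scaled across the available positions
    start = int(round((value - min_val) / (max_val - min_val) * (n - w)))
    output = np.zeros(n, dtype=np.uint8)
    output[start:start + w] = 1
    return output

# Nearby passenger counts produce overlapping encodings:
a = encode_scalar(10000, 0, 40000)
b = encode_scalar(10500, 0, 40000)
print(np.sum(a & b))  # high overlap for similar values
```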

You can set an “expected” sparsity for the whole layer and use that as the target sparsity for individual columns to speed things up. In practice I find that this works when topology is not used. When topology is enabled, the actual sparsity in a local area can deviate from the overall expected sparsity; for example, this can happen if there is little or no input on one part of the input space. In that case I find it more reliable to estimate the target sparsity from a column’s neighbors.
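For reference, the global form of the homeostatic boosting update is roughly the following (a sketch from memory, not the exact implementation; the duty cycles are exponentially smoothed activation frequencies):

```python
import numpy as np

def update_boost_factors(active_duty_cycles, target_density, boost_strength):
    # Columns firing below the target density get their overlaps boosted;
    # columns firing above it get suppressed. With topology, target_density
    # can instead be estimated per column from the mean duty cycle of its
    # neighbors, as described above.
    return np.exp(boost_strength * (target_density - active_duty_cycles))
```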

1 Like

I see. Nevertheless, intuitively, if we use local inhibition we might reach equivalent column utilization even with topology. Perhaps the root of the problem is global inhibition? Perhaps local inhibition is the conduit for performing homeostatic control locally… I really like the bio approach to explaining the boosting :slight_smile:

1 Like

Are the slides available from your presentation here: https://www.youtube.com/watch?v=1r6GxDsEdd0

1 Like

The slides are available here

2 Likes

Thanks @ycui for sharing the slides and @jacobeverist for asking about them. I’ve added them to our SlideShare account as well:

HTM Spatial Pooler from Numenta

1 Like

Thanks!

Considering a model of the neocortex with hierarchical orders of spatial poolers, I’m wondering what assumptions and further constraints could be made on the pooling algorithm, given that regions higher up will receive “ready-made” SDR input, which is not the case in the first layer. Is there any evidence from neuroscience suggesting a difference in morphology due to this? I imagine the change could even be somewhat gradual, corresponding to the progressive amount of sparseness achieved.

1 Like

Hi @subutai,

I have a few questions about this paper. Why are we using boosting and reinforcement at the same time, with boosting applied in the SP and reinforcement applied in the TM? In your previous papers you didn’t apply boosting, so can we ignore it?

In this paper, you mentioned: “For the continuous scalar value prediction task, we divided the whole range of scalar value into 22 disjoint buckets, and used a single layer feedforward classification network.” Why 22 disjoint buckets? Do you have any other documents that explain more about the classification?
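Just to check my understanding, is the bucketing step simply something like this (my guess, not taken from the paper)?

```python
# My guess at the bucketing step (not from the paper): divide the observed
# range of the scalar into 22 equal-width, disjoint buckets and use the
# bucket index as the classification target.
def bucket_index(value, min_val, max_val, num_buckets=22):
    value = max(min_val, min(value, max_val))
    width = (max_val - min_val) / num_buckets
    return min(int((value - min_val) / width), num_buckets - 1)
```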

Thanks,

1 Like

Hi @ycui,

  • For the experiments in the paper, the set of inputs X is always the same size as the set of mini-columns / outputs Y. This is not a requirement of the model, is it? In the brain, would you expect them to be the same?
  • Regarding the “PotentialInput” function, have you considered skipping the PotentialInput selection and just allowing all inputs to form connections to a mini-column, i.e. p = 1? Btw, the value for p is missing in Table 1.

Regards
– Rik

1 Like

We’ll have to wait for research into hierarchies to happen, but a hierarchical model would presumably include not just spatial poolers but also additional processing steps of sequence memory and temporal pooling in each layer, and it is not clear that these wouldn’t form denser representations to feed to the next layer.

1 Like

It is not a requirement. You could think of the SP as a mechanism for generating SDRs with some configurable fixed sparsity, while preserving the semantics of the input space. Its representations can be denser or sparser than the input space, and it can be sized differently (for example, 2048 minicolumns could be connected to a 1000-cell input space).
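As a toy illustration of that sizing freedom (a plain NumPy sketch of a k-winners competition, not the actual SP implementation):

```python
import numpy as np

# Toy sketch (not the actual SP code): a 1000-bit input feeds 2048 minicolumns,
# and the output sparsity is fixed at 2% regardless of the input's density.
rng = np.random.default_rng(0)
n_inputs, n_columns, sparsity = 1000, 2048, 0.02

# Stand-in for learned connected synapses: random binary connections.
connections = (rng.random((n_columns, n_inputs)) < 0.5).astype(np.int32)

input_sdr = (rng.random(n_inputs) < 0.3).astype(np.int32)  # fairly dense input
overlaps = connections.dot(input_sdr)                      # per-column overlap

k = int(sparsity * n_columns)                 # about 40 winning columns
active_columns = np.argsort(overlaps)[-k:]    # k-winners-take-all
print(len(active_columns), "active columns out of", n_columns)
```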

The way I see it, temporal density is accounted for in the realization that the input of realistic visual scenes is gathered along an affine manifold. It seems plausible to me that, without some sort of active re-encoding, sequences of associated patterns become internally dissimilar further up the hierarchy since the concepts the patterns represent will be related in more abstract ways. That is, by the nature of compositionality, temporal density is decreased. Furthermore I do not see how sequence memory and temporal pooling, as it is described in HTM, per se could re-introduce density. I’m sure there are other ways to look at it. What mechanisms did you have in mind?

1 Like

Answering my own question regarding the PotentialInput function: the final version of the paper is out now, with the PotentialInput function called Π. The table of model parameters (Table 1) shows “p = 1”, so the answer is “yes”.

1 Like

Link to the final official version:

1 Like

How are synapses stored in the spatial pooler implementation?

In one of the papers published about the spatial pooler it is mentioned that “The synapses for the ‘i’ th SP mini-column are located in a hypercube of the input space.”
So here, what is meant by a hypercube of the input space? How are the dimensions of the hypercube decided?
How is the hypercube modeled? Is an OLAP cube used in the implementation?
Are the values of the synaptic permanences of all the potential connections of the minicolumn stored in the same hypercube?

The paper I’m referring to is: The HTM Spatial Pooler – a neocortical algorithm for online sparse distributed coding. http://dx.doi.org/10.1101/085035
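To make the question concrete, is the potential pool constructed roughly like this (my guess from the paper’s wording, not the actual implementation)?

```python
import numpy as np

# Guess at the "hypercube" step (2D input space for illustration): each
# minicolumn has a natural center in the input space, and its potential
# synapses are sampled only from inputs inside a hypercube (here a square)
# of side 2 * potential_radius around that center.
def potential_pool(center, input_shape, potential_radius, potential_pct, rng):
    rows = np.arange(max(0, center[0] - potential_radius),
                     min(input_shape[0], center[0] + potential_radius + 1))
    cols = np.arange(max(0, center[1] - potential_radius),
                     min(input_shape[1], center[1] + potential_radius + 1))
    hypercube = [(r, c) for r in rows for c in cols]
    n_sample = int(round(potential_pct * len(hypercube)))
    idx = rng.choice(len(hypercube), size=n_sample, replace=False)
    return [hypercube[i] for i in idx]

pool = potential_pool((16, 16), (32, 32), potential_radius=4,
                      potential_pct=0.5, rng=np.random.default_rng(0))
```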

2 Likes

@nivedita I moved your thread here, where there is already discussion of this paper.

@ycui,

I am troubled by the absence of a discussion of the concepts of overlap duty cycles and boosting of permanences (not to be confused with boosting of overlaps), as discussed in BAMI under the “Column Activity” header and implemented as “bumpUpWeakColumns” in nupic.
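For concreteness, the rule I mean is roughly the following (my own paraphrase from memory of the nupic code, so the details may be off):

```python
import numpy as np

def bump_up_weak_columns(permanences, potential_mask,
                         overlap_duty_cycles, min_overlap_duty_cycles,
                         perm_increment):
    """Paraphrase of the permanence-bumping rule: columns whose overlap duty
    cycle falls below their minimum get all permanences in their potential
    pool increased, so they can become competitive again."""
    weak = overlap_duty_cycles < min_overlap_duty_cycles
    permanences = permanences.copy()
    permanences[weak] += perm_increment * potential_mask[weak]
    return np.clip(permanences, 0.0, 1.0)
```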

Is this permanence bumping/boosting procedure not significant for the operation of the SP? In the interest of reproducing your results, has permanence bumping/boosting been employed in producing the results in this paper, and, most importantly, also in the sequence prediction paper, which uses the same spatial pooler?

Thanks.

– Rik

2 Likes

@subutai, @jhawkins,

I guess @ycui is not active on this forum these days; maybe you could help with the below? Thanks.

1 Like