A preprint of our new spatial pooler paper is available on bioRxiv. In this paper, we highlight various computational properties of the HTM spatial pooler via simulations.
The paper can be cited as:
Cui Y, Ahmad S, Hawkins J (2016) The HTM Spatial Pooler: a neocortical algorithm for online sparse distributed coding. bioRxiv doi: http://dx.doi.org/10.1101/085035
Abstract
Each region in the cortex receives input through millions of axons from sensory organs and from other cortical regions. It remains a mystery how cortical neurons learn to form specific connections from this large number of unlabeled inputs in order to support further computations. Hierarchical temporal memory (HTM) provides a theoretical framework for understanding the computational principles in the neocortex. In this paper we describe an important component of HTM, the HTM spatial pooler, that models how neurons learn feedforward connections. The spatial pooler converts arbitrary binary input patterns into sparse distributed representations (SDRs) using competitive Hebbian learning rules and homeostatic excitability control mechanisms. Through a series of simulations, we demonstrate the key computational properties of the HTM spatial pooler, including preserving semantic similarity among inputs, fast adaptation to changing statistics of the inputs, improved noise robustness through learning, efficient use of all cells, and flexibility in the event of cell death or loss of input afferents. To quantify these properties, we developed a set of metrics that can be directly measured from the spatial pooler outputs. These metrics can be used as complementary performance indicators for any sparse coding algorithm. We discuss the relationship with neuroscience and previous studies of sparse coding and competitive learning. The HTM spatial pooler represents a neurally inspired algorithm for learning SDRs from noisy data streams online.
Small question: what type of encoder and resolution are you using in Fig. 6? Could those results depend noticeably on the encoder quality? I was unaware of the massive effect that boosting can have.
The boost formula seems a bit "communication"-demanding for biology. Wouldn't a locally based formula be good enough? (i.e., generate the boost factor just from the deviation between the expected sparsity for the whole layer and the activation frequency of the column).
@vpuente I used the scalar encoder for the passenger count, as well as the datetime encoder for the time-of-day and day-of-week information. This is the same as described in our previous paper on sequence memory. You can find the complete set of encoder parameters here.
You can set an "expected" sparsity for the whole layer and use that as the target sparsity for individual columns to speed things up. In practice I find that works well without topology. When topology is enabled, the actual sparsity in a local area could deviate from the overall expected sparsity; for example, this could happen if there is little or no input on one part of the input space. In that case I find it more reliable to estimate the target sparsity from a column's neighbors.
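In pseudo-code the idea looks roughly like this (a plain NumPy sketch, not the NuPIC implementation; the boost strength, toy duty cycles, and neighborhood radius are just placeholders):

```python
import numpy as np

def boost_factors(active_duty_cycles, target_density, boost_strength=3.0):
    """Exponential boosting: columns firing less often than the target density
    get a factor > 1, columns firing more often get a factor < 1."""
    return np.exp(boost_strength * (target_density - active_duty_cycles))

num_columns = 2048
rng = np.random.default_rng(0)
active_duty_cycles = rng.uniform(0.0, 0.05, num_columns)   # toy duty cycles

# Global target: the expected sparsity of the whole layer.
b_global = boost_factors(active_duty_cycles, target_density=0.02)

# Local target (with topology): use the mean duty cycle of each column's
# neighborhood, so a quiet part of the input space is not boosted toward
# the layer-wide average.
radius = 16
local_target = np.array([
    active_duty_cycles[max(0, i - radius): i + radius + 1].mean()
    for i in range(num_columns)
])
b_local = boost_factors(active_duty_cycles, local_target)
```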
I see. Nevertheless, intuitively, if we use local inhibition we might reach equivalent column utilization even with topology. Perhaps the root of the problem is global inhibition? Perhaps local inhibition is the conduit for performing homeostatic control locally... I really like the bio-based approach to explaining boosting.
Considering a model of the neocortex with a hierarchy of spatial poolers, I'm wondering what assumptions and further constraints could be placed on the pooling algorithm, given that regions higher up will receive "ready-made" SDR input, which is not the case in the first layer. Is there any evidence from neuroscience suggesting differences in morphology due to this? I imagine the change could even be somewhat gradual, corresponding to the progressive amount of sparseness achieved.
I have a few questions about this paper. Why are boosting and reinforcement used at the same time, with boosting applied in the SP and reinforcement applied in the TM? In your previous papers you didn't apply boosting, so can we ignore it?
In this paper, you mentioned: "For the continuous scalar value prediction task, we divided the whole range of scalar value into 22 disjoint buckets, and used a single layer feedforward classification network." Why 22 disjoint buckets? Do you have any other documents that explain more about the classification?
For the experiments in the paper, the set of inputs X is always the same size as the set of mini-columns / outputs Y. This is not a requirement of the model, is it? In the brain, would you expect them to be the same?
Regarding the "PotentialInput" function, have you considered skipping the PotentialInput selection and just allowing all inputs to form connections to a mini-column, i.e. p = 1? Btw, the value for p is missing in Table 1.
We'll have to wait for research into hierarchies to happen, but a hierarchical model would presumably include not just spatial poolers but additional processing steps of sequence memory and temporal pooling in each layer, and it is not clear that these wouldn't form denser representations to feed to the next layer.
It is not a requirement. You could think of the SP as a mechanism for generating SDRs with some configurable fixed sparsity, while preserving the semantics of the input space. Its representations can be denser or sparser than the input space, and it can be sized differently (for example, 2048 minicolumns could be connected to a 1000-cell input space).
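To illustrate the point, here is a minimal NumPy sketch (not the NuPIC implementation; the connection density, input density, and random seed are just toy values) where the input width, the number of minicolumns, and the output sparsity are all independent knobs:

```python
import numpy as np

input_size = 1000      # number of input bits
num_columns = 2048     # number of minicolumns; need not match the input size
sparsity = 0.02        # fixed output sparsity, independent of the input density

rng = np.random.default_rng(0)
# Toy connected-synapse matrix: each column connects to a random subset of the input.
connected = (rng.uniform(0, 1, (num_columns, input_size)) > 0.5).astype(int)

input_sdr = (rng.uniform(0, 1, input_size) < 0.3).astype(int)   # a fairly dense input

overlaps = connected @ input_sdr              # feedforward overlap per column
k = int(sparsity * num_columns)               # ~40 winning columns
active_columns = np.argsort(overlaps)[-k:]    # k-winners-take-all (global inhibition)

output_sdr = np.zeros(num_columns, dtype=int)
output_sdr[active_columns] = 1                # 2% sparse output over 2048 columns
```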
The way I see it, temporal density is accounted for by the realization that the input from realistic visual scenes is gathered along an affine manifold. It seems plausible to me that, without some sort of active re-encoding, sequences of associated patterns become internally dissimilar further up the hierarchy, since the concepts the patterns represent will be related in more abstract ways. That is, by the nature of compositionality, temporal density is decreased. Furthermore, I do not see how sequence memory and temporal pooling, as described in HTM, could per se re-introduce density. I'm sure there are other ways to look at it. What mechanisms did you have in mind?
Answering my own question regarding the PotentialInput function: the final version of the paper is out now, with the PotentialInput function being called Π. The table of model parameters (Table 1) shows p = 1, so the answer is "yes".
How are synapses stored in the spatial pooler implementation?
In one of the papers published about the spatial pooler it is mentioned that "The synapses for the i-th SP mini-column are located in a hypercube of the input space."
So here, what is meant by a hypercube of the input space? How are the dimensions of the hypercube decided?
How is the hypercube modeled? Is an OLAP cube used in the implementation?
Are the values of the synaptic permanences of all the potential connections of the minicolumn stored in the same hypercube?
The paper I'm referring to is: The HTM Spatial Pooler: a neocortical algorithm for online sparse distributed coding. http://dx.doi.org/10.1101/085035
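My current guess at what that sentence describes, as a rough sketch (the grid layout, radius, and fraction below are assumptions, not values taken from the implementation): each minicolumn gets a centre in the input space, its potential pool is drawn from a hypercube (a square in this 2-D example) around that centre, and permanences are stored only for that pool, which would not require any OLAP-style cube:

```python
import numpy as np

input_shape = (32, 32)     # 2-D input space; the "hypercube" is a square here
grid = (8, 8)              # minicolumns arranged on an 8x8 grid over the input
potential_radius = 4       # half-width of the hypercube around a column's centre
potential_pct = 0.5        # fraction of inputs in the hypercube kept as potential synapses

rng = np.random.default_rng(0)

def potential_pool(column_index):
    # Map the column onto a centre in the input space (uniform spacing).
    row, col = divmod(column_index, grid[1])
    cy = int((row + 0.5) * input_shape[0] / grid[0])
    cx = int((col + 0.5) * input_shape[1] / grid[1])
    # All input bits inside the hypercube around that centre (clipped at the edges).
    ys = range(max(0, cy - potential_radius), min(input_shape[0], cy + potential_radius + 1))
    xs = range(max(0, cx - potential_radius), min(input_shape[1], cx + potential_radius + 1))
    candidates = [y * input_shape[1] + x for y in ys for x in xs]
    # Keep only a random fraction of them as the potential pool.
    k = int(potential_pct * len(candidates))
    return rng.choice(candidates, size=k, replace=False)

# Permanences are stored only for each column's potential pool, not for the whole input.
permanences = {}
for c in range(grid[0] * grid[1]):
    pool = potential_pool(c)
    permanences[c] = dict(zip(pool.tolist(), rng.uniform(0, 1, len(pool))))
```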
Is this permanence bumping/boosting procedure not significant for the operation of the SP? In the interest of reproducing your results, has permanence bumping/boosting been employed in producing the results in this paper, and most importantly also in the sequence prediction paper, which uses the same spatial pooler?
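(For context on what I mean by permanence bumping: as I understand the open-source implementation, columns whose overlap duty cycle falls below a minimum get all of their permanences increased. A rough sketch, with made-up threshold and increment values:)

```python
import numpy as np

def bump_up_weak_columns(permanences, overlap_duty_cycles, min_duty_cycle, increment=0.01):
    """Raise every permanence of columns whose overlap duty cycle is below the
    minimum, so starved columns eventually form connected synapses again.
    permanences: (num_columns, potential_pool_size) array of permanence values."""
    weak = overlap_duty_cycles < min_duty_cycle
    permanences[weak] = np.clip(permanences[weak] + increment, 0.0, 1.0)
    return permanences
```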