A Fast Spatial Pool Learning Algorithm of Hierarchical Temporal Memory Based on Minicolumn’s Self-Nomination

Abstract

As a new type of artificial neural network model, hierarchical temporal memory (HTM) has become a focus of current research and application. Sparse distributed representation is the basis of the HTM model, but existing spatial pool learning algorithms have high training time overhead and may cause the spatial pool to become unstable. To overcome these disadvantages, we propose a fast spatial pool learning algorithm for HTM based on minicolumn self-nomination, where minicolumns are selected according to their load-carrying capacity and synapses are adjusted using compressed encoding. We implemented a prototype of the algorithm and carried out experiments on three datasets. The results verify that the training time overhead of the proposed algorithm is almost unaffected by encoding length, that the spatial pool becomes stable after fewer training iterations, and that training on a new input does not disturb already-trained results.


These are the paper's critiques of the Spatial Pooler:

Secondly, this minicolumn activation rule will cause the spatial pool to become unstable, especially when the input training space is small and the spatial pool capacity is large. This is because the boosting strategy ensures that all minicolumns participate in expressing the input while ignoring the need for an individual input to be stably expressed. For example, if only one input X is trained and the overlap between X's code and y_i is greater than the k-th largest value within N_i = {y_1, y_2, …, y_i, y_{i+1}, …, y_{i+m}}, then y_i will be activated during the early rounds of training. As training goes on, the other minicolumns are seldom activated and their boosting factors become very large; eventually one of them wins the competition and is activated instead. This makes the minicolumn set activated by X unstable, which leads to ineffective distal synaptic connections in the temporal pool.
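For context, the activation rule being criticized is the standard SP winner-take-all with boosting. Here is a minimal sketch under global inhibition; the names and parameter values (`permanences`, `boost`, `k`, `threshold`) are my assumptions for illustration, not the paper's notation:

```python
import numpy as np

def select_active_columns(input_sdr, permanences, boost, k=40, threshold=0.5):
    """Pick the k winning minicolumns by boosted overlap (global inhibition).

    permanences: (n_columns, n_inputs) proximal permanence matrix
    boost:       per-column boost factors, grown for rarely active columns
    """
    connected = (permanences >= threshold).astype(float)  # connected synapses only
    overlaps = connected @ input_sdr.astype(float)        # raw overlap per column
    boosted = overlaps * boost                            # boosting reshapes the ranking
    return np.argsort(boosted)[-k:]                       # top-k boosted columns win
```

The instability the paper describes falls out of the last two lines: with a single repeated input, the losing columns' boost factors keep growing until their boosted overlaps leapfrog the columns that genuinely match, so the winning set churns even though the input never changes.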

IMO having too few input patterns is a contrived and unrealistic scenario. This does not hold water.


Usually, a minicolumn is involved in the expression of multiple inputs. Adjusting a minicolumn's synapses not only enhances its ability to express the current input but also reduces its ability to express other inputs. If a trained input does not participate in SPL for a long time, it will lose the ability to activate the minicolumns. We can call this process forgetting. Forgetting [… is not good for the Spatial Pooler …]
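The forgetting dynamic is easy to see in the standard SP learning step. A minimal sketch, assuming the usual Hebbian-style proximal update (the names `inc`/`dec` stand in for the permanence increment/decrement parameters):

```python
import numpy as np

def learn(input_sdr, permanences, active_columns, inc=0.05, dec=0.008):
    """For each winning column, reinforce synapses to active input bits
    and decay synapses to inactive bits."""
    on = input_sdr.astype(bool)
    for c in active_columns:
        permanences[c, on] = np.minimum(permanences[c, on] + inc, 1.0)
        permanences[c, ~on] = np.maximum(permanences[c, ~on] - dec, 0.0)
```

A column shared by inputs A and B pays for every A update with a small decay of the synapses that encode B; if B stops appearing, those synapses eventually fall below the connection threshold and B can no longer activate the column. That is the forgetting the paper is describing.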

There are many other solutions to the issue of forgetting:

  1. Adjusting the permanence increment and decrement (see the sketch after this list)
  2. Increasing the sparsity of the learning segments
  3. “Uninformative memories will prevail: the storage of correlated representations and its consequences” (PMC)
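To make solution 1 concrete, here is a toy experiment (all values invented for illustration) measuring how many training steps on input A it takes before a column originally tuned to a disjoint input B can no longer be driven by B:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 1024
idx = rng.permutation(n_inputs)
a = np.zeros(n_inputs, dtype=bool); a[idx[:20]] = True   # two disjoint sparse inputs
b = np.zeros(n_inputs, dtype=bool); b[idx[20:40]] = True

def steps_until_b_forgotten(inc, dec, threshold=0.5, max_steps=500):
    p = np.full(n_inputs, 0.3)   # one column's proximal permanences
    p[b] = 0.7                   # the column starts out tuned to B
    for t in range(max_steps):
        p[a] = np.minimum(p[a] + inc, 1.0)    # train only on A ...
        p[~a] = np.maximum(p[~a] - dec, 0.0)  # ... everything else decays
        if not ((p >= threshold) & b).any():  # B has no connected synapses left
            return t + 1
    return max_steps

print(steps_until_b_forgotten(inc=0.05, dec=0.05))   # aggressive decrement: B forgotten in a handful of steps
print(steps_until_b_forgotten(inc=0.05, dec=0.005))  # 10x smaller decrement: forgetting takes ~10x longer
```

Shrinking `dec` relative to `inc` directly trades plasticity for retention; the other two items on the list attack the same trade-off from the representation side.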

Overall they make some interesting recommendations for how to improve the Spatial Pooler algorithm. However, I'm not convinced that their changes are actually helpful, let alone grounded in biology.

I would think of it more as a corner case than an unrealistic scenario. In general, we would prefer an algorithm that has stable behavior under any conditions that might be experienced in production.

I think the real issue with forgetting is not that you lose access to previously stored data in its entirety, but that those specific instances are slowly averaged together with other similar instances.

It’s not like the SP is completely deleting the entire proximal dendrite and forcing a minicolumn to form a brand-new set of proximal connections with every new memory. Rather, the existing set of connections is gradually modified to better select for the specific feature that all inputs similar to the original possess in some significant amount.

For example, let’s say a minicolumn is originally maximally responsive to an edge oriented at 37 degrees. Over time, as it is exposed to more inputs, it might eventually drift to respond more strongly to edges oriented at 35 degrees. The minicolumn hasn’t forgotten about 37-degree edges; it just won’t respond as strongly to them as it did in the beginning.
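That drift can be mimicked with a simple running average. A toy sketch, where the Gaussian tuning curve and every number are invented purely for illustration (this is not the SP's actual update):

```python
import numpy as np

def response(pref_deg, stim_deg, width=3.0):
    """Gaussian orientation tuning curve (illustrative shape only)."""
    return np.exp(-0.5 * ((stim_deg - pref_deg) / width) ** 2)

rng = np.random.default_rng(1)
pref = 37.0                            # column starts maximally tuned to 37 degrees
print(round(response(pref, 37.0), 2))  # 1.0: full response to a 37-degree edge

rate = 0.02                            # small per-exposure adjustment
for _ in range(200):
    stim = rng.normal(35.0, 2.0)       # later inputs cluster around 35 degrees
    pref += rate * (stim - pref)       # running average pulls the tuning over

print(round(pref, 1))                  # ~35: the preference has drifted
print(round(response(pref, 37.0), 2))  # ~0.8: weaker, but 37 degrees isn't erased
```

The final response to the original stimulus is reduced, not deleted, which matches the averaging framing above.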
