HTM and different Column Layers

I am studying Numenta’s articles. I’ve read “Why neurons have thousands of synapses” [A] and am now reading “How columns learn the structure of the world” [B], and I would like to clarify a few items to make sure we are on the same page.

Initially my assumption was that the HTM model would stay the same across all column layers. But judging by the “How columns learn the structure of the world” [B] article, this is not so. Below are the differences I found. Am I right or not?

  1. According to [A], basal dendrites are trained on the winner cells of the previous time step. However, according to [B], the basal input to layer 4 comes from layer 6, and there is no mention of the basal dendrites also being trained on the winner cells of layer 4 itself. Is it true that basal dendrites learn from the previous time step’s winner cells in layers 2/3, but not in layer 4?

  2. As far as I understood from [A], a fixed number of active columns is selected at each time step (in order to maintain sparsity). However, according to [B], this is not the case for layers 2/3 (the output layer): many cells may be active while the object has not yet been identified, and sparsity emerges only after N sensations, once the object is “recognized”. Is that right?

  3. I didn’t quite understand how exactly a new object is learned if training and recognition are not separated into distinct stages. Specifically:

    3.1 A signal arrives at layer 4 (the input layer), and from there it propagates to layers 2/3 (the output layer), where there is no cell activation, or extremely low activation, if the object is unfamiliar. Is that correct?

    3.2 If the object is unfamiliar, a sparse pattern of cells is created at random, and that pattern represents the new object, for instance a “cube”. Is that right? (I try to sketch my reading of points 2 and 3 right after this list.)
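To make points 2 and 3 concrete, here is a toy sketch of how I currently picture the output layer: activity starts as a union over candidate objects and narrows with each sensation, and an unfamiliar object gets a fresh random sparse code. All names, sizes, and data structures below are my own assumptions, not Numenta’s implementation:

```python
import random

OUTPUT_SIZE = 4096   # cells in the output layer (L2/3)
CODE_SIZE = 40       # cells per object code (~1% sparsity)

# Maps a stored object code (a frozenset of output cells) to the set of
# feature-at-location sensations observed on that object.
object_memory = {}

def candidate_cells(sensation):
    """Union of the codes of every stored object consistent with a sensation."""
    cells = set()
    for code, sensations in object_memory.items():
        if sensation in sensations:
            cells |= code
    return cells

def infer(sensations):
    """Narrow the output-layer activity over a sequence of sensations."""
    active = None
    for s in sensations:
        support = candidate_cells(s)
        # First sensation: activity is the union over all candidate objects,
        # so it can be much denser than a single object code.  Each further
        # sensation keeps only the still-consistent cells, so the pattern
        # sparsens as the object is recognized.
        active = support if active is None else active & support
        if not active:
            return None  # no stored object matches: the object is unfamiliar
    return active

def learn(sensations):
    """Unfamiliar object: pair a fresh random sparse code with its sensations."""
    code = frozenset(random.sample(range(OUTPUT_SIZE), CODE_SIZE))
    object_memory[code] = set(sensations)
    return code
```

In this toy version, `learn([("edge", "corner-1"), ("face", "center")])` would store a new “cube” code, and a later `infer()` over the same sensations would converge back onto exactly that code. Is that roughly the intended mechanism?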

That paper required separate stages for training and testing. However, there are theories of how this could happen in a continuous, online environment.

See:

Földiák, P. (1991). Learning Invariance from Transformation Sequences. Neural Computation, 3(2), 194–200. Physiological Laboratory, University of Cambridge.
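The core of that paper is a trace rule: a Hebbian-style update gated by a short-term running average of each unit’s activity, so that a unit keeps learning across every frame of a transformation sequence it responds to, not just single frames. A minimal sketch, where the winner-take-all competition and all constants are my simplifications of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_units = 64, 8
W = rng.random((n_units, n_inputs))          # feedforward weights
trace = np.zeros(n_units)                    # y_bar: decaying activity trace
alpha, delta = 0.02, 0.8                     # learning rate, trace persistence

def present(x):
    """One input frame: compete, update the trace, apply the trace rule."""
    global trace, W
    y = np.zeros(n_units)
    y[np.argmax(W @ x)] = 1.0                # crude winner-take-all activity
    trace = (1 - delta) * y + delta * trace  # y_bar(t) = (1-d)y(t) + d*y_bar(t-1)
    # Trace rule: dW = alpha * y_bar * (x - W).  Because the trace persists
    # across frames, the winner of an early frame keeps learning on the later
    # frames of the same sequence, which is what yields the invariance.
    W += alpha * trace[:, None] * (x[None, :] - W)
```

Feeding in successive transformed views of the same pattern (e.g. a line sweeping across the input) makes individual units come to respond to all of its positions, with no separate training and testing phases.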



In answer to your first two questions: I think it’s important to understand that those two papers are about different topics. The first paper (A) describes the theoretical capability of neurons to recognize sequences. The second paper (B) is Numenta’s theory about a specific group of neurons (L4 and L2/3), and in that theory (B) those cells are not recognizing sequences!
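To make the difference concrete, here is a runnable toy sketch of the shared activation rule with the two different basal context sources. The function names and the threshold are illustrative assumptions, not Numenta’s code:

```python
THRESHOLD = 2  # active synapses a basal segment needs to depolarize its cell

def predicted_cells(segments, context):
    """Cells whose basal segment sees enough active context cells."""
    return {cell for cell, synapses in segments.items()
            if len(synapses & context) >= THRESHOLD}

def activate(ff_columns, predicted, cells_per_column):
    """The rule from [A]: predicted cells win; otherwise the column bursts."""
    active = set()
    for col in ff_columns:
        col_cells = {(col, i) for i in range(cells_per_column)}
        winners = col_cells & predicted
        active |= winners if winners else col_cells
    return active

# In [A] (sequence memory) the basal context is the layer's OWN activity from
# the previous time step, so depolarization means "expected next in a sequence":
#   active = activate(columns, predicted_cells(segments, previous_active), 32)
#
# In [B] the input layer (L4) uses the SAME activation rule, but its basal
# context is a location signal from L6a, so depolarization means "expected
# feature at this location on the object", and no sequence is involved:
#   active = activate(columns, predicted_cells(segments, location_cells), 16)
```

Same dendritic machinery, different context wiring, hence a different computation.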