Temporal pooling and generalization

The sharing of information between cortical columns that represent the same object differently is fascinating, and I don’t understand how it works (even though I understand the algorithm). Maybe someday you can make a video for HTM School showing in more detail what happens in "representation space."
But here’s a thought: even though person A and person B have different representations for “horse”, if you just look at person A, his representation for “horse” is probably closer to his representation for “pony” than it is to his representation for “alligator”. In other words, the horse and pony representations might share more active cells than the horse and alligator representations do - at least if we are just looking at one cortical column.
If that is true, then you can’t just select arbitrary patterns in layer #2. If you did, then by chance “alligator” might end up very similar to “horse”, and very different from “pony”!
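
To make the overlap idea concrete, here is a toy sketch in Python (the SDRs are invented for illustration, not real cortical data): semantic similarity between two representations can be measured as the number of active cells they share.

```python
# Toy SDRs as sets of active cell indices (hypothetical values).
horse     = {3, 17, 42, 88, 120, 256, 301, 512}
pony      = {3, 17, 42, 90, 120, 256, 305, 600}   # shares many cells with horse
alligator = {7, 55, 131, 209, 333, 401, 467, 730} # shares almost none

def overlap(a, b):
    """Number of active cells two SDRs have in common."""
    return len(a & b)

print(overlap(horse, pony))       # 5 -> semantically close
print(overlap(horse, alligator))  # 0 -> semantically distant
```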


It helps me to think of a layer in a cortical column as an independent compute module. It does not know where its dendritic input is coming from, so each layer must learn those signals over time. So it doesn’t really matter how things are represented in the other layers, as long as each compute module represents its own output consistently.
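
Here is a minimal sketch of that framing (my own simplification, not Numenta’s code): the module never interprets its input bits; it only guarantees that the same input pattern always maps to the same output SDR of its own choosing.

```python
import random

class ComputeModule:
    """A layer that maps whatever input patterns it sees to stable outputs."""

    def __init__(self, num_cells=4096, sparsity=0.02, seed=0):
        self.rng = random.Random(seed)
        self.num_cells = num_cells
        self.num_active = int(num_cells * sparsity)
        self.learned = {}  # input pattern -> this module's output SDR

    def compute(self, input_sdr):
        key = frozenset(input_sdr)
        if key not in self.learned:
            # First encounter: pick an arbitrary but fixed output representation.
            cells = self.rng.sample(range(self.num_cells), self.num_active)
            self.learned[key] = frozenset(cells)
        return self.learned[key]
```

Two modules fed differently-encoded inputs for the same object will produce different outputs, but each one’s output is consistent, which is all that downstream layers need.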

Absolutely, this feature is essential.

It must be that way in every cortical column that has learned about alligators and horses. It happens naturally as columns learn the world. Representations associated with horses & ponies will naturally be more similar because the sensations observed about these objects are more similar.
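
A minimal sketch of why that happens (my simplification of the idea, not htmresearch code): if an object’s representation is pooled from the feature/location patterns sensed on it, then objects that share sensations automatically end up with overlapping representations.

```python
def pool(feature_sdrs):
    """Pooled object representation: union of its feature representations."""
    cells = set()
    for sdr in feature_sdrs:
        cells |= sdr
    return cells

# Hypothetical feature SDRs.
fur, mane, hooves = {1, 2}, {3, 4}, {5, 6}
small_hooves      = {5, 11}          # partially similar to hooves
scales, teeth     = {7, 8}, {9, 10}

horse = pool([fur, mane, hooves])        # {1, 2, 3, 4, 5, 6}
pony  = pool([fur, mane, small_hooves])  # {1, 2, 3, 4, 5, 11}
gator = pool([scales, teeth])            # {7, 8, 9, 10}

print(len(horse & pony))   # 5 -> similar sensations, similar representations
print(len(horse & gator))  # 0
```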


If I’m not mistaken, I believe @gidmeister was referring to the algorithm in the currently published paper on SMI, which does a reset and then picks a random SDR to represent each new object in the output layer. Numenta will almost certainly revisit this point in the future and rework it (there have been discussions with Numenta employees about this point on some other threads as well).
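
Here is a sketch of that published behavior as I understand it (simplified): after a reset, the output layer simply samples a fresh random SDR for the next object, so overlap between any two object representations is small no matter how similar the objects are.

```python
import random

rng = random.Random(42)
NUM_CELLS, NUM_ACTIVE = 4096, 40

def new_object_representation():
    """Called after a reset: an arbitrary sparse pattern for the new object."""
    return frozenset(rng.sample(range(NUM_CELLS), NUM_ACTIVE))

horse = new_object_representation()
pony  = new_object_representation()

# Expected overlap of two random 40-of-4096 SDRs is under one cell
# (40 * 40 / 4096 ~= 0.4), regardless of semantic similarity -- which is
# exactly the concern raised above.
print(len(horse & pony))
```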

This is one area that I am experimenting with myself as well. Currently, the most promising strategy I’ve found is one that @dwrrehman described. It relies on an abstraction of SP + TM that doesn’t utilize minicolumns, and it is able to establish semantics between the cells which represent the various feature/locations of an object (which should lead to semantically similar objects having overlap in their representations, as desired). I’ve been a bit busy with work, so I haven’t had as much time as I’d like to devote to it, but I am pretty close to finishing my initial implementation in JavaScript.


In our brain, we have a neuron (or a group of them) which is responsible for the abstract idea of a horse, and it can be fired in many different ways: through an English or French word, a picture, or the smell of its shit. Each of these ways has its own hierarchy, which solves the problems of recognizing complex patterns and of invariance at the same time.

You are totally right to be concerned about the absence of invariance in the current HTM model for spatial patterns. Nevertheless, I believe you should think first of all about the simplest features in the first layer, which can be used as parts of more complex patterns higher in the hierarchy.


Sounds interesting. Are you talking about this discussion? An Apical Depolarization for Numenta: How to Generate the Allocentric Location Signal

Yes, specifically the pooling strategy, which is an extension of doing SP + TM without minicolumns (the difference from an inference layer being that distal connections are formed with active cells in the current timestep instead of the previous timestep… there is more detail in that thread if you read through it).
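
A rough sketch of that distinction (my paraphrase, not @dwrrehman’s actual code): in an inference layer a cell grows distal segments to cells that were active at t-1, while in this pooling variant it grows them to cells active at the same timestep t, so cells that fire together for one object reinforce each other.

```python
def grow_distal_segment(cell, prev_active, curr_active, pooling):
    """Return the set of cells a new distal segment should connect to."""
    source = curr_active if pooling else prev_active
    return set(source) - {cell}  # a cell doesn't synapse onto itself

prev_active = {1, 2, 3}   # cells active at t-1
curr_active = {7, 8, 9}   # cells active at t

# Inference layer: learns temporal context, predicting the next input.
print(grow_distal_segment(7, prev_active, curr_active, pooling=False))  # {1, 2, 3}

# Pooling layer: co-active cells connect, so the object representation
# stabilizes while the sensed features keep changing.
print(grow_distal_segment(7, prev_active, curr_active, pooling=True))   # {8, 9}
```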

It could be that invariance is accomplished that way, by many hierarchies, but I’ve read (I don’t remember where) the idea that at every level of the hierarchy there is some fuzziness (invariance) built in. For instance, at a low level, instead of just responding to a corner at a 90-degree angle, there might be a somewhat lesser response to a corner at an 80-degree angle.
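
A toy model of that built-in fuzziness (my illustration, not from any paper): a feature detector whose response falls off gradually as the input deviates from its preferred value, instead of dropping straight to zero.

```python
import math

def corner_response(angle_deg, preferred=90.0, width=15.0):
    """Gaussian tuning curve around the detector's preferred corner angle."""
    return math.exp(-((angle_deg - preferred) ** 2) / (2 * width ** 2))

print(round(corner_response(90), 2))  # 1.0  -> full response
print(round(corner_response(80), 2))  # 0.8  -> somewhat lesser response
print(round(corner_response(45), 2))  # 0.01 -> essentially none
```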

That’s what I’m talking about - not only the fuzziness, but also the capability to recognize such a corner in different places, rotated, etc.

Hey @rhyolight, could you point me to any pseudo-code (or actual) for this pooling mechanism that’s going on in layer 2? If there’s any available I’m keen to get up to speed on how pooling works as Numenta sees it thus far. Thanks again :smile:

Here’s the code: https://github.com/numenta/htmresearch/blob/40af20a96caa273c2f184006517a71952cda0149/htmresearch/algorithms/column_pooler.py

I created a couple diagrams last year at the bottom of this page: https://github.com/numenta/htmresearch/blob/40af20a96caa273c2f184006517a71952cda0149/projects/l2_pooling/notebooks/SetMemory_similar_objects.ipynb

Here’s one:

[diagram from the SetMemory notebook]
