Temporal Hierarchy

Hello everyone! I wonder if there are implementations of hierarchy that use temporal sequence abstraction in the way described by Jeff Hawkins in his famous book (On Intelligence)? I mean sequence naming: for example, we have a low level that learns sequences of notes, and a higher level that learns sequences of songs, where songs are names for sequences of notes. I know that it’s possible to construct a spatial hierarchy with the Network API, but I haven’t found a way to construct the kind of hierarchy I described with existing tools. There aren’t such implementations, right?


I think you should talk to @DyzLecticus. Here is a thread showing a program that seems to do something like that.


TBT (the Thousand Brains Theory) is not a pure hierarchy, and I don’t think there is a sequence hierarchy either…

TM (Temporal Memory) is sequence based, but it is simply one part of the location-sense loop, whose purpose is to “narrow in” on recognizing an object/concept/label/note, much as gradient descent or Kalman filters do.

Sequences should probably be based on those labels.

On the other hand, you can use the TM as a sequence predictor for notes, just as you would for time series. But AFAIK the TM can’t stay on track over longer spans.
To build the hierarchy you want, you need some way to automatically label the parts of the sequences that are “on track”, then use those labels as input for the next level.
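
Here’s a toy sketch of that auto-labeling idea, purely my own illustration: a trivial first-order predictor stands in for a real TM, and while its predictions keep coming true, we emit one stable label (named after where the run began) that a next level could learn on.

```python
from collections import defaultdict

class ToySequenceMemory:
    """Stand-in for a TM: learns first-order transitions between symbols."""
    def __init__(self):
        self.transitions = defaultdict(set)
        self.prev = None

    def step(self, symbol):
        """Return True if `symbol` was predicted from the previous one."""
        predicted = self.prev is not None and symbol in self.transitions[self.prev]
        if self.prev is not None:
            self.transitions[self.prev].add(symbol)   # learn the transition
        self.prev = symbol
        return predicted

def label_stream(notes, min_run=3):
    """Yield (note, label); the label stays fixed while predictions hold."""
    tm = ToySequenceMemory()
    run_start, run_len = None, 0
    for i, note in enumerate(notes):
        if tm.step(note):
            if run_len == 0:
                run_start = i - 1          # the run began one step earlier
            run_len += 1
        else:
            run_start, run_len = None, 0
        # once a run is long enough, name it by where it began
        label = f"seq@{run_start}" if run_len >= min_run else None
        yield note, label

song = list("CDEFG") * 3                   # a repeating five-note "song"
for note, label in label_stream(song):
    print(note, label)
```

Once the melody repeats and stays predicted, every note comes out tagged with the same label (“seq@5” here), which is exactly the kind of stable symbol a next level could treat as a “song” input.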

I think that is similar to the Temporal Pooler idea that Numenta had in the past.


Hey @whosuka,

What you’re describing has traditionally been called ‘Temporal Pooling’; @mraptor is right.

This kind of functionality is used in some Network API setups found in htmresearch. Here’s one, which creates a 2-region HTM network: one region is a mostly standard TM (‘L4’) that takes input from a sensory stream, and the other (‘L2’) is a pooling layer that takes input from L4.
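
Roughly, the wiring looks like the sketch below. This is just from memory; the region type strings, parameters, and port names are placeholders, so check the actual htmresearch code for the real ones (its custom regions also have to be registered with the Network API first):

```python
import json
from nupic.engine import Network

net = Network()

# 'L4': a (mostly standard) TM region fed by the sensory stream.
net.addRegion("L4", "py.TMRegion", json.dumps({"columnCount": 2048}))

# 'L2': a pooling region that watches L4's cell activity.
net.addRegion("L2", "py.ColumnPoolerRegion", json.dumps({"cellCount": 4096}))

# Feedforward link: L4's active cells drive L2.
net.link("L4", "L2", "UniformLink", "",
         srcOutput="activeCells", destInput="feedforwardInput")

# Feedback link: L2's output returns to L4 as apical input,
# which is what makes this L4 different from a standard TM.
net.link("L2", "L4", "UniformLink", "",
         srcOutput="feedForwardOutput", destInput="apicalInput")
```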

The basic idea (afaik) is that L4 looks for patterns in the raw sensory data, and L2 looks for patterns in L4’s activity.

So if L4 is recognizing notes in a sequence (say, familiar Song A), that recognition creates a particular series of activity in L4, and that series of activity is in turn recognized by L2. So L4 kinda says “I know what the song’s next notes are”, and L2 kinda says “I know which song L4 is recognizing”.

L2 also influences L4 with this recognition, kinda saying to L4 “it looks like you’re recognizing Song A”. This “apical” input from L2 is what makes the L4 different from a standard TM.
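
To make the “L2 names what L4 is doing” part concrete, here’s a toy numpy version of the pooling dynamic. It’s my own simplification, not the ColumnPooler code: one L2 cell set is kept active across a whole song during learning, so every note-pattern gets associated with the same stable “name”.

```python
import numpy as np

rng = np.random.default_rng(42)
N_L4, N_L2, L2_ACTIVE = 512, 256, 20
conn = [set() for _ in range(N_L2)]          # per L2 cell: the L4 bits it has learned

def sdr(key, n=20):
    """A fixed sparse bit-set per key, standing in for L4 cell activity."""
    r = np.random.default_rng(abs(hash(key)) % 2**32)
    return set(r.choice(N_L4, n, replace=False).tolist())

def learn_song(patterns):
    """Keep ONE L2 cell set active for the whole song and associate every
    note-pattern with it (the union that makes the pooled code stable)."""
    cells = set(rng.choice(N_L2, L2_ACTIVE, replace=False).tolist())
    for bits in patterns:
        for c in cells:
            conn[c] |= bits                  # grow connections to this note
    return cells

def infer(bits, threshold=10):
    """Active L2 cells: those whose learned bits overlap the input enough."""
    return {c for c in range(N_L2) if len(conn[c] & bits) >= threshold}

song_a = [sdr(("A", i)) for i in range(6)]   # six "notes" of Song A
name_a = learn_song(song_a)

# While the familiar song plays, L2's representation stays constant:
print(all(infer(bits) == name_a for bits in song_a))   # -> True
```

The real ColumnPooler gets the same effect with competitive activation and persistence rather than an explicit per-song call, but the outcome is the same stable code: one SDR that means “Song A” no matter which note is currently playing.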

To understand how L4 & L2 are connected, see the .link functions:

The ‘ColumnPoolerRegion’ is defined in regions here:

The script which it imports ‘ColumnPooler’ from is here:
