How to deal with different timescales for HTM

I’m wondering what thoughts everyone has on dealing with different timescales in the inputs to an HTM. The brain doesn’t know ahead of time what timescale events occur on, yet it does a good job of figuring it out for all sorts of events (e.g., speech or reading < 1 s, walking ~1 s, etc.), despite processing inputs roughly every few milliseconds. But if I were to run a temporal memory program that sees a thousand SDRs a second and needs to understand events that happen on a timescale of seconds, all context would probably be lost between events. So my question is: how do you preserve context across gaps that are very long relative to individual inputs?

My first thought is perhaps a second, ‘slow’ TM that reads in several concatenated outputs from the ‘fast’ TM at a time. But is there any evidence that the brain has multiple systems working at different timescales like this?
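
Sketching what I mean (all names, sizes, and the window length here are made up, just to make the idea concrete):

```python
# Toy sketch of the two-timescale idea above: a "slow" TM consumes the
# concatenated outputs of a "fast" TM once per window of fast steps.
# StubTM and all sizes are placeholders, not NuPIC code.

N_CELLS = 2048   # cells per level (assumed)
WINDOW = 10      # fast steps per slow step (assumed, tune per task)

class StubTM:
    """Stand-in with a TM-like interface: takes an SDR (a set of ints)
    and returns a set of active-cell indices."""
    def compute(self, sdr, learn=True):
        return {i % N_CELLS for i in sdr}   # placeholder for real TM dynamics

fast_tm, slow_tm = StubTM(), StubTM()
window_buffer = []

def step(input_sdr):
    """Feed one SDR to the fast TM; every WINDOW steps, concatenate the
    buffered fast outputs into one wide SDR and feed it to the slow TM."""
    window_buffer.append(fast_tm.compute(input_sdr))
    if len(window_buffer) == WINDOW:
        # Offset each fast output into its own slot, so the slow TM sees
        # a single (WINDOW * N_CELLS)-wide SDR per window.
        concat = {i * N_CELLS + c
                  for i, cells in enumerate(window_buffer) for c in cells}
        slow_tm.compute(concat)
        window_buffer.clear()
```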

Subsampling.
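
Concretely, something like this (a minimal sketch; the region interface and decimation factor are placeholders):

```python
def run(sdr_stream, fast_region, slow_region, decimation=50):
    """Feed every SDR to the fast region, and only every `decimation`-th
    SDR to the slow one: e.g. 1000 SDRs/sec in -> 20 SDRs/sec slow."""
    for t, sdr in enumerate(sdr_stream):
        fast_region.compute(sdr)
        if t % decimation == 0:
            slow_region.compute(sdr)
```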

From an engineering perspective that makes sense, but I’m not sure that’s compatible with a brain-inspired implementation. There must be some other way to preserve context over various timescales while still maintaining a single sampling frequency. I could be wrong, though. Maybe the brain does do subsampling as part of the upstream data flow?

This sounds to me like Temporal Pooling (TP).
In case you’re not familiar: a TP region (as I understand it) is one that monitors another region and stabilizes when that region is seeing a familiar pattern.
The monitored region (maybe Layer 4) is the one that processes the raw sensory data, as is typically done in applications.

So if a sensory region sees the familiar sequence “A,B,C,…X,Y,Z” it will precisely predict each next element, taking on a different activation state (a different specific set of active cells) for each letter.

The TP region monitoring the sensory region, however, would theoretically maintain a single activation state throughout the entire familiar sequence. That activation would basically represent “alphabet sequence” rather than the constituent letters.

I don’t know what the neuroscience evidence is, but I believe it does exist. I believe this TP-style region is encapsulated in Layer 2/3 of Numenta’s model of the macro cortical column. The different layers use the same core mechanisms (TM-style distal learning and SP-style activation), but differ in where their input comes from and where their outputs go (which other regions).
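
To illustrate just the behavior (not Numenta’s actual algorithm), a toy pooler might look like this:

```python
import random

class ToyPooler:
    """Toy illustration of the TP behavior described above: hold a stable
    output SDR while the monitored layer is predicting correctly, and pick
    a fresh one when the sequence breaks. Not Numenta's algorithm."""

    def __init__(self, n_cells=2048, sparsity=0.02, seed=42):
        self.n_cells = n_cells
        self.n_active = int(n_cells * sparsity)
        self.rng = random.Random(seed)
        self.output = self._new_sdr()

    def _new_sdr(self):
        return frozenset(self.rng.sample(range(self.n_cells), self.n_active))

    def compute(self, sequence_was_predicted):
        # Stable output across a familiar (correctly predicted) sequence;
        # a surprise (burst) in the monitored layer resets the representation.
        if not sequence_was_predicted:
            self.output = self._new_sdr()
        return self.output

pooler = ToyPooler()
# "A,B,C" familiar -> same output three times; a surprise -> new output.
for predicted in [True, True, True, False]:
    print(sorted(pooler.compute(predicted))[:5])
```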

This kind of multi-layer system is enacted with the Network API here:

To those familiar with the macro column model, please correct any faults/gaps here!

Ahhhhh, that makes a lot of sense. So you have something akin to an SP, but its minicolumn analogues respond similarly to any of many unique SDRs in the TM that provides the input, as long as those SDRs are part of the same sequence. Is that right?

Yes (as far as I understand), and the TP region’s cells can also depolarize cells in the sensory region through apical feedback. That is enacted here:

In the htmresearch repo (https://github.com/numenta/htmresearch/tree/master/htmresearch/algorithms) there are also temporal pooling scripts; I believe the most current one is the “union_temporal_pooler”. I haven’t delved into it yet, though it’s where I’d look to drill down on the TP functionality as currently implemented.
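
From the name alone, I’d guess the union pooling idea looks loosely like this (the persistence/decay scheme and parameters are my guesses, not the union_temporal_pooler’s actual code):

```python
def union_pool(active_cell_stream, decay=0.9, boost=1.0, top_k=40):
    """Loose sketch of union pooling: pooled cells accumulate persistence
    from the active cells below and decay over time, so the pooled SDR
    changes slowly relative to its input. Yields one pooled SDR per step."""
    persistence = {}  # cell index -> pooling activation
    for active_cells in active_cell_stream:
        # Decay all traces, dropping the (near-)dead ones.
        persistence = {c: p * decay for c, p in persistence.items() if p > 0.01}
        # Reinforce cells that are currently active below.
        for c in active_cells:
            persistence[c] = persistence.get(c, 0.0) + boost
        # The pooled SDR is the top-k most persistent cells; it overlaps
        # heavily from step to step, i.e. it is temporally stable.
        yield frozenset(sorted(persistence, key=persistence.get, reverse=True)[:top_k])
```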

Great! I’m looking forward to adding TPs to my library. I haven’t had this much fun coding in like 6 years :smile:

Sounds great; I’d be curious how your pooling algorithm compares to current implementations.

Glad to hear it :+1:t2:

You might find this interesting:

If I understand this correctly, the distribution over multiple areas is important for encoding multiple timescales. In HTM, this goes to the H (hierarchy) of HTM.
