ColumnPoolerRegion vs TemporalPoolerRegion


#1

What is the difference between column and temporal poolers of htmresearch.regions? Is it the number of feed-forward input columns: multiple vs single?


#2

I’m only replying because I had the same kind of question, and in the absence of an “in-the-know” reply, what I’ve found out might be useful to you, though take it for what it’s worth.

The temporal pooler is basically a union pooler of the active/predicted cells from below. It does not affect or bias the region below; it just gives you a stable representation of it.
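
To make that concrete, here’s a tiny toy sketch of the union idea (my own simplification, not the actual htmresearch code): it ORs the active cells over time with a slow decay, so the output stays stable while the input keeps changing.

```python
import numpy as np

def union_pool(activations, decay=0.9, threshold=0.5):
    """Toy union pooler: keep a decaying trace of which cells have been
    active recently; any cell whose trace is above threshold is part of
    the pooled (stable) representation."""
    trace = np.zeros(len(activations[0]))
    pooled = []
    for active in activations:
        trace = decay * trace + np.asarray(active, dtype=float)
        pooled.append(trace >= threshold)
    return pooled

# Two different sparse input patterns (e.g. successive features of one object):
rng = np.random.default_rng(0)
a = rng.random(2048) < 0.02
b = rng.random(2048) < 0.02
outputs = union_pool([a, b, a, b, a, b])

# The input flips between two patterns, but after the first step the pooled
# output settles on their union and stops changing:
print([int((outputs[i] ^ outputs[i + 1]).sum()) for i in range(5)])
```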

The column pooler can actually affect/bias the region below, helping to maintain context, so that’s the main difference. In terms of maintaining a stable representation they also seem to be different algorithms, so there must be quite a few differences in that area too. I’m currently playing with the column pooler, but I haven’t tried the temporal/union pooler before, so I can’t compare them on a practical basis.

If you go into htmresearch/projects/feedback/, there is a PDF file describing “A model of top-down processing and temporal predictions in cortex”; the model there is an L2/L4 network where L2 is a column pooler.


#3

TemporalPoolerRegion is fairly old now, and we don’t use it anymore.

ColumnPoolerRegion is actively being used in our recent papers such as the Layers and Columns paper and the Untangling Sequences paper. There are lots of differences between the two. The Column Pooling algorithm is described in detail in the Layers and Columns paper.


#4

Speaking of the algorithms from the Layers and Columns paper, I’m curious if y’all have made progress in recent research with respect to semantically similar object representations. Specifically, I’m referring to this point in the algorithm, which implies that object representations would never share semantics:


#5

We’ve discussed it, but we haven’t really implemented anything recently (the union pooler kind of had this property). I think this is a problem we need to address at some point.


#6

Could you point out some of the most promising directions based on your discussions?
I believe the capability to detect similar semantics in different patterns is the key part of the problem.


#7

One approach I have been experimenting with is cell scoring over multiple timesteps (essentially SP, except that the input is a union of active cells over multiple timesteps and it selects cells instead of columns). However, this approach is resource-intensive, so I’m definitely curious to see what the experts come up with :slight_smile:
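
Roughly, the idea looks like this toy sketch (the sizes, names, and the dense weight matrix here are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
N_INPUT, N_OUTPUT, SPARSITY = 2048, 2048, 0.02   # made-up sizes

# Hypothetical dense matrix of potential-synapse weights from each output
# cell to every input cell; keeping all of this around is what gets expensive.
weights = (rng.random((N_OUTPUT, N_INPUT)) < 0.05).astype(float)

def pool_over_window(active_cells_per_step):
    """Union the active input cells over several timesteps, score every
    output cell by its overlap with that union (like SP scores columns),
    and activate the best 2 percent of cells."""
    union = np.zeros(N_INPUT)
    for active in active_cells_per_step:
        union[active] = 1.0
    overlaps = weights @ union
    k = int(SPARSITY * N_OUTPUT)
    return np.sort(np.argpartition(overlaps, -k)[-k:])

# Three timesteps of (made-up) active input cells:
steps = [rng.choice(N_INPUT, size=40, replace=False) for _ in range(3)]
print(pool_over_window(steps)[:10])
```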


#8

Is that like the union pooler + SP? It sounds similar to the TemporalPoolerRegion implementation, but I’m not sure.


#9

We experimented with something called union pooling and temporal pooling - these give you similar representations for patterns that co-occur often in time. Is that what you mean?


#10

Not exactly. I’m talking about, let’s say, two cups, which are semantically identical but quite different in size and proportions. The second one should be recognized after training on the first one, but the unions of coordinates on their surfaces are completely different.


#11

Similar, but not quite the Union Pooler. The input is a union of active cells over time. However, instead of scoring connected potential synapses to activate the best 2 percent of columns, the scoring activates the best 2 percent of cells.

I don’t instantiate the potential synapses until a particular cell activates in the input layer, but it is still resource-intensive (there are many more synapses to manage than in TM, for example).
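
Here is a toy sketch of the lazy part (again, names and sizes are made up), where synapses only ever get created toward cells that were actually seen active:

```python
from collections import defaultdict
import numpy as np

class LazyCellPooler:
    """Toy sketch: an output cell only grows synapses to input cells it
    has actually seen active, instead of keeping a full potential pool."""

    def __init__(self, n_output=2048, sparsity=0.02, sample_size=20, seed=0):
        self.synapses = defaultdict(set)     # output cell -> input cell indices
        self.n_output = n_output
        self.k = max(1, int(sparsity * n_output))
        self.sample_size = sample_size
        self.rng = np.random.default_rng(seed)

    def compute(self, union_of_active, learn=True):
        """union_of_active: set of input cell indices active over the window."""
        scores = np.array([len(self.synapses[c] & union_of_active)
                           for c in range(self.n_output)])
        winners = np.argpartition(scores, -self.k)[-self.k:]
        if learn:
            pool = list(union_of_active)
            for c in winners:                # grow only toward seen-active cells
                sample = self.rng.choice(pool,
                                         size=min(self.sample_size, len(pool)),
                                         replace=False)
                self.synapses[int(c)].update(int(i) for i in sample)
        return np.sort(winners)

rng = np.random.default_rng(1)
window = set(rng.choice(4096, size=120, replace=False).tolist())
pooler = LazyCellPooler()
print(len(pooler.compute(window)))           # 2 percent of output cells win
```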


#12

In the past we had no locations, so those would be very similar. Now, with locations included, their representations are completely different, but we get a bunch of other nice properties. It’s still an open question for us how to have semantically similar representations in your example while still maintaining structure information, as well as all the other properties. In our current algorithm we choose the object representations randomly, but you could try choosing them a little less randomly.
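
As a toy illustration of “a little less randomly” (not something we have implemented), you could seed a new object’s SDR from a related object’s SDR so the two share a fraction of their bits:

```python
import numpy as np

rng = np.random.default_rng(7)
N_CELLS, N_ACTIVE = 4096, 40          # made-up layer size and SDR sparsity

def random_sdr():
    return set(rng.choice(N_CELLS, size=N_ACTIVE, replace=False).tolist())

def related_sdr(prototype, shared_fraction=0.5):
    """New object SDR that reuses a fraction of a prototype's bits, so
    semantically related objects overlap instead of being fully random."""
    n_shared = int(shared_fraction * N_ACTIVE)
    sdr = set(rng.choice(sorted(prototype), size=n_shared, replace=False).tolist())
    while len(sdr) < N_ACTIVE:
        sdr.add(int(rng.integers(N_CELLS)))
    return sdr

cup_a = random_sdr()
cup_b = related_sdr(cup_a)            # "another cup": shares about half its bits
mug = random_sdr()                    # unrelated, fully random object
print(len(cup_a & cup_b), len(cup_a & mug))   # e.g. ~20 vs ~0 shared bits
```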


#13

Let me rephrase my question: do you see any way to do it without using a hierarchy to keep generalized representations of a pattern at higher levels?


#14

Do the layer regions like ColumnPoolerRegion have spatial and temporal poolers bundled together?


#15

I’m asking this because there seems to be no spatial pooler between L2 and L4.


#16

I don’t think hierarchy is required for this, but I have no proof :slight_smile:


#17

There’s no spatial pooler between L4 and L2 in the network. The ColumnPooler does implement the temporal pooling algorithm in the paper; we used the code to test the properties we described. The code is very experimental, and we may change it significantly over time. It might be helpful for understanding the paper, but I would not use it for anything else at this point.


#18

Unless I’m missing something, it still sounds very similar to the Union Temporal Pooler (htmresearch/algorithms/union_temporal_pooler.py). In general I think this is a cool direction to explore. I don’t think we properly finished this line of thinking. It would be interesting to consider integrating logic like this into the existing column pooler logic (which uses lateral connections extensively).


#19

Hierarchy adds extra complexity, so if this problem can be solved without it (at least to a certain extent), that would be great. So when you have any ideas (even without proofs), please let the community know :slight_smile:


#20

From my experience, unions aren’t a good solution for this purpose. They are good for holding a lot of different sparse representations, but if many of the held patterns are very similar, the result is a fuzzy representation with a lot of false positives.
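
A quick way to see it (a rough illustration with random SDRs, not real data): as the union fills up, brand-new unrelated patterns start to match it.

```python
import numpy as np

rng = np.random.default_rng(3)
N, ACTIVE, MATCH = 2048, 40, 30       # SDR size, on-bits, bits needed to "match"

def sdr():
    return rng.choice(N, size=ACTIVE, replace=False)

for n_patterns in (5, 20, 80):
    union = np.zeros(N, dtype=bool)
    for _ in range(n_patterns):
        union[sdr()] = True
    # How often does a brand-new random pattern falsely "match" the union?
    false_hits = sum(int(union[sdr()].sum() >= MATCH) for _ in range(1000))
    print(n_patterns, "patterns: union density", round(float(union.mean()), 3),
          "| false matches:", false_hits, "/ 1000")
```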