What ways have been proposed for the implementation of temporal pooling?

Can someone help me with a description or a few links to theories of how the cortex might create a stable representation of a sequence?

Read: A Theory of How Columns in the Neocortex Enable Learning the Structure of the World.

There are also a lot of interesting temporal pooling discussions on the forum.

Well, the paper says that the output layer's representation should remain stable during the sequence, but it doesn't discuss how it can do so. Would that be by having a delay on turning off the active cells, by having some kind of recurrent proximal input, or something else?
Are there papers discussing that?

In those simulations, during learning we pick a random set of cells to represent the object and just force them to stay on. In the past we’ve used different methods to select the initial cells, but for this paper we kept it simple.
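Just to illustrate that learning step, here is a minimal sketch (the cell counts are placeholders, not the paper's parameters):

```python
import numpy as np

def create_object_representation(num_output_cells=4096, cells_per_object=40, rng=None):
    """Pick a random, fixed set of output-layer cells to represent one object.

    During learning these cells are simply forced to stay active while the
    object's features are being sensed.
    """
    rng = rng or np.random.default_rng()
    return rng.choice(num_output_cells, size=cells_per_object, replace=False)
```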

During inference, for the output layer we calculate the feedforward and lateral input to each cell. Cells with enough feedforward overlap with the input layer and the most lateral support from the previous time step become active. Thus, initially a bunch of cells might become active, but over time only the ones that continue to be consistent with the input will stay on. This is described in equations 3-5 in the Materials and Methods section.
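As a rough illustration of that inference rule (not equations 3-5 verbatim; the thresholds and variable names here are assumptions), one step might look like this:

```python
import numpy as np

def output_layer_step(ff_connections, lateral_connections,
                      input_active, prev_output_active,
                      ff_threshold=3):
    """One output-layer inference step (sketch only).

    ff_connections:      boolean (num_output_cells x num_input_cells) proximal matrix
    lateral_connections: boolean (num_output_cells x num_output_cells) distal matrix
    input_active:        boolean vector of active input-layer cells
    prev_output_active:  boolean vector of output cells active at the previous step
    """
    # Feedforward overlap with the currently active input-layer cells.
    ff_overlap = ff_connections[:, input_active].sum(axis=1)

    # Lateral support from output cells that were active at the previous step.
    lateral_support = lateral_connections[:, prev_output_active].sum(axis=1)

    # Only cells with enough feedforward input are candidates...
    candidates = ff_overlap >= ff_threshold
    if not candidates.any():
        return np.zeros(ff_connections.shape[0], dtype=bool)

    # ...and among those, the cells with the most lateral support win, so over
    # time activity converges onto cells consistent with the whole sequence.
    best_support = lateral_support[candidates].max()
    return candidates & (lateral_support >= best_support)
```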

There are some biophysical mechanisms that could cause cells to stay active for a while, such as the effect of metabotropic receptors. However, I don't think we've really worked through these.


Thank you, but why make it supervised in that way? I was thinking of it as having a spatial pooler that pools from the active cells in the lower layer, but instead of pooling with 2% sparsity, pooling with 0.2% sparsity and keeping those cells active for 10 timesteps, so that the pattern would be a 2% representation built from the lower layer's cell activations over the past 10 timesteps, invariant to their order.
This way, if you pass those representations to a higher layer every ten timesteps, that higher layer forms another sequence memory with lower temporal resolution (sequences of sequences), which is basically the concept Dayan & Hinton proposed as Feudal RL (1992) and that DeepMind is now heavily working on (a rough sketch is at the end of this post).
Do you think anything would be wrong with that?
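To make the proposal concrete, here is a rough sketch; the numbers and the hashing stand-in for the spatial pooler are my own assumptions, not working HTM code:

```python
import numpy as np

def union_pool(lower_layer_activations, num_pool_cells=2048,
               per_step_active=4, window=10):
    """Sketch of the idea: at each timestep a very sparse (~0.2%) set of pool
    cells is driven by the lower layer and then kept active for the rest of the
    window, so the union over 10 timesteps is a ~2% representation that does
    not depend on the order of the lower-layer patterns within the window.

    lower_layer_activations: list of boolean vectors, one per timestep.
    """
    pooled = np.zeros(num_pool_cells, dtype=bool)
    for active in lower_layer_activations[-window:]:
        # Stand-in for a spatial pooler run at 0.2% sparsity: deterministically
        # map each distinct input pattern to a small set of pool cells.
        seed = hash(tuple(np.flatnonzero(active))) % (2**32)
        winners = np.random.default_rng(seed).choice(
            num_pool_cells, size=per_step_active, replace=False)
        pooled[winners] = True  # once on, a cell stays on for the whole window
    return pooled
```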

This is actually one of the first things we tried many years ago. We had problems getting this particular version working reliably, efficiently, and with high capacity (though I haven’t given up on the basic idea). The code for it is still there, if you want to experiment with it:
