How SMI might inform Temporal Pooling

Ah, yes I see your point. An easy way to imagine what you are describing: say you have trained two sequences A, B', C', D', E', F', G' and G, F'', E'', D'', C'', B'', A'', and you have done a reset (so no active or predictive cells in either the TM layer or the TP layer). Then an input D comes in unexpectedly. The TM layer columns for D are bursting, with cells for E' and C'' in predictive state. In this scenario, your thought is that a representation containing a union of both possible sequences would activate in the TP layer. A subsequent input of E would then narrow down the active cells in the TP layer to represent just the first sequence.
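For concreteness, here is a minimal sketch of that union-then-narrowing behavior (purely illustrative, not any reference implementation; the TP cell indices, sequence names, and the `pool` helper are all made up):

```python
# Stable pooled representations learned for each sequence,
# expressed as sets of TP-layer cells (hypothetical indices):
tp_reps = {
    "seq1": {0, 1, 2, 3},  # A, B', C', D', E', F', G'
    "seq2": {4, 5, 6, 7},  # G, F'', E'', D'', C'', B'', A''
}

def pool(candidate_sequences):
    """Activate the union of pooled representations for every
    sequence still consistent with the TM layer's predictions."""
    active = set()
    for seq in candidate_sequences:
        active |= tp_reps[seq]
    return active

# After the unexpected D, both sequences remain possible:
print(pool(["seq1", "seq2"]))  # union of both representations
# A subsequent E is only consistent with the first sequence:
print(pool(["seq1"]))          # narrows to seq1's representation
```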

Admittedly, I haven't actually explored resets with pooling yet (the application I am working toward will not use a reset function, so I haven't had a need for one). So far I have only explored scenarios where the TP/object layer always has active cells (unless, of course, you have just started up the application and nothing has been learned yet). My main focus has been on switching from one sequence/object to another, and on how the TP/object layer transitions between different representations (and how two sequences/objects can become merged into a single representation).

From my understanding, the TP/object layer should be more stable than the TM/SMI layer. In other words, a single unexpected input should not significantly change the active cells in the TP/object layer. In the above scenario, say I input A, B, C, E, F, G (i.e. I accidentally skipped D). When I get to E and the columns burst, I don't want the TP layer to immediately switch to a representation of every sequence that contains E (in fact, I don't even want the columns for "E" to burst at all in this case – the biasing signal from the TP layer should already have cells for E' predictive). A single error in the input stream should have minimal effect on the active cells in the TP/object layer, and the layer should recover quickly. If, on the other hand, I input "A, B, C, E, D, C, B, A" (i.e. I've actually switched to the second sequence), then the further into the second sequence I get, the more the TP layer shifts toward that sequence's representation. How quickly this transition happens depends on the configuration parameters.
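To make the "slow transition" idea concrete, here is a hedged sketch of one way to get that behavior: each pooled representation keeps a leaky evidence score, so a single noisy input barely moves it, while sustained evidence for another sequence gradually flips the winner. The `DECAY` constant and the evidence values are hypothetical stand-ins for whatever the real configuration parameters would be:

```python
DECAY = 0.8  # hypothetical parameter: closer to 1.0 -> more stable TP layer

scores = {"seq1": 0.0, "seq2": 0.0}

def update(evidence):
    """evidence maps each sequence to 0..1 support from the TM layer
    for the current input (e.g. fraction of its cells correctly
    predicted); returns the currently winning representation."""
    for seq in scores:
        scores[seq] = DECAY * scores[seq] + (1 - DECAY) * evidence.get(seq, 0.0)
    return max(scores, key=scores.get)

# Sequence 1 is playing; one skipped input (the unexpected E) is noise:
for ev in [{"seq1": 1.0}, {"seq1": 1.0}, {"seq1": 1.0},
           {"seq1": 0.2, "seq2": 0.2},  # ambiguous, weakly supports both
           {"seq1": 1.0}]:              # recovers immediately
    print(update(ev), scores)

# Actually switching to sequence 2 takes several confirming inputs
# before the winner flips:
for ev in [{"seq2": 1.0}] * 4:
    print(update(ev), scores)
```

With `DECAY` closer to 1.0 the layer is more stable but slower to switch; closer to 0.0 it tracks the TM layer almost instantly, which is the trade-off those configuration parameters control.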

Obviously, I am describing my own implementation of a TP/object layer, so I could be completely off base :slight_smile: But hopefully you can see what I meant by there being no difference in implementation between a TP layer and an object layer. Its purpose, as I see it, is simply to form a stable representation of the sequence/object, and to use that representation to bias the predictions (and ultimately the activations) in the TM/SMI layer.
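As a toy illustration of that biasing direction, under the same made-up assumptions as the earlier snippets: TM cells that receive enough apical input from the active TP cells are put into the predictive state, so the expected next input does not burst. The `apical` table and the threshold are hypothetical:

```python
apical = {
    "E'":  {0, 1},  # TM cell for E in sequence 1's context
    "C''": {4, 5},  # TM cell for C in sequence 2's context
}

def predictive_tm_cells(active_tp_cells, threshold=1):
    """Return TM cells whose apical connections to the currently
    active TP cells meet the threshold."""
    return {cell for cell, tp_cells in apical.items()
            if len(tp_cells & active_tp_cells) >= threshold}

# With only sequence 1's pooled representation active, only E' is
# biased into the predictive state:
print(predictive_tm_cells({0, 1, 2, 3}))  # {"E'"}
```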