TM’s ability to generalize

I may be wrong about this, so please correct me. The SP handles semantically similar static input states, but not semantically similar temporal contexts. Sets of columns are meant to distinguish static input states, right? Encoders and the spatial pooler are static algorithms: no prior information contributes to new column activations (ignoring permanence updates in the SP). The same column with different active neurons means distinct contexts for the same input state, but that doesn’t necessarily convey any knowledge or representation at the level of semantically similar contexts.
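To make what I mean concrete, here is a toy sketch (made-up parameters and a fake per-context cell-selection rule, not the actual NuPIC/htm.core algorithms): column-level overlap between two similar inputs can be high, but the cells picked within those shared columns depend entirely on the learned context, so similarity at the cell level isn’t implied.

```python
import random

random.seed(0)

NUM_COLUMNS = 2048
CELLS_PER_COLUMN = 32
ACTIVE_COLUMNS = 41  # roughly 2% sparsity

# Toy "spatial pooler": two semantically similar inputs map to heavily
# overlapping column sets (here I just force 37 of 41 columns to be shared).
base_columns = set(random.sample(range(NUM_COLUMNS), ACTIVE_COLUMNS))
shared = set(list(base_columns)[:37])
similar_columns = shared | set(
    random.sample(sorted(set(range(NUM_COLUMNS)) - base_columns), 4)
)

print("column overlap:", len(base_columns & similar_columns))  # 37 of 41

# Toy "temporal memory": within each shared column, the active cell depends
# entirely on which context was learned; two different contexts can pick
# completely different cells even though the columns are the same.
def cells_for_context(columns, context_seed):
    rng = random.Random(context_seed)
    return {(col, rng.randrange(CELLS_PER_COLUMN)) for col in columns}

cells_a = cells_for_context(shared, "context A")
cells_b = cells_for_context(shared, "context B")

print("cell overlap across contexts:", len(cells_a & cells_b))  # usually ~1 of 37
```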

Additionally, different within-column active-neuron states, each representing a distinct context, can in general lead to entirely different next-timestep predictions. Is there any mechanism in TM that ensures semantically similar contexts (sets of active cells in a column) lead to similar predicted cell patterns? Each new context, no matter how similar to known ones, is treated as an entirely new, distinct context in its cellular-level encoding, which leads to bursting in situations where it arguably shouldn’t happen.
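Here is roughly how I understand the bursting rule, as a toy sketch of the standard TM description (it ignores the distal-segment activation threshold, which does give some tolerance to partially matching prior patterns):

```python
CELLS_PER_COLUMN = 32

def active_cells_in_column(column, predicted_cells):
    """Cells that fire when this column becomes active."""
    predicted_here = {c for c in predicted_cells if c[0] == column}
    if predicted_here:
        # Normal case: only the predicted (context-matching) cells fire.
        return predicted_here
    # No cell in the column was predicted: the whole column bursts.
    return {(column, i) for i in range(CELLS_PER_COLUMN)}

# Suppose learning left exactly one prediction: cell 3 of column 7 is
# predicted when the exact previously learned context precedes it.
learned_prediction = {(7, 3)}

print(len(active_cells_in_column(7, learned_prediction)))  # 1  -> predicted cell only
print(len(active_cells_in_column(7, set())))               # 32 -> burst on a novel context
```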

Regardless, the combinatorics are still massive. In my previous example, with 32 cells per column and each set of active cells counting as a new context, there are (32 choose 1) + (32 choose 2) + … + (32 choose 32) = 2^32 − 1 ≈ 4.3 billion possible contexts for a single column. Assuming each static input state is composed of 2% of the columns (41 active columns in that example), you would multiply that number by 41. This many possible contexts (billions for each static input state) would be fine if there were a way to discern between semantically similar contexts, but it seems to me that each one is considered completely distinct from the rest. The combinatorics blow up even more once you consider chains of activation and arbitrarily long contexts.
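Just to sanity-check the arithmetic (parameters taken from my example above):

```python
from math import comb

cells_per_column = 32
active_columns = 41  # ~2% of the columns in my earlier example

# Number of non-empty active-cell subsets in one column.
contexts_per_column = sum(comb(cells_per_column, k)
                          for k in range(1, cells_per_column + 1))
print(contexts_per_column)                   # 4294967295 == 2**32 - 1

# Summed over the 41 active columns of one static input state.
print(contexts_per_column * active_columns)  # ~1.76e11, i.e. hundreds of billions
```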

I’d be interested in researching how the brain manages to discern temporal similarities and how that might influence new generations of HTM.
