Temporal memory state

I’m trying to internalize my understanding of Temporal Memory state.
Let me lay it out, and perhaps you can comment on it.

Ok, so the basic idea is that TM state represents the full history up to the last moment.
Does this imply that this specific state could not be arrived at by a different "history",
or at least cannot be arrived at with high probability by a different "temporal sequence"?
What I’m trying to get at is a feel for how unique the state is.

If my statement is correct, this would also imply that every TM state is a good spatial representation
of a unique temporal sequence, right?

I understand it on a conceptual level, but can you tell me why I still seem to have reservations on a visceral level?

It is very unlikely that a specific TM state can be reached via two different sequence histories. The probability decreases very quickly as (1) the number of cells per column increases and (2) the number of active columns (the sparsity) increases. If you have w columns active at any time and k cells per column, then you can represent the same current input in k^w different contexts. The chance that two different histories lead to the same state is vanishingly small in practice.
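
To get a feel for the numbers, here is a quick back-of-the-envelope calculation in Python. The parameter values k = 32 and w = 40 are not from this thread; they are just commonly cited HTM defaults (e.g. 2% of 2048 columns active, 32 cells per column):

```python
# Rough capacity estimate: with w active columns and k cells per
# column, the same input (the same set of active columns) can appear
# in k**w distinct cell-level states, i.e. k**w distinct contexts.
k = 32   # cells per column (a typical TM default)
w = 40   # active columns per timestep (e.g. 2% of 2048 columns)

contexts = k ** w
print(f"distinct contexts for one input: {contexts:.3e}")
# ~1.607e+60 -- so two different histories colliding on the exact
# same TM state is vanishingly unlikely in practice.
```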

Thanks. This implies that the prediction mechanism (i.e. the algorithm that switches cells between active and predictive) should have two competing components:

  1. Hebbian learning: fire together, wire together (often-used connections should be reused)
  2. Preferring the most unused connections, e.g. when there is no activity in the column (i.e. on a burst)

(1) guarantees compression/reuse, (2) guarantees optimal use of the available connections, which together guarantee a unique TM state.

Agreed. In fact, logic #2 exists in the current TM implementation.
When we decide which cell should get reinforced, we first search for cells that match the current input. If none are found, the least used cell is picked.

https://github.com/numenta/nupic/blob/master/src/nupic/research/temporal_memory.py#L458
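
For concreteness, here is a minimal sketch of that selection rule for a single column. The function and parameter names are hypothetical (the real logic, with its best-matching-cell search and least-used-cell fallback, is in the linked temporal_memory.py):

```python
import random

def choose_cell_to_reinforce(cells, match_score, segment_count):
    """Simplified sketch of winner-cell selection in one column.

    cells         -- cell indices belonging to the column
    match_score   -- dict: cell -> overlap of its best segment with
                     the previous activity (0 if no matching segment)
    segment_count -- dict: cell -> number of segments the cell owns
    """
    matching = [c for c in cells if match_score.get(c, 0) > 0]
    if matching:
        # Component 1 (Hebbian reuse): reinforce the cell whose
        # existing segment best matched the previous input.
        return max(matching, key=lambda c: match_score[c])

    # Component 2 (bursting column, no match): pick among the least
    # used cells, so new contexts land on fresh cells and distinct
    # histories keep producing distinct TM states.
    fewest = min(segment_count.get(c, 0) for c in cells)
    return random.choice([c for c in cells if segment_count.get(c, 0) == fewest])
```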