As a follow-up to this thread.
I’ve finished the first version of my temporal pooler and I’ve noticed that the invariant representation of a given sequence can actually be fairly mutable. Depending on the parameters (boosting, learning rate, reinforcing signal strength), the representation can become completely unrecognizable after as few as 6 repetitions of a 4-element sequence. Adjacent representations have high overlap, but a few bits change on each iteration until everything is different.
My question is: is that actually bad? By removing boosting entirely and using an extreme self-reinforcement value, I can force the representation to stay essentially constant. But realistically, should the representation have a finite lifespan?
For reference, my TP algorithm is this:
- Read in the TM’s active cells and its previously predictive cells.
- If they have a sufficiently high coincidence (I chose 50%), train the TP on the intersection of the predictive and active cells, as well as on its own prior activity.
- Otherwise, train the TP on only the active cells.
Essentially, the TP acts like an SP with an additional strongly self-reinforcing connection set that is activated whenever the TM’s predictions are accurate.
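To make that concrete, here is a rough numpy sketch of the update step. It is only illustrative: the class and parameter names (`coincidence_threshold`, `self_reinforce`, the k-winners selection, the simple Hebbian update) are stand-ins chosen for this post, not my actual SP/TP code.

```python
import numpy as np

class TemporalPooler:
    def __init__(self, n_inputs, n_cells, coincidence_threshold=0.5,
                 learn_rate=0.05, self_reinforce=0.2):
        # Feed-forward permanences (the SP-like part) and lateral
        # permanences for the self-reinforcing connection set.
        self.ff_perm = np.random.rand(n_cells, n_inputs) * 0.1
        self.self_perm = np.zeros((n_cells, n_cells))
        self.prev_active = np.zeros(n_cells, dtype=bool)
        self.coincidence_threshold = coincidence_threshold
        self.learn_rate = learn_rate
        self.self_reinforce = self_reinforce

    def compute(self, tm_active, tm_prev_predictive, k=40):
        tm_active = np.asarray(tm_active, dtype=bool)
        tm_prev_predictive = np.asarray(tm_prev_predictive, dtype=bool)

        # Fraction of the TM's active cells that were predicted last step.
        correctly_predicted = tm_active & tm_prev_predictive
        coincidence = correctly_predicted.sum() / max(tm_active.sum(), 1)
        predicted_well = coincidence >= self.coincidence_threshold

        if predicted_well:
            # Predictions were accurate: drive the TP with the
            # predicted-and-active cells plus a strongly weighted
            # contribution from the TP's own prior activity.
            ff_input = correctly_predicted
            score = (self.ff_perm @ ff_input
                     + self.self_reinforce * (self.self_perm @ self.prev_active))
        else:
            # Predictions failed: fall back to a plain SP-style step
            # on the currently active cells only.
            ff_input = tm_active
            score = self.ff_perm @ ff_input

        # Winner-take-all: the k most strongly driven TP cells become active.
        active = np.zeros(self.ff_perm.shape[0], dtype=bool)
        active[np.argsort(score)[-k:]] = True

        # Hebbian-style update: winners strengthen synapses to the chosen
        # feed-forward input, and (only when predictions held) to the TP's
        # previous winners via the self-reinforcing connections.
        self.ff_perm[active] += self.learn_rate * ff_input
        if predicted_well:
            self.self_perm[np.ix_(active, self.prev_active)] += self.learn_rate

        self.prev_active = active
        return active
```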