I’ve come to the conclusion that my idea as originally formulated won’t work, at least not for low-level structures. Testing indicates that, unless I use prior knowledge of the start of a new sequence to reset the TM, the TP will eventually merge all sequence representations into one due to the self-reinforcement signal. I now think a simple spatial-pooler-like object that updates every N iterations and reads the last N active-cell SDRs (concatenated) is a better option. This is not a true sequence identifier, really just a temporal subsampler.
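To make the idea concrete, here is a minimal sketch of such a temporal subsampler. All names are my own, and the "pooling" step is just a fixed random projection with top-k winner selection standing in for a real spatial pooler (no learning is modeled); it only illustrates the buffer-concatenate-pool cycle described above.

```python
import numpy as np

class TemporalSubsampler:
    """Buffers the last N active-cell SDRs and, every N steps,
    maps their concatenation through a fixed random projection
    with top-k winner selection. This is a crude stand-in for a
    spatial pooler; it illustrates the data flow only."""

    def __init__(self, n_steps, sdr_size, out_size=256, out_active=10, seed=0):
        self.n_steps = n_steps
        self.buffer = []
        rng = np.random.default_rng(seed)
        # Fixed random "proximal" connections from the concatenated
        # input to each output cell (a real SP would learn these).
        self.weights = rng.random((out_size, n_steps * sdr_size))
        self.out_active = out_active

    def step(self, active_cell_sdr):
        """Feed one binary SDR per iteration. Returns an output SDR
        every n_steps calls, otherwise None."""
        self.buffer.append(np.asarray(active_cell_sdr))
        if len(self.buffer) < self.n_steps:
            return None
        # Concatenate the last N SDRs and clear the buffer.
        concat = np.concatenate(self.buffer)
        self.buffer.clear()
        # Overlap scores, then pick the top-k winners.
        overlap = self.weights @ concat
        winners = np.argsort(overlap)[-self.out_active:]
        out = np.zeros(self.weights.shape[0], dtype=np.int8)
        out[winners] = 1
        return out
```

Because the output only changes every N iterations, two sequences that differ within a window still produce different concatenated inputs, but the subsampler has no notion of where a sequence begins or ends.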
Perhaps at the top of the hierarchy there is room for some form of self-reinforcing sequence identification, with a TM reset signal coming from elsewhere in the stack (i.e., elsewhere in the brain). After all, the real brain is capable of observing a sequence, then telling itself “alright, that one is over, clear your expectations for the next one.” I suspect this is a fairly high-level function.