This is a feature of TP.py (soon to be renamed backtracking_tm.py, I think). It has a notion of a "sequence", and whenever there's bursting it backtracks and tries to find the best new starting point for the current sequence. Because it always knows when the current sequence began, it can easily keep an average sequence length.
The pure Temporal Memory runs a much simpler algorithm. The layer itself is oblivious to sequence length, though you could analyze sequence length from the outside.
You can access this on the TP via the getAvgLearnedSeqLength method. On a CLAModel (a.k.a. HTMPredictionModel) you'd call:
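Something along these lines (a sketch, not a stable API: `_getTPRegion` and `_tfdr` are private, version-dependent internals of the model, so the exact names may differ in your NuPIC version):

```python
# Reach through the model to the underlying TP instance.
# _getTPRegion() returns the temporal region; getSelf() unwraps the
# region implementation; _tfdr is where it keeps the TP object.
tp = model._getTPRegion().getSelf()._tfdr
avg_len = tp.getAvgLearnedSeqLength()
```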
Yes, that's a lot of method calls.