Extract average sequence length

Hi!

When turning TM verbosity all the way up, one of the reported indicators is the average sequence length. What I am after is the relation between accuracy and average sequence length for my data.
Is it possible to extract this value from the network somehow? I have tried extracting other variables from the network, but without success so far.

Moreover, does a sequence length of 0 mean that only the current input is used to make a prediction, i.e. a first-order prediction, or would that be the case for a sequence length of 1? The average sequence length seems to have a minimum at 0, so the former must be true, right?

Thank you! :smiley:

I’m not sure how this works. @mrcslws, maybe you can help? Is the “sequence length” @Timo_1028 is talking about really the length of stored sequences? If so, how do we extract that info without some form of temporal pooling?

This is a feature of TP.py (soon to be known as backtracking_tm.py, I think). It has a notion of a “sequence”, and whenever there’s bursting it backtracks and tries to find the best new starting point for the current sequence. Because it always knows when the current sequence began, it can easily keep an average sequence length.

The pure Temporal Memory runs a much simpler algorithm. The layer itself is oblivious of sequence length, though it’d be possible to analyze the sequence length from the outside.
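
For what it’s worth, here is a rough sketch of one way to analyze it from the outside. It treats any timestep where most active columns were unpredicted (i.e. heavy bursting) as the start of a new sequence; the class, the threshold, and the use of column sets are my assumptions, not part of nupic:

```python
# Hypothetical external tracker -- not part of nupic. A sequence "break" is
# declared whenever the fraction of unpredicted active columns (bursting)
# crosses a threshold; both the threshold and this heuristic are assumptions.
class SequenceLengthTracker(object):

  def __init__(self, burstingThreshold=0.5):
    self.burstingThreshold = burstingThreshold
    self.currentLength = 0
    self.finishedLengths = []

  def update(self, activeColumns, previouslyPredictedColumns):
    # Both arguments are sets of column indices for the current timestep:
    # the columns that became active, and the columns the TM predicted
    # at the previous timestep.
    unpredicted = activeColumns - previouslyPredictedColumns
    burstFraction = float(len(unpredicted)) / max(len(activeColumns), 1)
    if burstFraction >= self.burstingThreshold and self.currentLength > 0:
      # Heavy bursting: close out the current sequence and start a new one.
      self.finishedLengths.append(self.currentLength)
      self.currentLength = 0
    self.currentLength += 1

  def averageLength(self):
    lengths = self.finishedLengths + [self.currentLength]
    return float(sum(lengths)) / len(lengths)
```

You’d call `tracker.update(...)` once per timestep after computing the TM’s active and predictive columns, then read `tracker.averageLength()` whenever you want the statistic.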

You can access this on the TP via the getAvgLearnedSeqLength method. On a CLAModel (a.k.a. HTMPredictionModel) you’d call:

```python
model._getTPRegion().getSelf()._tfdr.getAvgLearnedSeqLength()
```

Yes, that’s a lot of method calls.
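
If the goal is the relation between accuracy and average sequence length, one option is to sample that value periodically while feeding records and log it next to your own accuracy measure. A rough sketch, where `model` is an existing HTMPredictionModel and `records` / `computeAccuracy` are placeholders you’d supply yourself:

```python
# `records` is your input stream and `computeAccuracy` is your own accuracy
# function -- both are placeholders, not nupic APIs.
history = []
for i, record in enumerate(records):
  result = model.run(record)
  if i % 100 == 0:  # sample the statistic every 100 records
    avgSeqLength = model._getTPRegion().getSelf()._tfdr.getAvgLearnedSeqLength()
    history.append((i, avgSeqLength, computeAccuracy(result)))
```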

Awesome, thank you both!