My guess is there's too much overlap between samples at low frequencies and too little at high frequencies.
Low overlap is relatively easy to understand - on the next cycle there is a high chance that samples fall somewhere in between the previously seen ones, even though the preceding values were similar.
High overlap results in very long sequences to memorize per cycle, which might exceed capacity, so the algorithm struggles to keep track.
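To make the overlap point concrete, here's a minimal sketch - a toy block-style scalar encoder standing in for whatever encoder is actually in the pipeline, so the numbers are only illustrative - that measures how much the SDRs of two consecutive samples of a sine wave overlap at different frequencies:

```python
import numpy as np

def encode_scalar(value, min_val=-1.0, max_val=1.0, size=400, active_bits=21):
    """Toy scalar encoder (hypothetical stand-in for any SDR encoder):
    a contiguous block of active bits whose position tracks the value."""
    span = size - active_bits
    start = int(round((value - min_val) / (max_val - min_val) * span))
    sdr = np.zeros(size, dtype=bool)
    sdr[start:start + active_bits] = True
    return sdr

def mean_consecutive_overlap(freq, active_bits=21, samples_per_unit=100, n_samples=500):
    """Average number of shared active bits between the SDRs of consecutive
    samples of sin(2*pi*freq*t), sampled at a fixed rate."""
    t = np.arange(n_samples) / samples_per_unit
    sdrs = [encode_scalar(np.sin(2 * np.pi * freq * ti), active_bits=active_bits)
            for ti in t]
    return float(np.mean([np.sum(a & b) for a, b in zip(sdrs, sdrs[1:])]))

# Slower signals -> consecutive samples encode almost identically (high overlap);
# faster signals -> consecutive samples barely overlap.
for f in (0.1, 1.0, 5.0):
    print(f"freq={f}: mean overlap = {mean_consecutive_overlap(f):.1f} bits")
```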
I guess you can test the two assumptions above by adjusting the encoder's output sparsity.
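For instance (reusing the toy helpers from the sketch above, so again just illustrative), the sparsity knob directly moves the consecutive-sample overlap up or down, which is the quantity both assumptions hinge on; the actual test would be to feed each setting through the temporal memory and compare the anomaly scores:

```python
# More active bits -> higher overlap between consecutive samples of the same
# signal; fewer active bits -> lower overlap. The encoder's sparsity is the
# knob that trades one failure mode against the other.
for k in (5, 11, 21, 41, 81):
    print(f"active_bits={k}: mean overlap = "
          f"{mean_consecutive_overlap(1.0, active_bits=k):.1f} bits")
```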
Having this done as an internal feedback loop would be cool.
I mean, have the algorithm “seek” the lowest anomaly by slowly controlling its upstream SDR source (encoder, spatial pooler, etc.) to either increase or decrease sparsity.
A cycle/rhythm detector as discussed here could be useful for figuring out in which “direction” the sparsity should be adjusted.
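Something along these lines maybe - a rough sketch, where anomaly_fn is a hypothetical callback returning the model's mean anomaly score at a given number of active bits (none of this is an existing NuPIC/htm.core API). A real version would only adjust once per detected cycle, using the rhythm detector to decide when a full period has passed:

```python
import numpy as np

def seek_sparsity(anomaly_fn, active_bits=21, step=2, min_bits=5, max_bits=101,
                  patience=3):
    """Hill-descending controller (a sketch, not HTM internals): nudge the
    encoder's number of active bits in whichever direction lowers the anomaly
    score reported by anomaly_fn(active_bits)."""
    best = anomaly_fn(active_bits)
    direction = +1
    stalls = 0
    while stalls < patience:
        candidate = int(np.clip(active_bits + direction * step, min_bits, max_bits))
        score = anomaly_fn(candidate)
        if score < best:
            active_bits, best = candidate, score
            stalls = 0
        else:
            direction = -direction   # try the other direction
            stalls += 1
    return active_bits, best

# Demo with a made-up anomaly landscape whose minimum sits near 31 active bits.
fake_anomaly = lambda k: abs(k - 31) / 100.0 + 0.05
print(seek_sparsity(fake_anomaly))
```

The patience/direction-flip part is just the simplest way to stop once neither more nor fewer bits helps; any smarter search over sparsity would do the same job.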