In my own experience… HTM is
- Around as good as a vanilla RNN at predicting known sequences
- Way faster than an equal-size LSTM
- Very bad at inference (predicting patterns that are not in the training data)
- Great at anomaly detection compared to AutoEncoders and Recurrent AutoEncoders.
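For context on the anomaly detection point, here's a minimal sketch of the standard HTM anomaly score (the fraction of currently active columns that were not predicted at the previous timestep). This is just my illustration of the scoring formula, not code from any particular HTM library:

```python
def anomaly_score(active_columns, predicted_columns):
    """HTM-style anomaly score: 1 - |active ∩ predicted| / |active|.

    Returns 1.0 when nothing active was predicted (total surprise)
    and 0.0 when every active column was predicted.
    """
    active = set(active_columns)
    if not active:
        return 0.0  # nothing active, nothing to be surprised about
    predicted = set(predicted_columns)
    return 1.0 - len(active & predicted) / len(active)

# Example: 3 of the 4 active columns were predicted -> score 0.25
print(anomaly_score({1, 2, 3, 4}, {2, 3, 4, 9}))  # 0.25
```

Because the score is computed per timestep from the network's own predictions, you get anomaly detection essentially for free, without training a separate reconstruction model the way you would with an autoencoder.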
And some of my speculations:
- The prediction power/memory grows logarithmically: since the connections are generated randomly, it gets harder and harder to connect to the right bits
- Around 5~16 steps of memory capacity in NLP tasks (better than an RNN but far worse than an LSTM)
I developed a toy NLP program with HTM not long ago. You can see for yourself how well it works.