General analysis of HTM learning and prediction abilities?



Do you know of any studies, or have you tried to do an analysis, of HTM's learning and prediction abilities as a function of the type of time series given as input to the HTM? I mean testing HTM against an exhaustive set of time series that vary, for example, by the nature of their data (categorical, continuous), the length of the data, the type of sequence (e.g., for binary categorical data, sequences where values alternate at each time step), the degree of noise, and, if there are multiple input time series, the type of causal relation between them (linear, circular, …).

I’m quite a beginner in Hierarchical Temporal Memory, and I’ve seen a lot of studies of HTM on real-world data, but not many more “general” studies of HTM’s capacities. If such studies exist, I would be interested to see them.



I think you want to investigate Encoders. An Encoder is a way to translate some temporal stream of data into a semantic binary representation. The types of features you describe above seem to be encoding challenges. You could, for example, encode different types of data into the same parts of the Spatial Pooler’s input space, and the TM should still work as long as the data type changes have semantic meaning themselves. If they are just random switches between contexts, I’m not sure it would work.
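The key encoder property mentioned above (semantically similar inputs produce overlapping binary representations) can be illustrated with a minimal scalar encoder sketch. This is a hypothetical, self-contained illustration, not the NuPIC or htm.core encoder API:

```python
def encode_scalar(value, min_val, max_val, n=100, w=11):
    """Encode a scalar into a binary vector of length n with w active bits.

    Nearby values activate overlapping bits, which gives the
    overlap-based semantic similarity the Spatial Pooler relies on.
    (Illustrative sketch only, not a real HTM library encoder.)
    """
    # Clamp the value, then map it onto the range of start positions.
    value = max(min_val, min(max_val, value))
    buckets = n - w + 1
    start = int((value - min_val) / (max_val - min_val) * (buckets - 1))
    sdr = [0] * n
    for i in range(start, start + w):
        sdr[i] = 1
    return sdr

def overlap(x, y):
    """Count of bits active in both representations."""
    return sum(xi & yi for xi, yi in zip(x, y))

a = encode_scalar(3.0, 0, 10)
b = encode_scalar(3.2, 0, 10)  # close to a: large overlap
c = encode_scalar(9.0, 0, 10)  # far from a: little or no overlap
```

Here `overlap(a, b)` is large while `overlap(a, c)` is near zero, which is exactly the property that lets the SP treat 3.0 and 3.2 as semantically similar inputs.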

But if there is order to the switching, or if the switching actually indicates some change of state in an object, the TM could learn these as part of the larger patterns it is memorizing.

I’ve not seen anyone do experiments along these lines yet. Perhaps you’d be interested in taking on the challenge?



I’d recommend checking this paper out if you haven’t already:

Here is the link to Numenta’s research papers:



Thanks for the answer. Encoding could indeed be part of such a study (the effect of the encoder type on performance). But what I meant is more about measuring the capacity of the SP + TM to learn and predict sequences based on metrics like the number of sequences, the order of the sequences, the number of inputs, and the type of causality between inputs, including an analysis of the effect of HTM parameters on performance.
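A starting point for such a study could be a small harness that generates synthetic categorical sequences with controllable length, alphabet size, and noise, plus a simple prediction-accuracy metric to score any SP + TM pipeline against them. This is a hypothetical sketch of such a harness (the function names are my own, not part of any HTM library):

```python
import random

def make_sequences(n_seqs, seq_len, alphabet_size, noise=0.0, seed=0):
    """Generate categorical test sequences for a sequence-memory benchmark.

    Each sequence is a fixed random ordering of symbols from a finite
    alphabet; `noise` is the per-element probability of substituting a
    random symbol, simulating noisy inputs.
    """
    rng = random.Random(seed)
    symbols = list(range(alphabet_size))
    sequences = []
    for _ in range(n_seqs):
        seq = [rng.choice(symbols) for _ in range(seq_len)]
        noisy = [rng.choice(symbols) if rng.random() < noise else s
                 for s in seq]
        sequences.append(noisy)
    return sequences

def prediction_accuracy(predicted, actual):
    """Fraction of time steps where the model's prediction matched."""
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)
```

Sweeping `n_seqs`, `seq_len`, `alphabet_size`, and `noise` (and, on the model side, HTM parameters such as column count or cells per column) would then map out capacity along exactly the axes described above.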

If this has not been done yet, then yes, I’m quite interested in doing this kind of study.



Thanks for the links. I had missed this paper; it was indeed what I was looking for.



This paper also compares HTM prediction to ARIMA, and HTM anomaly detection (uni- and multivariate) to Etsy Skyline and Twitter ADVec. They seem to be using the raw anomaly score instead of the anomaly likelihood and still outperform the others on a range of data sets with different behaviors/properties.

And this one, from Numenta, does similar comparisons but more thoroughly, IMO.