Handling multivariate data with hundreds of variables

Hello,

I've read in other posts that HTM cannot handle multivariate time series with more than 5 to 10 variables. I would like to evaluate HTM's ability to detect anomalies in multivariate time series with hundreds of variables, and I'm planning to do that with an autoencoder that reduces the dimensionality before feeding its output into the HTM. Could this work? What would be the drawbacks of this method?

The rule of thumb is that no single NuPIC model should take in more than a few fields, yes, though there are other possible ways to incorporate many fields. I use an approach mentioned in one paper: build parallel NuPIC models, one for each field, and then look for periods with simultaneous anomalies across models.

This approach scales to any number of fields without any single model being overloaded, though you still need to update each model continuously, which means memory is constantly in use. I need to learn more about the practical constraints all that brings.
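
Here is a minimal sketch of that idea, assuming a placeholder per-field model interface: the `create_anomaly_model` builder and its `compute` method are stand-ins, not actual NuPIC calls, and the thresholds are made-up numbers.

```python
# Minimal sketch of the "one model per field" approach.
# `create_anomaly_model(field)` stands in for whatever builds a single-field
# HTM/NuPIC anomaly model; it is a placeholder, not a real NuPIC function.

ANOMALY_THRESHOLD = 0.9   # per-model score above which a field counts as anomalous
MIN_FIELDS_FLAGGED = 3    # how many fields must fire together to flag a timestep

def detect_joint_anomalies(records, field_names, create_anomaly_model):
    """records: iterable of dicts mapping field name -> value, in time order."""
    models = {f: create_anomaly_model(f) for f in field_names}
    flagged_timesteps = []

    for t, record in enumerate(records):
        # Run each single-field model on its own field and collect anomaly scores.
        scores = {f: models[f].compute(record[f]) for f in field_names}

        # A timestep is interesting when several fields are anomalous at once.
        n_anomalous = sum(1 for s in scores.values() if s >= ANOMALY_THRESHOLD)
        if n_anomalous >= MIN_FIELDS_FLAGGED:
            flagged_timesteps.append((t, n_anomalous))

    return flagged_timesteps
```

Note that every field keeps its own model resident and learning continuously, which is where the constant memory cost mentioned above comes from.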

As with any preprocessing, I'd say it basically depends on how it affects the signal-to-noise ratio, which is specific to the technique. Ideally the load-lightening dimensionality reduction would also reduce noise without wiping out any core signal. The only way I know to find out is to test it, which calls for a way to evaluate the different anomaly detection setups against each other.
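
For reference, here is a rough sketch of the bottleneck the original question proposes, so there is something concrete to evaluate. The library choice (PyTorch), the layer sizes, and the latent width of 5 are all assumptions, not anything established in this thread.

```python
# Rough sketch of the proposed preprocessing: an autoencoder squeezes hundreds
# of fields down to a handful of latent values, which would then be encoded
# and fed to the HTM model. Sizes below are illustrative assumptions.

import torch
import torch.nn as nn

N_FIELDS = 300   # original number of variables
N_LATENT = 5     # bottleneck size, kept within HTM's "few fields" rule of thumb

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_FIELDS, 64), nn.ReLU(),
            nn.Linear(64, N_LATENT),
        )
        self.decoder = nn.Sequential(
            nn.Linear(N_LATENT, 64), nn.ReLU(),
            nn.Linear(64, N_FIELDS),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_autoencoder(data, epochs=50, lr=1e-3):
    """data: float tensor of shape (n_samples, N_FIELDS), ideally normalized."""
    model = Autoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), data)   # reconstruction error
        loss.backward()
        opt.step()
    return model

# At run time, model.encoder(new_row) yields N_LATENT values per timestep;
# each latent dimension would then get its own scalar encoder on the HTM side.
```

A side benefit of testing this way is that the reconstruction loss itself gives a crude measure of how much of the original signal the bottleneck discards, which speaks directly to the signal-to-noise question.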

Thank you for the answer. What is the reason it can't handle more than a few variables? Is it that the input array becomes too dense when using more than a few variables?

HTM technology is (mostly) based on the biology of the brain.
Consider what you are asking - to feed a large number of variables into this network.

Can you think of where the brain does this? That gives an example of the kind of processing involved in HTM.

Take the eye for example.

One layer of HTM processing may be good for working out the edges and movement of a well-formed shape, or for similar processing in a different sensory modality.

There is a large number of points to be processed, but the variables are somewhat constrained by the physics of the world. The end product takes many layers of processing to work out the shape and distance of the object, and maybe the color and texture. It is likely that matching the object up with other aspects, such as a name, does not happen until the processing reaches the temporal lobe, several processing steps later.

How does this type of multiple input processing match up with your problem?

Adding more variables increases the size of the encoding pretty much linearly, since the multi-encoder concatenates all the fields' encoders into one. This gives each SP column that much bigger a receptive field, that much more of the environment it is responsible for monitoring. I think this signal saturation could theoretically be mitigated by upping the number of columns in the SP so that the ratio of column count to encoding size stays constant, but I'm sure that could easily cause other problems around physical limits like memory usage.

So the problem isn't that the input becomes too dense, since the sparsity of the SP output is fixed: the SP will activate about 2% of its columns no matter how many fields make up the encoding. It is the representational weight on each column that goes up, and that is the concern (as I understand it).
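
To make the scaling concrete, here is a back-of-envelope sketch; the per-field encoder width and SP column count are illustrative numbers, not NuPIC defaults I'm asserting.

```python
# Back-of-envelope numbers for the scaling described above.
PER_FIELD_ENCODING_BITS = 400   # assumed width of one field's encoder output
SP_COLUMNS = 2048               # assumed spatial pooler size
SPARSITY = 0.02                 # SP activates ~2% of columns regardless of input size

for n_fields in (1, 5, 50, 300):
    # Multi-encoder-style concatenation: total width grows linearly with fields.
    encoding_bits = n_fields * PER_FIELD_ENCODING_BITS
    active_columns = int(SP_COLUMNS * SPARSITY)     # stays fixed (~40 here)
    bits_per_column = encoding_bits / SP_COLUMNS    # each column's share of the input grows
    print(f"{n_fields:4d} fields -> {encoding_bits:7d} input bits, "
          f"{active_columns} active columns, "
          f"{bits_per_column:.1f} input bits per column")
```

Scaling the column count up along with the number of fields would keep the input-bits-per-column ratio constant, which is the mitigation described above, at the cost of proportionally more memory and compute.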
