HTM and Deep Learning

Hi all, we know that HTM is:

  • Real-time
  • Continuous learning: the model learns online as the data arrives (see the loop sketch at the end of this post)
  • Unsupervised: no one is going to label the data. NEVER.
  • Not CPU “greedy”
  • No hyper-parameter tuning or other human intervention
  • Robust to noise

But what I want to know is: is HTM faster than an LSTM in training and prediction? And how does it compare in anomaly-detection precision? Is there a paper or anything on this?
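
To make the “continuous learning” point concrete, here is roughly what an online HTM anomaly-detection loop looks like. This is a minimal sketch against the htm.core community fork, written from memory, so treat the exact class and parameter names as assumptions and check them against your installed version.

```python
# Online anomaly detection with HTM: one pass over the stream, learning
# on every step, no epochs and no train/test split.
# API names per the htm.core community fork; verify against your version.
import numpy as np
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler, TemporalMemory
from htm.encoders.rdse import RDSE, RDSE_Parameters

# Encode each scalar reading as a sparse binary vector (SDR).
params = RDSE_Parameters()
params.size = 1000
params.sparsity = 0.02
params.resolution = 0.1
encoder = RDSE(params)

sp = SpatialPooler(inputDimensions=[params.size],
                   columnDimensions=[2048],
                   globalInhibition=True)
tm = TemporalMemory(columnDimensions=[2048], cellsPerColumn=16)

active_columns = SDR(sp.getColumnDimensions())
for value in np.sin(np.linspace(0, 20 * np.pi, 5000)):  # toy data stream
    sp.compute(encoder.encode(value), True, active_columns)  # learn = True
    tm.compute(active_columns, learn=True)  # learns online, as data arrives
    score = tm.anomaly  # 0.0 = fully predicted, 1.0 = completely novel
```

Note there is no separate training phase: the model predicts and learns at the same time, which is what the “real-time” and “continuous learning” bullets above mean in practice.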


Hi @way-sal,

Here’s a paper from a couple of years ago: https://www.researchgate.net/publication/309778443_A_comparative_study_of_HTM_and_other_neural_network_models_for_online_sequence_learning_with_streaming_data



Hi @way-sal!

One thing I like about HTM is one-shot learning. One-shot learning is becoming popular in DL as well, through various incarnations of “memory-augmented neural networks” (MANNs). However, if you want to do real-time server-load prediction or traffic-volume prediction, HTM seems like one of the most efficient frameworks out there, both in terms of data required and in CPU/memory usage.

That aside, I personally find it a bit difficult to compare HTM with DL (like deep encoder-decoder models), for two reasons. First of all, HTM requires that data be in SDR format, meaning semantically encoded binary vectors. I work with NLP quite a bit, and what works well is creating word (or n-gram) embeddings using traditional neural-network-based language modelling, then engineering methods to convert these embeddings to SDRs (a sketch of one such conversion follows this post). So you have to account for the time/memory of this extra conversion step, and IMHO you’re breaking away from a “pure” HTM system anyway.

Then the second factor comes into play: the system works very well with known/seen word or phrase sequences, but doesn’t do well when trying to generalise. So while I could use an encoder-decoder network, e.g. LSTMs with attention modules, to build an excellent text summarizer that gives me high-quality one-sentence summaries of a short paragraph of text, I simply can’t achieve that kind of quality using HTM (yet!!!). So it’s a bit difficult to compare efficiency/speed :-(.
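
As an illustration of the conversion step mentioned above, here is one simple embedding-to-SDR scheme: a fixed random projection followed by top-k binarisation. This is my own minimal sketch of the general idea, not the specific encoder used in the post; `n_bits` and `sparsity` are arbitrary illustrative values.

```python
import numpy as np

def embedding_to_sdr(embedding, n_bits=2048, sparsity=0.02):
    """Dense embedding -> binary SDR via fixed random projection + top-k."""
    rng = np.random.default_rng(42)    # fixed seed: the projection must be
    proj = rng.standard_normal((n_bits, embedding.size))  # the same per call
    dense = proj @ embedding           # project up to the SDR dimensionality
    k = max(1, int(n_bits * sparsity)) # number of active bits, e.g. ~2%
    sdr = np.zeros(n_bits, dtype=np.uint8)
    sdr[np.argsort(dense)[-k:]] = 1    # keep only the k strongest components
    return sdr

vec = np.random.randn(300)             # stand-in for a word2vec embedding
print(embedding_to_sdr(vec).sum())     # -> 40 active bits out of 2048
```

Because the projection is fixed, nearby embeddings tend to land on overlapping bits, which is the semantic-overlap property SDRs need.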

Hope this helps!


In my own experience, HTM is:

  1. Around as good as a vanilla RNN at predicting known sequences
  2. Way faster than an equal-size LSTM
  3. Very bad at generalisation (predicting patterns that are not in the training data)
  4. Great at anomaly detection compared to AutoEncoders and Recurrent AutoEncoders (see the score sketch just below)
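
For reference, the anomaly-detection comparison in point 4 is built on HTM’s raw anomaly score: the fraction of currently active columns that were not predicted from the previous timestep. Here is a minimal sketch of that standard formula (the function name is my own):

```python
import numpy as np

def htm_anomaly_score(active_cols, predicted_cols):
    """Raw HTM anomaly score.

    Both arguments are 1-D arrays of column indices. Returns 0.0 when every
    active column was predicted, 1.0 when none of them were.
    """
    if len(active_cols) == 0:
        return 0.0
    hits = np.intersect1d(active_cols, predicted_cols).size
    return 1.0 - hits / len(active_cols)

# Two of the four active columns were predicted -> anomaly score 0.5
print(htm_anomaly_score(np.array([3, 7, 9, 12]), np.array([7, 9])))
```

In practice NuPIC then smooths this raw score into an anomaly likelihood before thresholding, which is what keeps it usable on noisy streams.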

And some of my speculations:

  1. The prediction power/memory grows logarithmically: since connections are generated randomly, it gets harder and harder to connect to the right bits
  2. Around 5~16 steps of memory capacity in NLP tasks (better than a vanilla RNN, but way worse than an LSTM)

I developed a toy NLP program in HTM not long ago. You can see for yourself how well it works.


A lot of my PhD work is around these comparisons; I even got a paper published:

You can get the paper here: 08424074.pdf (Dropbox link)
