Continuous learning: the model should learn online as the data arrives
Unsupervised: no one is going to label the data. NEVER.
Not CPU “greedy”
No hyper-parameter tuning or other human intervention
Noise robustness
But I want to know: is HTM faster than LSTM in training and prediction? And how do they compare in anomaly detection precision?
Is there any paper or other reference on this?
One thing I like about HTM is one-shot learning. One-shot learning is becoming popular in DL as well, through various incarnations of “memory-augmented neural networks” (MANNs). However, if you want to do real-time server load prediction or traffic volume prediction, then HTM seems like one of the most efficient frameworks out there. Efficient both in terms of data required and in CPU/memory usage.
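If it helps to see the shape of that workflow, here's a minimal sketch of online anomaly scoring on a load stream, assuming the community htm.core bindings (class names and constructor arguments may differ between versions, and `stream_of_load_readings` is a made-up placeholder for your data source):

```python
# Minimal sketch: online anomaly scoring on a scalar stream with htm.core.
# Treat this as illustrative -- exact signatures vary across versions.
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler, TemporalMemory
from htm.encoders.rdse import RDSE, RDSE_Parameters

# Encode each scalar load reading into a sparse binary vector (SDR).
enc_params = RDSE_Parameters()
enc_params.size = 1000
enc_params.sparsity = 0.02
enc_params.resolution = 0.1      # granularity of the scalar encoding
encoder = RDSE(enc_params)

sp = SpatialPooler(
    inputDimensions=[enc_params.size],
    columnDimensions=[2048],
    globalInhibition=True,
    localAreaDensity=0.02,
)
tm = TemporalMemory(
    columnDimensions=[2048],
    cellsPerColumn=32,
)

active_columns = SDR(sp.getColumnDimensions())
for load in stream_of_load_readings():       # hypothetical data source
    sp.compute(encoder.encode(load), True, active_columns)  # learn online
    tm.compute(active_columns, learn=True)   # sequence memory, no labels
    print(f"load={load:.2f}  anomaly={tm.anomaly:.3f}")
```

Note that this hits every bullet above: it learns continuously, needs no labels, and there's no separate training phase at all.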
That aside, I personally find it a bit difficult to compare HTM with DL (e.g. deep encoder-decoder models), for two reasons.

First, HTM requires that data be in SDR format, meaning semantically encoded sparse binary vectors. I work with NLP quite a bit, and what works well for me is creating word (or n-gram) embeddings using traditional neural-network language modelling, then engineering a method to convert those embeddings to SDRs. So you have to account for the time/memory of this extra conversion step, plus IMHO you're breaking away from a “pure” HTM system anyway.

Second, the system works very well with known/seen word or phrase sequences, but doesn't do well when trying to generalise. So while I could use an encoder-decoder network, e.g. LSTMs with attention modules, to create an excellent text summarizer that gives me high-quality one-sentence summaries of a short paragraph of text, I simply can't achieve that kind of quality using HTM (yet!!!). So it's a bit difficult to compare efficiency/speed :-(.
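For what it's worth, the embedding-to-SDR conversion I mentioned can be as simple as a random projection followed by top-k binarisation. This is only an illustrative sketch, not my exact method; the dimensions, sparsity, and function name are all made up for the example:

```python
# One simple (hypothetical) way to turn dense embeddings into SDRs:
# randomly project into a higher-dimensional space, then keep the top-k
# components as ON bits, so overlap between codes roughly tracks
# similarity between the original embeddings.
import numpy as np

def embedding_to_sdr(vec: np.ndarray, n: int = 2048, sparsity: float = 0.02) -> np.ndarray:
    """Project a dense embedding into n dims and binarise the top active bits."""
    rng = np.random.default_rng(42)             # fixed seed = same projection every call
    proj = rng.standard_normal((n, vec.shape[0]))
    scores = proj @ vec                          # random projection to n dims
    k = int(n * sparsity)                        # number of ON bits (~2%)
    sdr = np.zeros(n, dtype=np.uint8)
    sdr[np.argpartition(scores, -k)[-k:]] = 1    # top-k winner-take-all
    return sdr

# Example: two nearby embeddings share most of their ON bits.
v = np.random.randn(300)
a = embedding_to_sdr(v)
b = embedding_to_sdr(v + 0.05 * np.random.randn(300))
print((a & b).sum(), "bits overlap out of", a.sum())
```

Even with something this cheap you pay an extra projection per token, which is part of why I hesitate to make head-to-head speed claims against an end-to-end LSTM pipeline.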