I read the paper “How Can We Be So Dense? The Benefits of Using Highly Sparse Representations”, but I still don’t understand the difference between traditional implementations like nupic.core / htm.core and machine learning implementations like nupic.torch / nupic.tensorflow. As I understand it, nupic.torch and nupic.tensorflow are extensions of the HTM Spatial Pooler. In the application described in the paper, does nupic.torch use the neural network itself to achieve functions similar to Temporal Memory? And beyond the paper’s application, what are the specific differences between the two approaches? For an application like hotgym, which method is more applicable?
What do you mean with traditional implementation?
As far as I know, neural networks analyse time-dependent data as a series of snapshots over fixed periods, using the same techniques as they would for still images.
Maybe I didn’t make it clear: I mean the differences between traditional implementations like nupic.core / htm.core and machine learning implementations like nupic.torch / nupic.tensorflow.
Thanks for the question. You’re correct - the nupic.torch version is an extension of the Spatial Pooler to deep learning, and allows spatial pooling concepts to be applied to deep convolutional networks and large benchmarks.
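To make that concrete, the core mechanism those extensions carry over from the Spatial Pooler is enforcing a fixed, small number of active units per layer (a k-winner-take-all activation). Here is a minimal NumPy sketch of that idea; the function name `k_winners` and this implementation are illustrative only, not the actual nupic.torch API (which also adds boosting and a straight-through gradient for training):

```python
import numpy as np

def k_winners(x, k):
    """Keep only the k largest activations in each row; zero the rest.

    This mimics the fixed-sparsity constraint the Spatial Pooler
    enforces, applied to a dense layer's activations.
    Pure-NumPy sketch, not the real nupic.torch KWinners layer.
    """
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    # Indices of the top-k entries in each row (batch dimension first).
    top = np.argpartition(x, -k, axis=1)[:, -k:]
    rows = np.arange(x.shape[0])[:, None]
    out[rows, top] = x[rows, top]
    return out

# Example: a batch of two activation vectors, keeping 2 winners each.
acts = np.array([[0.1, 0.9, 0.3, 0.7],
                 [0.5, 0.2, 0.8, 0.4]])
sparse = k_winners(acts, k=2)
# Each row of `sparse` keeps only its two largest activations.
```

Because the sparsity level is fixed rather than learned, every layer produces representations with a constant, low fraction of active units, which is what lets the paper compare noise robustness directly against dense ReLU networks.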
There’s no version of the HTM Temporal Memory in nupic.torch. We did experiment with TM-like extensions last year in this paper:
Gordon, J., Rawlinson, D., & Ahmad, S. (2019). Long Distance Relationships without Time Travel: Boosting the Performance of a Sparse Predictive Autoencoder in Sequence Modeling. Retrieved from http://arxiv.org/abs/1912.01116
It’s pretty early work - there’s a lot left to do. One of the areas I’d really like to explore next is incorporating active dendrites into these models, similar to the Temporal Memory.