DNN Encoders?

The more I read about deep learning, the more inclined I am to see DNNs as fancy Encoders, or at most an Encoder plus some SP-like functionality.

Even the LSTM way of handling time seems patched together (I still don’t have a good grasp of the interplay between the internals of the LSTM cell).
It all seems like a big over-complication compared to the pure joy of the TM (a first-order Markov chain used to predict variable sequences).

On the other hand the use of GPU is a big bonus.

What are your thoughts on using DNN encoders as input for the TM? As the eyes and the ears of HTM.


I think it’s a great idea.

At the Manhattan hackathon, Frank Carey started a hack that used DL to do feature extraction on images, creating SDRs for processing by HTMs.

This is a really interesting idea: you can imagine processing video frame by frame, using DL to extract features and passing the resulting SDRs into an HTM to predict which features might come next.
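One simple way to bridge the two worlds is a k-winners step: take a dense activation vector from some DL layer and keep only the top-k units as on-bits, which gives a fixed-sparsity binary vector an HTM can consume. A minimal sketch (the feature vector here is random, standing in for a real CNN layer output; `features_to_sdr` and the 2% sparsity are my assumptions, not an established API):

```python
import numpy as np

def features_to_sdr(features, sparsity=0.02):
    """Convert a dense feature vector (e.g. a DL layer's activations)
    into a binary SDR by keeping only the top-k most active units."""
    n = features.size
    k = max(1, int(round(n * sparsity)))
    sdr = np.zeros(n, dtype=np.uint8)
    sdr[np.argsort(features)[-k:]] = 1  # top-k activations become the on-bits
    return sdr

# Hypothetical 1024-dim feature vector standing in for a CNN layer output.
rng = np.random.default_rng(0)
features = rng.standard_normal(1024)
sdr = features_to_sdr(features)
print(int(sdr.sum()))  # 20 on-bits, i.e. ~2% of 1024
```

The fixed sparsity is the point: HTM algorithms expect roughly constant numbers of active bits, which raw DL activations don’t provide on their own.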


Yes, sort of. I recently started doing essentially this in the project I am working on. I am using an autoencoder to perform dimensionality reduction and feature extraction to create the input to the HTM SP algorithm, without replacing the SP entirely. I thought for some time about how to create an SDR from the (already binary) input data my application receives, but it was too complex, and I realized an autoencoder would be a good fit.
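For anyone curious what that looks like in the small, here is a toy sketch (not the poster’s actual pipeline; the data, sizes, and threshold are all assumptions): a one-hidden-layer autoencoder that compresses 64-bit binary inputs to 16 units, after which the bottleneck activations are binarized to produce the bits the SP would see.

```python
import numpy as np

rng = np.random.default_rng(1)
X = (rng.random((200, 64)) < 0.1).astype(float)  # toy sparse binary inputs

n_in, n_hid = 64, 16
W1 = rng.standard_normal((n_in, n_hid)) * 0.1    # encoder weights
W2 = rng.standard_normal((n_hid, n_in)) * 0.1    # decoder weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):                     # plain batch gradient descent
    H = sigmoid(X @ W1)                  # encode to the bottleneck
    Y = sigmoid(H @ W2)                  # decode (reconstruction)
    dY = (Y - X) * Y * (1 - Y)           # squared-error gradient at the output
    dH = (dY @ W2.T) * H * (1 - H)       # backpropagate to the hidden layer
    W2 -= lr * H.T @ dY / len(X)
    W1 -= lr * X.T @ dH / len(X)

# Binarize the bottleneck: active units become the input bits for the SP.
codes = (sigmoid(X @ W1) > 0.5).astype(np.uint8)
print(codes.shape)  # (200, 16)
```

A real setup would use a proper DL framework and tune the code size and threshold, but the shape of the idea is the same: the autoencoder does the dimensionality reduction, and a binarization step turns its codes into something SP-compatible.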

How’d this project go?