The more I read about deep learning, the more inclined I am to see DNNs as fancy encoders, or at most encoders plus some SP-like functionality.
Even the LSTM way of handling time seems patched together (I still don't fully grasp the interplay of the LSTM cell's internals).
It seems like a big over-complication compared to the pure joy of the TM (a variable-order sequence memory used to predict sequences).
On the other hand, the use of GPUs is a big bonus.
What are your thoughts on using DNN encoders as input for the TM? As the eyes and the ears of HTM.
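To make the idea concrete, here is a minimal sketch of one possible bridge: take a dense activation vector from a pretrained network's penultimate layer and binarize it by top-k thresholding into a sparse binary vector (SDR-like) that a TM could consume. Everything here is a hypothetical illustration, not an established HTM encoder: the SDR size, the ~2% sparsity, the tiling scheme, and the random stand-in for real CNN features are all assumptions.

```python
import numpy as np

def features_to_sdr(features, sdr_size=2048, sparsity=0.02):
    """Map a dense feature vector onto a fixed-size binary SDR by
    activating the top-k strongest units (hypothetical scheme)."""
    # Tile/trim the feature vector so it matches the SDR size.
    reps = int(np.ceil(sdr_size / features.size))
    tiled = np.tile(features, reps)[:sdr_size]
    k = max(1, int(sdr_size * sparsity))   # number of active bits
    sdr = np.zeros(sdr_size, dtype=np.uint8)
    sdr[np.argsort(tiled)[-k:]] = 1        # top-k activations -> 1
    return sdr

# Stand-in for a CNN penultimate-layer activation vector; in practice
# this would come from a real pretrained model.
rng = np.random.default_rng(0)
features = rng.random(512)
sdr = features_to_sdr(features)
print(sdr.sum(), sdr.size)  # ~2% of 2048 bits active
```

The appeal of top-k binarization is that similar inputs tend to activate overlapping feature sets, so the resulting SDRs overlap too, which is the property TM-style sequence learning relies on. Whether real CNN features preserve enough of that semantic overlap is exactly the open question being asked here.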