An open-source community research project on comparing HTM-RL to conventional RL

As it stands, the state of the art in the machine learning community is to use a deep NN to predict the next state in real time.
The deep NN can be paired with a video system: run the video through the
deep NN to predict the next state, or the next frame in video terms. This can be as good as predictive thought.
For example, a bot with the NN could be going along just fine, like going around a circular race track. Then it gets an anti-reward: the battery is very low. So it goes up
into video memory and looks for an off-ramp or the finish line.
The deep NN can also be trained on video while the bot sleeps.
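To make the next-frame idea concrete, here is a minimal sketch. It stands in for the deep NN with a single linear model y = a*x + b fitted per pixel across consecutive frames (the linear model, the flattened-list frame representation, and the synthetic fading-light video are all assumptions for illustration, not the actual setup described above):

```python
# Minimal next-frame-prediction sketch. A deep NN would learn a far
# richer mapping; here a closed-form linear fit plays the same role:
# learn how frame t maps to frame t+1, then predict the frame after
# the last one seen. Frames are flattened lists of pixel intensities
# (a hypothetical representation chosen for simplicity).

def fit_linear(xs, ys):
    """Closed-form least squares for y = a*x + b over paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def next_frame_predictor(frames):
    """Fit on consecutive frame pairs; return a predictor and the fit."""
    xs = [p for f in frames[:-1] for p in f]   # pixels at time t
    ys = [p for f in frames[1:] for p in f]    # same pixels at time t+1
    a, b = fit_linear(xs, ys)
    return (lambda frame: [a * p + b for p in frame]), (a, b)

# Synthetic "video": every pixel decays toward 1.0 (e.g. a fading light),
# so the true dynamics are next = 0.9 * current + 0.1.
frames = [[0.0, 0.5, 1.0]]
for _ in range(5):
    frames.append([0.9 * p + 0.1 for p in frames[-1]])

predict, (a, b) = next_frame_predictor(frames)
predicted = predict(frames[-1])   # the model's guess at the unseen frame
```

Because the synthetic dynamics are exactly linear, the fit recovers a ≈ 0.9 and b ≈ 0.1; a real video model would need a deep network precisely because frame-to-frame dynamics are not this simple.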

HTM would have SDR bit activation on features within the video, like the steering wheel,
road lines, light posts, etc., which the deep NN is doing too. But what HTM is not
doing is taking information from one frame, or more, and mixing them together in a way
that builds the next frame. HTM just uses hard memory saves.
HTM's SDR activation bits would be best used in the early layers of an unsupervised
deep neural network.
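A rough sketch of what SDR bit activation on video features could look like: each feature maps to a small set of active bits in a large, mostly-zero vector, and similarity between frames is the count of shared bits. The bit assignments and sizes here are hypothetical, just to make the idea concrete:

```python
import random

SDR_SIZE = 256  # total bits; only a small fraction are ever active

def encode_feature(name, active_bits=8, size=SDR_SIZE):
    """Deterministically map a feature name to a sparse set of bit indices."""
    rng = random.Random(name)          # seed by name for repeatability
    return frozenset(rng.sample(range(size), active_bits))

def frame_sdr(features):
    """Union the bits of all features detected in one frame."""
    bits = set()
    for f in features:
        bits |= encode_feature(f)
    return bits

def overlap(sdr_a, sdr_b):
    """SDR similarity = number of shared active bits."""
    return len(sdr_a & sdr_b)

frame1 = frame_sdr(["steering_wheel", "road_lines", "light_post"])
frame2 = frame_sdr(["steering_wheel", "road_lines"])
```

Frames sharing features share bits, so overlap gives a cheap similarity measure; this is the representation that could feed the early layers of an unsupervised deep network, as suggested above.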

The deep NN is acting like a sliding-window algorithm. But instead of sliding along the
data, the deep NN is anchored and the data is slid through it.
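The anchored-network view can be sketched like this: the model is a fixed function over the last w samples, and the stream moves past it one sample at a time. A moving average stands in for the deep NN (an assumption for illustration):

```python
from collections import deque

def stream_through(model, data, window=3):
    """Feed the stream past the anchored model one sample at a time."""
    buf = deque(maxlen=window)         # the window stays put; data flows in
    outputs = []
    for sample in data:
        buf.append(sample)             # oldest sample falls out automatically
        if len(buf) == window:         # model fires once the window is full
            outputs.append(model(list(buf)))
    return outputs

mean = lambda xs: sum(xs) / len(xs)    # stand-in for the deep NN
outs = stream_through(mean, [1, 2, 3, 4, 5], window=3)
# windows seen: [1,2,3], [2,3,4], [3,4,5] -> outputs [2.0, 3.0, 4.0]
```

The model never moves; `deque(maxlen=window)` is what makes the data slide through it.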