Is there a place for rewards in HTM Theory?
I know that the reward system comes from the ‘old’ brain and HTM mostly focuses on the neocortex, but my question is whether an intelligent system can function without rewards at all. For example, we as humans are intelligent partly because we are able to predict the future (or the next state, to use the reinforcement learning term). So we are able to sit still, not performing any rewarding action, and still predict what is going to happen through purely mental processes. But sometimes even these mental processes or predictions get hijacked by the reward system. For example, whether at this moment you would rather think of a beautiful girl, delicious food, or a math problem depends on where your brain currently assigns more reward.
So my question is: can an intelligent system be built without rewards? And if not, how could a self-reward system be implemented most effectively, for example within the HTM framework?
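To make the "attention hijacked by reward" idea above concrete, here is a minimal, hypothetical sketch (this is not part of HTM or any Numenta code; the thought labels, reward values, and softmax selection are my own illustrative assumptions) of how a scalar reward estimate could bias which of several competing internal predictions a system attends to:

```python
import math
import random

def softmax(values, temperature=1.0):
    """Convert raw reward estimates into selection probabilities."""
    exps = [math.exp(v / temperature) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical competing "thoughts", each with a currently assigned reward estimate.
thoughts = {"beautiful girl": 2.0, "delicious food": 1.5, "math problem": 0.5}

probs = softmax(list(thoughts.values()))

# The thought with the highest assigned reward is the most likely to win attention,
# but lower-reward thoughts still occasionally surface.
chosen = random.choices(list(thoughts.keys()), weights=probs, k=1)[0]
print(chosen)
```

The point of the sketch is only that a reward signal need not dictate behavior directly; it can simply reweight which prediction stream gets processed, which matches the introspective example above.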
I am asking because there seems to be evidence that a goal-oriented system works only if it has a source of rewards.