Role of Rewards in Intelligent Systems

Is there a place for rewards in HTM Theory?

I know that the reward system comes from the ‘old’ brain and HTM mostly focuses on the neocortex, but my question is whether an intelligent system can function without rewards. For example, we as humans are intelligent partly because we are able to predict the future (or the next state, to use a reinforcement learning term). So we are able to sit still, not performing any rewarding action, and still predict what is going to happen through mental processes alone. But sometimes even these mental processes or predictions are hijacked by the reward system. For example, whether at this moment you would rather think of a beautiful girl, delicious food, or a math problem depends on where your brain currently assigns more reward.

So my question is: can an intelligent system be built without rewards? And if not, how can a self-reward system be implemented most effectively, for example within the HTM framework?

I am asking because there seems to be evidence that a goal-oriented system works only if it has a source of rewards.
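To make that last point concrete, here is a minimal sketch (plain Python, not HTM code; all names and parameters are my own illustrative choices) of tabular Q-learning in a toy corridor world. When the goal state pays a reward, a goal-directed policy emerges from the learned values; when every reward is zero, the value function stays flat and no preferred direction ever appears.

```python
import random

def train(reward_at_goal, n_states=6, episodes=300, max_steps=200,
          alpha=0.5, gamma=0.9):
    """Tabular Q-learning on a 1-D corridor (illustrative toy, not HTM).

    States 0..n_states-1; the goal is the rightmost state.
    Actions: 0 = step left (bounded at 0), 1 = step right.
    The behavior policy is purely random exploration; Q-learning is
    off-policy, so it still learns the greedy (goal-seeking) values.
    """
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            a = random.randrange(2)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = reward_at_goal if s2 == n_states - 1 else 0.0
            # Standard Q-learning update toward the bootstrapped target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == n_states - 1:
                break
    return Q

random.seed(0)
q_with_reward = train(reward_at_goal=1.0)
q_no_reward = train(reward_at_goal=0.0)

# With reward, every non-goal state ends up preferring "right" (toward the goal).
goal_directed = all(q[1] > q[0] for q in q_with_reward[:-1])
# With zero reward everywhere, the values stay flat: no goal-directed policy emerges.
flat = all(q == [0.0, 0.0] for q in q_no_reward)
print(goal_directed, flat)
```

Of course this only shows that a *reward-driven* learner needs a reward signal; whether prediction-based systems like HTM can produce goal-directed behavior some other way is exactly the open question.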


Welcome to the forum.

If you have not seen this welcome post, check it out:

If you look at the diagram I posted earlier, the reward parts are in the section colored orange.

This matches pretty well with what you are saying.
I detail how that works with a non-reward-based system in the later posts in that thread.
Let me know if you have any questions.


This is a more technically pitched paper on how the lower brain systems tame the cortex in ways that you may think of as reinforcement learning:

Prefrontal cortex as a meta-reinforcement learning system

Jane X. Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer,
Joel Z. Leibo, Demis Hassabis, & Matthew Botvinick


Thank you for sharing this.