How can HTM be used in Reinforcement Learning projects?

Hello

I have been learning about reinforcement learning and started to wonder whether Hierarchical Temporal Memory (HTM) ideas could fit into this field. :slightly_smiling_face: Most RL projects today use deep neural networks, while HTM focuses on learning patterns and sequences. I think it could help with predicting states, but I am not sure how well it works with reward-driven systems.

Has anyone here tried using HTM together with RL? For example, HTM could preprocess observations before they are fed to an RL model, or it could spot unusual patterns that change how rewards are given (a rough sketch of that second idea is below). I would like to hear whether people have tested this in real projects and what the results were like.
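To make that second idea concrete, here is a minimal sketch of how I imagine the pieces could be wired together, assuming the community htm.core package. This is not from any existing project: the parameter values, the `beta` weight, and the helper names `htm_step` / `shaped_reward` are all illustrative placeholders. The idea is that the Temporal Memory's anomaly score (how surprising the current observation is) gets added to the environment reward as a curiosity-style bonus.

```python
# Sketch: HTM as an observation model whose anomaly score shapes the RL reward.
# Assumes the community htm.core package; all parameter values are illustrative.

from htm.bindings.sdr import SDR
from htm.encoders.rdse import RDSE, RDSE_Parameters
from htm.bindings.algorithms import SpatialPooler, TemporalMemory

# Encode a scalar observation into a sparse distributed representation (SDR).
params = RDSE_Parameters()
params.size = 1000
params.sparsity = 0.02
params.resolution = 0.1
encoder = RDSE(params)

sp = SpatialPooler(
    inputDimensions=[encoder.size],
    columnDimensions=[1024],
    potentialRadius=encoder.size,
    globalInhibition=True,
)
tm = TemporalMemory(columnDimensions=[1024], cellsPerColumn=8)

def htm_step(observation: float, learn: bool = True) -> float:
    """Run one observation through encoder -> SP -> TM and return the anomaly score."""
    active_columns = SDR(sp.getColumnDimensions())
    sp.compute(encoder.encode(observation), learn, active_columns)
    tm.compute(active_columns, learn=learn)
    return tm.anomaly  # 0.0 = fully predicted, 1.0 = completely unexpected

# Inside an ordinary RL loop, the anomaly score could act as a curiosity bonus
# (reward shaping); `beta` is a made-up weight, not something from HTM theory.
beta = 0.1
def shaped_reward(env_reward: float, observation: float) -> float:
    return env_reward + beta * htm_step(observation)
```

The RL agent itself would stay completely conventional; the HTM stack only watches the observation stream, so whether this actually helps learning is exactly the kind of result I am hoping someone has measured.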

I checked the HTM School resources, but they don't explain much about RL. While studying this, I also read about what a Prompt Engineer is, which shows how new roles are appearing in machine learning. If anyone has guides, examples, or experiments that connect HTM and RL, that would really help. :innocent:

Thank you! :slightly_smiling_face:


Hello,

Here is a project from many years ago. I hope this helps :slight_smile:
