Hello
I have been learning about reinforcement learning and started to wonder whether Hierarchical Temporal Memory (HTM) ideas could fit into this field. Most RL projects today use deep neural networks, while HTM focuses on learning patterns and sequences. I think it could help with predicting states, but I am not sure how well it works in reward-driven systems.
Has anyone here tried using HTM together with RL? For example, using HTM to process inputs before sending them to an RL model, or to spot unusual patterns that could change how rewards are given.
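To make the second idea concrete, here is a minimal sketch of what "anomaly-driven reward shaping" could look like. Note the assumptions: the `SimpleAnomalyDetector` below is a hypothetical stand-in (a transition-frequency counter), not a real HTM temporal memory, and the environment is a toy 5-state chain I made up for illustration; in a real experiment you would swap in an actual HTM anomaly score (e.g. from a library like htm.core).

```python
import random

class SimpleAnomalyDetector:
    """Hypothetical stand-in for an HTM temporal-memory anomaly score.
    Counts how often each (previous_state, state) transition has been seen
    and returns a score near 1.0 for rarely seen transitions."""
    def __init__(self):
        self.counts = {}
        self.prev = None

    def score(self, state):
        key = (self.prev, state)
        self.counts[key] = self.counts.get(key, 0) + 1
        self.prev = state
        # 1.0 the first time a transition appears, decaying with familiarity.
        return 1.0 / self.counts[key]

def shaped_reward(env_reward, anomaly, bonus=0.1):
    # Hypothetical shaping rule: a small exploration bonus for surprising states.
    return env_reward + bonus * anomaly

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy 5-state chain; reaching state 4 pays +1."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}  # action 0 = left, 1 = right
    detector = SimpleAnomalyDetector()
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: q[(s, act)])
            s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == 4 else 0.0
            # The anomaly score modulates the reward before the Q-update.
            r = shaped_reward(r, detector.score(s2))
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if s == 4:
                break
    return q

q = train()
```

After training, moving right (toward the goal) should score higher than moving left in the early states, e.g. `q[(0, 1)] > q[(0, 0)]`. Whether HTM's anomaly score behaves well in this role is exactly what I am asking about.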
I would like to hear whether people have tested this in real projects and what the results were.
I checked the HTM School resources, but they do not cover RL in much depth. If anyone has guides, examples, or experiments that connect HTM and RL, that would really help.
Thank you!