HTM vs GRU playing Rock, Paper, Scissors (HTM outperforming by 7%)


I was inspired by this post by Matheus_Araujo on playing rock, paper, scissors against HTM. I thought to myself: why not let HTM play against an RNN, or even an LSTM network? And so it became a side project of mine.

Framing the problem

The idea is to have two agents, one implemented as an RNN and one as an HTM, and let them predict what each other will play next, then act accordingly.
Ideally (if both agents are learning and predicting the opponent's pattern efficiently), neither agent should have a better chance of winning or losing than random guessing, since each is predicting the other's next move and updating its predictions continually.
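The game logic itself is tiny. Here is a minimal sketch of the payoff rules I mean (the names `counterOf` and `outcome` are mine, not from the actual project code):

```cpp
// Rock-paper-scissors payoff logic: each agent predicts the opponent's
// next move and plays the move that beats that prediction.

// The three moves, encoded so that (m + 1) % 3 is the move that beats m.
enum Move { Rock = 0, Paper = 1, Scissors = 2 };

// counterOf(m) is the move that beats m: Paper beats Rock,
// Scissors beats Paper, Rock beats Scissors.
Move counterOf(Move m) { return static_cast<Move>((m + 1) % 3); }

// Returns +1 if a beats b, -1 if b beats a, 0 on a draw.
int outcome(Move a, Move b) {
    if (a == b) return 0;
    return counterOf(b) == a ? 1 : -1;
}
```

Each round, an agent feeds the opponent's last move into its model, takes the model's prediction of the opponent's next move, and plays `counterOf` of that prediction.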

The agents

Both agents are trivial to implement. For the RNN agent, just connect an RNN/LSTM to a FC layer, train it every N steps, and ask it to predict the next move the HTM agent will make. The HTM agent is even more trivial: encode what the RNN agent has done, send it to a TemporalMemory layer, and use the predictions it makes.
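For the RNN side, the glue between the game and the network is just a one-hot encoding of the opponent's last move and an argmax over the predicted distribution. A minimal sketch of that glue (my own helper names, not the project's actual tiny-dnn code):

```cpp
#include <array>
#include <cstddef>

// One-hot encode a move (0 = rock, 1 = paper, 2 = scissors)
// as the input vector for the RNN/LSTM.
std::array<float, 3> oneHot(int move) {
    std::array<float, 3> v{0.f, 0.f, 0.f};
    v[move] = 1.f;
    return v;
}

// Given the network's predicted probabilities over the opponent's
// next move, play the counter of the most likely prediction
// (moves are encoded so that (m + 1) % 3 beats m).
int counterMove(const std::array<float, 3>& probs) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < 3; ++i)
        if (probs[i] > probs[best]) best = i;
    return static_cast<int>((best + 1) % 3);
}
```

The HTM side is analogous: the opponent's move is encoded into an SDR, fed through a TemporalMemory, and the predicted cells are decoded back into a move.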

The results

So that's the setup. I wrote the game in C++ with tiny-dnn and NuPIC.core. After getting both agents working, tuning their hyperparameters, and letting the two algorithms play against each other 100K times, HTM always ended up winning slightly more, beyond the margin of error.


RNN wins: 32375 times (32.375%)
HTM wins: 37755 times (37.755%)
Draws: 29870 times (29.870%)
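As a sanity check on the "beyond the margin of error" claim, a normal approximation to the binomial gives the 95% margin of error for a win rate estimated over n games (`marginOfError95` is my own helper, not from the project):

```cpp
#include <cmath>

// Approximate 95% margin of error for a proportion p estimated
// from n independent trials: 1.96 * sqrt(p * (1 - p) / n).
double marginOfError95(double p, double n) {
    return 1.96 * std::sqrt(p * (1.0 - p) / n);
}
```

With p = 0.37755 and n = 100000 this comes out to roughly ±0.003, so the 5.4-percentage-point gap between the two agents is far outside it.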

I have also tried tuning the hyperparameters further. There are cases where the RNN can beat HTM, but I always seem to find a way to make HTM win again.

Anyway, that's what I have discovered so far. Here is the source code.
It is very messy for now, but I'm planning to write a blog post about this project and (possibly) make a GUI for it. I'll clean it up in the future.

Screenshot:

