This is my second attempt at having HTM play rock paper scissors against a neural network (link to my last attempt). This time I'm upping the game from tiny-dnn vs NuPIC.core to PyTorch vs Etaler.
As in my last attempt, the project pits a neural network against HTM in rock paper scissors to see who wins. If both algorithms are good enough at predicting their opponent's moves, both networks should fall into a Nash equilibrium and end up with the same winning rate. (Note: this does not mean both networks will generate random predictions, so the draw rate may or may not be 33%.)
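To make the setup concrete, here is a hypothetical helper (not from my actual code) showing the scoring rule the rest of the post assumes, with moves encoded as 0 = rock, 1 = paper, 2 = scissors:

```python
def round_result(move_a, move_b):
    """Return 'draw', 'a', or 'b' depending on who wins the round.

    Moves are encoded 0=rock, 1=paper, 2=scissors. Each move beats
    the move one step below it (mod 3): paper>rock, scissors>paper,
    rock>scissors.
    """
    if move_a == move_b:
        return "draw"
    return "a" if (move_a - move_b) % 3 == 1 else "b"

print(round_result(1, 0))  # paper beats rock -> 'a'
print(round_result(0, 2))  # rock beats scissors -> 'a'
```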
In this experiment, the neural network is a simple LSTM followed by a fully connected layer, trained with BCELoss and SGD. The HTM side consists of a grid cell encoder, a Temporal Memory, and a CLAClassifier to decode the prediction.
The NN Model
self.rnn1 = nn.LSTM(3, self.hidden_size, self.num_lstm_stack)
self.fc1 = nn.Linear(self.hidden_size, 3)
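For context, the two lines above can be fleshed out into a minimal runnable module. The hidden size and stack depth here are placeholder values, and the sigmoid output is my guess at how the model feeds BCELoss (which expects probabilities):

```python
import torch
import torch.nn as nn

class RPSNet(nn.Module):
    """Minimal sketch of the NN player: LSTM + fully connected layer."""
    def __init__(self, hidden_size=32, num_lstm_stack=2):
        super().__init__()
        # Input is a one-hot vector over the 3 moves.
        self.rnn1 = nn.LSTM(3, hidden_size, num_lstm_stack)
        self.fc1 = nn.Linear(hidden_size, 3)

    def forward(self, x, state=None):
        out, state = self.rnn1(x, state)
        # Sigmoid so each class score lies in [0, 1] for BCELoss.
        return torch.sigmoid(self.fc1(out[-1])), state

net = RPSNet()
# One-hot history of the opponent's last 4 moves: (seq_len, batch, 3)
history = torch.eye(3)[[0, 1, 2, 0]].unsqueeze(1)
probs, _ = net(history)
print(probs.shape)  # torch.Size([1, 3])
```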
And the HTM model
self.tm1 = et.TemporalMemory(input_shape, cells_per_column)
# SDRClassifer in Etaler is CLAClassifier in NuPIC
self.sc1 = et.SDRClassifer(input_shape, 3)
Then the two networks play against each other, learning the opponent's moves at each step. We record the outcome of every game and analyze the results.
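The driver loop looks roughly like the sketch below (a simplification, not my exact code): each player predicts the opponent's next move, plays the counter to that prediction, and the outcome is tallied. With two purely random "predictors" plugged in, the game sits at the Nash equilibrium and the draw rate hovers around a third:

```python
import random
from collections import Counter

def play_games(predict_a, predict_b, n_rounds=1000):
    """Each player predicts the opponent's next move (0/1/2) from the
    opponent's move history, then plays the counter-move that beats it."""
    tally = Counter()
    history_a, history_b = [], []
    for _ in range(n_rounds):
        # The move that beats prediction p is (p + 1) % 3.
        move_a = (predict_a(history_b) + 1) % 3
        move_b = (predict_b(history_a) + 1) % 3
        history_a.append(move_a)
        history_b.append(move_b)
        if move_a == move_b:
            tally["draw"] += 1
        elif (move_a - move_b) % 3 == 1:
            tally["a"] += 1
        else:
            tally["b"] += 1
    return tally

# Two random guessers approximate the Nash equilibrium: ~1/3 each.
random_guess = lambda history: random.randrange(3)
print(play_games(random_guess, random_guess, 3000))
```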
I don’t know why the result is so different from my last attempt, but HTM totally beats LSTM.
Even after 10x more games, LSTM still can’t catch up with HTM.
I guess this is where my side project ends. Maybe someone who knows more about how LSTMs work can explain why it is performing so poorly? Maybe the LSTM’s parameters can be tuned so it works as well as HTM? Source code available here.