Please discuss this paper below.
Properties of Sparse Distributed Representations and their Application to Hierarchical Temporal Memory
I am really impressed by your work. That is awesome. I started to learn HTM and read your papers recently. I have some questions — maybe they are basic — but I would really appreciate it if you could guide me on them.
I tried to combine these two papers: “Properties of Sparse Distributed Representations and their Application to Hierarchical Temporal Memory” and “Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex”.
In “Properties of Sparse Distributed Representations and their Application to Hierarchical Temporal Memory”, element-wise multiplication is not used for predicting the state — is the method different? Also, that paper describes a way to find an SDR representation from the input, while other papers say an Encoder produces the SDR representation of the input. Is that correct? Which one should be used? Finally, regarding the matrices you used in Part III (SDRs and HTM) of your paper, how do you find them?
It’s the same method. With binary sparse vectors, taking the dot product is the same as taking the 1-norm length after doing element-wise multiplication. If we have N minicolumns and M cells per minicolumn, in the Sparse Distributed Representations paper the segment is represented as a long vector of size N*M. In the Thousands of Synapses paper it is represented as an N × M matrix, so I used element-wise multiplication followed by the 1-norm length. The effective operation is the same in both cases.
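The equivalence above is easy to check numerically. A minimal sketch with NumPy (toy sizes and random binary patterns are my own choices for illustration, not values from either paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 4  # N minicolumns, M cells per minicolumn (toy sizes)

# Random binary sparse patterns, viewed as long vectors of length N*M:
# a dendritic segment's connected synapses and the current cell activity.
segment = (rng.random(N * M) < 0.1).astype(int)
activity = (rng.random(N * M) < 0.1).astype(int)

# SDR-paper view: dot product of the two long vectors.
overlap_vec = segment @ activity

# Thousands-of-Synapses view: reshape to N x M matrices,
# element-wise multiply, then take the 1-norm (sum of absolute values).
overlap_mat = np.abs(segment.reshape(N, M) * activity.reshape(N, M)).sum()

assert overlap_vec == overlap_mat  # same overlap count either way
```

Because both patterns are binary, the element-wise product is 1 exactly where both vectors are 1, so its 1-norm counts the shared active bits — which is precisely what the dot product computes.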
In a typical HTM system we 1) encode the data into a binary vector, 2) run the Spatial Pooler on the encoded vector to choose which minicolumns win and become active, and then, 3) run Temporal Memory on that set of minicolumns.
In the Thousands of Synapses paper we just start at 3). W^t is the output of the Spatial Pooler.
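The three-stage pipeline can be sketched as follows. This is a deliberately simplified stand-in, not the NuPIC implementation: `encode` is a toy scalar encoder and `spatial_pooler` is a bare top-k overlap selection, with all sizes and sparsities chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(value, size=100, w=10):
    # Toy scalar encoder: w contiguous active bits whose position
    # slides with the value in [0, 1] (stand-in for a real HTM encoder).
    start = int(value * (size - w))
    sdr = np.zeros(size, dtype=int)
    sdr[start:start + w] = 1
    return sdr

def spatial_pooler(encoded, proximal, k=5):
    # Toy Spatial Pooler: compute each minicolumn's overlap with the
    # input through its proximal connections, then activate the k
    # minicolumns with the largest overlap.
    overlaps = proximal @ encoded
    winners = np.argsort(overlaps)[-k:]
    active = np.zeros(proximal.shape[0], dtype=int)
    active[winners] = 1
    return active

# Random binary proximal connections: 50 minicolumns x 100 input bits.
n_minicolumns, input_size = 50, 100
proximal = (rng.random((n_minicolumns, input_size)) < 0.2).astype(int)

encoded = encode(0.3)                     # step 1: encode the data
w_t = spatial_pooler(encoded, proximal)   # step 2: Spatial Pooler output
# step 3: Temporal Memory would run on w_t — the Thousands of
# Synapses paper starts here, taking w_t (W^t) as given.
```

The point is only the data flow: raw value → binary encoding → sparse set of winning minicolumns (W^t) → Temporal Memory.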
Thank you very much @subutai for your response.