What are your thoughts on it? I think they even stole HTM’s logo in their paper.
Are you talking about this?
Yes, sir. Numenta’s logo is here in the illustration of the architecture.
So my understanding of their paper is that they have a kind of (doubly) linked-list memory: each entry records a state vector at some time instant, is linked to the state vector from the previous time step, and will get linked to the next state vector that is added, so you have a ‘timeline’ of state vectors.
The memory is also ‘content-addressable’: you can present it with a state vector and it will find the closest match. This could form the beginning of a kind of episodic memory, where you take some current input state, find its best match in the memory, and replay what happened before or after it, in sequence.
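Roughly, I picture something like the sketch below. This is only my own minimal illustration of that idea; the class name, Euclidean nearest-match, and replay helper are assumptions on my part, not details from the paper.

```python
import numpy as np

class EpisodicMemory:
    """Illustrative sketch: a doubly linked 'timeline' of state vectors
    with content-addressable (nearest-match) lookup."""

    def __init__(self):
        self.entries = []  # each entry: {"state", "prev", "next"} with prev/next as indices

    def append(self, state):
        """Add a state vector and link it to the previous entry in the timeline."""
        idx = len(self.entries)
        entry = {"state": np.asarray(state, dtype=float),
                 "prev": idx - 1 if idx > 0 else None,
                 "next": None}
        if idx > 0:
            self.entries[idx - 1]["next"] = idx
        self.entries.append(entry)
        return idx

    def nearest(self, query):
        """Content-addressable lookup: index of the closest stored state (Euclidean)."""
        query = np.asarray(query, dtype=float)
        dists = [np.linalg.norm(e["state"] - query) for e in self.entries]
        return int(np.argmin(dists))

    def replay_forward(self, idx, steps):
        """Follow the 'next' links to replay what happened after the matched state."""
        out = []
        while idx is not None and len(out) < steps:
            out.append(self.entries[idx]["state"])
            idx = self.entries[idx]["next"]
        return out

# Example: store a few states, then find the best match for a new input and replay onward.
mem = EpisodicMemory()
for _ in range(5):
    mem.append(np.random.randn(8))
start = mem.nearest(np.random.randn(8))
episode = mem.replay_forward(start, steps=3)
```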
That seems like a very useful mechanism for any intelligent system to have; indeed it seems like a requirement for any higher intelligence. The question is how something like this integrates with the learning machinery.
Are these episodic memory units distributed locally around some set of neurons or regions of layers, or can they record input from anyplace in the system?
Yeah, there is a big question at the moment about how to incorporate (short-term working) memory into deep neural networks. Asking gradient descent or evolutionary algorithms to figure out how to use random-access memory (like an array) is a tall order; it is a needle-in-a-haystack problem. Providing “soft” memory, like an information reservoir, should allow a net to learn more gradually how to make use of that resource.
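To make the “soft memory” point concrete, here is a minimal sketch of a differentiable, content-based read: instead of indexing one slot, the net blends all slots with softmax attention weights, so the read is a smooth function the optimizer can learn through. The function name and dot-product similarity are my own choices for illustration, not any particular paper’s mechanism.

```python
import numpy as np

def soft_read(memory, query, temperature=1.0):
    """Soft content-based read: a weighted blend of all memory rows,
    with weights from a softmax over similarity to the query."""
    memory = np.asarray(memory, dtype=float)   # shape: (slots, dim)
    query = np.asarray(query, dtype=float)     # shape: (dim,)
    scores = memory @ query / temperature      # similarity of the query to each slot
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory                    # blended read vector

# Example: read from a memory of 4 slots of dimension 3.
read = soft_read(np.random.randn(4, 3), np.random.randn(3))
```

Because every slot contributes a little, gradients flow to the addressing mechanism even before the net has learned where to look, which is exactly why the “soft” version is easier for gradient descent than hard random access.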
https://www.ncbi.nlm.nih.gov/pubmed/27561374
http://csjarchive.cogsci.rpi.edu/proceedings/2011/papers/0201/paper0201.pdf
The learning part is clearly explained in the paper. Good old SGD with backpropagation.
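For readers less familiar with that phrase, a bare-bones sketch of an SGD-with-backpropagation update on a single linear layer is below. This is purely illustrative; the paper’s actual model, loss, and data are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3)) * 0.1      # weights of a tiny linear layer
lr = 0.01                                  # learning rate

for step in range(100):
    x = rng.standard_normal(3)             # input sample (stand-in data)
    y_true = rng.standard_normal(4)        # target (stand-in data)
    y_pred = W @ x                         # forward pass
    err = y_pred - y_true                  # dL/dy for a squared-error loss
    grad_W = np.outer(err, x)              # backpropagated gradient w.r.t. W
    W -= lr * grad_W                       # stochastic gradient descent update
```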