A Radically New Theory of how the Brain Represents and Computes with Probabilities


#1

I found this paper, A Radically New Theory of how the Brain Represents and Computes with Probabilities (by Rod Rinkus, Neurithmic Systems & Brandeis University). The terminology the author uses isn’t exactly the same as that used in the HTM community. However, from what I understood, this theory is similar to the HTM theory (e.g., the use of SDRs and the binary nature of neuron spikes).
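For anyone unfamiliar with the shared vocabulary, here is a toy Python sketch of the common concept: an SDR is a large binary vector with only a few active bits, and similarity between SDRs is measured by overlap. This is my own illustration, not code from the paper or from any HTM implementation, and the sizes (2048 bits, ~2% sparsity) are just typical values, not Sparsey’s.

```python
import numpy as np

# Toy illustration of a Sparse Distributed Representation (SDR):
# a high-dimensional binary vector with only a small fraction of bits active.
N = 2048      # total number of bits (assumed, for illustration)
W = 40        # number of active bits (~2% sparsity, assumed)

rng = np.random.default_rng(0)

def random_sdr():
    """Return a binary vector with exactly W randomly chosen active bits."""
    sdr = np.zeros(N, dtype=np.uint8)
    sdr[rng.choice(N, size=W, replace=False)] = 1
    return sdr

a, b = random_sdr(), random_sdr()

# Similarity is the overlap (count of shared active bits); two unrelated
# random SDRs at this sparsity overlap almost nowhere, which is what makes
# the code robust to noise.
overlap = int(np.dot(a, b))
print(f"active bits: {a.sum()}, overlap of two random SDRs: {overlap}")
```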

Furthermore, in the “Acknowledgements” section, the author writes:

I’d like to thank the people who have encouraged me to pursue this theory over the years, including Dan Bullock, Jeff Hawkins, John Lisman, Josh Alspector, Tom McKenna, Andrew Browning, and Dan Hammerstrom.

For now, I do not plan to read this paper, so I will not try to compare this theory to the HTM theory. But if someone is interested, I think it would be a useful and interesting task to do so, at least to identify the salient differences or key points of the “Sparsey” network.


#2

Thanks for sharing @nbro! I did some quick searching around - I can see papers related to his SDR-based algorithm dating from the mid-90s. The current model, which as you say is called Sparsey, was earlier referred to as TEMECOR. There are definitely similarities to HTM: SDRs, Hebbian learning, some kind of columnar structure, etc.
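To make the “Hebbian learning” similarity concrete, here is a minimal sketch of a binary Hebbian update between two SDR layers: a connection is strengthened wherever a presynaptic and a postsynaptic bit are active together. This is my own toy version, not Sparsey’s or HTM’s actual learning rule, and all names and sizes in it are made up for illustration.

```python
import numpy as np

# Minimal binary Hebbian update between two SDR layers:
# strengthen w[i, j] whenever input bit i and output bit j fire together.
n_in, n_out = 64, 32                      # layer sizes (assumed)
w = np.zeros((n_in, n_out), dtype=np.uint8)

def hebbian_update(w, pre, post):
    """Set w[i, j] = 1 wherever pre[i] and post[j] are both active."""
    w |= np.outer(pre, post).astype(np.uint8)
    return w

rng = np.random.default_rng(1)
pre = (rng.random(n_in) < 0.1).astype(np.uint8)    # sparse input pattern
post = (rng.random(n_out) < 0.1).astype(np.uint8)  # sparse output pattern

w = hebbian_update(w, pre, post)
print(f"learned connections after one pairing: {int(w.sum())}")
```

Because each pattern is sparse, a single pairing touches only a handful of weights, which is one intuition behind the one-shot learning claims both models make.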

You’re right, it would be interesting to understand what the differences are, and what the current status is. Even the MNIST results are very similar (91% accuracy, one-shot learning): http://www.sparsey.com/MNIST_Results.html


#3

I believe Rod Rinkus worked for Jeff Hawkins at the Redwood Neuroscience Institute.

Correction: Rod did not work for Jeff, but they have known each other for a long time.


#4

I see much of what we are doing here in Rod Rinkus’s work.

He does have some very interesting insights; I do find the hex-grid arrangement of his computing components (MACs) to be particularly heart-warming.

These diagrams are mesmerizing:
http://www.sparsey.com/NotionalMappingSparseyToCortex.html

Note that this work is more in alignment with the “traditional” hierarchical arrangement of scene digestion. The H of the hierarchy is loud and proud in this work.

I could be wrong, but I see the current Numenta model diverging from this in its interpretation of what is coded: I see one as egocentric and the other as allocentric. Does anyone else see this, or am I missing the point?