I found this paper, A Radically New Theory of how the Brain Represents and Computes with Probabilities (by Rod Rinkus, Neurithmic Systems & Brandeis University). The terminology the author uses isn’t exactly the same as that used in the HTM community, but from what I understood, the theory is similar to HTM (e.g. the use of SDRs and the binary nature of neuron spikes).
Furthermore, in the “acknowledgements” section the author writes:
I’d like to thank the people who have encouraged me to pursue this theory over the years, including
Dan Bullock, Jeff Hawkins, John Lisman, Josh Alspector, Tom McKenna, Andrew Browning, and Dan
Hammerstrom.
For now, I do not plan to read this paper, so I will not try to compare this theory to HTM, but if someone is interested, I think it would be a useful and interesting task to do so, or at least to note the salient differences and key points of the “Sparsey” network.
Thanks for sharing @nbro! I did some quick searching around - I can see papers related to his SDR-based algorithm dating from the mid-90s. The current model, which as you say is called Sparsey, was earlier referred to as TEMECOR. There are definitely similarities to HTM - SDRs, Hebbian learning, some kind of columnar structure, etc.
You’re right, it would be interesting to understand what the differences are, and what the current status is. Even the MNIST results are very similar (91% accuracy, one-shot learning): http://www.sparsey.com/MNIST_Results.html
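To make the shared SDR idea concrete, here is a toy sketch (mine, not from either paper, and using made-up sizes): both HTM and Sparsey represent items as sparse binary codes, and the overlap between two codes acts as a similarity measure. Identical codes overlap fully; unrelated random codes overlap almost not at all.

```python
import random

def random_sdr(size=2048, active=40, rng=None):
    """Return a sparse binary code as a set of active bit indices."""
    rng = rng or random.Random(0)
    return set(rng.sample(range(size), active))

def overlap(a, b):
    """Number of shared active bits - a simple similarity score."""
    return len(a & b)

rng = random.Random(42)
a = random_sdr(rng=rng)
b = random_sdr(rng=rng)

print(overlap(a, a))  # identical codes overlap fully: 40
print(overlap(a, b))  # unrelated random codes share almost no bits
```

With 40 active bits out of 2048, the expected chance overlap of two random codes is under one bit, which is why sparse codes can be compared robustly by overlap alone.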
Note that this work is more aligned with the “traditional” hierarchical arrangement of scene processing. The H of the hierarchy is loud and proud in this work.
I could be wrong, but I see the current Numenta model diverging from this in the interpretation of what is coded: I see one as egocentric and the other as allocentric. Does anyone else see this, or am I missing the point?
Hi Mark, thanks for the props regarding my work and figures. I only just saw your post. Yes, lots of sims with Numenta, but also many diffs. I haven’t focused on the egocentric vs. allocentric distinction yet, but broadly, I see allocentricity as emerging progressively up through the cortical hierarchy. Allocentric coordinates are just a particular kind of invariance, and they are created by the same principles that explain, e.g., the tuning functions of complex vs. simple cells in V1. So, progressively more allocentric reps will emerge at higher cortical levels, e.g., entorhinal/hippocampal.
Welcome to the community!
Do poke around and see what we are doing; all sorts of discussion on the brain, AI, and how they fit together.
I am curious what you may think of my musings and where we overlap in thinking. Much of it is concentrated in this thread: