Hello, I am wondering if anyone has read this paper by Pentti and how much overlap there is between these ideas and how SDRs work.
https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19890017031.pdf
I actually saw that @rhyolight stated in another thread that Kanerva worked for Jeff at one point.
I did a search for “Kanerva” on this forum and could not confirm that.
Numenta is certainly aware of the work but I can’t see that Kanerva ever worked for Jeff.
The most relevant hits are here:
Maybe the machine elves hid the search from you
I stand corrected.
That is the one page where I did not use Ctrl-F.
Thanks for the follow-up.
His bio puts him at the Redwood Institute; I knew that there was some relationship, but I was not aware that its researchers were JH employees.
I hired Pentti to work at the Redwood Neuroscience Institute. He worked there for the three years that I was director.
Pentti’s book, “Sparse Distributed Memory”, was one of my favorites. The key thing I learned from Pentti’s book was how to think about high-dimensional spaces, particularly that randomly chosen points in high-dimensional space will almost always have minimal overlap. We rely on this property in all of our work at Numenta.
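A quick way to see that property is to sample random sparse binary vectors and measure their overlap. Here is a toy sketch in Python; the 2048/40 parameters are just illustrative SDR-like values I picked, not anything from Numenta's code:

```python
import random

# Overlap between randomly chosen sparse binary vectors
# in a high-dimensional space.
n = 2048        # dimensionality
w = 40          # active bits per vector (SDR-like sparsity)
trials = 10_000

total_overlap = 0
for _ in range(trials):
    a = set(random.sample(range(n), w))
    b = set(random.sample(range(n), w))
    total_overlap += len(a & b)

# Expected overlap is w*w/n, about 0.78 bits out of 40,
# i.e. two random vectors share almost no active bits.
print("mean overlap:", total_overlap / trials)
```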
Unfortunately, the title of Pentti’s book, “Sparse Distributed Memory”, refers to something very different from what we mean by “Sparse Distributed Representations”. In his book, Pentti describes what are essentially random access memories like you find in a computer. The addressable space of these memories is astronomically huge, much larger than could be built. By “sparse” Pentti means that as you write to the memory, the memory locations will be sparsely filled. If you store 10^8 items in a memory that has 10^50 addressable locations, then the stored items will be sparsely distributed in the memory space. The actual representations and addresses in Pentti’s work are not sparse like SDRs. It is the memory itself that is sparsely filled.
I haven’t read the book in many years, so I hope my recollection is correct. Pentti has published a number of papers on how high-dimensional vectors can be used to process semantic meaning. These are interesting, but I have not been able to see how they can actually describe neural tissue.
Very interesting reading about a computer memory-addressing system where you can retrieve the data at a specific address by querying with a “similar enough” address.
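For anyone curious how that “similar enough” lookup works mechanically, here is a minimal sketch of a Kanerva-style sparse distributed memory. The parameters (256-bit addresses, 1000 hard locations, Hamming radius 112) are toy values I chose for the example; Kanerva's own examples use 1000-bit addresses and on the order of a million hard locations:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256          # address/data width in bits
m = 1000         # number of physical "hard locations"
radius = 112     # Hamming radius for activating a hard location

hard_addresses = rng.integers(0, 2, size=(m, n))   # random hard locations
counters = np.zeros((m, n), dtype=int)             # per-location bit counters

def active(address):
    # Hard locations whose address is within the Hamming radius.
    dists = np.count_nonzero(hard_addresses != address, axis=1)
    return dists <= radius

def write(address, data):
    # Add the data (as +1/-1 per bit) to every active location's counters.
    counters[active(address)] += 2 * data - 1

def read(address):
    # Sum counters across active locations and threshold at zero.
    return (counters[active(address)].sum(axis=0) > 0).astype(int)

addr = rng.integers(0, 2, size=n)
data = rng.integers(0, 2, size=n)
write(addr, data)

# Query with a noisy, "similar enough" address: flip 20 of the 256 bits.
noisy = addr.copy()
flip = rng.choice(n, size=20, replace=False)
noisy[flip] ^= 1
print("bits recovered:", np.count_nonzero(read(noisy) == data), "of", n)
```

Because the noisy address still activates many of the same hard locations as the original write, the summed counters reproduce the stored data.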
“The Capacity of the Kanerva Associative Memory Is Exponential”
Tabulation hashing is very interesting:
https://en.wikipedia.org/wiki/Tabulation_hashing
It is actually very useful for collision interactions in games, greatly reducing the amount of code needed.
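For reference, here is a minimal sketch of tabulation hashing itself: the key is split into byte-sized chunks, each chunk indexes its own table of random words, and the lookups are XORed together. The 32-bit key with a 4-byte split is just one common choice:

```python
import random

# Four tables of 256 random 32-bit words, one table per byte of the key.
random.seed(42)
TABLES = [[random.getrandbits(32) for _ in range(256)] for _ in range(4)]

def tab_hash(key: int) -> int:
    # XOR together one random table entry per byte of the key.
    h = 0
    for i in range(4):
        chunk = (key >> (8 * i)) & 0xFF   # i-th byte of the key
        h ^= TABLES[i][chunk]
    return h

print(hex(tab_hash(0xDEADBEEF)))
```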
In some ways you could see Kanerva’s work as a kind of vector version of tabulation hashing.