Many AI folks hypothesize that one of the brain's main functions is that of an associative memory.
In HTM terms such a device is able to memorize SDRs and later recall them even from an incomplete query SDR. This property makes it a kind of nearest-neighbor search indexer.
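To make that idea concrete, here is a minimal sketch of an associative memory read as nearest-neighbor search by overlap. This is illustrative only (the names `store`/`recall` are mine), not the indexer discussed below:

```python
# Minimal sketch: SDRs represented as sets of ON-bit indices,
# recall = stored SDR with the largest overlap with the query.

def store(memory, label, sdr):
    """Memorize an SDR (a set of ON-bit indices) under a label."""
    memory[label] = set(sdr)

def recall(memory, query):
    """Return the label whose stored SDR overlaps the (possibly partial) query most."""
    return max(memory, key=lambda label: len(memory[label] & set(query)))

memory = {}
store(memory, "A", {3, 17, 42, 99})
store(memory, "B", {5, 17, 60, 77})
recall(memory, {3, 42})  # an incomplete query still recalls "A"
```

A real indexer avoids this linear scan over all stored SDRs, which is exactly where the addressing scheme below comes in.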
An earlier attempt at implementing such a memory was a bit disappointing, because:
- it was relatively sluggish compared with ANN benchmarks, and performance is severely impacted by the number of "ON" bits (solidity)
- virtually nobody seemed interested in it
- the MNIST tests I ran with it revealed a tendency to "saturate" when some bits are very frequent in the input SDRs, which inherently degrades its recall ability
Despite these issues it reached >95% accuracy on MNIST with 40/2000-bit SDRs. That doesn't sound like much, yet it is competitive with SP + SDR Classifier at a similar SDR size and such low sparsity. Yes, the low 2% sparsity is needed to maintain acceptable performance.
While trying to implement it in Python/Numba, its performance came out an order of magnitude better than in my previous experience, which motivated me to refactor and numbify my own "SDR Indexer", making it significantly faster.
During that work another observation popped out: a slightly modified addressing scheme in the indexer makes it capable of indexing and storing arbitrarily wide SDRs with virtually no penalty.
Performance degrades with (at least) the square of the ON-bit count, since each pair of ON bits resolves to one storage/search address location (aka slot), i.e. n(n-1)/2 slots: e.g. 20 ON bits resolve to 190 slots, while 40 bits expand to 780.
But performance-wise it doesn't care whether it operates on SDRs with a solidity of 20 bits out of 500 available ones, or whether those 20 active bits are spread across a ridiculously large 100 Kbit or 1 Mbit SDR space.
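If I read the slot arithmetic above correctly (one slot per unordered pair of ON bits), the counts are easy to verify, and so is the width independence. The function name here is my own sketch, not the indexer's API:

```python
from itertools import combinations

def pair_slots(sdr):
    """One address slot per unordered pair of ON bits: the slot count
    depends only on how many bits are ON, not on the SDR's total width."""
    return list(combinations(sorted(sdr), 2))

# 20 ON bits -> 20*19/2 = 190 slots, whether the space is 500 bits wide...
narrow = range(0, 500, 25)            # 20 ON bits in a 500-bit SDR
# ...or a ridiculously large 1M bits wide:
wide = range(0, 1_000_000, 50_000)    # 20 ON bits in a 1,000,000-bit SDR

len(pair_slots(narrow))   # 190
len(pair_slots(wide))     # 190
len(pair_slots(range(40)))  # 780, matching the 40-bit case above
```

The pair itself (e.g. `(17, 42)`) can serve directly as a hash key, which is what makes the scheme indifferent to the nominal SDR width.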
And this property hints at the title: what if, instead of dreading extremely large SDRs, we embrace them?
In what ways does a machine that transforms low-space, dense information flows (i.e. short, denser SDRs) into a few higher-meaning bits resemble the brain?