Further reading on SDRs and Temporal Memory

I came across a book by Pentti Kanerva titled Sparse Distributed Memory. It arrived today, and looking through it I was surprised I had not heard of Kanerva’s work before.

Since the cortex is a sparse distributed memory system, and that is the basis of HTM theory, Kanerva’s book goes into great detail on the subject of SDM (and, by extension, SDRs), so it is a great read for extending your understanding and appreciation of how the cortex communicates in this simple but brilliant semantic language.

Beyond that, Kanerva briefly explains the storage and retrieval of sequences, which resonates quite closely with HTM temporal memory.

I’d really like to find the time to read the whole book one day, but I thought I’d share the recommendation now, as I’m sure some people here will enjoy reading it to further develop their understanding of neural computation.


Yes, we have discussed Kanerva’s work in the past (on the old mailing lists). @subutai has studied his work. I’ve heard him mention Kanerva several times. I would recommend the book based upon that, although I have not read it.

Kanerva’s work is pretty much a necessary reference for a lit review on sparse computing these days. This is the review paper I usually cite (the catchy name helps):

[1] Kanerva, Pentti. “Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors.” Cognitive Computation 1.2 (2009): 139-159.


The Kanerva memory (SDM) is a sort of associatively indexed address array, a very clever arrangement.
Let’s say we use a 1000-bit Sparse Distributed Memory (SDM).
Now imagine that you have a VIRTUAL table/2D array with 1000 columns and 2^1000 virtual rows.
As a second step, you randomly pick, say, 10_000 of those rows (“hard locations”) that will be mapped to real computer memory.
So you create two 2D arrays: the first one holds the addresses/indexes, the second one is the storage for the data.
Ex:

import numpy as np

# 10_000 hard locations, each with a random 1000-bit address
# (random bit vectors stand in for picking rows of the virtual 2**1000-row table; repeats are vanishingly unlikely)
idxs = np.random.randint(0, 2, size=(10_000, 1000), dtype=np.int8)
store = np.zeros((10_000, 1000), dtype=np.int32)   # one row of counters per hard location

Next… let’s say you want to save a sequence of 1000-bit binary vectors:

bin1, bin2, bin3, …

You save the value bin2 at the virtual address bin1, then the value bin3 at address bin2, etc.
Later you can extract the sequence, even if you write multiple sequences into the memory, just by knowing how the sequence/sub-sequence starts. The value you retrieve at one address gives you the next address, where you will find the next value, which is the address of the one after that … and so on.
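A minimal sketch of just the chaining idea (my own illustration; an ordinary Python dict stands in for the SDM, and the strings stand in for 1000-bit vectors):

memory = {}
seq = ["bin1", "bin2", "bin3", "bin4"]

# Write: at the address of each item, store the next item.
for cur, nxt in zip(seq, seq[1:]):
    memory[cur] = nxt

# Read: knowing only how the sequence starts, follow the chain.
addr, out = "bin1", ["bin1"]
while addr in memory:
    addr = memory[addr]
    out.append(addr)
print(out)   # ['bin1', 'bin2', 'bin3', 'bin4']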

Now, because you can’t have storage for 2^1000 addresses/rows, that is where the indexes come in…
You pick a radius, let’s say 5. Based on the virtual address you want to save into, you find the index values closest to it (within that radius), and their row positions (the index of the index) are where you save the bin-value in the “store”.
In pseudo code:

radius = 5                                 # illustrative; Kanerva's actual access radius is much larger (hundreds of bits)
dists = np.sum(idxs != address, axis=1)    # Hamming distance from the target address to every hard location
ixs = np.where(dists <= radius)[0]         # hard locations inside the access circle
store[ixs] += 2 * bin_value - 1            # counter update: +1 where the data bit is 1, -1 where it is 0
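
For completeness, a rough sketch of the matching read step (my reconstruction under the same assumptions, not code from the book): activate the same neighbourhood, pool its counters, and take the sign per bit.

dists = np.sum(idxs != address, axis=1)    # Hamming distance to every hard-location address
ixs = np.where(dists <= radius)[0]         # hard locations inside the access circle
sums = store[ixs].sum(axis=0)              # pooled counters of the activated hard locations
readout = (sums > 0).astype(np.int8)       # positive counters read back as 1, the rest as 0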

The structure guarantees that you will get the correct sequence back, because of the properties of the SDRs we know and love. Sparsity guarantees it, up to a point.

That is what I remember off the top of my head … I implemented it 3–5 years ago.

The other big difference is that SDM relies on Hamming distance, while HTM relies on overlap (a small sketch of the two measures is below).
The book also has some other good insights.
I don’t remember exactly now, but I think I was getting MAPE results similar to HTM’s.
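
Roughly, the two similarity measures look like this (my own illustration, assuming 0/1 bit vectors):

a = np.random.randint(0, 2, 1000)
b = np.random.randint(0, 2, 1000)
hamming = np.sum(a != b)   # SDM: similar means a small Hamming distance (few differing bits)
overlap = np.sum(a & b)    # HTM: similar means a large overlap (many shared ON bits)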