The Kanerva memory (SDM) is a sort of associative, indexed-address array, a very clever arrangement.

Let's say we use a 1000-bit Sparse Distributed Memory (SDM).

Now imagine that you have a VIRTUAL table/2D array with 1000 columns and 2^1000 virtual rows.

Second step: you randomly pick, let's say, 10_000 of those rows that will be mapped to real computer memory.

So you create 2 arrays: the first one holds the indexes (the addresses of the picked rows), the second one is the storage for the data.

Ex:

idxs = np.random.randint(0, 2, (10_000, 1000))  # each row is a random 1000-bit address; repeats are vanishingly unlikely

store = np.zeros((10_000, 1000))

Next… let's say you want to save a sequence of 1000-bit binary values:

bin1, bin2, bin3, …

You save the value bin2 at the virtual address bin1, then the value bin3 at address bin2, …etc.

Later you can extract the sequence, even if you write multiple sequences into the memory, just by knowing how the sequence/sub-sequence starts. The value you retrieve at an address gives you the next address, where you will find the next value, which is the address of the one after that… and so on…
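A minimal sketch of that pointer-chasing idea, using a plain dict as a stand-in for an idealized memory with every virtual address available (the indexes and radius described next are what make this workable when you can't store all 2^1000 rows):

```python
# Idealized chaining: each element is stored AT the address of the
# previous element, so retrieval is just pointer-chasing from the start.
memory = {}

def save_sequence(seq):
    # Store each element at the address given by the previous element.
    for cur, nxt in zip(seq, seq[1:]):
        memory[cur] = nxt

def read_sequence(start, length):
    # Each retrieved value is also the address of the next value.
    out, addr = [start], start
    for _ in range(length - 1):
        addr = memory[addr]
        out.append(addr)
    return out

save_sequence(["bin1", "bin2", "bin3", "bin4"])
print(read_sequence("bin1", 4))  # ['bin1', 'bin2', 'bin3', 'bin4']
```

In the real SDM the keys and values are 1000-bit vectors rather than strings, but the control flow is the same.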

Now, because you can't have storage for 2^1000 addresses/rows, that is where the indexes come in…

You pick a radius, let's say 5. Based on the virtual address you want to save into, you find the index values that fall within that radius, then take the positions of those indexes; those are the rows where you save the bin-value in the “store”.

In pseudo-code:

ixs = np.flatnonzero((idxs != address).sum(axis=1) <= radius)  # hard locations within the Hamming radius

store[ixs] += 2 * bin_value - 1  # bump the counters up for 1-bits, down for 0-bits
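Putting the pieces together, here is a runnable sketch of the whole write/read cycle. The function names are mine, the write uses Kanerva's up/down counters with a majority-vote read (rather than a bitwise update), and I use an access radius of 451 instead of 5, since Hamming distances between random 1000-bit addresses concentrate around 500 and a radius of 5 would activate almost no locations:

```python
import numpy as np

N_BITS, N_LOC, RADIUS = 1000, 10_000, 451  # 451 is Kanerva's classic access radius for 1000 bits

rng = np.random.default_rng(0)
idxs = rng.integers(0, 2, (N_LOC, N_BITS), dtype=np.int8)   # hard-location addresses
store = np.zeros((N_LOC, N_BITS), dtype=np.int32)           # up/down counters

def active(address):
    # Rows whose hard-location address lies within the Hamming radius.
    return np.flatnonzero((idxs != address).sum(axis=1) <= RADIUS)

def write(address, value):
    # Increment counters for 1-bits, decrement for 0-bits.
    store[active(address)] += 2 * value.astype(np.int32) - 1

def read(address):
    # Majority vote over the counters of the active locations.
    return (store[active(address)].sum(axis=0) > 0).astype(np.int8)

addr = idxs[0]  # query at a hard location, so at least one row is active
data = rng.integers(0, 2, N_BITS, dtype=np.int8)
write(addr, data)
print((read(addr) == data).all())  # True
```

With several interleaved writes the counters start to interfere, which is exactly the "up to a point" caveat below.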

The structure guarantees that you will get the correct sequence, because of the properties of the SDRs we know and love. Sparsity guarantees it, up to a point.

That is what I remember off the top of my head… I implemented it 3-5 years ago.

The other big difference is that SDM relies on Hamming distance, while HTM relies on overlap.
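The two measures are easy to contrast on a toy pair of vectors (small 8-bit examples of my own, just to illustrate):

```python
import numpy as np

a = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.int8)
b = np.array([1, 0, 0, 1, 0, 1, 1, 0], dtype=np.int8)

hamming = int((a != b).sum())  # SDM: count of bits that differ
overlap = int((a & b).sum())   # HTM: count of shared 1-bits

print(hamming, overlap)  # 2 3
```

Hamming distance penalizes every mismatch, so it suits SDM's dense random addresses; overlap only rewards shared active bits, which is the natural measure for HTM's sparse representations.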

Also the book has some other good insights.

I don’t remember exactly now, but I think I was getting MAPE results similar to HTM's.