Graded (non-binary) SDRs

As far as I am aware, SDRs have only been discussed as binary arrays. That makes sense when you measure activity at a single moment. However, if you measure activity over a period of time, each SDR has a frequency of activation. Related SDRs could be co-active during that period, but at different frequencies. The SDR that represents the abstract concept of the show ‘Friends’ could co-activate a union of other SDRs relating to the characters, such as Ross, Rachel or Monica. These SDRs could be graded, in that Rachel could have a higher activation frequency than Ross. This could simply mean that Rachel is the most relevant character in Friends (… or at least she is for me…) and therefore has higher activity. There are also bits that overlap between these SDRs; although they are shared across all the characters, those bits would be more selective for Rachel.

It might be that synaptic integration zones are still weighted (based on the number of synapses between pre- and postsynaptic neurons) within the zone/segment. These weights would be responsible for the differing frequencies among SDRs. If this is true, then it might be possible to have a graded response to a graded stimulus.

I was wondering, is there any interesting computation utility for graded SDRs?

EDIT: Instead of an SDR being represented as pure binary with indices [10,248,531,824,939,1262,1444,1782], it could be graded as integers, where the same index can occur more than once: [10,10,248,531,824,824,824,939,1262,1444,1444,1444,1444,1782], i.e. a union of the same SDR over time. This maps to {10: 2, 248: 1, 531: 1, 824: 3, 939: 1, 1262: 1, 1444: 4, 1782: 1}
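A minimal sketch of the graded representation above in Python (the index values are the ones from the post; `collections.Counter` does the grading):

```python
from collections import Counter

# The same SDR sampled over several time steps; an index appears
# once per time step in which that bit was active.
active_over_time = [10, 10, 248, 531, 824, 824, 824, 939,
                    1262, 1444, 1444, 1444, 1444, 1782]

# Grading the union: each index maps to its activation frequency.
graded = Counter(active_over_time)
print(dict(graded))
# {10: 2, 248: 1, 531: 1, 824: 3, 939: 1, 1262: 1, 1444: 4, 1782: 1}

# Collapsing back to a plain binary SDR is just taking the keys.
binary = sorted(graded)
```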


I’m curious where you got this idea. It is interesting. I have not tried anything like this.

When I was writing a Union operation, I was wondering how to ‘flatten’ the overlapping indices of a list of SDRs so that there would not be duplicate indices. Then the same night I watched this video: https://www.youtube.com/watch?v=CXYN3IZyrIo It made me think that it might be useful not to remove the duplicate indices if SDRs were not strictly binary.

This is definitely an extremely useful tool in the HTM hacker’s toolkit. A couple of examples where I use this operation all the time:

To generate a fingerprint for a phrase, you can create a union of its individual word SDRs that also keeps the index scores (frequencies). Then sparsify: keep the top 2% of the total array length, taking the indices with the highest scores (using a random tie breaker) and discarding the rest. The resulting SDR is your new fingerprint for the phrase.

SDRs can be created for images (or anything else) that have been classified by a DNN. The output is typically something like [Animal: 99%, Cat: 98%, Dog: 30%… etc.]. Take the word SDR for each listed concept and sub-sample the indicated percentage of its indices. Take a union of those smaller SDRs along with the index scores, and sparsify as above.
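A sketch of the weighted-union-then-sparsify step described above, assuming word SDRs are given as sets of indices. The array size, sparsity, and the example SDRs are hypothetical, made up for illustration:

```python
import random
from collections import Counter

def sparsify(scored_union, n_bits, sparsity=0.02, seed=42):
    """Keep the top `sparsity * n_bits` indices by score,
    breaking ties randomly, and discard the rest."""
    k = int(n_bits * sparsity)
    rng = random.Random(seed)
    # Rank by score, highest first; rng.random() breaks ties.
    ranked = sorted(scored_union,
                    key=lambda i: (scored_union[i], rng.random()),
                    reverse=True)
    return sorted(ranked[:k])

# Hypothetical word SDRs (real ones would come from an encoder).
word_sdrs = [
    {10, 248, 531, 824},     # word 1
    {10, 531, 939, 1444},    # word 2
    {248, 531, 824, 1782},   # word 3
]

union = Counter()
for sdr in word_sdrs:
    union.update(sdr)        # overlapping indices accumulate score

# With 200 bits and 2% sparsity, the 4 highest-scoring bits survive.
fingerprint = sparsify(union, n_bits=200, sparsity=0.02)
print(fingerprint)           # [10, 248, 531, 824]
```

The same `sparsify` call works for the DNN example: sub-sample each concept's SDR proportionally to its confidence before building the union.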


Come to think of it, this is similar to what Cortical.io does. They stack unions then sparsify given an overlap threshold.

I suppose when you start using graded values and graded weights you begin to get something similar to ANNs. A value, after all, represents a frequency of spikes over an arbitrary time period. Say a pixel value of an image is fed in and the highest value is 255; that represents 255 spikes over 255 time steps. If the weight is 0.5 and a value of 255 is fed into the presynaptic neuron, the postsynaptic neuron will have a value of about 128 (given a linear activation). If you use this same basic idea with SDRs you could possibly transcend the binary world. So instead of mapping binary values such as [1,0,0,1,0,0,1,0] > [0,0,1,0,0,1,1,0], you could map graded values like [2,0,0,5,0,0,2,0] > [0,0,2,0,0,4,5,0]. SDR operations could then be extended with linear algebra to process SDRs: [2,0,5,0,0,0,2,0] + [0,0,2,0,0,4,5,0] = [2,0,7,0,0,4,7,0]; [2,0,5,0,0,0,2,0] - [0,0,2,0,0,4,5,0] = [2,0,3,0,0,-4,-3,0], etc.
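A quick check of the element-wise arithmetic above, using plain Python lists (numpy arrays would do the same with `+` and `-` directly):

```python
# The weighted-synapse example: value 255 through a 0.5 weight.
pre = 255
weight = 0.5
post = round(pre * weight)            # 128, given a linear activation

# Element-wise arithmetic on graded SDRs.
a = [2, 0, 5, 0, 0, 0, 2, 0]
b = [0, 0, 2, 0, 0, 4, 5, 0]

add = [x + y for x, y in zip(a, b)]   # [2, 0, 7, 0, 0, 4, 7, 0]
sub = [x - y for x, y in zip(a, b)]   # [2, 0, 3, 0, 0, -4, -3, 0]
```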

I’m going to play around with this idea over the weekend.
