Does the SDR have to be binary?


#1

Hey, I’m fairly new to HTM theory. I was wondering what would happen if the inputs were floating point instead of binary?
Thanks
Sam


#2

A big reason SDRs work the way they do has to do with SDR comparison.

A binary array comparison is a much different operation than a float array comparison. I’m not sure you’ll get anywhere near the computation speed if you change bits to floats.
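To get a feel for the difference, here is a rough Python sketch (not NuPIC code; the sizes and bit indices below are made up). Comparing binary SDRs is just counting shared on bits, while comparing float arrays has to touch every component:

import numpy as np

n = 2048                                  # total number of positions (made-up size)
on_bits_a = {49, 186, 273, 314, 453}      # sparse binary SDR: indices of active bits
on_bits_b = {186, 273, 332, 453, 1740}

# Binary comparison: overlap is just the count of shared active bits.
overlap = len(on_bits_a & on_bits_b)      # 3

# Float comparison: every position carries a value, so similarity needs
# a full dot product (or cosine) over all n components.
vec_a = np.random.rand(n)
vec_b = np.random.rand(n)
similarity = float(np.dot(vec_a, vec_b))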


#3

Thanks, I did think computation might be an issue; I just can’t get my head around features having to be absolute.


#4

TL;DR: SDRs are binary in HTM because they are binary in the brain. There seem to be specific properties of SDRs that the brain takes advantage of to store and retrieve memory, which would not be feasible if they were floating points.


“Features” in the HTM sense are not absolute. There is a high degree of granularity in what bits can mean in what contexts. A bit might mean something extremely specific in one context, or very abstract in another. Most bits don’t make any sense unless they are consistently grouped with particular other bits.

Keep in mind whenever we talk about an SDR, we are really talking about a neuron’s receptive field. It could be a proximal receptive field viewing feedforward input, or it could be a neuron’s distal field, and who knows where those connections are coming from? In SDR communication, context is very important.

This all derives from how the brain stores memory between neurons. There is no “processor” in the cortex. The memory is the processor. Cortical tissue uses properties of SDRs to store representations of the world through sensory processing and interaction. It also uses them to play back motions, memories, and actions.


#5

So, in my work I use “superpositions”, which can be considered a type of float SDR. They have most of the properties of regular SDRs, including a similarity measure. I think most of the time the brain probably does use just binary SDRs, but in a few cases perhaps not? E.g., with binary, how do you represent “a little”, “somewhat” and “very”, or probabilities for that matter?

For example, in my notation you would represent “very hungry and a little tired” as:
0.9|hungry> + 0.1|tired>

Another example: is Schrödinger’s cat alive or dead? In my notation the 50/50 probability would be represented by:
0.5|alive> + 0.5|dead>

Exactly how the above couple of examples map to the brain I don’t know, and unlike HTM, my focus isn’t strict adherence to the biology/neuroscience. Heh, we all need our own niche. First, the “kets” |hungry>, |tired>, |alive> and |dead> I assume correspond to specific neurons that represent those concepts. As for the coefficients, perhaps they correspond to a simple sum of spikes during some time window. In any case, whatever the mapping, I have found float SDRs to be useful.
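If it helps, here is a toy Python sketch of one way a similarity measure over superpositions could work (just an illustrative guess at an implementation, the exact details don’t matter much here):

# A float SDR ("superposition") as a dict of ket label -> coefficient.
very_hungry_little_tired = {"hungry": 0.9, "tired": 0.1}
somewhat_hungry_very_tired = {"hungry": 0.4, "tired": 0.8}

def simm(a, b):
    # Overlap of coefficients: sum of element-wise minimums,
    # normalised by the larger of the two total "amounts".
    shared = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in set(a) | set(b))
    total = max(sum(a.values()), sum(b.values()))
    return shared / total if total else 0.0

print(simm(very_hungry_little_tired, somewhat_hungry_very_tired))   # roughly 0.42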

BTW, if you want superpositions to look more like standard binary SDRs, consider superpositions such as:
|273> + |186> + |1897> + |314> + |453> + |49> + |332> + |1461> + |1740> + |159>
which is this binary SDR (i.e., a list of on bits):
[273, 186, 1897, 314, 453, 49, 332, 1461, 1740, 159]

Anyway, that is my 2c.

See here for example:
forum post
code


#6

Remember that individual bits in an SDR do not play human-interpretable roles, in the same way that individual neurons don’t have human-interpretable functions in the brain or in ANNs in general. Only when considered together can neuron activity have any semantic meaning. Patterns and features are encoded in HTM networks in a sparse, distributed fashion across the entire network. It’s not terribly intuitive. We want to understand each bit as if it does something specific, like “this bit means hungry” and “this bit means tired”, etc. That’s the kind of thing you see in classical machine learning techniques and data science all the time.

In the context of SDRs, “a little” or “somewhat” can be represented as a small number of overlapping bits, and “very” as a larger overlap.
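As a toy sketch of that idea (simplified, not the actual scalar encoder from NuPIC), a degree can be encoded so that nearby values share active bits:

def encode_degree(value, n_bits=100, w=31):
    # Map a degree in [0, 1] to a contiguous block of w active bits;
    # similar degrees share many bits, distant degrees share few or none.
    start = int(round(value * (n_bits - w)))
    return set(range(start, start + w))

a_little = encode_degree(0.2)
somewhat = encode_degree(0.5)
very = encode_degree(0.9)

print(len(a_little & somewhat))   # 11 shared bits: the degrees are fairly close
print(len(a_little & very))       # 0 shared bits: the degrees are far apart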


#7

This is a great explanation:


#8

I guess it depends on whether you believe in the existence of “grandmother cells” or not.

See Wikipedia: https://en.wikipedia.org/wiki/Grandmother_cell
I quote: “The grandmother cell is a hypothetical neuron that represents a complex but specific concept or object. It activates when a person “sees, hears, or otherwise sensibly discriminates” a specific entity, such as his or her grandmother.”

And another quote from there:
“In 2005, a UCLA and Caltech study found evidence of different cells that fire in response to particular people, such as Bill Clinton or Jennifer Aniston. A neuron for Halle Berry, for example, might respond “to the concept, the abstract entity, of Halle Berry”, and would fire not only for images of Halle Berry, but also to the actual name “Halle Berry””

Though that page then back-pedals somewhat:
“However, there is no suggestion in that study that only the cell being monitored responded to that concept, nor was it suggested that no other actress would cause that cell to respond (although several other presented images of actresses did not cause it to respond)”

Sure, I get the idea of representing the semantics of a word across an SDR, with similar-meaning words having overlap in their SDRs. But I don’t think that excludes the possibility of there also being specific neurons for specific concepts.


#9

I’ve never heard them called that, but yes it is pretty clear to us this happens. It’s not that certain cells only respond to abstract concepts, though. Cells recognize many patterns at different levels of abstraction. They play roles in many SDRs.


#10

Don’t confuse invariant representations with the idea of single-cell feature extractors. The UCLA and Caltech study is evidence for invariant representations and information/sensory fusion and abstraction up the cortical hierarchy. This is very different from the idea of any one neuron by itself learning a human-interpretable concept.


#11

Hi Guys,

Can you point me to a paper that supports this view?
I have a general interest in computation inspired by the brain, and would like to know more.

Cheers,
Csaba


#12

Dileep introduced that term in his HTM 2008 workshop introductory talk, together with introducing SDRs. So it’s official Numenta terminology.


#13

Old topic, I know, but thought I’d add my two cents on the “grandmother cells” idea.

There isn’t going to be “A” grandmother cell (i.e. if that cell dies, you forget about grandma). The representation for an abstract concept of “grandmother” (the whole concept, not simply the word) will consist of many active cells, with the normal noise and error tolerances that come with SDRs.

Most of the cells in the representation will NOT only be active for the concept of “grandmother”. The majority of the cells will also activate as part of representations for other abstract concepts which share semantics, such as “elders”, “Christmas cookies”, “nursing home”, etc. depending on the specific experiences which built the “grandmother” concept.

There will be a diminishing percentage of cells in the representation which have receptive fields that strongly match the one specific concept but weakly match any other concepts. You could consider only those few that exceed some minimum threshold to be “true” grandmother cells (i.e. ones that almost never activate as part of any other representation).
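A quick toy illustration of that split (the cell indices are completely made up, just to show the bookkeeping):

# Toy SDRs as sets of active cell indices.
grandmother       = {3, 17, 42, 58, 73, 91, 120, 155, 201, 240}
elders            = {3, 17, 58, 99, 120, 155, 180, 222, 240, 301}
christmas_cookies = {17, 42, 73, 88, 130, 155, 210, 240, 260, 310}

related = elders | christmas_cookies
shared = grandmother & related    # cells reused by related concepts
unique = grandmother - related    # the few "true" grandmother cells

print(sorted(shared))   # 8 of the 10 cells also serve other representations
print(sorted(unique))   # only cells 91 and 201 are exclusive to "grandmother" here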


#14

Regarding coding for facial identity in the brain, which seems related: there was an article last year about how faces are coded.

Friends, family, colleagues, acquaintances—how does the brain process and recognize the myriad faces we see each day? New research from Caltech shows that the brain uses a simple and elegant mechanism to represent facial identity…The central insight of the new work is that even though there exist an infinite number of different possible faces, our brain needs only about 200 neurons to uniquely encode any face, with each neuron encoding a specific dimension, or axis, of facial variability.
http://www.caltech.edu/news/cracking-code-facial-recognition-78508

It appears to use a combinatorial code with about 50 axes, some for shape variations and others for variations in appearance, if I read correctly.
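If I read the idea right, each face is summarised by its coordinates along those axes, roughly like this toy sketch (the sizes and random axes are placeholders, not the study’s actual parameters):

import numpy as np

n_axes, n_pixels = 50, 4096               # placeholder sizes, not from the paper
axes = np.random.randn(n_axes, n_pixels)  # stand-in shape/appearance axes
mean_face = np.random.randn(n_pixels)     # stand-in average face

face_code = np.random.randn(n_axes)       # one face = its coordinates on the 50 axes

# An approximate face can then be rebuilt as a linear combination of the axes.
reconstruction = mean_face + face_code @ axes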