Does the SDR have to be binary?

Hey, I’m fairly new to HTM theory, I was wondering what would happen if the inputs were floating point instead of binary?


A big reason SDRs work the way they do has to do with how SDRs are compared.

A binary array comparison is a much different operation than a float array comparison. I’m not sure you’ll get anywhere near the computation speed if you change bits to floats.
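To make the cost difference concrete, here is a minimal sketch (my own illustration, not from the thread): comparing two binary SDRs reduces to a bitwise AND plus a popcount, while a float comparison needs a multiply and an add per component.

```python
# Sketch: binary SDR overlap vs. float comparison.
# SDRs are stored as Python ints, where bit i set means bit i is active.

def overlap_binary(a: int, b: int) -> int:
    """Overlap of two binary SDRs: a single AND plus a popcount."""
    return bin(a & b).count("1")

def dot_float(a: list, b: list) -> float:
    """Float comparison: one multiply and one add per component."""
    return sum(x * y for x, y in zip(a, b))

sdr_a = 0b1011001000
sdr_b = 0b1001001001
print(overlap_binary(sdr_a, sdr_b))  # 3 shared active bits
```

On real hardware the binary version maps to word-wide AND and popcount instructions, which is where the speed advantage comes from.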


Thanks, I did think computation may be an issue, I just can’t get my head around features having to be absolute.

TL;DR: SDRs are binary in HTM because they are binary in the brain. The brain appears to exploit specific properties of binary SDRs to store and retrieve memories, properties that would not hold if the values were floating point.

“Features” in the HTM sense are not absolute. There is a high degree of granularity in what bits can mean in what contexts. A bit might mean something extremely specific in one context, or something very abstract in another. Most bits don’t make any sense unless they are consistently grouped with particular other bits.

Keep in mind whenever we talk about an SDR, we are really talking about a neuron’s receptive field. It could be a proximal receptive field viewing feedforward input, or it could be a neuron’s distal field, and who knows where those connections are coming from? In SDR communication, context is very important.

This all derives from how the brain stores memory between neurons. There is no “processor” in the cortex. The memory is the processor. Cortical tissue uses properties of SDRs to store representations of the world through sensory processing and interaction. It also uses them to play back motions, memories, and actions.


So, in my work I use “superpositions”, which can be considered a type of float SDR. They have most of the properties of regular SDRs, including a similarity measure. I think most of the time the brain probably does use just binary SDRs, but in a few cases perhaps not? E.g., with binary, how do you represent “a little”, “somewhat” and “very”, or probabilities for that matter?

For example, in my notation you would represent “very hungry and a little tired” as:
0.9|hungry> + 0.1|tired>

Another example: is Schrödinger’s cat alive or dead? In my notation the 50/50 probability would be represented by:
0.5|alive> + 0.5|dead>

Exactly how the above couple of examples map to the brain I don’t know, and unlike HTM, my focus isn’t strict adherence to the biology/neuroscience. Heh, we all need our own niche. First, the “kets” |hungry>, |tired>, |alive> and |dead> I assume correspond to specific neurons that represent those concepts. As for the coefficients, perhaps they correspond to a simple sum of spikes during some time window. In any case, whatever the mapping, I have found float SDRs to be useful.

BTW, if you want superpositions to look more like standard binary SDRs, consider superpositions such as:
|273> + |186> + |1897> + |314> + |453> + |49> + |332> + |1461> + |1740> + |159>
which is this binary SDR (ie, list of on bits):
[273, 186, 1897, 314, 453, 49, 332, 1461, 1740, 159]
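A minimal sketch of such float SDRs as coefficient-weighted kets, with a similarity measure. The specific metric here (sum of pairwise minima, normalised by the larger total weight) is my own assumption for illustration, not necessarily the author’s.

```python
# A superposition like 0.9|hungry> + 0.1|tired> as a dict of ket -> coefficient.

def simm(f: dict, g: dict) -> float:
    """Similarity of two superpositions in [0, 1] (assumed metric:
    shared weight on common kets over the larger total weight)."""
    shared = sum(min(f[k], g[k]) for k in f.keys() & g.keys())
    norm = max(sum(f.values()), sum(g.values()))
    return shared / norm if norm else 0.0

very_hungry = {"hungry": 0.9, "tired": 0.1}   # 0.9|hungry> + 0.1|tired>
a_bit_hungry = {"hungry": 0.3, "tired": 0.7}  # 0.3|hungry> + 0.7|tired>
print(round(simm(very_hungry, a_bit_hungry), 3))  # 0.4
```

Note that if every coefficient is 1, this degrades gracefully toward the binary case: the “shared weight” is just the number of overlapping on-bits.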

Anyway, that is my 2c.

See here for example:
forum post

Remember that individual bits in an SDR do not play human-interpretable roles, the same way individual neurons don’t have human-interpretable functions in the brain or in ANNs in general. Only when considered together can neuron activity have any semantic meaning. Patterns and features are encoded in HTM networks in a sparse, distributed fashion across the entire network. It’s not terribly intuitive. We want to understand each bit as if it does something specific, like “this bit means hungry” and “this bit means tired”, etc. That’s the kind of thing you see in classical machine learning techniques and data science all the time.

“A little” or “somewhat” can be represented by a small number of overlapping bits in the context of SDRs, and “very” by a larger number.
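One hedged sketch of that idea, in the style of HTM’s scalar encoders (the bit counts and window width here are made-up illustration parameters): nearby intensities share many on-bits, distant ones share few.

```python
# Encode an intensity in [0, 1] as a window of consecutive on-bits.
# Closer intensities -> larger overlap between their SDRs.

def encode_scalar(value: float, n_bits: int = 20, width: int = 10) -> set:
    """Activate `width` consecutive bits whose position tracks `value`."""
    start = round(value * (n_bits - width))
    return set(range(start, start + width))

a_little = encode_scalar(0.1)
somewhat = encode_scalar(0.5)
very = encode_scalar(0.9)

print(len(a_little & somewhat))  # 6  (close intensities overlap a lot)
print(len(a_little & very))      # 2  (distant intensities overlap little)
```

So a graded quantity can survive in a binary code as *degree of overlap*, without any float coefficients.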


This is a great explanation:


I guess it depends on whether you believe in the existence of “grandmother cells” or not.

See Wikipedia:
I quote: “The grandmother cell is a hypothetical neuron that represents a complex but specific concept or object. It activates when a person “sees, hears, or otherwise sensibly discriminates” a specific entity, such as his or her grandmother.”

And another quote from there:
“In 2005, a UCLA and Caltech study found evidence of different cells that fire in response to particular people, such as Bill Clinton or Jennifer Aniston. A neuron for Halle Berry, for example, might respond “to the concept, the abstract entity, of Halle Berry”, and would fire not only for images of Halle Berry, but also to the actual name “Halle Berry””

Though that page then back-pedals somewhat:
“However, there is no suggestion in that study that only the cell being monitored responded to that concept, nor was it suggested that no other actress would cause that cell to respond (although several other presented images of actresses did not cause it to respond)”

Sure, I get the idea of representing the semantics of a word across an SDR, with words of similar meaning having overlap in their SDRs. But I don’t think that excludes the possibility of there also being specific neurons for specific concepts too.

I’ve never heard them called that, but yes it is pretty clear to us this happens. It’s not that certain cells only respond to abstract concepts, though. Cells recognize many patterns at different levels of abstraction. They play roles in many SDRs.

Don’t confuse invariant representations with the idea of single-cell feature extractors. The UCLA and Caltech study is evidence for invariant representations and information/sensory fusion and abstraction up the cortical hierarchy. This is very different from the idea of any one neuron by itself learning a human-interpretable concept.


Hi Guys,

Can you point me to a paper that supports this view?
I have a general interest in computation inspired by the brain, and would like to know more.



Dileep introduced that term in his HTM 2008 workshop introductory talk, together with introducing SDRs. So it’s official Numenta terminology.

Old topic, I know, but thought I’d add my two cents on the “grandmother cells” idea.

There isn’t going to be “A” grandmother cell (i.e., if that cell dies, you forget about grandma). The representation for an abstract concept of “grandmother” (the whole concept, not simply the word) will consist of many active cells, with the normal noise and error tolerances that come with SDRs.

Most of the cells in the representation will NOT only be active for the concept of “grandmother”. The majority of the cells will also activate as part of representations for other abstract concepts which share semantics, such as “elders”, “Christmas cookies”, “nursing home”, etc. depending on the specific experiences which built the “grandmother” concept.

There will be a diminishing percentage of cells in the representation whose receptive fields strongly match the one specific concept but weakly match any other concepts. You could consider only those few that breach some minimum threshold to be “true” grandmother cells (i.e., they almost never activate as part of any other representation).


Regarding coding for facial identity in the brain, which seems related: there was an article last year about how faces are coded.

Friends, family, colleagues, acquaintances—how does the brain process and recognize the myriad faces we see each day? New research from Caltech shows that the brain uses a simple and elegant mechanism to represent facial identity…The central insight of the new work is that even though there exist an infinite number of different possible faces, our brain needs only about 200 neurons to uniquely encode any face, with each neuron encoding a specific dimension, or axis, of facial variability.

It appears to use a combinatorial code with about 50 axes, some for variations in shape and others for variations in appearance, if I read correctly.
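A toy sketch of such an axis code (my own illustration, not from the study): each model neuron’s response is the projection of a face vector onto that neuron’s preferred axis, so with enough axes the responses uniquely determine the face. For simplicity I use the standard basis as the preferred axes, which makes reconstruction a trivial change of basis.

```python
import random

# A "face" as a point in a ~50-dimensional face space (per the article).
DIMS = 50
random.seed(0)
face = [random.gauss(0, 1) for _ in range(DIMS)]

# Preferred axes: the (orthonormal) standard basis, for illustration only.
axes = [[1.0 if i == j else 0.0 for j in range(DIMS)] for i in range(DIMS)]

# Neuron i's response: projection (dot product) of the face onto its axis.
responses = [sum(f * a for f, a in zip(face, axis)) for axis in axes]

# With orthonormal axes, the face is recovered as a response-weighted
# sum of the axes.
recon = [sum(r * axis[j] for r, axis in zip(responses, axes))
         for j in range(DIMS)]
print(max(abs(f - r) for f, r in zip(face, recon)) < 1e-9)  # True
```

The point is only that a small set of graded, axis-tuned responses can pin down one identity out of infinitely many, which is the “combinatorial code” idea.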


If those cells were removed, then the concept of grandmother would be lost - right? This might point to less semantic overlap than you imagine. If the SDR is in a network with feedback, then it can associate grandmother with the features you suggested (rather than sharing SDR). It could also go in the other direction - a set of features triggering the grandmother SDR. This seems more robust.


No, because they are only a diminishingly small percentage of all the cells in a given SDR. It is the overall code which matters – individual bits can be dropped (actually an astonishingly large percentage of them) without losing the ability to recognize the pattern.
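A quick sketch of that robustness claim (the SDR sizes are illustrative, borrowed from typical HTM configurations): drop half the on-bits of a stored pattern and the remaining overlap still dwarfs what an unrelated pattern achieves, so recognition survives.

```python
import random

random.seed(42)
N, ON = 2048, 40  # 2048-bit SDR with 40 on-bits, a typical HTM scale

grandma = set(random.sample(range(N), ON))        # the stored pattern

# Damage: keep only a random 50% of grandma's on-bits.
damaged = set(random.sample(sorted(grandma), ON // 2))

# A random, unrelated SDR of the same sparsity.
unrelated = set(random.sample(range(N), ON))

print(len(damaged & grandma))    # 20: half the bits still match
print(len(unrelated & grandma))  # near zero by chance
```

Even at 50% loss, the damaged pattern overlaps the original in 20 bits, while a random SDR of the same sparsity typically shares 0–3 bits, so a modest match threshold separates them easily.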


Bear with me here :slight_smile: If the SDR is coding all these other semantics, then if the grandmother-specific bits disappear, you would be left with the shared semantics, which no longer have the differentiation of grandmother. Taking your example, let’s say all who are “elders”, “in nursing home”, and “female” are grandmothers. If we lose the bit that codes for “elder”, now we have “in nursing home” and “female”, and we confuse the grandmother for a doctor.

The point is that “grandmother” is a network which overlaps with those other concepts in varying percentages. If I randomly drop (even a large) percentage of those cells, the basic ratio of semantics remains, and the pattern can still be recognized.

Right, but in this example you are selectively removing all the bits for a given region of semantics (say, all the “elder” bits and all the “female” bits). This would be highly unlikely to happen by random chance, but if it did, then yes, you are correct that the original concept could be confused with another one due to important semantics of the representation being lost.

I was just pointing out that if you simply remove only the few cells that strongly align with a given concept and not any other, it doesn’t actually do a lot to the overall ratio of semantics in the representation (and new cells will just be recruited to take their place).

I am not following you, sorry. The discussion was about a single SDR and now you are referring to regions. The SDR is sparse, and if the bits have semantic information then you do lose information with a “bad” bit. If you imagine the grandmother SDR as composed of many features with each bit mapping to a feature, there would not be many bits per feature in the SDR.

This is not to say that there is only one SDR representing grandmother in a large network. I’m not sure if you are seeing my point or if you are seeing my point is invalid.


Sorry, I shouldn’t have reused the term region here. I’m lacking a good term; perhaps “islands of semantic meaning”? A sub-set of cells in a given representation that encode for a sub-context.

No, one bit is useless by itself. It is the collection of bits that forms a concept. Yes, an individual bit has a (very) small amount of semantic meaning, but it is reinforced by all the other bits in the representation, and thus is quite unimportant all by itself.

I believe I am. You are finding it difficult to accept my assertion that an individual cell can have semantic meaning, because that would seem to imply that losing such a cell would have a catastrophic impact on a concept containing that cell (i.e., the system would not be noise tolerant).

I am saying that individual cells having some (albeit very small, and more than just one) amount of semantic meaning does not make them important by themselves. A given representation has a large number of them (despite being sparse, do recall that a real brain is much larger in size than the toy models that we typically work with here), and it is the collection of cells (each with their measly amount of semantics they bring to the table) all voting together which is important.