Sparse Distributed Representations (SDRs)




The Brain’s Data Structure

Computers store information in “dense” representations, such as a 32-bit word, where all combinations of 1s and 0s are possible.

By contrast, brains use sparse distributed representations. The human brain has roughly 100 billion neurons, but at any given time only a small percentage are active. The activity of neurons is like bits in a computer, and therefore the representation is sparse. HTM also uses SDRs. A typical HTM implementation might have 2048 columns and 64K artificial neurons, of which as few as 40 might be active at once. SDRs have many mathematical advantages; HTM and the brain could not work otherwise.

Sparse Distributed Representations are binary representations of data composed of many bits, with only a small percentage of the bits active (1s). Each bit carries semantic meaning, and that meaning is distributed across the bits.

[Figure: an example of a sparse distributed representation — an array of a couple thousand cells, with a small number of active cells shown in blue]

In SDRs, unlike in dense representations, each bit has meaning. If two vectors have 1s in the same position, they are semantically similar in that attribute. SDRs are how brains solve the problem of knowledge representation that has plagued AI for decades.

SDR Properties


Shared bits between two representations indicate semantic similarity because the bits in SDRs each have semantic meaning. More shared bits means more semantic similarity. Here are two representations with some shared semantic meaning, indicated by the shared bits:

[Two binary vectors, with arrows marking the positions of the shared active bits]


  1. Suppose you want to compare two SDRs, A and B, with a third, C, to decide which is more similar to C. If A shares 10 active bits with C out of 40, and B shares a distinct set of 15, can you conclude that B is more similar? Or can some overlapping bits indicate more similarity than others?

Storing SDRs

These representations can be stored as a list of the indices of the active bits. This takes up little space because the representations are very sparse.

[1, 23, 75, ..., 2045]


You can also subsample the indices because the semantics of the data are distributed across the bits. If a new representation contains all of the subsampled indices, there is a high likelihood it is the same value (need to justify this more). And even if it isn’t the exact same value, it is at least semantically very similar.

[1, 23, 75, ..., 2045] (40 indices)
can be stored as
[23, ...] (10 indices)
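As a rough sketch, the conversion to an index list and the subsampling step might look like the following (the helper names and the specific indices are illustrative, not from any particular HTM library):

```python
import random

def to_indices(bits):
    """Convert a dense 0/1 vector to a sorted list of active-bit indices."""
    return [i for i, b in enumerate(bits) if b == 1]

def subsample(indices, k, seed=0):
    """Keep a random subset of k active indices; the semantics survive
    because meaning is distributed across all the bits."""
    rng = random.Random(seed)
    return sorted(rng.sample(indices, k))

# A 2048-bit SDR with a handful of active bits (illustrative positions):
dense = [0] * 2048
for i in [1, 23, 75, 1290, 2045]:
    dense[i] = 1

full = to_indices(dense)    # [1, 23, 75, 1290, 2045]
small = subsample(full, 3)  # a random 3-index subsample of full
```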


  1. Can we formalize the effect of subsampling? In the case of checking if an SDR is the same as a stored subsample, with what confidence can we say it is the same as a function of the total bits and the number sampled? This is applicable to the Comparing and Fault Tolerance sections below.
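One standard way to approach this question (a combinatorial argument, not worked out in the original text): for SDRs of n bits with w active, the probability that a random SDR happens to contain all b bits of a fixed subsample is C(n−b, w−b) / C(n, w), which is vanishingly small for typical HTM parameters.

```python
from math import comb

def false_match_probability(n, w, b):
    """Probability that a random SDR with w of n bits active
    contains all b bits of a fixed subsample."""
    return comb(n - b, w - b) / comb(n, w)

# 2048-bit SDRs with 40 active bits, matched on a 10-bit subsample:
p = false_match_probability(2048, 40, 10)  # far below 1e-10
```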


Comparing two representations is as simple as taking the intersection of their index sets. The larger the overlap, the more semantically similar the two values are. This generally works even when subsampling the bits.
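A minimal sketch of that comparison, using set intersection over index lists (the specific indices are made up for illustration):

```python
def overlap(a, b):
    """Semantic similarity score: the number of shared active indices."""
    return len(set(a) & set(b))

x = [1, 23, 75, 120, 2045]
y = [5, 23, 75, 900, 2045]
overlap(x, y)  # 3 shared active bits, indicating some semantic similarity
```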

Fault Tolerance

SDRs are inherently fault tolerant for the same reason that subsampling works: the semantics are distributed across the bits, so you can discard several active indices without a significant loss of representation.


You can check whether an SDR belongs to a set by taking the union (OR’ing) of the SDRs in the set, and then checking whether all of a new SDR’s active bits match active bits in the union SDR. It is very unlikely that an SDR outside the original set will match all bits.
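A toy sketch of this membership test over index sets (the stored SDRs here are invented examples):

```python
def union(sdrs):
    """OR together a collection of SDRs represented as index lists."""
    out = set()
    for s in sdrs:
        out |= set(s)
    return out

def maybe_member(sdr, union_set):
    """True if every active bit of sdr appears in the union.
    False positives are possible but very unlikely for sparse SDRs."""
    return set(sdr) <= union_set

stored = [[1, 23, 75], [4, 88, 901], [23, 400, 2000]]
u = union(stored)
maybe_member([4, 88, 901], u)  # True: this SDR was stored
maybe_member([1, 2, 3], u)     # False: bits 2 and 3 are not in the union
```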

Temporal Transition

A subtle property of SDR unions is that each constituent SDR matches the union SDR, but so does any transition SDR composed of some bits from a starting SDR and some from an ending SDR. This could be an important mechanism for temporal pooling, in which cells become active for a sequence of SDRs.
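A toy illustration of the transition property, with made-up bit indices:

```python
# Two SDRs in a sequence, as sets of active-bit indices:
a = {1, 23, 75, 120}
b = {5, 88, 400, 901}
u = a | b  # the union SDR

# An SDR mixing half its bits from a and half from b still matches the union:
transition = {1, 23, 400, 901}
transition <= u  # True: the transition matches, as do a and b themselves
```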

Dense Representations

Dense representations use a small number of bits and typically assign values to all, or most, combinations of 1s and 0s. ASCII, for instance, encodes each text character in seven bits (usually stored as eight), and each combination of the bits has a specific value assigned to it.

Dense representations have very high capacity and are thus good for storing large amounts of data, but they share few of the properties of SDRs. They are not fault tolerant: flipping a single bit results in a completely different value with potentially no shared semantic meaning.
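For example, flipping a single bit of an ASCII code yields a character with no semantic relationship to the original:

```python
c = ord('A')        # 65, i.e. 0b1000001
flipped = c ^ 0b10  # flip one low-order bit -> 67
chr(flipped)        # 'C': no semantic relationship to 'A'
```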

Further Resources


HTM Theory Guide
NuPIC Wiki is being deprecated