Here are a few observations about using dense scalar vectors in the context of SDR-based machinery.

A dense intermediate vector is basically what DL works with - e.g. each hidden layer computes a vector of floats for the following layer.

If you think that “life doesn’t like floats”, please bear with me. First, I won’t suggest doing any learning on the actual scalar vectors - only using them as intermediaries, in at least two cases:

- encoding input scalar values to SDRs via dense vectors;
- “transcribing” SDRs into other SDRs with different properties (size, sparsity).

Of course, the obvious question is: why would we need such a thing?

The short answer is *composability*, and the follow-up question is: what does that mean?

Let’s postpone the encoding of inputs and start with composition of SDRs. The most basic example is overlap - we can use the union of the bits of A and B to represent the pair C = (A, B).

Very simple and efficient, except for the problems it generates:

- Decreasing sparsity. If we want to combine C with D, which is itself the pair (E, F), we create a much more difficult problem for downstream processing and exhaust the bit space pretty quickly. In associative memories, decreased sparsity is a big problem.
- Bit overlap can represent sets of SDRs but not sequences: we can’t tell whether A came before B or vice-versa.
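To make the sparsity problem concrete, here is a minimal sketch, assuming SDRs represented as sets of active bit indices (the sizes are illustrative, not prescribed by anything above):

```python
import numpy as np

rng = np.random.default_rng(42)
N, P = 1024, 20  # SDR size and active-bit count (~2% sparsity); illustrative values

def random_sdr(n=N, p=P):
    """An SDR represented as a set of active bit indices."""
    return set(rng.choice(n, size=p, replace=False).tolist())

A, B, E, F = (random_sdr() for _ in range(4))

C = A | B    # union of bits represents the pair (A, B)
D = E | F
CD = C | D   # combining pairs: density keeps roughly doubling

print(len(A) / N)   # ~0.02
print(len(C) / N)   # ~0.04 (minus chance overlaps)
print(len(CD) / N)  # ~0.08 - sparsity degrades with every union
```

Every extra union roughly doubles the number of active bits, which is exactly the runaway density described above.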

Using an intermediate dense vector would allow us not only to compound a larger number of SDRs, but also to control both the sparsity and the size of the resulting SDR.

The basic methodology is simple:

- take each SDR of density P and size N and project it to its own dense vector of size N’ (N’ could be N or any other desired compound SDR size);
- normalize each vector to the [0, 1] range if it uses floats, or to an integer max threshold if we are striving for efficiency;
- sum up the resulting dense vectors;
- use the top (or bottom) ranking P’ values to output the compound SDR, where P’ is chosen to achieve the desired P’/N’ sparsity of the resulting SDR.
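The steps above can be sketched as follows - a minimal illustration assuming fixed random projections as the SDR-to-dense mapping (the `compound` function and its parameters are hypothetical names, not an established API):

```python
import numpy as np

def compound(sdrs, n_out, p_out, seed=0):
    """Compound several SDRs into one SDR of size n_out with p_out active bits.

    Hypothetical sketch: each input slot gets its own fixed random projection
    to a dense vector of size n_out; the normalized dense vectors are summed
    and the top-ranking p_out positions become the output SDR.
    `sdrs` is a list of (active_indices, input_size) pairs."""
    acc = np.zeros(n_out)
    for slot, (sdr, n_in) in enumerate(sdrs):
        # fixed per-slot projection so each slot contributes a distinct pattern
        proj = np.random.default_rng(seed + slot).random((n_in, n_out))
        dense = proj[np.asarray(sdr)].sum(axis=0)
        acc += dense / dense.max()  # normalize into the [0, 1] range
    return np.sort(np.argsort(acc)[-p_out:])  # top-ranking p_out positions

rng = np.random.default_rng(7)
A = rng.choice(1024, 20, replace=False)
B = rng.choice(1024, 20, replace=False)

# two 20/1024 SDRs compound into one SDR with a chosen 40/2048 sparsity
C = compound([(A, 1024), (B, 1024)], n_out=2048, p_out=40)
print(len(C), len(C) / 2048)  # 40 active bits, i.e. ~1.95% sparsity
```

Note the key property: no matter how many SDRs go in, the output sparsity P’/N’ is fixed by construction, instead of degrading with every union.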

OK, the algorithm above doesn’t address the ordering issue; that will follow after we talk about scalar encoding.

Till then, let’s notice that we - or **it** - can use “knobs” to control how much influence each input SDR has on the output SDR. An exploration algorithm compounding A, B, C could keep each at full strength or tune one down by simply multiplying its intermediate dense vector by a value between 0 and 1.

Compounding (A, B, 0.5*C) would land somewhere between a full (A, B) and a full (A, B, C).
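A sketch of such a knob, again assuming a hypothetical random-projection compounding function (the names and parameters are illustrative only):

```python
import numpy as np

def compound(weighted_sdrs, n_in, n_out, p_out, seed=0):
    """Hypothetical sketch: compound SDRs with per-input weights ("knobs").

    Each slot's dense vector is scaled by its weight w in [0, 1] before the
    sum, so w = 0 removes the input and w = 1 keeps it at full strength."""
    acc = np.zeros(n_out)
    for slot, (sdr, w) in enumerate(weighted_sdrs):
        proj = np.random.default_rng(seed + slot).random((n_in, n_out))
        dense = proj[np.asarray(sdr)].sum(axis=0)
        acc += w * (dense / dense.max())  # the "knob": scale by w
    return set(np.argsort(acc)[-p_out:].tolist())

rng = np.random.default_rng(1)
A, B, C = (rng.choice(1024, 20, replace=False) for _ in range(3))

ab   = compound([(A, 1.0), (B, 1.0)],           1024, 2048, 40)
abc  = compound([(A, 1.0), (B, 1.0), (C, 1.0)], 1024, 2048, 40)
half = compound([(A, 1.0), (B, 1.0), (C, 0.5)], 1024, 2048, 40)

# the half-strength compound shares bits with both endpoints
print(len(half & ab), len(half & abc))
```

Since C’s dense contribution is only half as strong, its bits displace fewer of A’s and B’s winners, so the result sits between the two full compounds.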