An idea for a new method of encoding arbitrary-length vectors into SDRs

Hi all.
I have come up with a new method, which I call the Rotational Cell Encoder. It encodes N-dimensional values without being affected too much by the curse of dimensionality (unlike the current Grid Cell encoder, which requires exponentially more cells to represent higher- and higher-dimensional spaces). But I'm not totally sure that my method works, and it is purely mathematical for now. It would be great if you could spend some time reviewing my thoughts and catching potential problems.

The problem

Simple: given an N-dimensional vector, generate an SDR that has the properties of a good SDR - it should be sparse, and two similar vectors should generate similar SDRs.

The method

A Rotational Cell Encoder (RCE for short) accepts a real-valued vector V of length N and spits out an SDR. Each RCE is formed from several Rotational Cell Modules (RCMs), much like how Grid Cell encoders in HTM are built from Grid Cell Modules.

For each RCM (a code sketch follows the list):

  1. Generate a random unit vector U in (N+1)-dimensional space.
  2. Form a rotation matrix M using the input vector V's components as the rotation angles (in radians).
    • Maybe multiply the components by a factor of 2*PI first.
    • Rotating this way in an N-D space only gives N-1 angles, so an (N+1)-D space is needed to accommodate N rotation angles, one per input component.
  3. Calculate the new vector W = MU. Since M is a rotation, W is also a unit vector.
  4. For each component of W, use one 1D Grid Cell Module to encode its value.
    • There are multiple modules in a Rotational Cell Encoder, which should solve the ambiguity problem.
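
Here is a minimal numpy sketch of steps 1-3 for a single RCM. The description above does not pin down which rotation planes the N angles act in, so this assumes one Givens rotation per input component in consecutive coordinate planes; the names `givens_rotation` and `rcm_rotate` are just placeholders.

```python
import numpy as np

def givens_rotation(dim, i, j, theta):
    """Rotation by angle theta in the (i, j) coordinate plane of a dim-D space."""
    R = np.eye(dim)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = c
    R[j, j] = c
    R[i, j] = -s
    R[j, i] = s
    return R

def rcm_rotate(v, u, scale=2 * np.pi):
    """Rotate the module's random unit vector u (length N+1) by the input v (length N).

    Each input component v[k] drives one rotation in the (k, k+1) plane, giving
    N angles in an (N+1)-D space. The scale factor is the optional 2*PI from step 2.
    Rotations are orthogonal, so the result stays a unit vector.
    """
    dim = len(v) + 1
    w = u.copy()
    for k, angle in enumerate(scale * np.asarray(v, dtype=float)):
        w = givens_rotation(dim, k, k + 1, angle) @ w
    return w
```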

For each RCE (see the sketch after this list):

  1. Encode the vector V using the RCMs
  2. Concatenate the SDRs generated from the previous step.
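
And a sketch of the full encoder, reusing `rcm_rotate` from the sketch above. The 1D Grid Cell Module is left as a placeholder (`grid_encode`, assumed to map a scalar to a binary numpy array); the class name and constructor parameters are placeholders for this sketch, not an existing API.

```python
import numpy as np

class RotationalCellEncoder:
    """Several RCMs, each with its own random unit vector; their per-component
    1-D grid-cell encodings are concatenated into one SDR."""

    def __init__(self, n_dims, n_modules, grid_encode, seed=0):
        rng = np.random.default_rng(seed)
        self.grid_encode = grid_encode  # placeholder: scalar -> binary np.array
        self.units = []
        for _ in range(n_modules):
            u = rng.standard_normal(n_dims + 1)
            self.units.append(u / np.linalg.norm(u))  # random unit vector in (N+1)-D

    def encode(self, v):
        sdrs = []
        for u in self.units:
            w = rcm_rotate(v, u)  # step 1: rotate this module's vector by the input
            sdrs.extend(self.grid_encode(c) for c in w)  # each component -> 1-D GCM
        return np.concatenate(sdrs)  # step 2: concatenate all module SDRs
```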

Potential/Theoretical Issues

Some issues that I have thought of. There might be more issues with this method, and a lot of improvement is definitely needed.

In higher-dimensional space, the mean magnitude of a unit vector's components gets smaller

So the scale/grid size of the Grid Cell Modules in each RCM has to get smaller and smaller as the RCE encodes higher-dimensional data. Maybe a unit vector is not the best idea to begin with. The length of the vector U might have to grow with N to maintain a constant grid size - since each component of a random unit vector has variance 1/(N+1), growing it by a factor of roughly sqrt(N+1) should be enough, rather than exponentially.
Need a mathematician to confirm.
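
A quick numerical check of this shrinkage (plain numpy, separate from the encoder): sample random unit vectors and watch the typical component magnitude fall off roughly like 1/sqrt(dim).

```python
import numpy as np

rng = np.random.default_rng(0)
for dim in (2, 8, 32, 128, 512):
    u = rng.standard_normal((10_000, dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # 10k random unit vectors per dim
    print(dim, np.abs(u).mean(), 1 / np.sqrt(dim))  # mean |component| shrinks roughly like 1/sqrt(dim)
```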

The components of vector W are not uniformly distributed

Ideally, all bits in an SDR should be equally likely to turn on. But since the components of the resulting vector W are not uniformly distributed (this should be self-evident; or integrate the area under the marginal density curve - it is not uniform), that property does not hold. There needs to be a way to map the components back to a uniform distribution.
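
One possible fix, sketched below under an assumption: if U is drawn uniformly from the sphere, then for a fixed rotation W = MU is also uniform on the sphere, and each component x satisfies (x + 1)/2 ~ Beta((D-1)/2, (D-1)/2) with D = N + 1. Pushing each component through that CDF (a probability integral transform) maps it to roughly Uniform(0, 1) before the 1D Grid Cell step. The function name is a placeholder and scipy is assumed to be available.

```python
import numpy as np
from scipy.stats import beta

def uniformize_component(x, dim):
    """Probability integral transform for one component of a unit vector in dim-D.

    If the vector is uniform on the sphere, (x + 1) / 2 follows a
    Beta((dim - 1) / 2, (dim - 1) / 2) distribution, so applying its CDF
    maps the component to (approximately) Uniform(0, 1).
    """
    a = (dim - 1) / 2.0
    return beta.cdf((np.asarray(x) + 1.0) / 2.0, a, a)
```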

That's all I have come up with for now. Please leave feedback so I can improve my idea.

5 Likes

Is this similar to this viz from the grid cell episode? :smile:

[screenshot of the grid cell visualization]

I can't wait to see how it works. I imagine there is a threshold for how many fields you can encode before you lose the resolution to identify anything.

1 Like

Yes, it is very similar to how Grid Cells work.

How can I calculate how many fields I can encode before the encoder becomes ineffective? I believe the curse of dimensionality will creep in somewhere eventually, but not as fast as with Grid Cell and Scalar Encoders.

Write some hypotheses and run some experiments by feeding different types of encoded input into an SP. You should be able to use a standard classifier over the SP's active minicolumns to identify input semantics.

2 Likes

I guess you don't really need the SP in the equation! Just run a standard NN linear layer with a softmax classifier, keep increasing the number of classes, and see how well you can still perform.

(That being said, I know just enough about NNs to talk a good game but I am not the person to ask for advice about how to do this.)

1 Like

@rhyolight Could you ask the research team at Numenta how they made this graph for the Grid Cell paper? I think the same approach could apply in this case. I know the source code is available on GitHub, but I don't fully understand it.

The NN way seems like a good idea, although NNs generally don't like sparse inputs. Maybe worth a shot.

Not being a math person, I didn’t completely understand your idea (in particular, the formulas being applied to generate the unit vectors).

Couldn't one simply choose a random point and direction in the hyperspace and encode locations along that line with a standard 1D GCM? The more of these random 1D GCMs you add, the better coverage of the hyperspace you would get. Although perhaps that is what you meant by requiring exponentially more cells to represent higher-dimensional space. Or perhaps I just described your idea in less technical terms :laughing:

Hey @mrcslws did you write that code for the chart mentioned above?

EDIT: Never mind, @lscheinkman just reminded me that this paper has all the code here. Take a look!

Haha… I should explain myself better. Kinda yes and no. Simply encoding an N-D vector using lots of standard 1D GCMs loses the main appeal of a GCE, namely that a change along any axis can potentially affect all bits in the encoding.

For example, take encoding A(50, 50) and B(100, 50).
With a 2D GCE, the resulting SDRs will be very different; they should not have much overlap. But when encoding with two 1D GCMs, since the y component is the same, the resulting SDRs will have 50% of their 1 bits overlapping.

To solve this problem, I decided to rotate a unit vector across the N-D space according to the input values. By doing that and encoding the rotated vector, any change in the input vector can affect all bits in the output.
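
A tiny check of that claim, using the `rcm_rotate` sketch from earlier in the thread (and assuming the inputs have already been scaled to [0, 1) before being treated as angles): change only the x component and compare the rotated vectors. How far a single angle's influence spreads does depend on which rotation planes are chosen, which the write-up above leaves open.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(3)
u /= np.linalg.norm(u)          # one module's random unit vector in 3-D (N = 2)

v_a = np.array([0.13, 0.58])    # hypothetical input A, already scaled to [0, 1)
v_b = np.array([0.47, 0.58])    # input B: differs from A only in the first component

w_a = rcm_rotate(v_a, u)
w_b = rcm_rotate(v_b, u)
print(np.round(w_a, 3))
print(np.round(w_b, 3))         # with the (k, k+1) plane choice, the change in the
                                # first angle propagates to every component of w
```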

2 Likes