Neural network rotational architecture mirrors minicolumns

#1

I found a paper on a NN architecture that I think mirrors part of HTM theory.

http://super-ms.mit.edu/rum.html

I think that if you squint, you can see that minicolumns / grid cells work more by rotating vectors than by adding or multiplying them together.

In a simple case, a minicolumn with 3 cells that, when not bursting, only has enough ions to activate one cell at a time looks like a unit vector in 3D space rotating from the x axis to the y axis to the z axis without changing magnitude.
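
To make this concrete, here is a tiny NumPy sketch (my own illustration, not from the paper): stepping the single active cell forward is a cyclic permutation, and because that permutation matrix has determinant +1, it is literally a proper rotation of 3D space that never changes the vector's length.

```python
import numpy as np

# Stepping the single active cell of a 3-cell minicolumn forward is a
# cyclic permutation; its matrix has determinant +1, i.e. a proper rotation.
R = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

state = np.array([1, 0, 0])  # cell 0 active: a unit vector on the x axis
for _ in range(3):
    print(state, "magnitude =", np.linalg.norm(state))
    state = R @ state  # x axis -> y axis -> z axis -> x axis

print("det(R) =", round(np.linalg.det(R)))  # +1: rotation, not reflection
```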

If instead two cells can be active at a time, the vector would pass through intermediate steps halfway between the axes: (1, 0, 0) to (1, 1, 0) to (0, 1, 0).
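
A quick check of that path (again, just a sketch of my own) shows the magnitude issue mentioned below: the two-cell states sit at distance √2 from the origin rather than on the unit sphere.

```python
import numpy as np

# The transition path when two cells are allowed to overlap between axes:
path = [(1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1)]

for v in path:
    print(v, "magnitude =", round(np.linalg.norm(v), 3))
# One-hot states have magnitude 1; the two-hot halfway states have
# magnitude sqrt(2) ~= 1.414, so this is only approximately a rotation.
```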

The magnitude does change somewhat in this case, but I do think that a minicolumn with a lot of cells and 2% activation probably looks a lot like a rotation in a high-dimensional space, especially if the activations don't all change at once every cycle.
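
Here is a rough way to quantify that intuition (the numbers are illustrative, not from the paper): with 1000 cells at 2% activation, swapping a single active cell per cycle turns the state vector by a small fixed angle, arccos(19/20) ≈ 18°, so the trajectory looks like an incremental rotation on a sphere in 1000-dimensional space.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_active = 1000, 20                    # 2% activation
active = set(rng.choice(n_cells, n_active, replace=False).tolist())

def as_vector(cells):
    v = np.zeros(n_cells)
    v[list(cells)] = 1.0
    return v

prev = as_vector(active)
for step in range(5):
    # Swap exactly one active cell per cycle instead of changing them all.
    inactive = [i for i in range(n_cells) if i not in active]
    active.remove(next(iter(active)))
    active.add(int(rng.choice(inactive)))
    curr = as_vector(active)
    cos = prev @ curr / (np.linalg.norm(prev) * np.linalg.norm(curr))
    print(f"step {step}: turned {np.degrees(np.arccos(cos)):.1f} degrees")
    prev = curr
# Every single-cell swap shares 19 of 20 active bits with the last state,
# so each step is the same ~18 degree turn in 1000-dimensional space.
```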

Maybe thinking about activations as rotations could help link the theory to head direction cells?

#2

This is the approach outlined in the PDP books, in particular the linear algebra tutorial. If memory serves correctly, this is also a big part of the Cooper RBF model.

Considering that SDRs are positional codes, have you considered set theory as a tool for analysis?
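
For what it's worth, here is the sort of thing I picture (indices made up for illustration): an SDR as the set of its active-bit positions, with intersection, union, and subset tests doing the analytical work.

```python
# An SDR as the set of its active-bit indices (made-up values):
a = frozenset({3, 17, 42, 101, 256})
b = frozenset({3, 17, 99, 256, 511})

overlap = a & b                  # intersection: shared active bits
combined = a | b                 # union of the two codes
match = len(overlap) / len(a)    # fraction of a's bits also active in b

print(sorted(overlap), len(combined), match)
```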
