Alternative sequence memory

I’ve been thinking about this TP issue and thought I’d try the other TP method we briefly discussed in another thread.

The code I use for the following examples can be found in this gist.

To see it in action, check out this codepen.

In the above gist I've written a network that does first-order memory (layer 4) and temporal pooling (layer 3). I have used two sequences for demonstration: ABCD & XBCY. I have not implemented learning, as I just want to demonstrate recall, so the weights are hard-coded. The symbols (X, A, B, C, D, Y) are represented as single cells instead of SDRs for simplicity.
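
To make the discussion below easier to follow, here's a minimal sketch of how that setup could be laid out. The names and index order are my own choices for illustration, not the ones used in the gist:

```ts
// Layer 4 has one cell per symbol; layer 3 has one cell per learned sequence.
// Names and index order are illustrative assumptions, not the gist's code.
const L4_CELLS = ["X", "A", "B", "C", "D", "Y"] as const;
const L3_CELLS = ["N", "K"] as const; // N represents ABCD, K represents XBCY

// Activity is a binary vector over layer 4 cells and a continuous one over layer 3.
let layer4: number[] = L4_CELLS.map((s) => (s === "A" ? 1 : 0)); // e.g. feed in A
let layer3: number[] = L3_CELLS.map(() => 0);
```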

Layer 4 is simple, as it just does first-order memory: A or X activates B, B activates C, and C activates D and Y. All the weights in the matrix are binary, so the pre-synaptic neuron will activate the post-synaptic neuron instead of just depolarising/predicting it; I'll explain why later. All the synapses between the neurons in layer 4 are held in the distal weight matrix.
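
As a rough sketch (not the gist's actual data structures), that first-order memory can be written as a binary pre → post matrix:

```ts
// Binary distal weights inside layer 4: distal[pre][post] = 1 means
// "pre activates post". Rows and columns follow ["X","A","B","C","D","Y"].
const distal: number[][] = [
  // X  A  B  C  D  Y
  [0, 0, 1, 0, 0, 0], // X -> B
  [0, 0, 1, 0, 0, 0], // A -> B
  [0, 0, 0, 1, 0, 0], // B -> C
  [0, 0, 0, 0, 1, 1], // C -> D and C -> Y
  [0, 0, 0, 0, 0, 0], // D -> nothing (end of sequence)
  [0, 0, 0, 0, 0, 0], // Y -> nothing (end of sequence)
];

// One first-order step: each active pre cell drives its post cells.
function distalInput(layer4: number[]): number[] {
  return distal[0].map((_, post) =>
    distal.reduce((sum, row, pre) => sum + layer4[pre] * row[post], 0)
  );
}
```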

Layer 3 has two cells to represent the two sequences. I have labelled them N & K. N represents ABCD, and K represents XBCY. While either sequence from layer 4 is playing out, the associated cell in layer 3 remains active. The synapses between layers 4 and 3 are in the proximal weight matrix. The weights are continuous; they simply represent the order in which the inputs arrive. For example, for N: A=4, B=3, C=2, D=1; for K: X=4, B=3, C=2, Y=1. This simple encoding allows for competition between N & K during recall (see the sketch below). If A is fed in from layer 4, N will have a value of 4 and K will have a value of 0. Feed in B from layer 4, and feed the output from layer 3 back in, and N will equal 43 while K will equal 3. The value for N is higher than K's, and it will remain that way for the duration of the sequence if C and D follow. I'll explain later how layer 3 cells have their activation calculated.
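
In code, I read that as a small continuous weight matrix plus a running value per layer 3 cell, something like this (my naming; the update rule is spelled out properly a couple of paragraphs down):

```ts
// Proximal weights from layer 4 into layer 3, encoding sequence order.
// Rows = layer 3 cells [N, K]; columns follow ["X","A","B","C","D","Y"].
const proximal: number[][] = [
  // X  A  B  C  D  Y
  [0, 4, 3, 2, 1, 0], // N: A=4, B=3, C=2, D=1
  [4, 0, 3, 2, 0, 1], // K: X=4, B=3, C=2, Y=1
];

// One pooling step: shift the running value up one digit and add the
// proximal weight delivered by the currently active layer 4 cell(s).
function updateLayer3(layer3: number[], layer4: number[]): number[] {
  return layer3.map((value, i) => {
    const input = layer4.reduce((sum, active, j) => sum + active * proximal[i][j], 0);
    return value * 10 + input;
  });
}

// Feeding in A then B: N goes 0 -> 4 -> 43, while K goes 0 -> 0 -> 3.
```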

The output from layer 3 is fed back into layer 4 via the apical weights. By summing the inputs and weights from the distal and apical synapses, the cell with the highest value is selected to be active. This again is based on the idea of competition. After C has been fed in, N will equal 432 and K will equal 32. C activates D & Y, but as D has apical synapses from N, D will have a value equal to N, while Y will have a value equal to K, so D wins out.
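
Here's how I'd sketch that selection step. Two things are my own assumptions rather than anything stated above: I'm guessing each layer 3 cell sends apical synapses to every member of its sequence, and I'm restricting the competition to the cells that actually receive distal input (otherwise a shared symbol like B could win on apical input alone):

```ts
// Binary apical weights from layer 3 back into layer 4 (assumed connectivity).
// Rows = [N, K]; columns follow ["X","A","B","C","D","Y"].
const apical: number[][] = [
  // X  A  B  C  D  Y
  [0, 1, 1, 1, 1, 0], // N -> A, B, C, D
  [1, 0, 1, 1, 0, 1], // K -> X, B, C, Y
];

// Pick the next active layer 4 cell: among the cells driven by the distal
// (first-order) input, the apical input from layer 3 decides the winner.
function nextLayer4(distalIn: number[], layer3: number[]): number[] {
  const scores = distalIn.map((d, post) =>
    d > 0
      ? d + layer3.reduce((sum, v, k) => sum + v * apical[k][post], 0)
      : -Infinity
  );
  const best = Math.max(...scores);
  return scores.map((s) => (s === best && best > -Infinity ? 1 : 0));
}

// After C, with N=432 and K=32: D scores 1+432=433, Y scores 1+32=33, so D wins.
```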

The proximal synapses in layer 3 detect sequences in a specific order. A cell that represents a particular sequence from layer 4 will increase in value as the sequence plays out. For ABCD, N will increment like so: N=4, N=43, N=432, N=4321. If the sequence from layer 4 is incorrect, the layer 3 cell will still increment, just not as much. So for ACBD, N will increment like so: N=4, N=42, N=423, N=4231. For DCBA, N will increment: 1, 12, 123, 1234. The value of a cell in layer 3 is calculated as layer3[i] = layer3[i] * 10 + layer4[j], where layer4[j] is the proximal input delivered by the currently active layer 4 cell j (i.e. that cell's weight onto cell i). So cells in layer 3 that have similar representations will have closer values than those with dissimilar representations.
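
That accumulation rule is easy to check on its own. Here's a tiny standalone snippet for cell N only, using the weights from above:

```ts
// Standalone check of the layer 3 accumulation rule for cell N (A=4, B=3, C=2, D=1).
const weightsForN: Record<string, number> = { A: 4, B: 3, C: 2, D: 1 };

function poolValue(sequence: string[]): number {
  return sequence.reduce((value, symbol) => value * 10 + weightsForN[symbol], 0);
}

console.log(poolValue(["A", "B", "C", "D"])); // 4321 - correct order
console.log(poolValue(["A", "C", "B", "D"])); // 4231 - one transposition
console.log(poolValue(["D", "C", "B", "A"])); // 1234 - reversed
```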

In the code above I just input the beginning of the sequence (A or X) and let the memory unfold automatically over time. The interesting thing is that if I input X, the layer 3 cell K has already predicted the whole sequence. The same goes for input A.
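
For reference, the recall loop itself is only a few lines when written against the helper sketches above (distalInput, updateLayer3, nextLayer4 and the L4_CELLS indexing all come from my earlier snippets, so this is just an illustration of the control flow, not the gist's code):

```ts
// Recall: clamp the first symbol, then let layers 4 and 3 drive each other.
function recall(firstSymbol: "A" | "X", steps = 4): string[] {
  let layer4 = L4_CELLS.map((s) => (s === firstSymbol ? 1 : 0));
  let layer3: number[] = [0, 0];
  const played: string[] = [];

  for (let t = 0; t < steps; t++) {
    played.push(L4_CELLS[layer4.indexOf(1)]);          // record the active symbol
    layer3 = updateLayer3(layer3, layer4);             // pooling cells accumulate
    layer4 = nextLayer4(distalInput(layer4), layer3);  // feedback picks the next cell
  }
  return played;
}

// recall("A") -> ["A", "B", "C", "D"], recall("X") -> ["X", "B", "C", "Y"]
```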
