Note: I am not suggesting that the following method is a biologically plausible mechanism.
How could a sequence of SDRs be stored in memory and retrieved from any point? How could SDRs be stored in a sequence while maintaining their semantic value? These are the problems I will try to address here.
Mapping (associative memory)
The cortex could be thought of as a collection of connected maps. Each map assigns SDRs to other SDRs. If two SDRs are active simultaneously (‘spatial contact’) or occur in close succession (‘temporal contact’), they form a synaptic binding. This binding is the mapping that associates two or more SDRs together. If SDR A frequently activates prior to B, then A can be thought of as ‘mapping to’ B. Relationships can go beyond one-to-one: A can map to B & C (one-to-many), A & B can map to C (many-to-one), and A & B can map to C & D (many-to-many).
For a single sequence this is straightforward: given [A,B,C,D], A maps to B, B maps to C, and C maps to D, all within a single map. Now suppose another sequence, [X,B,C,Y], is stored in the same map. With both [A,B,C,D] and [X,B,C,Y] in the map, there are two additional relationships: many-to-one (A => B, X => B) and one-to-many (C => D, C => Y). If A & B were inputs to the map then C would be the output. But if C is fed back into the map, both D & Y are output. This is problematic if the desired sequence was [A,B,C,D]: when C was fed back in it had no context, so the map simply returned every mapping from C. This is a problem of ambiguity.
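To make the ambiguity concrete, here is a minimal sketch in Python using frozensets of active bit indices as stand-in SDRs. The names and structure are illustrative only, not any particular library's API:

```python
# Toy SDRs: six distinct frozensets of active bit indices.
A, B, C, D, X, Y = (frozenset({3 * i, 3 * i + 1, 3 * i + 2}) for i in range(6))

mapping = {}  # SDR -> set of SDRs it maps to

def learn(sequence):
    """Bind each SDR to its successor ('temporal contact')."""
    for prev, nxt in zip(sequence, sequence[1:]):
        mapping.setdefault(prev, set()).add(nxt)

learn([A, B, C, D])
learn([X, B, C, Y])

# Feeding C back in without context returns every mapping from C:
print(mapping[C] == {D, Y})  # True -- ambiguous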
A method (which bears some resemblance to the HTM temporal memory algorithm) solves the ambiguity problem by essentially re-encoding the output (based on the inputs) and feeding it back into the map along with the next input. Given [A,B,C,D]: A maps to K (where K is a random SDR, the ‘re-encoding’ of A), B & K map to F, and C & F map to D. Likewise for [X,B,C,Y]: X maps to S, B & S map to R, and C & R map to Y.
The re-encoded outputs ((K,F), (S,R)) act as context for the next input. If C is fed into the map with the context F, then D is output; if C is fed in with the context R, then Y is output. F represents the context B & K, and K represents the context A, so C is effectively fed into the map within the context of [A,B].
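Below is a minimal sketch of one way to realize this, assuming the map stores (input, context) pairs and returns both the next SDR and a fresh random context. This only mirrors the idea; it is not the actual HTM temporal memory algorithm:

```python
import random

random.seed(42)  # for reproducibility
A, B, C, D, X, Y = (frozenset({3 * i, 3 * i + 1, 3 * i + 2}) for i in range(6))

def random_sdr(size=64, active=4):
    """A random SDR used as a re-encoding (K, F, S, R in the text)."""
    return frozenset(random.sample(range(size), active))

contextual_map = {}  # (input SDR, context SDR) -> (next SDR, next context)

def learn_sequence(sequence):
    context = frozenset()                       # empty context at the start
    for cur, nxt in zip(sequence, sequence[1:]):
        code = random_sdr()                     # re-encoding of cur in its context
        contextual_map[(cur, context)] = (nxt, code)
        context = code

learn_sequence([A, B, C, D])
learn_sequence([X, B, C, Y])

# Replaying [A,B,C,D] from the start is now unambiguous:
nxt, ctx = contextual_map[(A, frozenset())]     # B, with context 'K'
nxt, ctx = contextual_map[(nxt, ctx)]           # C, with context 'F'
nxt, ctx = contextual_map[(nxt, ctx)]           # D, not Y
print(nxt == D)  # True
```

Note how the demo also exposes the weakness discussed next: the replay has to start at (A, empty context) to reconstruct F.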
However, this method is also problematic. The whole sequence is completely dependent upon the preceding SDRs: C cannot map to D without A being at the beginning of the sequence. Every SDR must appear in exactly the same order, leaving no room for invariance. For the same reason a sequence cannot be activated (or ‘accessed’) from any point; it must always start from the beginning. Put another way: D cannot be retrieved without knowing what F is, but F cannot be known without knowing what K is, unless you back up all the way to the start at A.
Random Access Sequences
In order to access a sequence from any point, multiple SDRs representing the context must be fed in simultaneously. This general idea can be found near the beginning of T. Kohonen's book Self-Organization and Associative Memory, where the output is fed back into the map through multiple delays.
Given the sequence [A,B,C,D,E,F], E is mapped to F along with the previously active SDRs (D,C,B,A). To access F again, all that is needed are the same inputs in the same delay order. The inputs can be noisy or incomplete thanks to the flexible nature of associative memory. Each delay SDR must be re-encoded with a constant t+n SDR, otherwise the union of all the delay inputs would not encode their order.
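Here is a sketch of the delay idea (my own construction, not Kohonen's notation). The constant t+n re-encoding is modeled by shifting each SDR's bits into a slot-specific region, and recall matches by bit overlap rather than exact equality, which is what tolerates noisy or partial input:

```python
import random

random.seed(0)
SIZE = 256

def random_sdr(active=8):
    return frozenset(random.sample(range(SIZE), active))

A, B, C, D, E, F = (random_sdr() for _ in range(6))
N_DELAYS = 4

def tag(sdr, slot):
    """Re-encode an SDR with a constant 't+n' transform for its delay slot,
    modeled here by shifting its bits into a slot-specific region."""
    return frozenset(bit + slot * SIZE for bit in sdr)

def context_key(history):
    """Union of the last N_DELAYS SDRs, each tagged by its delay slot."""
    union = set()
    for slot, sdr in enumerate(reversed(history[-N_DELAYS:])):
        union |= tag(sdr, slot)                  # slot 0 = most recent
    return frozenset(union)

delay_map = {}
sequence = [A, B, C, D, E, F]
for i in range(1, len(sequence)):
    delay_map[context_key(sequence[:i])] = sequence[i]

def recall(history):
    """Associative lookup: pick the stored key with the largest bit overlap."""
    key = context_key(history)
    best = max(delay_map, key=lambda stored: len(stored & key))
    return delay_map[best]

# Random access: jump into the middle with just the recent history.
print(recall([B, C, D, E]) == F)  # True
```

Without the tag() step, [B,C,D,E] and [E,D,C,B] would produce the same union, so the constant per-slot re-encoding is what preserves order.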
The disadvantage compared to the previous method is that the contextual memory is limited. The previous method's context persisted throughout the whole sequence, while this method's context only reaches back as far as the number of delay inputs. However, this is the same limitation humans face: when learning a sequence we can only hold a limited number of its items in mind at any one time.
There is another way to use previous SDRs as context. Instead of delays, ‘graded’ SDRs can encode the order of the previous SDRs. ‘Graded’ means that the active bits of an SDR can have a value greater than 1, which can represent the spike frequency of a neuron. A highly active neuron (high spiking frequency) might have a value of 4 (the maximum is arbitrary), while a neuron with the lowest frequency always has a value of 1. (See here for how an SDR can have a graded representation.)
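For concreteness, a graded SDR could be represented as a map from active bit index to an integer level standing in for spike frequency. This particular representation is my assumption; the linked post describes one in detail:

```python
MAX_LEVEL = 4  # the maximum grade is arbitrary

def grade(sdr, level):
    """Assign the same grade to every active bit of a binary SDR."""
    return {bit: level for bit in sdr}

D = frozenset({9, 10, 11})
print(grade(D, MAX_LEVEL))  # e.g. {9: 4, 10: 4, 11: 4} -- highly active
print(grade(D, 1))          # lowest firing frequency still has value 1
```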
The current SDR (t+0) in a sequence has a high spike frequency, which can be represented with a value of 4. That value degrades over time as the SDR's frequency decays: t+1 has a value of 3, t+2 = 2, t+3 = 1, t+4 = 0 (t+n denoting the SDR from n steps in the past). At any point in the sequence there is a union of previous SDRs whose varying values encode their order in time.
For example, given [A,B,C,D] with D as the current SDR, D has a value of 4, while A, which was the current SDR 3 time steps ago, has decayed to a value of 1. This union of graded SDRs maps to the next SDR (E) without re-encoding the inputs with t+n constants. For this reason the union fed into the map retains high semantic value. Again, this method is robust to noise: as long as each SDR's value is greater than that of the SDR one step older (t+n greater than t+n+1), the encoding of order stands.
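A sketch of the decaying graded union, assuming each older SDR simply drops one level per step (the names here are mine, not SDRKit's):

```python
MAX_LEVEL = 4

def graded_union(history):
    """Union of the recent SDRs, graded by recency: t+0 -> 4, t+1 -> 3, ..."""
    union = {}
    for age, sdr in enumerate(reversed(history[-MAX_LEVEL:])):
        for bit in sdr:
            # If SDRs share bits, keep the stronger (more recent) grade.
            union[bit] = max(union.get(bit, 0), MAX_LEVEL - age)
    return union

A, B, C, D, E = (frozenset({3 * i, 3 * i + 1, 3 * i + 2}) for i in range(5))

def freeze(graded):  # dicts aren't hashable; store sorted (bit, level) pairs
    return tuple(sorted(graded.items()))

graded_map = {freeze(graded_union([A, B, C, D])): E}

# D carries a grade of 4, A has decayed to 1; the union recalls E directly.
print(graded_map[freeze(graded_union([A, B, C, D]))] == E)  # True
```

An exact dictionary lookup is used here for brevity; a real associative memory would match the graded union by similarity, giving the noise tolerance described above.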
I’ve implemented a working test to validate this theory today using my SDR library SDRKit. I used the SDRRepository & Buffer to achieve the above method. I am happy with the outcome as it seems quite robust. Even if the input union is incorrect (i.e. t+0 & t+1 are reversed) it seems to recover after a few steps, though I’m not sure how this works. Using the graded union, subsequences can be abstracted (e.g. [A,B,C,A,B,C,A,B,C] has an [A,B,C] subsequence) fairly easily (in theory), which will be a big part of the temporal hierarchy and higher-order context generation.