While on vacation I’ve given some thought to the basics of sequence learning and prediction as applied to the Simple Cortex architecture. So far this is only conceptual, and I will have to modify the SC code a little to achieve the desired results. Let’s say an SC Area has learned two sequences, ABCD and XBCY:
NEURN 0 1 2 3 4 5 6 7
STIM0 A B C D X B C Y
STIM1 _ 0 1 2 _ 4 5 6
Stimulus 0 handles the proximal, feed-forward context (“A”, “B”, “C”, etc.), and Stimulus 1 handles the lateral distal context, the index of the previously active neuron. The “_” symbol represents a cleared stimulus, indicating the start of a sequence. After learning those two sequences, let’s input “A” into Stimulus 0 and reset the Stimulus 1 context:
step 0: (A, _) activates n0, n0 predicts n1, n1 decodes B
step 1: (B, 0) activates n1, n1 predicts n2, n2 decodes C
step 2: (C, 1) activates n2, n2 predicts n3, n3 decodes D
step 3: (D, 2) activates n3, no predict neurons
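The walk above can be sketched in a few lines. This is a toy model of the idea, not the actual Simple Cortex code: each neuron is reduced to the (Stimulus 0, Stimulus 1) pair it learned, with `None` standing in for “_”.

```python
# Toy sketch of the two-stimulus sequence memory (not actual SC code).
# Neuron index = position in this list; each entry is (stim0, stim1).
learned = [
    ("A", None), ("B", 0), ("C", 1), ("D", 2),   # sequence ABCD
    ("X", None), ("B", 4), ("C", 5), ("Y", 6),   # sequence XBCY
]

def activate(stim0, stim1):
    """Return the neuron whose learned pair matches the input stimuli."""
    for n, pair in enumerate(learned):
        if pair == (stim0, stim1):
            return n
    return None  # unobserved pair: the Area would grow a new neuron here

def predict(active):
    """Neurons whose distal context (Stimulus 1) is the active neuron."""
    return [n for n, (_, s1) in enumerate(learned) if s1 == active]

def decode(n):
    """A neuron decodes back to its feed-forward value (Stimulus 0)."""
    return learned[n][0]

# Replay the (A, _) steps above: activate, predict, decode, repeat.
active = activate("A", None)        # n0
sequence = []
while active is not None:
    preds = predict(active)
    active = preds[0] if preds else None
    if active is not None:
        sequence.append(decode(active))
print(sequence)   # ['B', 'C', 'D']
```

Each loop iteration reproduces one step of the trace: the active neuron predicts its successor, and the successor decodes the next value until no predicted neurons remain.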
Now what if we input “B” into Stimulus 0? At first glance we might expect this:
step 0: (B, _) activates n1, n1 predicts n2, n2 decodes C
activates n5, n5 predicts n6, n6 decodes C
However, this is incorrect. Our input stimulus pair (B, _) is an entirely new, unique, and unobserved combination, so it is a mistake to assume either of the C’s will follow. A “B” at the start of a sequence could represent something vastly different from a “B” in the middle of either of the two learned sequences. Therefore, the Area has to learn a new neuron and start a new learned sequence:
NEURN 0 1 2 3 4 5 6 7 8
STIM0 A B C D X B C Y B
STIM1 _ 0 1 2 _ 4 5 6 _
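In the toy model this growth rule is a one-line change to activation: if no neuron has learned the input pair, append a new one. The helper name `activate_or_grow` is my own, not an SC function:

```python
# Toy sketch (not actual SC code): grow a neuron for an unobserved pair.
learned = [("A", None), ("B", 0), ("C", 1), ("D", 2),
           ("X", None), ("B", 4), ("C", 5), ("Y", 6)]

def activate_or_grow(stim0, stim1):
    for n, pair in enumerate(learned):
        if pair == (stim0, stim1):
            return n                     # known pair: reuse its neuron
    learned.append((stim0, stim1))       # unknown pair: new neuron learns it
    return len(learned) - 1

n_new = activate_or_grow("B", None)
print(n_new, learned[n_new])   # 8 ('B', None) -- the n8 in the table above
```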
But what if we want to predict what follows a B regardless of its distal context? We would have to feed Stimulus 0 into a Predict step first, then continue feeding the predicted neurons into the Predict step on subsequent time steps.
step 0: B predicts n2 and n6, n2 decodes C and n6 decodes C
step 1: n2 predicts n3, n3 decodes D
n6 predicts n7, n7 decodes Y
This is how SC can recognize sequences from the middle, although it will not be able to differentiate overlapping sequences until the disambiguating context is observed and activates the right neuron.
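The context-free walk can be sketched the same way as before, assuming the same toy memory. Step 0 matches Stimulus 0 alone, and later steps feed all predicted neurons back into prediction:

```python
# Toy sketch (not actual SC code): predict from the middle of a sequence.
learned = [("A", None), ("B", 0), ("C", 1), ("D", 2),
           ("X", None), ("B", 4), ("C", 5), ("Y", 6)]

def predict_from_value(value):
    """Seed step: every neuron whose feed-forward stimulus matches."""
    return [n for n, (s0, _) in enumerate(learned) if s0 == value]

def predict_from_neurons(actives):
    """Later steps: neurons whose distal context is any predicted neuron."""
    return [n for n, (_, s1) in enumerate(learned) if s1 in actives]

matched = predict_from_value("B")        # [1, 5] -- both learned B's
step0 = predict_from_neurons(matched)    # [2, 6] -> both decode C
step1 = predict_from_neurons(step0)      # [3, 7] -> decode D and Y
print([learned[n][0] for n in step1])    # ['D', 'Y']
```

Note how both branches stay alive through step 0 (both decode C) and only diverge at step 1, which is exactly the ambiguity described above.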
@jacobeverist I think your minicolumn grouping idea would work, but I might not actually need it after all, for these two reasons:
Based on the above thinking, I believe the 2nd problem of not having minicolumn-like architecture, “2) inhibition of neurons that activate on same value but different context”, is not an issue: SC can properly learn and predict sequences without it.
The 1st problem, “1) Neurons with shared synapses”, I can solve by having an SDR of dendrite activations and one or more vectors of dendrite addresses. Whenever a dendrite is active, the neuron will learn that active dendrite’s address.
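My reading of the shared-dendrite idea, as a hedged sketch: dendrites live in one shared pool with an activation SDR over it, and each neuron stores only the addresses of dendrites it has learned, so two neurons can reference the same dendrite. All names here are my own assumptions, not SC’s actual data structures:

```python
# Hypothetical sketch of shared dendrites via an address vector.
dendrites = [("A", None), ("B", 0)]   # shared pool of learned stimulus pairs
neuron_addrs = [[0], [1]]             # neuron -> addresses of its dendrites

def dendrite_sdr(stim0, stim1):
    """Activation SDR over the dendrite pool for this input."""
    return [pair == (stim0, stim1) for pair in dendrites]

def neuron_active(n, sdr):
    """A neuron fires if any dendrite at its learned addresses is active."""
    return any(sdr[a] for a in neuron_addrs[n])

def learn(n, sdr):
    """Whenever a dendrite is active, the neuron learns its address."""
    for addr, on in enumerate(sdr):
        if on and addr not in neuron_addrs[n]:
            neuron_addrs[n].append(addr)

sdr = dendrite_sdr("B", 0)
learn(0, sdr)                 # neuron 0 now also references dendrite 1
print(neuron_active(0, sdr))  # True: the dendrite is shared, not duplicated
```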
This type of architecture is nice for recognizing, learning, and predicting neurons with lots of contexts. I could see Simple Cortex neurons responding to more than 2 stimuli, like HTM’s sensory-motor theory:
- Feed-forward observed sensation
- Lateral temporal context
- Sensor position context
- Apical top-down perception
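Extending the toy model to those four contexts just widens the learned pair into a 4-tuple, one slot per stimulus above (field names are my own labels, not SC’s):

```python
# Hypothetical sketch: a neuron learning four stimuli, one per context.
from typing import NamedTuple, Optional

class Contexts(NamedTuple):
    sensation: str            # feed-forward observed sensation
    temporal: Optional[int]   # lateral temporal context (previous neuron)
    position: Optional[int]   # sensor position context
    apical: Optional[int]     # apical top-down perception

learned = [Contexts("A", None, 0, None)]

def activate(inp):
    for n, ctx in enumerate(learned):
        if ctx == inp:        # all four learned stimuli must match
            return n
    return None               # unobserved combination: grow a new neuron

print(activate(Contexts("A", None, 0, None)))   # 0
```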