TM: ABACAD pattern!

We all know the TM explanation of high-order sequences, e.g. the two overlapping sequences:

A B C D
X B C Y

where C will predict D or Y based on the split that happens at the beginning, so it looks like:

A => B1 => C1 => D
X => B2 => C2 => Y

Once we are at B1, it is guaranteed we will predict D; similarly, B2 => Y.
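As a toy illustration (a hypothetical lookup-table sketch, not the real SDR/cell machinery), the versioned transitions behave like this:

```python
# Toy lookup table for the two learned sequences A B C D and X B C Y.
# Versioned states (B1 vs B2) stand in for different cells in the same
# column; this is an illustration, not the actual TM data structure.
transitions = {
    "A":  "B1", "X":  "B2",   # the split at the start picks the version
    "B1": "C1", "B2": "C2",
    "C1": "D",  "C2": "Y",
}

def predict(state):
    """Single stored prediction for a versioned state, or None."""
    return transitions.get(state)

# Once we are at B1, the rest of the path is forced:
path = ["B1"]
while predict(path[-1]) is not None:
    path.append(predict(path[-1]))
print(path)  # ['B1', 'C1', 'D']
```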
The logic seems to break for a sequence like this:

ABACAD

Now if I get A, what's next? We have 3 different splits!
We can interpret it like:

… and so on


If I understand what you are asking, the learned sequence in context in this scenario would be A1 B1 A2 C1 A3 D1. So assuming nothing is predicted when an A comes in (for example if we have reached the end of the learned sequence, or if we did a reset), the A minicolumns would burst, predicting B1, C1, and D1. If the next input were B, C, or D, the ambiguity would be resolved, and the rest of the sequence could be finished out without any bursting.
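That bursting-then-resolving behaviour can be sketched minimally (versioned cells as plain strings; a hypothetical illustration, not the real implementation):

```python
# Learned high-order sequence in context: A1 B1 A2 C1 A3 D1.
learned = {"A1": "B1", "B1": "A2", "A2": "C1", "C1": "A3", "A3": "D1"}

def burst(column):
    """Unpredicted input: every cell (version) in the column fires,
    so we predict the union of what each version predicts."""
    return {nxt for cell, nxt in learned.items() if cell[0] == column}

def resolve(predicted, column):
    """Next input arrives: keep only the predicted cells in its column."""
    return {cell for cell in predicted if cell[0] == column}

preds = burst("A")          # {'B1', 'C1', 'D1'}
print(resolve(preds, "B"))  # {'B1'} -- ambiguity resolved
```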


At first the TM would predict all 3 outcomes from A (‘B’, ‘C’, and ‘D’) because it wouldn’t have learned enough context to distinguish between the 3 A’s.

However if the sequence is repeated enough times, the TM will learn all of the context there is – so it will come to know that the ‘A after D’ is different from ‘A after B’ or ‘A after C’. Once it knows this A is ‘A after D’, it will predict more precisely (‘B’ only in that case).

These different versions of A are sometimes denoted A’, A’’, A’’’, etc.
Each A-version activates the same columns, but different cells within those columns, so A’ cells would connect to D’ cells, while A’’ would connect to B’ and A’’’ to C’.
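One way to caricature those per-version connections in code (cells as (column, version) tuples; a sketch assuming a cell becomes predictive when the cells it connects to were just active):

```python
# Distal connections point backwards in time: A' listens to D', so the
# context "A after D" is a distinct cell from "A after B" or "A after C".
distal = {
    ("A", 1): {("D", 1)},   # A'   = A after D
    ("A", 2): {("B", 1)},   # A''  = A after B
    ("A", 3): {("C", 1)},   # A''' = A after C
    ("B", 1): {("A", 1)},   # B' listens to A', so "A after D" predicts B
}

def predictive(active):
    """Cells whose connected source cells were all just active."""
    return {cell for cell, sources in distal.items() if sources <= active}

print(predictive({("D", 1)}))  # {('A', 1)}  -> this A is "A after D"
print(predictive({("A", 1)}))  # {('B', 1)}  -> it predicts B only
```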

For a more complex version of this pattern (for instance with multiple ‘ABA’ or ‘CAD’ subsequences), the TM will eventually learn to distinguish between them too. In those cases it would simply need to see the pattern repeated more times, in order to make those distinctions.


So… first:
ABACAD, generates
next, ABACAD, generates
next, AXABAC, generates

… and if repeated enough, will B2:A6:C2 morph into B1:A2:C1?


Hmm… I see. If I have stored transitions somehow:
==> ABA
A1 => B1
B1 => A1

and I get: C
It expected B, so the transitions will change to:

B1 => A2
A2 => C1

So: revise the previous transition if it is not strong enough, and create the new transition.
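This revise-and-replace idea could be sketched with a per-transition strength (threshold and decrement values are hypothetical; this models the poster's scheme, not the TM permanence rule):

```python
# Learned ABA loop, each transition carrying a strength.
strengths = {("A1", "B1"): 0.4, ("B1", "A1"): 0.4}
THRESHOLD, PUNISH = 0.3, 0.2

def revise(prev, state, predicted, actual, new_state):
    """At `state` (reached from `prev`) we predicted `predicted` but saw
    `actual`. Weaken the failed transition; once it is weak enough,
    re-route `prev` through a fresh version of the state."""
    strengths[(state, predicted)] -= PUNISH
    if strengths[(state, predicted)] < THRESHOLD:
        del strengths[(prev, state)]            # B1 => A1 retired
        strengths[(prev, new_state)] = 0.4      # B1 => A2 created
        strengths[(new_state, actual)] = 0.4    # A2 => C1 created

# ABA learned, then a C shows up where B was expected:
revise(prev="B1", state="A1", predicted="B1", actual="C1", new_state="A2")
print(sorted(strengths))
```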


I believe you are describing the repeating-inputs scenario. If you repeat a sequence, the vanilla TM algorithm bursts when it reaches the end of the known sequence, adds one more element to the sequence, then cycles through it again, bursts, adds one more element, and so on. So given enough iterations, it learns an ever-growing sequence like:

A1 B1 A2 C1 A3 D1 A4 B2 A5 C2 A6 D2 A7 B3 …
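That ever-growing versioned stream can be generated with a tiny helper (a toy model of the outcome, not of the bursting mechanics):

```python
def versioned_stream(seq, n):
    """Repeat `seq`, numbering each occurrence of a letter as it appears,
    which reproduces the versioned sequence the TM ends up learning."""
    counts, out, i = {}, [], 0
    while len(out) < n:
        letter = seq[i % len(seq)]
        counts[letter] = counts.get(letter, 0) + 1
        out.append(f"{letter}{counts[letter]}")
        i += 1
    return out

print(" ".join(versioned_stream("ABACAD", 14)))
# A1 B1 A2 C1 A3 D1 A4 B2 A5 C2 A6 D2 A7 B3
```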
