Difference between TM and backtracking TM

I’m not sure either (so I would also like to read any insights from someone who does know). But I had a different understanding of backtracking TM: it is not really about addressing noise, but more about reducing the ambiguity that piles up when a particular sequence is repeated. For example, say we have a system configured for one-shot learning, and we input the following sequence (using bold to indicate when minicolumns burst):

**A B C D A** B C D A **B**

In this case, the minicolumns for the final B will burst. With normal TM, a new representation for B would be chosen and connected to the current representation of A, growing the learned sequence a little bit longer. With backtracking TM, though, it would check whether, had the A minicolumns burst one timestep ago, B would have been correctly predicted in the current timestep (this isn’t the best example, but you get the idea…). Thus, it can recognize that it already knows this sequence.
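To make that check a bit more concrete, here is a rough Python sketch of the idea as I understand it. This is not the actual NuPIC `BacktrackingTM` code; the `tm` object and its methods (`save_state`, `restore_state`, `reset`, `compute`, `get_predicted_columns`) are hypothetical placeholders standing in for whatever temporal memory you have:

```python
def backtracking_check(tm, recent_inputs, current_input, max_backtrack=10):
    """If current_input caused a burst, look backwards through the recent
    inputs for a starting point from which the TM would already have
    predicted current_input, i.e. check whether the sequence is known.

    `recent_inputs` and `current_input` are sets of active minicolumn
    indices, oldest first. Returns how many steps back a match was found,
    or None if the input really is novel.
    """
    saved = tm.save_state()  # snapshot so the probe replays don't disturb the TM
    try:
        for steps_back in range(1, min(max_backtrack, len(recent_inputs)) + 1):
            tm.reset()  # as if the minicolumns had burst `steps_back` steps ago
            for columns in recent_inputs[-steps_back:]:
                tm.compute(columns, learn=False)  # replay without learning
            if current_input <= tm.get_predicted_columns():
                # Replaying from this point predicts the current input, so
                # the TM already knows this sequence -- no new representation
                # for current_input needs to be grown.
                return steps_back
        return None
    finally:
        tm.restore_state(saved)  # leave the TM as we found it either way
```

If I understand it correctly, the real implementation keeps the replayed state when a match is found rather than restoring and returning, but the core check is along these lines.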
