How does bursting relate to sequences that are probable, but not certain?

Some sequences always happen the same way, so HTM should learn them to the point where no bursting happens, but others may be probabilistic (this can happen when some factor is missing from the input). So what happens to HTM's predictions in that case? I would think there would be more bursting during learning, and the sequential connections would get weaker, so predictions would be made with less confidence. How would that lack of confidence manifest itself?

What if all sequences presented to HTM memory are completely random? Would there just be endless bursting?

Thanks.


A single error in a learned sequence (for example, a skipped element) results in bursting, which quickly corrects itself after the next input or two (assuming they do not also contain errors). This is because bursting columns cause all learned next inputs to become predictive, so the following input should match one of those possibilities. If learning is enabled and the same error occurs enough times (depending on the configuration), it will no longer be considered an error and will stop causing bursting. Eventually, if the error happens frequently enough and the original transition stops occurring, the "error" will become the only prediction (the original will be forgotten).
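
To make that concrete, here is a minimal first-order toy in Python. It is only a sketch under heavy assumptions: it works at the column level, ignores cells, segments and permanences, learns a transition from a single exposure, and `ToySequenceMemory` is just an illustrative name, not the nupic/htm.core API. It learns A B C D E, then gets a sequence with C skipped: D bursts once, and the very next input is correctly predicted again.

```python
from collections import defaultdict

class ToySequenceMemory:
    """First-order, column-level toy of HTM sequence memory (illustrative only)."""

    def __init__(self):
        # Learned transitions: input -> set of inputs that have followed it.
        self.transitions = defaultdict(set)

    def run(self, sequence, learn=True):
        predicted = set()   # nothing is predicted at the start of a sequence
        prev = None
        for x in sequence:
            burst = x not in predicted
            if learn and prev is not None:
                self.transitions[prev].add(x)   # one exposure is enough in this toy
            # In this first-order toy the next predictions are simply all learned
            # successors of the current input, which mirrors what a bursting column
            # contributes in real HTM: every learned next input turns predictive.
            predicted = set(self.transitions[x])
            print(f"  input={x} burst={burst} predictions={sorted(predicted)}")
            prev = x

tm = ToySequenceMemory()
for _ in range(2):
    tm.run("ABCDE")              # learn the clean sequence

print("sequence with C skipped, learning off:")
tm.run("ABDE", learn=False)      # D bursts once, then E is correctly predicted again

print("same 'error' seen again with learning on:")
tm.run("ABDE")                   # B -> D gets learned here
tm.run("ABDE", learn=False)      # so D no longer bursts on the next presentation
```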

In the case of completely random inputs, most inputs will result in bursting, with the occasional input not bursting when a previously learned transition happens to line up with the current pair of inputs. The frequency of these chance alignments will increase over time as more and more circular connections are made. If the number of unique inputs is relatively small, the system could even stabilize eventually and not burst at all, with every input predicting every possible next input.
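
A similarly simplified sketch of the random-input case, using the same toy assumptions as above (column level only, single-exposure learning), shows the burst rate dropping as the pairwise transitions fill in, until every input predicts every possible next input:

```python
import random
from collections import defaultdict

random.seed(0)
alphabet = "ABCDE"               # small set of unique inputs
transitions = defaultdict(set)   # input -> learned successors
predicted, prev = set(), None

for epoch in range(5):
    bursts = 0
    for _ in range(100):
        x = random.choice(alphabet)
        if x not in predicted:
            bursts += 1
        if prev is not None:
            transitions[prev].add(x)   # learn whatever transition just occurred
        predicted = set(transitions[x])
        prev = x
    print(f"epoch {epoch}: {bursts} bursts out of 100 random inputs")

# Once every symbol has been observed to follow every other symbol, every input
# predicts all five possibilities and nothing bursts any more; the system has
# "stabilized", but the predictions no longer carry any information.
```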
