Encoding vision for HTM

I became interested in event-based sensors because I thought maybe we should look at the problem differently. Normal sensors communicate the state of the input; event-based sensors communicate the change in the input. As you said, there is no communication when there is no transition. On the other hand, HTM also learns the transitions/changes of the input. So if HTM learns transitions, why should it learn anything when there are no input transitions? In other words, do we really need to learn, for example, A->B->B->B->C? Why is learning that sequence as A->B->C (or as A->B and B->C, two separate sequences) not enough? Maybe there is another solution for the need to learn A->B->B->B->C, if that need even exists, and maybe we are using the wrong tool to make up for it. The autonomous agent I work on learns sequences with parts that have no input transitions, and this leads to unnecessary stalls and redundant action selections. Why can't it learn only the transitions, the stuff that actually changes?
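
To make that concrete, here is a minimal sketch (plain Python, not any real HTM library API) of presenting only the transitions to a sequence learner, so A->B->B->B->C is seen as A->B->C:

```python
def transitions_only(inputs):
    """Yield an input only when it differs from the previous one,
    so A, B, B, B, C reaches the learner as A, B, C."""
    previous = None
    for value in inputs:
        if value != previous:
            yield value
            previous = value

# Example: the repeated B steps are dropped before learning.
print(list(transitions_only(["A", "B", "B", "B", "C"])))  # ['A', 'B', 'C']
```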

From this perspective, event-based sensors are a perfect match for HTM. Having no active columns at all when there are no transitions makes sense, and maybe the learning should be designed around this. Or maybe this is all wrong :slight_smile:

Bonus idea: Think about the functionality of manual reset tags in current HTM theory. No input change, and as a result no active neurons, would be the reset itself. A sequence would reset itself naturally when the data stops changing. For some reason, this sounds so right.
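
A rough sketch of how that could look (the `learner`, `compute`, and `reset` names are placeholders, not a real HTM API): when the input stops changing, the wrapper resets the learner instead of feeding it the repeated value.

```python
def feed_with_natural_reset(learner, inputs, patience=1):
    """Feed only changing inputs to the learner; treat a run of
    unchanged inputs as the natural end of a sequence."""
    previous = None
    unchanged = 0
    for value in inputs:
        if value == previous:
            unchanged += 1
            if unchanged >= patience:
                learner.reset()  # data stopped changing: the sequence resets itself
            continue  # nothing new to learn on this step
        learner.compute(value, learn=True)
        previous = value
        unchanged = 0
```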
