Encoding vision for HTM

Here is an early implementation of an event-based sensor, compared against the default RGB sensor. It works more simply and better than I expected. You can control its sparsity via a threshold, and you can even fix the sparsity by introducing some sort of inhibition among pixels. It also has significantly lower dimensionality than the RGB sensor: each pixel is either on or off.
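To make the idea concrete, here is a minimal sketch of that kind of encoder, assuming frames arrive as grayscale NumPy arrays. The names `event_encode`, `threshold`, and `k` are illustrative, not from the actual implementation; the fixed-sparsity path uses a simple global k-winners-take-all as the "inhibition among pixels".

```python
import numpy as np

def event_encode(prev_frame, frame, threshold=0.1, k=None):
    """Binary event encoding: a pixel fires when its intensity
    change since the previous frame exceeds the threshold.

    If k is given, exactly k pixels fire (global k-winners-take-all),
    which fixes the sparsity regardless of the threshold.
    """
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    if k is None:
        # Sparsity controlled by the threshold alone.
        return (diff > threshold).astype(np.uint8)
    # Fixed sparsity: only the k largest changes fire.
    events = np.zeros(diff.size, dtype=np.uint8)
    winners = np.argpartition(diff.ravel(), -k)[-k:]
    events[winners] = 1
    return events.reshape(diff.shape)
```

Note the dimensionality point: the output is one bit per pixel, versus three channels of 8-bit intensity for an RGB frame of the same resolution.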

The main limiting factor is that the speed of image motion affects everything. The current agent movement, which rotates and jumps from Voronoi cell to Voronoi cell (0:15 in the thesis video), needs to be adjusted accordingly. This may prove to be a useful constraint in the long run, though.

What would be the preferred way of handling static images, though? And what happens when the agent stops moving; should it see nothing at all?
