Object Recognition Touch Demo

Could you speak more about “Build Object Recognition Touch Demo” for me? I’ve always assumed a problem like object recognition in any capacity would require highly invariant representations made possible by hierarchical processing (simply additional layers for CNNs, but in the context of HTM, separate cortical regions, each composed of layers, talking back and forth to other regions). Is HTM really there yet?

I want to take the “Thing” demo we’ve shown here and turn it into something that displays a visualization of two Layers and all the neurons in them. I think I have a way to show the output layer(s) converging on an object. Luiz is doing all the hard work on the highbrow side.

So this is basically a visualization task to show cells representing objects as they are touched.
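One way to picture the “converging on an object” part is as the union of candidate object representations in the output layer shrinking with each touch. Below is a tiny, self-contained Python sketch of just that idea; the object names, features, cell counts, and touch sequence are all made up for illustration, and this is not the actual “Thing” demo or highbrow code:

```python
# Toy illustration only: output-layer activity shown as the union of the
# representations of all objects still consistent with the touches so far.
# As touches accumulate, the candidate set (and the active-cell union) shrinks.
import random

random.seed(42)

N_OUTPUT_CELLS = 1024
CELLS_PER_OBJECT = 40

# Each "learned" object gets a fixed random set of output cells (made up).
objects = {
    name: set(random.sample(range(N_OUTPUT_CELLS), CELLS_PER_OBJECT))
    for name in ("cup", "can", "box")
}

# Which features each object contains (made up for the example).
object_features = {
    "cup": {"rim", "handle", "curved-side"},
    "can": {"rim", "curved-side", "flat-bottom"},
    "box": {"edge", "flat-side", "flat-bottom"},
}

def consistent_objects(touches_so_far):
    """Objects whose feature sets contain every feature touched so far."""
    return [name for name, feats in object_features.items()
            if set(touches_so_far) <= feats]

touches = []
for feature in ("rim", "curved-side", "handle"):
    touches.append(feature)
    candidates = consistent_objects(touches)
    # The visualization would highlight this union of cells in the output layer.
    active_cells = set().union(*(objects[n] for n in candidates)) if candidates else set()
    print(f"after touching {feature!r}: {len(candidates)} candidate object(s), "
          f"{len(active_cells)} active output cells")
```

Running it, the active-cell count drops as the touches rule out objects, which is exactly the effect the layer visualization would make visible.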
