Project: Full-layer V1 using HTM insights

Nice stuff @SimLeek :slight_smile:

I’m a very slow Python decoder; would you care to try to explain your approach a little here?

I’ve read your post on Who is currently Building HTM Systems? as well as the GitHub readme; however, I can’t say I’m confident I have a grasp on it all.
From what I understand, PyGPRetina simulates the output of retinal cells when presented with visual input (such as a video), and then you send that output to a hardcoded V1 sim?
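To check my understanding, here is a minimal sketch of the pipeline I picture. The `retina.process()` and `v1.step()` calls are purely hypothetical placeholders for your components, not PyGPRetina’s actual API; only the OpenCV frame-reading part is real:

```python
# Rough mental model of the pipeline I think you described; NOT actual PyGPRetina code.
# `retina` and `v1` stand in for your retina sim and hardcoded V1 sim (hypothetical APIs).
import cv2  # for reading video frames


def run_pipeline(video_path, retina, v1):
    """Feed each video frame through a retina sim, then a hardcoded V1 sim."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Retina stage: raw frame -> simulated retinal ganglion cell output
        retinal_output = retina.process(frame)
        # V1 stage: retinal output -> hardcoded V1 responses (e.g. oriented edges)
        v1_output = v1.step(retinal_output)
    cap.release()
```

Is that roughly the flow, or am I missing a stage in between?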

If that is the case, I’m very, very interested in the retina stuff.
Maybe I’d see a little less of a match for the hardcoded V1, although you may have developed insights here that I currently lack. To answer your concern on the other post: I’m quite inclined to believe that V1 functionality can indeed be faithfully engineered and hardcoded, but as you have certainly understood by now, my whole drive here is to see how it could self-organize from a signal such as your retinal output.

(You also seem able to write GPGPU code, while my experience with shaders is mostly restricted to graphics rendering. So, many thanks for saying hi here ^^)

Regards,
Guillaume