There is already a 2D object recognition project.
It is the simplest possible experiment, but it still tackles the core issues.
A moving agent equipped with more than one sensor explores a "2D object".
The goal is to achieve a stable representation in the object pool.
Current status:
- I have written the base code in Python for working with htm.core.
- I have used the grid cell encoder to encode the agent's position; the location layer is currently just this encoder (see the sketch after this list).
- See plots and results in the thread.
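For anyone who wants to reproduce the position encoding, a minimal sketch using htm.core's GridCellEncoder looks roughly like this. The parameter values (size, sparsity, periods) and the position are illustrative only, not necessarily the ones in my code:

```python
from htm.encoders.grid_cell_encoder import GridCellEncoder

# Grid cell encoder for the agent's (x, y) position.
# size / sparsity / periods below are illustrative values only.
location_encoder = GridCellEncoder(
    size     = 400,
    sparsity = 0.25,
    periods  = [6, 8.5, 12, 17, 24],
)

agent_position = [3.0, 7.0]   # (x, y) in the 2D world
location_sdr = location_encoder.encode(agent_position)

# For now this SDR *is* the location layer output.
print(location_sdr)
```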
Current tasks:
- How to implement the object pool, i.e. the spatial-temporal pooler (see the first sketch after this list)
- How to implement the distal input to the location layer (arrow no. 4 in the picture; see the second sketch after this list)
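On the first task: htm.core does not ship a ready-made spatial-temporal (column) pooler, so as a placeholder, not the final design, one option is to keep a running union of the sensory layer's active cells while a single object is being explored and feed that union through a SpatialPooler, whose output then stays relatively stable for the object. All names and sizes below (SENSORY_CELLS, POOL_COLUMNS, pool_step, ...) are hypothetical:

```python
import numpy as np
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler

SENSORY_CELLS = 1024 * 8   # hypothetical: cell count of the sensory layer
POOL_COLUMNS  = 1024       # hypothetical: size of the object pool

pool_sp = SpatialPooler(
    inputDimensions=(SENSORY_CELLS,),
    columnDimensions=(POOL_COLUMNS,),
    potentialRadius=SENSORY_CELLS,   # let every pool column see the whole input
    globalInhibition=True,
    localAreaDensity=0.02,
    boostStrength=0.0,
    seed=1,
)

union = SDR(SENSORY_CELLS)        # running union of sensations for the current object
object_repr = SDR(POOL_COLUMNS)   # pooled (object-level) representation

def reset_object():
    """Call when the agent starts exploring a new object."""
    union.zero()

def pool_step(sensory_active_cells, learn=True):
    """Add one sensation to the union and re-derive the object representation."""
    union.sparse = np.union1d(union.sparse, sensory_active_cells.sparse).tolist()
    pool_sp.compute(union, learn, object_repr)
    return object_repr
```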
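On the second task (arrow no. 4): if the location layer later becomes a TemporalMemory, htm.core's TM can receive distal input from another layer through its externalPredictiveInputs mechanism, which I assume is the natural place to plug the sensory layer in. The sizes and the helper function below are hypothetical, just to show the call sequence:

```python
from htm.bindings.algorithms import TemporalMemory

LOCATION_COLUMNS = 512        # hypothetical location layer width
CELLS_PER_COLUMN = 8
SENSORY_CELLS    = 1024 * 8   # hypothetical number of cells in the sensory layer

# Location layer as a TemporalMemory whose distal segments can also
# grow synapses onto cells of the sensory layer (arrow no. 4).
location_tm = TemporalMemory(
    columnDimensions=(LOCATION_COLUMNS,),
    cellsPerColumn=CELLS_PER_COLUMN,
    externalPredictiveInputs=SENSORY_CELLS,
)

def location_step(active_location_columns, sensory_active_cells, learn=True):
    """One timestep: predict using distal sensory input, then activate."""
    # Phase 1: compute predictions from internal and external (sensory) cells.
    # For this sketch the winner cells are passed the same as the active cells.
    location_tm.activateDendrites(learn, sensory_active_cells, sensory_active_cells)
    # Phase 2: activate cells in the columns driven by the grid cell encoder.
    location_tm.activateCells(active_location_columns, learn)
    return location_tm.getActiveCells()
```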
Any cooperation on the current code is welcome.
Anybody can also write their own code from scratch; we can share findings and improvements.
I am now enhancing HTMpandaVis to work with data history, to make it easier to see what is going on inside the layers. The main mission for the visualization is to get better insight into this project and push it forward.
The current code is in my fork.