I’m interested in experimenting with HTM and NuPIC in a simple embodied system with a sensory-motor loop (sensory: a Raspberry Pi 3 with a microphone and camera; motor: a Raspberry Pi 3 driving pan-tilt motors for the camera; learning/processing: NuPIC, most likely running on a remote computer) to see whether basic learning, similar to an infant’s, can take place. A rough sketch of the Pi side of the loop is below.
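For concreteness, here’s roughly what I have in mind for the Pi side: capture small camera frames, ship them over TCP to the remote NuPIC machine, and apply whatever pan/tilt command comes back. This is only a sketch of my plan, not working code; the host/port, the length-prefixed JSON protocol, and the `apply_pan_tilt` servo driver are all placeholders I made up. Only `picamera` and the standard library are real:

```python
import io
import json
import socket
import struct
import time

import picamera  # camera capture on the Pi

HOST, PORT = "192.168.1.50", 9999  # address of the NuPIC machine (made up)

def recvall(sock, n):
    """Read exactly n bytes (socket.recv may return short reads)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError("server closed the connection")
        buf += chunk
    return buf

def send_frame(sock, jpeg_bytes):
    """Length-prefix each JPEG so the server knows where frames end."""
    sock.sendall(struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes)

def recv_command(sock):
    """Server replies with a small JSON motor command, e.g. {"pan": -2, "tilt": 1}."""
    size = struct.unpack(">I", recvall(sock, 4))[0]
    return json.loads(recvall(sock, size))

camera = picamera.PiCamera(resolution=(160, 120))
sock = socket.create_connection((HOST, PORT))
stream = io.BytesIO()
try:
    while True:
        stream.seek(0)
        stream.truncate()
        camera.capture(stream, format="jpeg", use_video_port=True)
        send_frame(sock, stream.getvalue())
        cmd = recv_command(sock)
        # apply_pan_tilt(cmd["pan"], cmd["tilt"])  # hypothetical servo driver
        time.sleep(0.1)  # aim for a ~10 Hz loop, leaving headroom on the Pi 3
finally:
    camera.close()
    sock.close()
```

My assumption is that the remote side would decode each JPEG, encode it into an SDR, run it through NuPIC, and send back a motor command; whether that encoding step is even workable for raw vision is one of the things I’m hoping people can weigh in on.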
Has anyone here tried a similar experiment? If so, I’d be grateful if you’d share the NuPIC architecture you used, and perhaps the stumbling blocks you ran into.
I already anticipate that introducing reinforcement learning may be more than NuPIC is equipped to handle, and I’d love to hear from more experienced folks what other problems might crop up.
I’m still deciding on the metric to use for evaluating the system’s learning, but my thinking is that if it can replicate a simple visual habituation task of the kind that has been run with infants, that would be a good starting point; a sketch of how that might map onto HTM is below.
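To make the habituation idea concrete: my thought is to use the raw temporal-memory anomaly score as a stand-in for an infant’s looking time. With repeated exposure to the same encoded stimulus the score should fall, and a novel stimulus should make it spike again (dishabituation). Here’s a minimal sketch using NuPIC’s algorithm-level API (Python 2.7, which NuPIC requires); the SDR sizes are arbitrary, and the random "stimuli" are placeholders for real encoded camera frames:

```python
import numpy as np

from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.temporal_memory import TemporalMemory
from nupic.algorithms.anomaly import computeRawAnomalyScore

INPUT_SIZE = 1024    # length of the encoded stimulus SDR (arbitrary)
NUM_COLUMNS = 2048

sp = SpatialPooler(inputDimensions=(INPUT_SIZE,),
                   columnDimensions=(NUM_COLUMNS,),
                   potentialRadius=INPUT_SIZE,
                   globalInhibition=True,
                   numActiveColumnsPerInhArea=40)
tm = TemporalMemory(columnDimensions=(NUM_COLUMNS,), cellsPerColumn=16)

def step(encoding, learn=True):
    """Run one encoded frame through SP -> TM; return the raw anomaly score."""
    active = np.zeros(NUM_COLUMNS, dtype="uint32")
    sp.compute(encoding, learn, active)
    active_cols = np.nonzero(active)[0]
    # Predictions made at the previous timestep, mapped back to columns.
    prev_predicted_cols = np.unique(
        [tm.columnForCell(c) for c in tm.getPredictiveCells()])
    tm.compute(active_cols, learn=learn)
    return computeRawAnomalyScore(active_cols, prev_predicted_cols)

# Fake "stimuli": fixed random SDRs standing in for encoded camera frames.
rng = np.random.RandomState(42)
familiar = (rng.rand(INPUT_SIZE) < 0.05).astype("uint32")
novel = (rng.rand(INPUT_SIZE) < 0.05).astype("uint32")

# Habituation: the familiar stimulus's anomaly should fall with exposure...
for trial in range(20):
    print "familiar trial %2d: anomaly %.2f" % (trial, step(familiar))
# ...and a novel stimulus should make it rebound (dishabituation).
print "novel stimulus:    anomaly %.2f" % step(novel)
```

That would at least mimic the looking-time curve from infant studies, though I realize a static repeated SDR is a far cry from a real visual stream, so I’d welcome warnings about where this analogy breaks down.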
Any thoughts, considerations, or warnings are much appreciated!