I’ve watched a couple of videos of Jeff Hawkins talking about their work on sensorimotor inference and allocentric object representations. Is there any code associated with these experiments?
I can see how it might work, but the biggest question I have is how to do the coordinate transformations in an HTM way. I’ve read some modeling papers where they shifted the expected visual-field inputs by feeding in a motor efferent copy from an intended saccade. Of course, that was a very restrictive model for a very specialized application. I’d like to know the more general approach to doing coordinate transformations.
I presume that it would be represented by a number of “paths” that a predicted sequence could take, and the particular motor efferent copy that it receives dictates which sequence will be predicted for the upcoming inputs. Similarly, for a particular sequence of inputs, you could back out a motor prediction.
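To make my mental model concrete, here’s a toy sketch of what I mean, assuming the mechanism is essentially a learned mapping from (current feature, motor efferent copy) to predicted next feature, which can also be inverted to back out a motor prediction from an observed transition. This is not HTM code and every name here is hypothetical, not from any Numenta repository:

```python
# Toy sketch: efferent copy gates which prediction "path" is taken.
# All names are hypothetical illustrations, not Numenta APIs.

transitions = {}  # (feature, motor_command) -> predicted next feature

def learn(feature, motor_command, next_feature):
    """Associate a sensed feature and motor command with the next feature."""
    transitions[(feature, motor_command)] = next_feature

def predict(feature, motor_command):
    """Forward model: the efferent copy selects which path gets predicted."""
    return transitions.get((feature, motor_command))

def back_out_motor(feature, next_feature):
    """Inverse model: given an observed transition, infer motor command(s)."""
    return [m for (f, m), nxt in transitions.items()
            if f == feature and nxt == next_feature]

# Example: features of an object, indexed allocentrically
learn("rim", "move_down", "side")
learn("rim", "move_across", "rim")
learn("side", "move_down", "base")

print(predict("rim", "move_down"))      # forward prediction -> "side"
print(back_out_motor("rim", "side"))    # inverse prediction -> ["move_down"]
```

Obviously a real HTM implementation would use SDRs and distal predictions rather than a lookup table, but this is the forward/inverse relationship I have in mind.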
But these would be allocentric predictions of motor signals. I think I would be happy with just that, without having to worry about the transform between allocentric and egocentric coordinate frames.
Any experiments or code along these lines? I’ve looked in the repositories and I can’t find anything obvious.