The paper you found is good evidence that the somatosensory cortex alone can serve as a map for body articulation, which as a whole acts in response to a given attractor in the environment. It also lists the object features required for the project Matt got me started on:
Here we report the existence, in the rat somatosensory cortex only, of a novel navigational system, which contains the full spectrum of all distinct spatial cell types including place, head-direction, border/boundary, conjunctive, speed and grid cells.
It also includes the speed variable I found necessary:
Without some method of predicting the outcome of its own motion, a fast-moving ID Lab critter will race past its food instead of slowing to a stop ahead of time. For my purposes I coded a distance-dependent circuit that lowers the confidence level of motor actions that would put the critter over the required speed limit for landing. There is no direct control of the motor, just a memory bit that becomes active only just before something bad like that happens. This is enough for motor memory to self-organize its actions accordingly. Since HTM makes predictions there should be an easy way to use that instead, but unfortunately I'm not sure what is most biologically plausible.
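A minimal sketch of that distance-dependent circuit, assuming constant deceleration; the names (`brake_bit`, `SPEED_LIMIT`, `DECEL`) and the physics are my own illustration, not the actual ID Lab code:

```python
# Hypothetical sketch of the distance-dependent circuit described above.
# Stopping distance under constant deceleration is (v^2 - v_limit^2) / (2*a);
# the memory bit turns on just before the critter can no longer slow to the
# landing speed limit in time.

SPEED_LIMIT = 1.0   # assumed max speed allowed at the moment of landing
DECEL = 0.5         # assumed maximum deceleration per time step

def brake_bit(distance_to_food: float, speed: float) -> bool:
    """True just before overshooting becomes unavoidable."""
    stopping_distance = max(0.0, speed**2 - SPEED_LIMIT**2) / (2 * DECEL)
    return distance_to_food <= stopping_distance

def adjust_confidence(confidence: float, distance: float, speed: float) -> float:
    # No direct motor control: just lower the confidence of actions taken
    # while the bit is active, and let motor memory self-organize around it.
    return confidence * 0.5 if brake_bit(distance, speed) else confidence
```

For example, at speed 2.0 the critter needs 3.0 units of distance to slow to the limit, so the bit activates once the food is closer than that.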
While looking for information on conjunctive cells I found this figure, whose panel B shows a traveling wave's path around an internal avoid area:
It is possible to narrow the route down to one path, but normally there can be many. The shortest route can be the hardest and most dangerous, which makes it not very attractive. After including all predicted obstacles to avoid along the way, the traveling wave can be narrowed down to one or several best guesses. Conjunctive cells seem to be more for drawing a final line through the 2D map, such as the path the animal commits to taking, or thinks about the most.
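The narrowing-down can be sketched as a wavefront expanding from the attractor on a small grid world (the grid and function names are my own example, not anyone's published model): obstacles block the wave, and following the wave upstream from the critter commits to one route.

```python
from collections import deque

def wave_distances(grid, attractor):
    """BFS wavefront spreading from the attractor; cells marked 1 block it."""
    rows, cols = len(grid), len(grid[0])
    dist = {attractor: 0}
    queue = deque([attractor])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def upstream_path(dist, start):
    """Follow the wave upstream: always step to a neighbor nearer the source."""
    path = [start]
    while dist[path[-1]] > 0:
        r, c = path[-1]
        path.append(min(((r + dr, c + dc)
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if (r + dr, c + dc) in dist),
                        key=dist.get))
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],   # internal avoid area
        [0, 0, 0, 0]]
d = wave_distances(grid, (0, 3))    # attractor in the top-right corner
route = upstream_path(d, (2, 0))    # critter in the bottom-left corner
```

The route necessarily flows around the avoid area; where several upstream neighbors tie, any of them is a valid "best guess," which is where many possible paths come from.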
Although it’s helpful to save the actual head direction in memory, my model predicts that while traveling, the head-direction angle can be controlled by the direction of the traveling wave as it is followed upstream to the attractor. That is what made my model come to life in a way I had never seen before.
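In that scheme the head-direction angle is not stored at all; it is read off the wave locally, by pointing toward the neighboring cell closest to the attractor. A minimal sketch, assuming a precomputed distance-to-attractor field (the 3x3 `field` below is an invented example):

```python
import math

def heading_from_wave(dist_field, r, c):
    """Head-direction angle (radians) at cell (r, c): point upstream,
    toward the 4-neighbor with the smallest distance to the attractor."""
    best = min(((dr, dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < len(dist_field)
                and 0 <= c + dc < len(dist_field[0])),
               key=lambda s: dist_field[r + s[0]][c + s[1]])
    # Grid convention: +x is the column direction, +y is the row direction.
    return math.atan2(best[0], best[1])

field = [[2, 1, 2],
         [1, 0, 1],   # attractor at (1, 1)
         [2, 1, 2]]
```

From the cell left of the attractor the heading is 0 (straight along +x); from the cell above it, pi/2 (straight down the rows), so the head always swings toward wherever the wave says the attractor is.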
For object-recognition coding purposes, the benchmark for perfection is performance when mapping straight from the input file, where there are exact coordinates for each given thing, instead of the system having to figure all that out itself and get as close as it can. This would provide a test platform that already does lifelike things, but does not yet have a neocortex. What do you think?