Just to elaborate a bit on what would be useful (not just for me, but for solidifying the theory and communicating it to others in the future). If this summary is correct, it’s actually not that useful. It’s kind of a restatement of the definition of an allocentric location signal.
What I would prefer is a statement in engineering terms, along the following lines (I’m not sure if this is at all relevant to the theory, but it’s the kind of thing that would satisfy people looking for a brief explanation):
This theory proposes that the allocentric location signal is computed by fusing two sources of information using an SDR-intersection-based multiplicative filter: 1) the current sensory input as encoded by the spatial pooler, and 2) a movement signal capable of transforming the current allocentric location estimate into the predicted location estimate at the next timestep, using a procedure described in this theory. In this way, the sensory input at time t corrects for the drift caused by repeated application of noisy motion signals.
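To make the engineering framing concrete, here is a minimal sketch of what such an SDR-intersection "multiplicative filter" could look like. All names, the shift-based motion transform, and the sensory-to-location mapping are my own illustrative assumptions, not part of the theory:

```python
# Hypothetical sketch: fuse a motion-predicted location SDR with a
# sensory-derived one by set intersection. Purely illustrative.

N = 1024  # assumed number of cells in the location layer

def motion_transform(location_sdr, delta):
    """Predict the next location SDR from the movement signal.
    A simple index shift stands in for the (unspecified) transform."""
    return {(cell + delta) % N for cell in location_sdr}

def sensory_locations(sensory_sdr, sensory_to_location):
    """Union of all location cells consistent with the current sensory input,
    given an assumed learned mapping from sensory bits to location cells."""
    candidates = set()
    for bit in sensory_sdr:
        candidates |= sensory_to_location.get(bit, set())
    return candidates

def fuse(prior_location, delta, sensory_sdr, sensory_to_location):
    """Multiplicative filter: intersect the motion prediction with the
    sensory evidence, so sensing corrects accumulated motion drift."""
    predicted = motion_transform(prior_location, delta)
    sensed = sensory_locations(sensory_sdr, sensory_to_location)
    fused = predicted & sensed
    # If the sensory input supports none of the predicted cells, keep the
    # motion prediction rather than collapsing to an empty estimate.
    return fused if fused else predicted
```

The intersection is "multiplicative" in the sense that a location cell stays active only if both sources vote for it, analogous to multiplying a motion-model prior by a sensor likelihood in a Bayes filter.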
Basically I’m heavily biased by the robotics literature on SLAM (RatSLAM used grid-cell-like techniques to map an entire suburb), and I expect that any solution to the allocentric localization problem is going to have a lot in common with SLAM techniques, since they are solving precisely the same problem. Would it be reasonable to put your proposal into the language of SLAM, Daniel?