Numenta Research Meeting - July 27

Excellent discussion as always. As I was listening in, I pulled out a notepad and began frantically scribbling. I felt like I’d seen this problem before (or at least one strikingly similar to it).

The mathematics to describe this situation has been around since Copernicus first gave us the heliocentric model of the solar system (and perhaps even earlier). It’s a relatively straightforward application of consistent coordinate transformations.

Of course, the motion of the planets in the solar system is a vastly simpler problem to work out than the motion of ourselves and of the individual objects in our surroundings. But the essential nature of inferring the kinematic parameters of the coordinate transformations remains the same. Copernicus had centuries’ worth of geocentric observations of the planets from which to construct his model, whereas our brains are constantly making the same inferences, in real time, from egocentric sensory inputs.
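
To make the analogy concrete, here is a minimal sketch (Python/NumPy, with made-up pose values and function names of my own) of the kind of transformation I mean: a point observed in an egocentric (sensor) frame mapped into an allocentric (world) frame, given the observer’s pose.

```python
import numpy as np

def rotation_2d(theta):
    """2x2 rotation matrix for a heading angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def ego_to_allo(point_ego, observer_pos, observer_heading):
    """Map a point observed in the egocentric frame into the allocentric frame.

    point_ego        -- 2D point relative to the observer (x = forward, y = left)
    observer_pos     -- observer's position in the world frame
    observer_heading -- observer's heading in the world frame (radians)
    """
    return observer_pos + rotation_2d(observer_heading) @ point_ego

# Illustrative values only: an object seen 1 m ahead and 0.5 m to the left,
# by an observer at (2, 3) facing along the world y-axis (heading = pi/2).
obj_world = ego_to_allo(np.array([1.0, 0.5]),
                        np.array([2.0, 3.0]),
                        np.pi / 2)
print(obj_world)   # -> approximately [1.5, 4.0]
```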

The problem we face (and solve spectacularly each and every day, on a moment-to-moment basis) is to rapidly estimate the relative position, orientation, and motion of ourselves and of numerous independent objects in our immediate vicinity. So, while I can probably write out the mathematics required to do this parameter estimation by solving a massive set of linear equations, I have less intuition about how the brain solves the same problem with simple switching elements.
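
As a purely illustrative example of that linear-algebra route (the setup, names, and numbers below are mine, not anything presented in the meeting), this sketch estimates a 2D rotation-plus-translation from noisy egocentric/allocentric point correspondences by ordinary least squares. Each correspondence contributes two linear equations, so with many observed points the four parameters are heavily over-determined.

```python
import numpy as np

def estimate_rigid_transform(ego_points, allo_points):
    """Least-squares fit of a 2D rotation-plus-translation (relaxed to a
    similarity transform so the equations stay linear) mapping egocentric
    observations onto their allocentric positions.

    Each correspondence (x, y) -> (x', y') contributes two rows:
        x' = a*x - b*y + tx
        y' = b*x + a*y + ty
    with unknown parameters (a, b, tx, ty).
    """
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(ego_points, allo_points):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(yp)
    params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    a, b, tx, ty = params
    theta = np.arctan2(b, a)            # recovered heading
    return theta, np.array([tx, ty])

# Illustrative data only: observations generated from a known pose plus noise.
rng = np.random.default_rng(0)
true_theta, true_t = 0.3, np.array([2.0, -1.0])
R = np.array([[np.cos(true_theta), -np.sin(true_theta)],
              [np.sin(true_theta),  np.cos(true_theta)]])
ego = rng.uniform(-1, 1, size=(20, 2))
allo = ego @ R.T + true_t + 0.01 * rng.standard_normal((20, 2))
print(estimate_rigid_transform(ego, allo))   # ~ (0.3, [2.0, -1.0])
```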

Assuming there is a way to frame the problem so that SDRs represent the sensor data (the egocentric state), and some way of processing those representations to arrive at another SDR representing an external, inferred (allocentric) state that is responsible for generating the observed data, it should be possible to produce results similar to the linear-algebra solution.
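
I don’t yet know what that processing step would look like, but just to show the representation side, here is a toy encoder (my own illustration, not Numenta’s actual scalar encoder) that maps a continuous quantity such as a heading angle onto an SDR-like sparse binary vector; the sizes and sparsity are arbitrary choices.

```python
import numpy as np

def encode_angle(theta, n_bits=256, n_active=16):
    """Toy SDR-style encoder: represent an angle in [0, 2*pi) as a sparse
    binary vector with a contiguous (wrapping) block of active bits."""
    start = int((theta % (2 * np.pi)) / (2 * np.pi) * n_bits)
    sdr = np.zeros(n_bits, dtype=np.uint8)
    sdr[(start + np.arange(n_active)) % n_bits] = 1
    return sdr

# Nearby headings produce heavily overlapping SDRs; distant ones do not.
a, b, c = encode_angle(0.30), encode_angle(0.35), encode_angle(3.0)
print(int(np.dot(a, b)), int(np.dot(a, c)))   # high overlap vs. zero overlap
```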

Still pondering…
