Previously, Jeff Hawkins has discussed the possibility that reference frame rotations might occur locally in a cortical column. In this research meeting, he proposes an alternative possibility: that movement and sensed features are translated in the thalamus. Using vision as an example, Jeff gives an overview of the thalamus and discusses the role and mechanism that thalamocortical cells might play.
I am not a neuroscientist and, naturally, I have a stupid idea….
What if L5 “motor” output doesn’t contain motor vectors but some sort of data resembling sensory input (a kind of “Augmented Reality Glasses” trick)?
Maybe the neocortex is “cooking up” some “fake” sensory data with the clear intent of tricking the old brain into doing something for it?
I had the realization one day that “the thalamus is a router,” but I wasn’t sure what to do with it. I am very excited to see where this goes!
I was picturing each transformation “level” as its own dimension, where the transformations can be stacked. Like a gimbal, each level has its own independent axis, and together they allow a wide range of motion. (I’ve been calling it a “neural gimbal.”)
For example: when reading your phone screen:
- eye left/right
- eye up/down
- neck left/right
- neck up/down
Each level might be a “hop” through the thalamus, translating a retinal pixel to a phone-screen pixel.
Keeping track of this in real-time requires some sort of feedback to tell if the state estimation is “in focus”.
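To make the “neural gimbal” idea concrete, here’s a minimal toy sketch of my own (not anything from the meeting): each level is a rotation about its own independent axis, and stacking the levels maps a gaze vector in retinal coordinates out to the world. The function names and angle conventions are made up for illustration.

```python
import numpy as np

def rot_y(angle):
    """Rotation about the vertical axis (left/right pan)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0, s],
                     [0, 1, 0],
                     [-s, 0, c]])

def rot_x(angle):
    """Rotation about the horizontal axis (up/down tilt)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def gaze_to_world(v_retina, eye_lr, eye_ud, neck_lr, neck_ud):
    """Stacked 'gimbal' levels: neck pan/tilt, then eye pan/tilt.
    Each level is one independent axis; composing them carries a
    retinal direction vector out through every 'hop'."""
    return rot_y(neck_lr) @ rot_x(neck_ud) @ rot_y(eye_lr) @ rot_x(eye_ud) @ v_retina

v = np.array([0.0, 0.0, 1.0])   # looking straight ahead
print(gaze_to_world(v, 0.1, -0.05, 0.2, 0.0))
```

With all four angles at zero the retinal vector passes through unchanged, which is the “in focus” baseline the feedback would be checking against.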
Now that I mention it, it probably wouldn’t be too hard to train a deep learning model in a simulator to detect joint movements with optical flow and estimate independent transforms for each joint. Might be fun. I’m not sure how the system would deal with two possible inverse-kinematics solutions for a given target, though.
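That two-solution ambiguity is easy to see in the textbook planar two-link arm: for most reachable targets there are exactly two joint configurations, elbow-up and elbow-down. A quick sketch (standard geometry, nothing specific to the thalamus discussion):

```python
import numpy as np

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Return both joint-angle solutions (elbow-up and elbow-down)
    that place the tip of a planar two-link arm at (x, y)."""
    cos_q2 = (x*x + y*y - l1*l1 - l2*l2) / (2 * l1 * l2)
    q2 = np.arccos(np.clip(cos_q2, -1.0, 1.0))   # elbow angle magnitude
    solutions = []
    for elbow in (+q2, -q2):                      # the two branches
        q1 = np.arctan2(y, x) - np.arctan2(l2*np.sin(elbow),
                                           l1 + l2*np.cos(elbow))
        solutions.append((q1, elbow))
    return solutions

# Two different joint configurations reach the same target:
print(two_link_ik(1.0, 1.0))
```

Both returned angle pairs put the fingertip at the same (x, y), so optical flow of the tip alone can’t disambiguate them; you’d need flow from the elbow joint too.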
Perhaps you should get familiar with the posture-control system?
It’s the inverse-kinematics system that maps from your vestibular system to the support points.
Oops I was commenting on the bottom link to video from January. Still exciting!
But here’s a fun idea for Jeff: What if the thalamus transformation is like a muscle? That is, a command sent to the thalamus (via the branching axon from Layer 6) is a motor command… and the thalamus is the “muscle” that “moves the reference frame.” So to see the results of the action in the real world, the brain needs to use “sensors” to measure the effect of that motor command, and adjust accordingly.
So then a branching axon sends (a) a motor command to “flex” the thalamic “muscle” and (b) an efference copy so that higher levels can decode the lower-level action that was taken
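In control terms, that’s just a closed feedback loop. A toy sketch, assuming (my assumption, purely for illustration) that the thalamic “muscle” applies the command imperfectly, so the sensed effect is needed to correct it:

```python
def thalamic_muscle(command, gain=0.8):
    """Hypothetical 'muscle': the actual shift of the reference
    frame differs from the commanded one (imperfect gain)."""
    return gain * command

def reach_target(target, steps=20, lr=0.5):
    """Closed loop: (a) issue a motor command to the thalamus,
    then sense the result and adjust the next command."""
    command, state = 0.0, 0.0
    for _ in range(steps):
        state = thalamic_muscle(command)   # 'muscle' moves the frame
        error = target - state             # sensed effect vs. intent
        command += lr * error              # adjust using the feedback
    return state

print(reach_target(1.0))
```

Despite the wrong gain, the loop converges on the target, which is the point of the idea: the brain never needs to know the thalamus’s transfer function exactly, only to sense and correct. The efference copy (b) would let higher levels decode which command produced the sensed change.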
Maybe I misunderstand, but might be interesting.
I’ve had a thought in recent days that the efference copy of the motor output might be related to voting. Essentially, each column is attempting to nudge its neighbors towards a common (or at least complementary) representation. This would be sort of like multiple clocks on a wall gradually synchronizing their oscillations with one another. The resulting firing patterns would correspond to a lowest-energy configuration for the local network. The purpose of this synchronization is so that the columns in a local cortical region can pool their influence on other regions through their combined motor outputs.
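The clock analogy is essentially the Kuramoto model of coupled oscillators, so the “mutual nudging” part is easy to demo (this is the generic physics model, not a claim about actual cortical dynamics):

```python
import numpy as np

def kuramoto_sync(n=10, k=2.0, dt=0.05, steps=400, seed=0):
    """Simulate n coupled oscillators, each nudged toward its
    neighbors' phases; return the order parameter r in [0, 1]
    (0 = incoherent, 1 = fully synchronized)."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0, 2*np.pi, n)
    freqs = rng.normal(1.0, 0.05, n)    # slightly different natural rates
    for _ in range(steps):
        # Each oscillator feels the mean pull of all the others.
        coupling = (k / n) * np.sin(phases[None, :] - phases[:, None]).sum(axis=1)
        phases += dt * (freqs + coupling)
    return abs(np.exp(1j * phases).mean())

print(kuramoto_sync())   # close to 1: the 'clocks' have locked together
```

The locked state is the low-energy configuration of the phase model; the analogy would be columns settling into a shared representation before their combined output matters downstream.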
I’ve had another thought this morning related to grid cells, reference frames, and rotations of k-vectors in high-dimensional spaces using rotors (similar to quaternions). I still need to work out the math, but if my intuition is correct, it should be possible to generate high-dimensional grid cell module representations (states) and their allowed state transitions in such a way as to directly encode the topology of a low-dimensional continuous manifold using nothing but binary vectors with constant sparsity.
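I can’t sketch the rotor math yet, but the “constant-sparsity binary vectors encoding a low-dimensional manifold” part can be illustrated with the classic modular grid-cell-style code: several modules with incommensurate periods, one active cell per module (my simplification, not the k-vector construction):

```python
import numpy as np

def encode(position, periods=(5, 7, 11), cells_per_module=32):
    """Encode a 1-D position as a binary vector with exactly one
    active cell per module -> constant sparsity = len(periods)."""
    vec = np.zeros(len(periods) * cells_per_module, dtype=int)
    for m, period in enumerate(periods):
        phase = (position % period) / period      # point on this module's ring
        cell = int(phase * cells_per_module)      # discretize the ring
        vec[m * cells_per_module + cell] = 1
    return vec

# Nearby positions share most active cells; distant ones share few,
# so the overlap structure mirrors the 1-D manifold's topology.
a, b, c = encode(0.0), encode(0.1), encode(2.5)
print((a & b).sum(), (a & c).sum())   # prints: 3 0
```

Allowed state transitions are just small phase shifts in every module at once, which is where I suspect the rotor picture would come in for higher-dimensional manifolds.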