Sorry for the delay in jumping into this interesting conversation. I would like to respond to Paul’s original query rather than address the subsequent thoughtful comments. Part of the confusion might be due to the discrepancy between the current state of the “theory” and the current state of our network “simulations”. You correctly point out that the “location layer” simulation in our recent manuscript doesn’t rely on mini-columns, whereas I talked about mini-columns in the podcast I did with Matt. When we do simulations we are almost always implementing a subset of what we think the brain is actually doing. Either we don’t know enough yet to implement a more complete network, or we pick a subset to help us better understand the results of the simulation. As long as the simulation illustrates an important point and helps us better understand the ultimate solution, it is worth doing.
In this case the general principle of L4 and L6 interacting via unions to resolve ambiguity of location is an important idea. The simulation and network don’t include mini-columns, orientation, learning of the grid cell modules, etc. Even though we know it is not complete, we hope others find it useful. We did. BTW, a very recent paper from David Tank’s lab suggests yet another way grid cells could represent unique locations, and unions of locations. I managed to squeeze in a last-minute reference to Tank’s paper in our “Frameworks” paper that was posted last week.
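To make the union idea concrete, here is a minimal toy sketch of how a union of candidate locations can be narrowed by successive sensed features. Everything here, including the function name, the grid coordinates, and the toy "object map", is an illustrative assumption of mine, not the actual network from the manuscript:

```python
# Toy sketch: resolve location ambiguity by intersecting a union of
# candidate locations with the locations consistent with each new
# sensation. Illustrative only; not the L4/L6 network model.

def narrow_locations(object_map, movements, sensed_features):
    """Start with every location consistent with the first feature,
    then shift the union by each movement and keep only candidates
    whose predicted feature matches the next sensation."""
    candidates = {loc for loc, feat in object_map.items()
                  if feat == sensed_features[0]}
    for move, feat in zip(movements, sensed_features[1:]):
        # Shift every candidate location by the movement vector...
        shifted = {(x + move[0], y + move[1]) for (x, y) in candidates}
        # ...and intersect with locations matching the sensed feature.
        candidates = {loc for loc in shifted
                      if object_map.get(loc) == feat}
    return candidates

# Feature "A" occurs at two locations, so one touch is ambiguous;
# moving and touching again collapses the union to a single location.
toy_object = {(0, 0): "A", (1, 0): "B", (2, 0): "A", (3, 0): "C"}
print(narrow_locations(toy_object, [(1, 0)], ["A", "B"]))  # {(1, 0)}
```

The point of the sketch is only the narrowing dynamic: an ambiguous sensation yields a union, and movement plus the next sensation prunes it.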
Now a bit about mini-columns.
The brain needs a way to represent similar inputs differently in different contexts. For example, a melody is composed of a series of intervals. The intervals, and even sequences of intervals, repeat, and yet the brain doesn’t lose track of where it is in the melody. It must have an internal state representing “this interval at this location”. Similarly, the same muscle contractions occur in different behavioral sequences, which are just like melodies. Representing something differently in different contexts is a basic need of brains. Our mini-column hypothesis addresses this functional need in an elegant way and matches numerous experimental observations.
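A bare-bones sketch of that idea: the input alone selects a column, and the context selects which cell within the column becomes active, so the same input gets a distinct representation in each context. The class, the cell-allocation rule, and the melody labels are all my illustrative assumptions, not the HTM implementation:

```python
# Toy mini-column layer: one column per input token; a different cell
# in that column is allocated for each new context in which the input
# appears. Illustrative only.

class MiniColumnLayer:
    def __init__(self):
        self.cells = {}      # (input, context) -> cell index
        self.next_cell = {}  # input -> next free cell in its column

    def represent(self, inp, context):
        """Return (column, cell): the column depends only on the
        input; the cell depends on the context."""
        key = (inp, context)
        if key not in self.cells:
            cell = self.next_cell.get(inp, 0)
            self.next_cell[inp] = cell + 1
            self.cells[key] = cell
        return (inp, self.cells[key])

layer = MiniColumnLayer()
a = layer.represent("major third", "melody 1")
b = layer.represent("major third", "melody 2")
print(a, b)  # same column ("major third"), different cells
```

The same interval thus never produces an ambiguous state: downstream layers can tell "this interval in melody 1" from "this interval in melody 2" by which cell fired.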
Something like mini-columns is needed in the representation of location. As explained in the frameworks paper, objects have a location space. What occupies a particular location in that space depends on the state of the object. If my finger is at some location in the space of a stapler, what the finger feels depends on the state of the stapler: is it open or closed? Similarly, what icon appears in the corner of my smartphone display depends on the state of the smartphone. Cortical grid cells represent location, therefore we need a method of representing the same location in different contexts. Mini-columns are a logical candidate to do this.
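The same point in a tiny data sketch: a location by itself does not determine the sensed feature; the pair (location, object state) does. The stapler dictionary and feature strings are purely illustrative assumptions:

```python
# Illustrative only: the same location predicts different features
# depending on the object's state, so the representation of location
# must carry context (the role proposed for mini-columns).
stapler = {
    ((0, 0), "open"):   "exposed staples",
    ((0, 0), "closed"): "smooth metal plate",
}
print(stapler[((0, 0), "open")])
print(stapler[((0, 0), "closed")])
```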
As also mentioned in the frameworks paper, the cortex needs to learn sequences of displacement cells, therefore we suspect mini-columns are used here too. (BTW, I now think that L5 displacement cells might be the only place where pure sequence memory exists. Displacement cells are ideal for representing musical intervals, that is, pitch invariance, and therefore this might be where melodies and other sequences are learned.)
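A quick numerical illustration of why displacements give pitch invariance: a melody stored as pitch differences is identical at every transposition. This is just arithmetic on MIDI note numbers, not a model of displacement cells:

```python
# Illustrative only: storing a melody as pitch *displacements*
# (differences between successive notes) makes it invariant to
# transposition, which is the property displacement cells would give.

def displacements(pitches):
    return [b - a for a, b in zip(pitches, pitches[1:])]

melody = [64, 64, 65, 67]              # E E F G as MIDI numbers
transposed = [p + 5 for p in melody]   # same melody, a fourth higher
print(displacements(melody) == displacements(transposed))  # True
```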
We are currently trying to unite a whole slew of things that we know macro-columns must be doing. I am working on the idea that mini-columns span across layers, providing a mechanism for tying the different layers together. For example, in V1, iso-orientation slabs are created in L4. Mini-columns with these receptive fields intersect L6 grid cells (as in Tank’s paper), forming a unique representation of location based on the context of sensory input.
I hope that helps.