Crazy quilting in the cortex



I agree that this idea can be a bit disturbing on first exposure. It certainly was for me.
I have been reading through the results of mapping tracts using newer imaging technology and it really does seem as if there are more connections than were originally found using injection tracing.
On the functional side of things this does make some sense. In a computer program you can access a variable from any part of the program that needs a sample of whatever information it is conveying. The brain does not work that way - if information must be available it will have to be conveyed by projections from one place to another. For sensor fusion I can see a very real need for taps of various parts of different sensory streams to be connected at different levels of processing.


Some preliminary findings…

For each of three cortical areas I investigated (somatic, visual, auditory):

  1. there is one primary sensory field projection:
  • transforms the sensory input field onto the physical cortical substrate
  • direct input from the sense through the thalamus
  • exists at birth
  • structure / topology does not change with learning
  • globally discontinuous, locally continuous
  2. there are (many?) minor input property projections for each sensory input:
  • allows different input sources to express at the same point in the sensory field
  • does not exist at birth; emerges with learning
  • these typically look like stripes between binary variables
  • may have distinct (ocular dominance) or fuzzy (SA/RA) borders
  • could emerge because of direct input source differences, or because the first level of pattern recognition in cortical processing responds to specific low-level input patterns (like SA/RA)
  3. low-level input patterns
  • observed only in anesthetized animals in V1 (as far as I could find)
  • orientations of a line or direction of movement in V1

For #1 above, visual and auditory cortex projections are most similar. Both have mirrored continuous sensory space projections. The somatic cortex also has mirror representations, but the way it treats sensory field location seems different. It is not a continuous field, but a series of semi-discontinuous patches of space, each with local continuity.

For #2 above, when input comes from different physical sources (like two eyes or two ears), there is a physical merge of neuron fibers into a striped pattern (for V1 you can even see the stripes in the LGN). Eyes and ears seem to project directly to grouped sections of neurons with sharp borders. The SA/RA stripes in somatic cortex are similar, though there seems to be a small amount of cortical processing occurring in order to respond to only one stimulus or the other (note that SA/RA stripes don’t have the sharp borders of the binocular / binaural stripes). This is learned over time. I don’t really know how the stripes emerge, but I also don’t think it matters too much as long as each input can project to the right location in the sensory space. For skin, there is no problem of separated input sources: both SA and RA can be deduced from the same patch of skin, and the firing neurons are physically close to each other in the sensory space, not in separate eyes or ears. Why there are even SA/RA stripes, I don’t understand.

For #3 above, I think this is an echo of sensory input into an unconscious cortex. Once we can monitor a waking cortex, these transformations will likely make a lot more sense wrt what is in the visual field. It is hard to say anything about these patterns except that there is likely some very low-level pattern recognition going on even in the receptive fields of the first level of input to the cortex.


Some of the early work on the #2 “local” patterns indicates that they are learned through exposure to the environment. A deprived upbringing results in impairment that persists past the plastic period; these are the Gabor filters formed by learning.

On the local stripes - these support stereo perception from the eyes and phase/location signals from the ears.
Considering that these are strong location signals, I would think this is very important to the current grid thinking being developed at the Numenta mothership.


In any experiments where the animal is anesthetized (most of those above), the object location signal doesn’t exist because it is generated in the cortex. As the eyes / ears focus on objects in the sensory field, the job of the cortical columns will be the same regardless. There is no experimental data on this, as far as I can tell.


I’m not sure what you require in the form of experimental data. I have been looking at many different papers over the years and there is considerable work out there.

This is a quick grab of papers with no sorting but I have dozens to pick from.


Great thank you!


Mark, you are really helpful in suggesting papers. Hopefully I’m not taking advantage of you, but I have a specific request.

I would love to find evidence of striping in auditory cortex along the lines of the H&W orientation/direction topology. There is clearly a binaural projection of the input just like binocular visual input, and it follows that there may be stripes for different low-level characteristics of sound like tone, attack, volume, etc.?


BTW, I just added a #3 to my preliminary findings above and updated my notes. I’ll record tomorrow.


It’s more clusters than stripes in the auditory cortex.

For those interested in looking for more background:


Some considerations for you to toss in the mix. The path to the cortex varies by sensory modality. The ear and eye have fairly direct connections that are able to preserve the important topology. The skin has to send this stuff up the spine through interneurons.

If you think of how the nerves have to grow to the right place, guided by chemical markers as the growth cones make their way - I can see how local groups can stay in sync while at a larger scale they get mixed up.

As far as clusters - there is no inherent topological meaning in sounds as there is in vision. There is grouping based on what you are trying to wring out of the sound. For location you have the speed of sound and the distance between the ears; say 0.5 foot spread and 1100 feet/second. The difference in arrival times makes 2000 Hz and up kind of important. (~0.5 ms delay) For fine discrimination of angles you are in the low number of microsecond differences.
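The arrival-time arithmetic above can be checked with a quick back-of-envelope calculation in Python. The 0.5 ft ear spacing and 1100 ft/s speed of sound are the post's own rounded figures, and the sine model for angle-dependent delay is a standard simplification, not something claimed in the post:

```python
import math

ear_spacing_ft = 0.5        # approximate distance between the ears (post's figure)
speed_of_sound_ft_s = 1100.0  # approximate speed of sound (post's figure)

# Maximum interaural time difference: sound arriving from directly to one side.
max_itd_s = ear_spacing_ft / speed_of_sound_ft_s
print(f"max ITD ~ {max_itd_s * 1e3:.2f} ms")  # ~0.45 ms, matching the "~0.5 ms" above

def itd(theta_deg: float) -> float:
    """ITD for a source at angle theta from straight ahead (simple sine model)."""
    return (ear_spacing_ft / speed_of_sound_ft_s) * math.sin(math.radians(theta_deg))

# A 1-degree shift near the midline changes the ITD by only a few microseconds,
# consistent with the "low number of microseconds" claim for fine discrimination.
delta_us = (itd(1.0) - itd(0.0)) * 1e6
print(f"ITD change for 1 degree near midline ~ {delta_us:.1f} us")
```

So the auditory system has to resolve microsecond-scale timing differences to discriminate fine angles, which is why the higher frequencies (2000 Hz and up, with periods shorter than the maximum ITD) carry the useful phase cues.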

There are other groupings that are based on the shape of the ear and its filtering properties for directional cues. This has to form a very complicated mapping to extract direction; I have no idea how the brain trains this filter without a teacher. Perhaps it is fusion with other senses, but I have never read a paper on how this comes about.

Form follows function. There are some things evolution had to get right for this stuff to work at all. Other things have some freedom, so there is unimportant variation. In movies they say “we will fix it in post” - in the brain I think it’s “we will fix it in hierarchy.”


I just read your updated notes.
Thanks for the shout-out.
Is my original “well shuffled but still topologically arranged” making more sense as you look at it?

I see that by the time you get to the association regions this will give the SDRs at this level access to the kinds of things that can be combined. I have said this earlier on this same thread - " In a computer program you can access a variable from any part of the program that needs a sample of whatever information it is conveying. The brain does not work that way - if information must be available it will have to be conveyed by projections from one place to another. For sensor fusion, I can see a very real need for taps of various parts of different sensory streams to be connected at different levels of processing."

I have come to this as a way to frame what MUST be happening in the brain. For data to come together there has to be some connection. These connections run the gamut from sensor fusion to the creation of concepts.

This could be through the hierarchy (map-to-map connections), or it could be serial, as in the proposed “loop of consciousness” mechanism that passes something through awareness to make it available to other parts of the brain.


In terms of merging disparate input sources across the same topology, that makes sense as long as both input topologies match up, right? For eyes/ears this makes sense because we can assume both input pathways are mirror images, but what if they bring their own twisted topological versions of reality to the associative region?


I will get evasive and avoid directly answering your question about topology.

Let me throw this out to you for consideration:

Learning your personal space you have vision, audition, touch, joint angle.

Focal lesions in certain parts of the parietal cortex (part of the association area) cause degradation in visually guided grasping. This is a specific deficit. If you did not test for this you may not notice that this person was having any problems.

What information is being processed at that point?

Now apply that back to your question.

Another case:

A focal lesion in the area between the main parietal association pool and the auditory cortex (about the size of a pea) can cause the extremely narrow deficit of the loss of ability to read printed words. I have a paper on this.

On this particular map some very specific information is being connected and passed on.

This area seems to be related to visual parsing, but it passes that on to the auditory area for speech recognition. These areas are very different in shape and organization, yet there is still some important mapping between them.


@rhyolight - reading through your notes I see that we are still not exactly on the same page when it comes to “well shuffled with preserved topology” in the association areas.
Check out this picture:

Each dot is a projection axon from other maps to the association region. Assume that blue is vision, yellow is touch, black is touch sampled at a larger scaling, and magenta is vision sampled at a larger scaling.

Each sense is topologically preserved.
This mixing of scales was what I was trying to get across with this image:

So in the association area the projections are a stew of senses, with samples of different scales on each sense present.

I have asked before if anyone was considering what it means when multiple source maps project to the same target map. There is considerable evidence that this is happening all over the cortex. This is what I was trying to get across with the connection matrix posts.

I had also stated earlier that I suspect that the purpose of the layers of the hierarchy is to extract maps of as many features as possible in a topologically related way to project into this stew.

Some stereo disparity, some color, some edges, some movement, …

An SDR (dendrite) snaking through this stew will have multiple senses, features, and scalings available to form associations. The hex-grid that forms and connects to other hubs is signaling that some learned pattern of features is forming at this location in the association area. A hex-grid is agnostic as to what is in the pattern, just that it has some location and extent on the map, and is signaled by a particular angle, size, and relative phase/starting node of the hex-grid coding.


Sorry, there is not much information around the internet about what slabs are. Are they only structures in V1, or are there slabs somewhere else?


Similar patterns can be seen in S1 as well. And in A1 there are binaural patches instead of slabs, but definitely interesting topology.


Thanks! Could you tell me how many neurons are usually in a slab (or at least order of magnitude)?


So I did the journal club yesterday and I have to update my brain about some concepts I had wrong. I’m definitely thinking a bit differently about some concepts after talking it out with Jeff and the research team. I’m working on updating my notes, but I’ll present my findings soon.

Additionally, it is clear that I did not understand @Bitking’s “well-shuffled” idea properly, so my apologies for that. I will re-read your post once my brain clears up and respond to it as well.


There are tons of papers referenced above that provide evidence of these structures if you wanna read them like I did ;). But don’t worry I’ll present my findings soon enough.

It’s not that simple. Some of these patterns look a lot like fingerprints, so some slabs will be very short while others span the complete region.


4 posts were split to a new topic: 1000 Brains Theory Q&A