Great thank you!
Mark, you are really helpful suggesting papers; hopefully I’m not taking advantage of you, but I have a specific request.
I would love to find evidence of striping in auditory cortex along the lines of the H&W orientation/direction topology. There is clearly a binaural projection of the input, just like binocular visual input, and it follows that there may be stripes for different low-level characteristics of sound like tone, attack, volume, etc.
BTW, I just added a #3 to my preliminary findings above and updated my notes. I’ll record tomorrow.
It’s more clusters than stripes in the auditory cortex.
For those interested in looking for more background:
Some considerations for you to toss into the mix. The path to the cortex varies by sensory modality. The ear and eye have fairly direct connections that are able to preserve the important topology. The skin has to send this stuff up the spine through interneurons.
If you think of how the nerves have to grow to the right place, guided by chemical markers as the growth cones make their way, I can see how local groups can stay in sync while in a larger sense they get mixed up.
As far as clusters go: there is no inherent topological meaning in sounds as there is in vision. There is grouping based on what you are trying to wring out of the sound. For location you have the speed of sound and the distance between the ears; say a 0.5-foot spread and 1100 feet/second. That gives a maximum arrival-time difference of about 0.5 ms, which makes 2000 Hz and up kind of important: above that, a full period fits inside the delay, so phase comparison becomes ambiguous. For fine discrimination of angles you are down in the low single-digit microseconds of difference.
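The arithmetic above can be sketched quickly. This is just back-of-the-envelope, using the same rough numbers from the post (0.5 ft ear spread, 1100 ft/s) and a simple path-difference model, not precise anatomy:

```python
import math

# Rough numbers from the post above (not precise anatomy).
SPEED_OF_SOUND_FT_PER_S = 1100.0
EAR_SPREAD_FT = 0.5

# Maximum interaural time difference (ITD): sound from straight to one
# side travels the full ear spread before reaching the far ear.
max_itd_s = EAR_SPREAD_FT / SPEED_OF_SOUND_FT_PER_S
print(f"max ITD: {max_itd_s * 1000:.3f} ms")  # ~0.455 ms

# Above this frequency, one full period fits inside the maximum delay,
# so comparing phase between the two ears becomes ambiguous.
ambiguity_freq_hz = 1.0 / max_itd_s
print(f"phase ambiguity above ~{ambiguity_freq_hz:.0f} Hz")  # ~2200 Hz

def itd_for_angle(angle_deg: float) -> float:
    """ITD for a source at the given angle off the midline, using the
    simple path-difference model: spread * sin(angle) / c."""
    return EAR_SPREAD_FT * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND_FT_PER_S

# One degree off the midline is only a handful of microseconds.
print(f"1 degree off midline: {itd_for_angle(1.0) * 1e6:.1f} us")
```

So fine angular discrimination really does come down to the brain resolving timing differences on the order of a few microseconds.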
There are other groupings that are based on the shape of the ear and its filtering properties for directional cues. This has to form a very complicated mapping to extract direction; I have no idea how the brain trains this filter without a teacher. Perhaps it is fusion with other senses, but I have never read a paper on how this comes about.
Form follows function. There are some things evolution had to get right for this stuff to work at all. Other things have some freedom, so there is unimportant variation. In movies they say “we will fix it in post”; in the brain I think it’s “we will fix it in hierarchy.”
I just read your updated notes.
Thanks for the shout-out.
Is my original “well shuffled but still topologically arranged” making more sense as you look at it?
I see that by the time you get to the association regions, this will give the SDRs at that level access to the kinds of things that can be combined. I said this earlier in this same thread: "In a computer program you can access a variable from any part of the program that needs a sample of whatever information it is conveying. The brain does not work that way - if information must be available it will have to be conveyed by projections from one place to another. For sensor fusion, I can see a very real need for taps of various parts of different sensory streams to be connected at different levels of processing."
I have come to this as a way to frame what MUST be happening in the brain. For data to come together there has to be some connection. These connections run the gamut from sensor fusion to the creation of concepts.
This could be through the hierarchy (map-2-map connections), or it could be serial, as in the proposed “loop of consciousness” mechanism passing something through awareness to make it available to other parts of the brain.
In terms of merging disparate input sources across the same topology, it makes sense as long as both input topologies match up, right? For eye/ear this makes sense because we can assume both input pathways are mirror images, but what if they bring their own twisted topological versions of reality to the associative region?
I will get evasive and avoid directly answering your question about topology.
Let me throw this out to you for consideration:
In learning your personal space you have vision, audition, touch, and joint angle.
Focal lesions in certain parts of the parietal cortex (part of the association area) cause degradation in visually guided grasping. This is a specific deficit; if you did not test for it, you might not notice that this person was having any problems.
What information is being processed at that point?
Now apply that back to your question.
A focal lesion in the area between the main parietal association pool and the auditory cortex (about the size of a pea) can cause the extremely narrow deficit of the loss of ability to read printed words. I have a paper on this.
On this particular map some very specific information is being connected and passed on.
This area seems to be related to visual parsing, but it passes that on to the auditory area for speech recognition. These areas are very different in shape and organization, yet there is still some important mapping between them.
@rhyolight - reading through your notes I see that we are still not exactly on the same page when it comes to “well shuffled with preserved topology” in the association areas.
Check out this picture:
Each dot is a projection axon from other maps to the association region. Assume that blue is vision, yellow is touch, black is touch sampled at a larger scaling, and magenta is vision sampled at a larger scaling.
Each sense is topologically preserved.
This mixing of scalings is what I was trying to get across with this image:
So in the association area the projections are a stew of senses, with samples of different scales on each sense present.
I have asked before if anyone was considering what it means when multiple source maps project to the same target map. There is considerable evidence that this is happening all over the cortex. This is what I was trying to get across with the connection matrix posts.
I had also stated earlier that I suspect that the purpose of the layers of the hierarchy is to extract maps of as many features as possible in a topologically related way to project into this stew.
Some stereo disparity, some color, some edges, some movement, …
An SDR (dendrite) snaking through this stew will have multiple senses, features, and scalings available to form associations. The hex-grid that forms and connects to other hubs signals that some learned pattern of features is forming at this location in the association area. A hex-grid is agnostic as to what is in the pattern, just that it is at some location and extent on the map, and is signaled by a particular angle, size, and relative phase/starting node of the hex-grid coding.
Sorry, there is not much information around the internet about what slabs are. Are they only structures in V1, or are there slabs somewhere else?
Similar patterns can be seen in S1 as well. And in A1 there are binaural patches instead of slabs, but there is definitely interesting topology.
Thanks! Could you tell me how many neurons are usually in a slab (or at least order of magnitude)?
So I did the journal club yesterday, and I have to update my brain about some concepts I had wrong. I’m definitely thinking a bit differently after talking it out with Jeff and the research team. I’m working on updating my notes, and I’ll present my findings soon.
Additionally, it is clear that I did not understand @Bitking’s “well-shuffled” idea properly, so my apologies for that. I will re-read your post once my brain clears up and respond to it as well.
There are tons of papers referenced above that provide evidence of these structures, if you wanna read them like I did ;). But don’t worry, I’ll present my findings soon enough.
It’s not that simple. Some of these patterns look a lot like fingerprints, so some slabs will be very short while others span the complete region.