I don’t want to get stuck on this point - substitute “surface” for “manifold” if it helps. Or please explain how you think of it.
Feel free to think of a purely 3D object without mentally flattening it to 2D surfaces. I’ll wait.
I assume that you will have a great deal of trouble, as your mental machinery that represents space does so in 2D sheets connected together with bundles of connecting fibers.
The manifold that Matt brought up is the correct answer, but without some explanation it may not make any sense. These 2D maps/areas that make up the brain (currently thought to be about 100) each hold some part of the current mental processing. Taken as a whole, each contributes some sort of “this thing vs. that thing” distinction in the local representation in that part of the brain.
When you glue them all together, the resulting “manifold” is just the current state of all the areas/maps active in the brain at that moment, taken as a whole.
Each area can be thought of as a projection onto some dimension in mental space. Since they are all connected together in the current mental state, you can describe the whole as a multi-dimensional representation.
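To make that “glued together” picture concrete, here is a minimal sketch, assuming each area’s momentary activity can be summarized as a vector; the area names and sizes are purely illustrative, not anatomical:

```python
import numpy as np

# Each cortical area/map is summarized as an activity vector (one hypothetical
# slice of "mental space"); the brain's momentary state is the concatenation
# of all of them. As the areas' activities change, that joint state traces
# out a path on a manifold in a high-dimensional space.
rng = np.random.default_rng(0)

area_activity = {
    "V1": rng.random(32),   # e.g. low-level visual features
    "S1": rng.random(16),   # e.g. touch on one fingertip
    "A1": rng.random(8),    # e.g. a sliver of auditory input
}

mental_state = np.concatenate(list(area_activity.values()))
print(mental_state.shape)   # (56,) -> one point in a 56-dimensional state space
```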
I hope this helps.
The neuronal space is extremely high-dimensional. I think it is safe to say it is not restricted by dimensionality; it can represent input of virtually any dimensionality. I tend to think of the representations as being dimensionless. They can only be unrolled within the input space they came from, which had the dimensionality in the first place.
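As a rough illustration of that point (a toy sketch, not a claim about actual neural coding), a low-dimensional input can be embedded into a much higher-dimensional “neuronal” code by a fixed random projection; the intrinsic structure survives, and only the original input space gives the coordinates their meaning:

```python
import numpy as np

# Toy inputs: 3D points along a helix (an intrinsically 1D curve).
rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 200)
inputs_3d = np.stack([np.cos(t), np.sin(t), t / 10], axis=1)     # (200, 3)

# Embed into a 500-dimensional "neuronal" space with a fixed random projection.
projection = rng.normal(size=(3, 500)) / np.sqrt(500)
neural_code = inputs_3d @ projection                              # (200, 500)

# Pairwise distances are essentially preserved, so the helix's structure
# survives even though the code itself has no privileged "3 dimensions";
# you need the original input space to read the coordinates back out.
i, j = rng.integers(0, 200, size=(2, 1000))
d_in = np.linalg.norm(inputs_3d[i] - inputs_3d[j], axis=1)
d_code = np.linalg.norm(neural_code[i] - neural_code[j], axis=1)
print(np.corrcoef(d_in, d_code)[0, 1])   # very close to 1
```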
We are on the same page that objects can be thought of as 2D manifolds, but we can imagine them (and they are) embedded in 3D space. Therefore, to specify the location of a sensor / feature / object, it seems to me that it is necessary to encode its position in 3D space, isn’t it?
While a full 3D encoding is intellectually appealing, a 2.5D representation seems to be what is actually going on.
Until an object enters your personal space and you can manipulate it, it is a steradian/flat patch that is “in that direction and somewhere near or far away.”
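A toy data structure may make that 2.5D idea concrete; the fields and depth categories here are hypothetical, just one way to spell out “direction plus rough distance”:

```python
from dataclasses import dataclass

# Hypothetical egocentric "patch" record: outside of reaching distance an
# object is represented as a direction plus a coarse depth estimate and an
# apparent size, rather than as full 3D geometry.
@dataclass
class EgocentricPatch:
    azimuth_deg: float       # which way to turn the eyes/head
    elevation_deg: float     # how far up or down
    depth: str               # coarse: "near", "mid", "far"
    apparent_size_sr: float  # solid angle the patch subtends (steradians)

cup_across_the_room = EgocentricPatch(
    azimuth_deg=35.0, elevation_deg=-10.0, depth="far", apparent_size_sr=0.002
)
print(cup_across_the_room)
```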
Another observation that I have made related to this is that when I am exploring an object with my finger, for example, it feels as though I am projecting myself onto the finger and in essence “walking around” on the object, building or recognizing a model of it much like exploring a room and building a model of it. There is a sense of a position on the object and a sense of a heading, both of which change fluidly as I move my finger around in different directions.
To me, at least, the sensations of exploring a room and exploring an object are extremely similar, and the logical conclusion (to me) is that they must share the same mechanics behind the scenes. An “allocentric” space when exploring an object is essentially the same as an “egocentric” space when exploring a room. The switch between the two spaces, I would theorize, is at least in part a function of attention.
If this is true, then understanding how grid cells and head direction cells are used when navigating a room is directly applicable to how they are used when exploring an object. We had a discussion on this thread recently where some excellent references were given related to the nature of head direction cells when exploring a 3D room.
As an interesting side note, this would also explain how we are able to almost effortlessly control an avatar in a video game (compared to an RL algorithm, for example, which requires a ridiculous number of iterations) – we can essentially project ourselves onto the avatar and explore the game world as though we were actually in it.
You’re correct that the poster only showed how to represent 2D locations, and so more is needed to handle 3D objects. I wrote the poster to give people an intuition for how grid cells can create a large space of unique locations, and it was easiest to illustrate this in 2D, and then people can extrapolate to 3D. Biology may simply extrapolate to 3D, using 3D modules, or it may have found a way to use 2D modules to handle 3D.
And yes, during movement, the representation of a set of grid cell modules moves through a finite-dimensional manifold (2D in the poster). And to handle 3D objects, I agree that we probably need a manifold that is at least 3D, or we need some other trick.
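For anyone who wants to play with the intuition from the poster, here is a minimal sketch (not Numenta’s actual code) of why a few modules at different scales yield a large space of unique 2D locations; the scales and the coarse discretization below are made-up numbers:

```python
import numpy as np

# Each "module" represents 2D position only modulo its own scale (its tile),
# so a single module is ambiguous over a large area, but the combination of a
# few modules with different scales produces a unique joint code over a much
# larger area than any single module covers.
scales = [10.0, 13.0, 17.0]   # hypothetical tile sizes, arbitrary units
cells_per_axis = 5            # discretize each module's phase coarsely

def location_code(x, y):
    """Joint (phase_x, phase_y) code across all modules for a 2D location."""
    code = []
    for s in scales:
        px = int((x % s) / s * cells_per_axis)
        py = int((y % s) / s * cells_per_axis)
        code.append((px, py))
    return tuple(code)

# Count distinct codes over a 100 x 100 area sampled at unit steps: far more
# locations are distinguishable than any single 5 x 5 module could represent.
codes = {location_code(x, y)
         for x in np.arange(0.0, 100.0, 1.0)
         for y in np.arange(0.0, 100.0, 1.0)}
print(len(codes))   # thousands of unique codes from just three small modules
```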
From an actual implementation perspective, do the grid cell modules have a fixed size on their input space once they are initialized? How would they handle being ported to a larger environment with new locations? Do they have a mechanism for dynamically adjusting to different sized environments? For example, what exactly would happen if the room that your mouse cursor moves within suddenly became larger or smaller? How would the grid cells react?
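To make one part of that question concrete (again a toy, not how the actual grid cell modules are implemented): because each module encodes position modulo its scale, enlarging the room would not require resizing the module; positions beyond one tile simply wrap onto phases the module has already used, and disambiguation has to come from the combination of modules and from learned landmarks:

```python
# A single hypothetical module of scale 10: its phase wraps when the
# environment grows, so the module never needs re-initializing; it just
# becomes ambiguous on its own.
scale = 10.0

def module_phase(x, scale=scale):
    return x % scale

print(module_phase(3.0))    # 3.0 in the original small room
print(module_phase(13.0))   # 3.0 again after the room grows past one tile
print(module_phase(23.0))   # 3.0 again; only other modules/landmarks disambiguate
```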
So, maybe it makes more sense to think about the hippocampus and its grid cells as a compressed map of the neocortex, and the neocortex as the primary, more detailed map of the environment? Where both maps represent all levels of generalization / conceptualization. In that case, the primary spatial map would be the egocentric one, in the dorsal parietal “where” stream. Of course, there is another map of the neocortex in the thalamus; it’s probably organized differently.
Sorry if that’s covered somewhere.
That’s interesting - a map of a map. The brain also has an interpreter that tries to make sense of what the rest of the brain has done - it’s a kind of rationalizer after the fact.
My take is that the hippocampus remembers the key features of your experience: the most processed version of the day’s events. At night, during REM sleep, the contents of the hippocampus are replayed, reinforcing the memories of the day.
Some hippocampus factoids in no particular order:
- The hippocampus seems to hold about one day’s worth of experience before saturating. Symptoms of saturation include discomfort and hallucinations.
- From a systems perspective, the buffered events of the day allow the amygdala and other sub-cortical structures to add emotional weighting to the memories before consolidation during sleep cycles.
- It is likely that the hippocampus does good one-shot episodic learning, as opposed to slower Hebbian learning in the cortex. It is possible that sleep spindles power accelerated learning in the cortex. (A toy contrast between the two learning styles is sketched after this list.)
- The sleep cycles seem to normalize or reset the hippocampus for a new day’s learning. If the hippocampus is learning the “delta” between what is in the cortex and the hippocampus, I could see that part of the process is to “ring” both to test the response and drive learning to the cortex until they are in agreement.
- The claims that spatial processing is done in the hippocampus seem off - patients with damage to the hippocampus seem to be able to process space normally; they just can’t form new memories.
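As promised above, a toy contrast between the two learning styles; this is not a model of the hippocampus or the cortex, and the threshold and learning rate are made-up numbers. It only illustrates “store it once” versus “nudge synapses a little on every exposure”:

```python
# One-shot episodic learning: a single experience yields a usable memory.
episode = {"where": "kitchen", "what": "dropped a mug", "when": "breakfast"}
episodic_memory = [episode]          # stored verbatim after one exposure

# Slow Hebbian-style learning: a connection strengthens a little each time
# pre- and post-synaptic cells fire together, and only counts as "learned"
# once it crosses a (hypothetical) reliability threshold.
weight, learning_rate, threshold = 0.0, 0.05, 0.95
exposures = 0
while weight < threshold:
    weight += learning_rate * (1.0 - weight)   # saturating Hebbian-style update
    exposures += 1

print(len(episodic_memory), "episode stored after a single exposure")
print(exposures, "co-activations before the Hebbian synapse becomes reliable")
```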
Damage isn’t always clear (H.M. had some hippocampus remaining for example), and there might be redundant regions which haven’t been studied enough to know that yet. The fact that there are place cells convinces me that it does spatial processing, although it’s not necessarily essential.
Maybe the brain processes things that aren’t real locations as if they were locations. It might be equally valid to conceptualize the function of the hippocampus as encoding events as spatial relationships, or encoding spatial locations as events. It can move between locations by behavior, and you can recall events at different speeds than they occurred, moving between each part of the event as if moving through space.
I don’t really understand it, but in the podcast Jeff Hawkins talked about objects built up from smaller objects by linking them and being able to follow the links using attention. That’s pretty similar to memory.
Is this “remembering” in any way independent, or is it simply fresh memories in association cortices forming temporary connections in the hippocampus, for potential reinforcement?
It is likely that the hippocampus does good one-shot episodic learning, as opposed to slower Hebbian learning in the cortex. It is possible that sleep spindles power accelerated learning in the cortex.
Do these spindles also happen in the awake hippocampus, although less frequently?
If the hippocampus is learning the “delta” between what is in the cortex and the hippocampus, I could see that part of the process is to “ring” both to test the response and drive learning to the cortex until they are in agreement.
Isn’t this agreement between the cortex and the amygdala, with the hippocampus as a mediator?
The claims that spatial processing is done in the hippocampus seem off - patients with damage to the hippocampus seem to be able to process space normally; they just can’t form new memories.
Well, temporary hippocampal connections can also be with spatial memories in the cortex.
So, my “map” idea should be qualified in that this mapping is temporary.
Still, all association cortices must have tentative / potential connections with hippocampus, so it’s kind of a proto-map.
I don’t see it this way. There is a strong spatial component to what the hippocampus does. I think that the amygdala “colors” episodes but does not contain them.
This patient may help frame the behavior of the amygdala:
Note the preserved function which helps frame what the amygdala does and does not do.
This quora entry has some interesting pointers if you wish to explore further:
Thanks. So, I guess the hippocampus’s mapping to the cortex is indirect, via the thalamus.
At least outside of the EC and the medial temporal lobe.
As for intrinsic function, that seems to be a search for correlations between fresh memories.
Because they are conveniently localized in the hippocampus but widely distributed in the cortex.
This covers the spatial aspect too: the sources of short-term memories are likely spatially proximate.
I have not been able to locate the exact mechanism/neural pathway that feeds images to the amygdala (and, more importantly, parses them), but it does seem to be aware of shapes, and to a lesser degree places, in activating emotional responses.
In general, this post has contributing amygdala content:
and in particular this link speaks to my point:
This link ties the amygdala to the hippocampus and hopefully brings this post more in alignment with this thread:
https://pdfs.semanticscholar.org/2559/1f5d9fde558ce5cd7f49607ef87e48e6287f.pdf
I think you are talking about blindsight (Blindsight - Wikipedia), which probably works via the LGN or the superior colliculus (Superior colliculus - Wikipedia).
In general - yes.
I am very aware of sub-cortical visual awareness.
What I am less sure of is the perceptual mechanism. “We” have a pretty good idea of the general mechanism of the visual path and contents of the cortical visual stream.
I have not seen the same level of description and understanding of this sub-cortical visual perception.
This is important and related to grid cells in an indirect way.
As far as I am aware, nobody really knows how the visual, tactile, and somatosensory information is combined at the level of the hippocampus to form the response or activation pattern that is described in the Moser work. Whether a mouse navigates by vision or motion or whiskers, it all seems to end up forming the same responses to locations that have come to be known as grid and place cells.
Most of the work I have seen comes at this from the cortex side. @Gary_Gaulin put up a post today that analyzes the relationship between the grid and place cells, and if you read it, it would seem as if this is somehow happening in isolation from the rest of the brain.
My point in all this is that the cortex pathway seems to lead from the raw sensory areas through the WHAT and WHERE streams to the temporal lobe and on into the entorhinal cortex. This is a pathway that is full of learning and serial processing.
The sub-cortical structures seem to be shorter and more hardwired. They come to the hippocampus from the “other” direction and may well serve as a significant pathway to forming the place and grid patterns.
This is the bit that I have been trying to learn more about: the hardwired visual system in the sub-cortical structures.
We know it has built-in shapes and emotional coding for at least fear. We know that it can guide you around a room when you can’t see at all. It can guide your grasping when blind. It sees faces and expressions. There is good reason to think that it uses features to drive sexual attraction - sex-linked features that are thought to signal desirable traits in a potential mate. I’m sure I could name more, but this is just a random list. Yes - the male and female brains could be wired/programmed differently.
That’s a lot of lower-level processing, and it’s tied directly to the hippocampus. It sure would be nice to know more about the few dozen dense clusters of nerve cells that are doing all this.
My impression is that sub-cortical perception is much lower-quality, the only advantage is that it’s faster.