Grid cells and void spaces

Hi all,

I’ve tried to keep up with all the great discussions on grid cells recently, but if what I’m asking here has already been covered, please link me. Also, as this is active research, I welcome the usual high-quality speculation I’ve come to expect from this forum :stuck_out_tongue:

So, I think I understand how locations are unique to each environment; the discussion is usually framed around the presence of objects (coffee cups, pens, and staplers).

I’ve been pondering how the lack of an object is represented in grid cell theory and the Thousand Brains model. Take the scenario of reaching into a cardboard box to touch a coffee cup (assume a large box): the question is what’s happening as your hand searches the empty space for the cup. If you sweep the area systematically, your brain must be constructing a map containing no objects, so if you revisit old empty space, your brain will expect your hand to touch nothing.

Is it believed that the empty spaces in the box are learned as models by cortical columns, the same as the coffee cup is?


If speculation is allowed…

As far as I understand, features of objects are stored as pairs: one piece of sensory information (a bump, a rough patch, a red spot, a certain sound when scratched or hit, …) together with a kind of geolocation information (the result of a combination of grid cell modules).
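
To make the pairing idea concrete, here’s a minimal sketch in Python. Everything in it (GridLocation, Feature, the phase tuples) is my own illustration, not taken from any actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GridLocation:
    """A location code: one phase value per grid cell module."""
    phases: tuple  # e.g. (3, 7, 1) -- one value per module

@dataclass(frozen=True)
class Feature:
    """A sensed feature, e.g. 'smooth', 'bump', 'rough patch'."""
    name: str

# An object model is simply the set of (feature, location) pairs
# learned while exploring the object.
cup = {
    (Feature("smooth"), GridLocation((0, 2, 5))),
    (Feature("rim edge"), GridLocation((1, 4, 0))),
    (Feature("handle bump"), GridLocation((3, 1, 2))),
}
```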

If you touch a cup you know well on a smooth curved surface, all the pairs in your brain that match smooth curved surfaces at all the known locations light up. The more you touch, the fewer pairs remain. Eventually your brain has identified the cup, and also the location on the cup that is currently being sensed.
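
That “pairs light up, fewer remain” process can be read as narrowing a set of candidate hypotheses by intersection. Here’s a toy version, deliberately ignoring the anchoring and path-integration details the real theory handles with grid cells:

```python
def recognize(observations, object_library):
    """Narrow down candidate objects by elimination.

    observations: list of (feature, location) tuples sensed so far.
    object_library: dict of object name -> set of (feature, location) pairs.
    """
    candidates = set(object_library)
    for pair in observations:
        # Keep only the objects that contain this feature at this location.
        candidates = {name for name in candidates
                      if pair in object_library[name]}
        if len(candidates) <= 1:
            break  # identified, or no known object matches
    return candidates
```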

Your brain makes predictions of what features lie at locations relative to the one you touch, but it also expects nothing at certain places. For instance, there is a little 3D space of nothingness on the inside of the handle of the cup, and another, bigger space of nothingness inside the cup itself. I would expect your brain to store those absences of features as part of the model of the cup you know.

When you leave the rim of the cup and continue to move your finger through the nothingness, your brain predicts that at a certain distance your finger will encounter the other side of the rim. So while you move through nothingness, your brain still needs a model of empty space, paired with geolocation information through grid cells.
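
If that’s right, a model that stores absences explicitly can distinguish “explored, nothing there” from “never explored”. A speculative sketch (the EMPTY sentinel and the simplified 2D locations are my invention, purely for illustration):

```python
EMPTY = "empty"  # hypothetical sentinel: explored and found nothing

# Locations simplified to 2D tuples instead of grid cell codes.
cup_model = {
    (0, 0): "smooth surface",
    (5, 0): "rim edge",
    (2, 3): EMPTY,  # inside the handle: explored, nothing there
}

def predict(model, location):
    if location not in model:
        return "no prediction"  # never explored
    return model[location]      # a feature, or an explicit EMPTY
```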

A question I struggle with is how much of this empty space needs to be stored. But I just realised that this might simply be a function of how well you know the object. If I’m handed a coffee cup that happens to have four handles, I would probably recognise it quite fast as a cup, even though lots of the information is surprising and different from the model in my brain. I know coffee cups, but I don’t know this particular cup yet, so I need to extend the model.

So if I spend a lot of time exploring all the absences around objects, my brain will start storing more information about this empty space. Therefore I speculate that visually impaired people have vastly more models of empty space than I have, since empty space is so important for their mobility.


Hmm, this doesn’t make sense. Empty space is just as important for my mobility as it is for visually impaired people’s. My brain just processes it using other sensory input. So the difference is merely where this information is stored.


I’ll speculate as well! I believe when you sweep your hand through that space, you’re simultaneously modelling two spaces. One is an allocentric space where you can represent any object you’ve ever learned. The other is an egocentric space that represents YOU in your environment and the locations of every object around you (the features of your environment).

When you sweep your hand, you are searching your environment and trying to match sensory input against a library of objects. If you touch nothing, you never represent anything in the object space. As soon as you sense something in your sweep, you cross-reference it with your library of objects, find the best match, and intersect it with your egocentric space to represent that object in space with YOU.
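
A sketch of how I picture that (the function names and the crude “best match” shortcut are mine; the real matching would be the elimination process across columns described above):

```python
def sweep(hand_positions, sense, object_library, egocentric_map):
    """Sweep through egocentric positions, representing objects only
    when something is actually sensed."""
    for pos in hand_positions:
        feature = sense(pos)  # returns None in empty space
        if feature is None:
            continue          # nothing sensed -> nothing represented
        # Something was touched: consult the object library and anchor
        # the best match at this egocentric position.
        matches = [name for name, features in object_library.items()
                   if feature in features]
        if matches:
            egocentric_map[pos] = matches[0]  # stand-in for "best match"
    return egocentric_map
```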


Thanks @Falco and @rhyolight for the responses!

The idea that empty space has no representation in the object space certainly seems like the most efficient solution. For this to work, there must be some fundamental mechanism that can separate sensory input that corresponds to objects worth storing from sensory input that represents nothing - I’ll see what Google turns up for me on the neuroscience front.

Similarly, I’m still curious about what generates the sensory prediction that corresponds to empty space (i.e. your hand is about to feel “nothing”). Unless your hand is in a complete vacuum, there’s always an environmental component to the prediction (wind resistance, temperature). Or, to pick an extreme case, if the coffee cup were in a tub of water you could be searching another kind of empty space, but with an entirely different sensory prediction.
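
Put differently, a prediction of “nothing” is really a prediction of the medium’s baseline signal. A toy illustration of that point (the media and values are made up):

```python
# Hypothetical baseline sensations for "touching nothing" in a medium.
BASELINE = {
    "air":   {"resistance": "low",  "temperature": "ambient"},
    "water": {"resistance": "high", "temperature": "cool"},
}

def predict_empty_space(medium):
    """What should 'nothing' feel like in this medium?"""
    return BASELINE.get(medium, {"resistance": "unknown",
                                 "temperature": "unknown"})
```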


Attention? We certainly can tune into one object in our sensory field. I think learning requires attention.



Couldn’t it also be that attention defines whether you’re scanning your egocentric space or the allocentric space of a specific object?

When you’re fishing for an object in the box, you’re scanning your egocentric space. Everything there is supposed to be empty space, until you surprise yourself with something.

When you touch an object you know, or even when you imagine touching a familiar object, you concentrate on what its features are, and the attention is no longer on empty space. Unless the empty space is part of the object (like the hole in the handle of a cup).

One of the strange things about consciousness is that it seems we can only concentrate on one thing at a time. We can switch very quickly, but always from one thing to one other. Maybe this is related to grid cell projection, as in limiting the spatial representation to be compared.
