How can head-direction cells represent all possible orientations when they have preferred directions?

Since there are infinitely many directions, there should be infinitely many head-direction cells, each corresponding to a different preferred direction. They are not tiled to represent an infinite space the way grid cells are. What am I missing?

If you consider the orientation sensitivity of V1 cells, you will note that each group of cells is sensitive to a range of angles, not a single angle.
Without looking it up, it makes sense that the same principle applies here.

1 Like

Grid cell modules* represent unique, precise locations; that’s why it makes sense to include them in computations with displacement cells.

Head-direction cells would be ambiguous if they represented a range of orientations instead of a precise angle. It wouldn’t make sense to use them to compute precise predictions. I wouldn’t be able to tell exactly how much I would need to turn my head in order to see the object behind me.

How is this ambiguity resolved?

Edit: Grid cell modules* represent unique precise locations.

It depends on how exact you think neural things are. I see them as populations that form a distributed representation. More of a statistical thing.
If you look at the Moser video of place cells, the firing was over a fuzzy range of locations at each of the grid locations. That was for a single cell being monitored. There are thousands of these cells with different tunings, and their collective action gives a much more precise representation.

1 Like

What is the mechanism to store precise orientations?

If the orientation is 50 degrees and you have 2 cells, each covering a range such as R1=[30, 55] and R2=[45, 67], it would make sense that both would be activated, but the sum of their activations doesn’t make it precise.
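To make the concern concrete, here is a rough Python sketch (my own illustration, using the hypothetical ranges above) showing that the overlap of two coarse cells is still a wide band:

def active(cell_range, angle):
    lo, hi = cell_range
    return lo <= angle <= hi

# The two hypothetical range cells from the example above, in degrees.
R1 = (30, 55)
R2 = (45, 67)

angle = 50
print(active(R1, angle), active(R2, angle))       # True True: both fire

# The best guess from two active cells is the overlap of their ranges,
# which is still a 10-degree band rather than a precise angle.
overlap = (max(R1[0], R2[0]), min(R1[1], R2[1]))
print(overlap)                                     # (45, 55)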

If you look at the Moser video of place cells

Head-direction cells are different?

Why should they be? The HC/EC is full of various spatial representation cells:
border, head direction, place, vector to objects, distance to objects.
bioRxiv is full of papers describing cells with various specialized properties.

See the Moser video embedded in this HTM School video. It also shows cells with different receptive fields voting to make a better estimate of some value.

See vector cells here and see my take on how grid cells vote to form place cells in the thread above it. Look at the range of angle sensitivities listed in this paper. They are certainly not exact values.

I see the sensed local patterns in the local group of cells as the raw ingredients for the various forms of representation. Connectivity is key here.

Here is where I differ from the Numenta canon: Numenta places the grid cells at the level of the single column; I place them in groupings of columns. I see that my take does fit with what I have been reading, but Numenta is working on how the TBT will do the same thing. I will wait to see if they can seduce me away from my viewpoint.

Can you imagine a 1D grid cell module? It must have some input range before the period cycles. This is the orientation dimension, or head direction dimension.
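Here is a minimal sketch of a 1D grid cell module over the orientation dimension (my own illustration; the 10 cells and 36° period are placeholder values, not anything from Numenta):

N_CELLS = 10        # cells in this one module (placeholder value)
PERIOD = 36.0       # degrees of heading before the firing pattern repeats

def active_cell(heading_deg):
    # The position within the current period determines which cell fires.
    phase = heading_deg % PERIOD
    return int(phase / (PERIOD / N_CELLS))

# One period later the same cell fires again: a single module only knows
# the heading modulo its period, just like a spatial grid cell module.
print(active_cell(50), active_cell(50 + PERIOD))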

4 Likes

It must have some input range before the period cycles.

Can you explain this a bit more?

Do you mean a grid cell module in a set of modules, where each one represents a different scale, like the ones in HTM School Episode 14?

The Moser grid cells turn on and off as the mouse moves. If there is some scale to this, it is the period between on and off as the mouse moves. If the mouse moves in a straight line, then the distance between the spots is one complete cycle.

There must be some mechanism that produces this pattern; it must have an input, and it must have some range of operation. It is thought to be the sum of arrays of repeating receptive fields of different sizes. The sum-and-difference (interference) pattern produces this spatial period of activation.
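A toy sketch of that sum-and-difference idea (my own illustration; the field periods of 3 and 4 units are arbitrary): summing two repeating receptive fields of different sizes gives a combined response whose strongest peaks repeat over a longer spatial period.

import math

def field(x, period):
    # A simple repeating receptive field that peaks once per period.
    return math.cos(2 * math.pi * x / period)

# Two fields with different periods only line up again at their least
# common multiple (12 units here), giving a longer combined period.
for x in range(0, 25):
    combined = field(x, 3) + field(x, 4)
    print(x, round(combined, 2))   # strongest peaks at x = 0, 12, 24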

3 Likes

Ok, now consider that for the same 50° orientation you have hundreds of range cells:

(range over 1°)
R0=[49, 50] → active
R1=[50, 51] → active
→ 358 other 1°-range cells that are not active for this orientation
(range over 2°)
R360=[48,50] → active
R361=[49,51] → active
R362=[50,52] active
→ 357 other 2°-range cells that are not active for this orientation
(range over 10°)
R2000=[41,50] → active
R2001=[42,51] → active
R2002=[43,52] → active
→ 7 other 10°-range cells that are active for this orientation
→ 350 other 10°-range cells that are not active for this orientation
etc

A relatively low number of cells are active at the same time for the 50° orientation, while an order of magnitude more cells are inactive. Together these encode one very specific, roughly 50° orientation, even though most of the active cells individually are not very precise.
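A quick Python sketch of that counting argument (my own code; the exact counts depend on whether the range endpoints count as inside a cell):

heading = 50

for width in (1, 2, 10):
    # One range cell per 1-degree start offset; wrap-around at 360 is
    # ignored here to keep the sketch short.
    active = [s for s in range(360 - width) if s <= heading <= s + width]
    # Each active cell is imprecise, but the heading has to lie in the
    # overlap of all of them, which pins it down exactly.
    lo, hi = max(active), min(active) + width
    print(f"width {width:>2}: {len(active)} cells active, overlap = [{lo}, {hi}]")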

2 Likes

I can’t understand it. Why use a 49-50 and a 48-50 cell at the same time? It doesn’t resolve ambiguity. I’m waiting for your response @rhyolight

Please see figure 2 in this paper to see how the voting mechanism works to form a precise estimate from low accuracy cells.

This is the paper that explains this figure.

4 Likes

Perhaps it would be best to think of this in terms of the “Wisdom of the Crowds”. Our senses are not very accurate instruments. We do not come with GPS or magnetic compasses built in. All we have to work with are noisy proprioceptive and vestibular systems coupled with our learned spatial interpretation of the temporal patterns arriving at our eyes, ears, and skin. No single input provides an exact answer to anything. However, they all have a piece of the puzzle, and the specific combination of all of those pieces provides enough information for our brains to generate a very precise representation of our position and orientation in space with respect to the environment around us.

As for the mechanism responsible for this… Well, that’s what we are trying to figure out. Ila Fiete and her colleagues have made some good progress in proposing some compelling models inspired by their observations of grid cells in lab animals.

This discussion brings to mind another technique, known as a Hough Transform, that’s been used in computer vision for some time now. The Hough Transform is used to find the position and orientation of linear, elliptical, and other parametrically defined features in images. The algorithm works by examining the strength and direction of the local gradient at each point and then computing all potential lines, ellipses, etc. that could both pass through that point and produce the observed gradient. These potential features are registered in a sort of histogram over the parameter space of all lines, ellipses, etc. The process is repeated again and again for different regions of the image until eventually the histogram has built up a handful of peaks which correspond (in parameter space) to the parameters of the lines or ellipses that are present in the image.

While this example is not specifically related to what is happening in our brains, it does illustrate how it is possible to arrive at very precise descriptions of fairly complex features using only imprecise measurements of local information from many sensors.
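For the curious, here is a bare-bones Python sketch of the line-finding version (standard textbook Hough voting, nothing brain-specific): every point casts many imprecise votes, and the cell in parameter space that collects the most votes names the line.

import math
from collections import Counter

# A few points lying on the line y = x, plus one outlier.
points = [(0, 0), (1, 1), (2, 2), (3, 3), (5, 1)]

accumulator = Counter()
for x, y in points:
    for theta_deg in range(180):
        theta = math.radians(theta_deg)
        rho = round(x * math.cos(theta) + y * math.sin(theta))
        accumulator[(rho, theta_deg)] += 1   # one coarse vote per point

# The most-voted cell corresponds to the line most points agree on.
(rho, theta_deg), votes = accumulator.most_common(1)[0]
print(f"best line: rho = {rho}, theta = {theta_deg} deg, votes = {votes}")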

5 Likes

I’m not sure how to better explain it. You must be missing something in your mental model. Keep in mind that there must be many head direction modules in order to identify a unique orientation, just like grid cells.

1 Like

Your statement seems to imply intent. Don’t forget that our brain evolved to the state it is in. Researchers didn’t choose to have ambiguous signals; it just happens to be a principle that is found in different places in the brain. But it also tends to produce a robust system that seems to work very well.

This is (as far as I know) still very speculative of course. Numenta and others are trying to make sense of it, mostly by simulating and testing.

If I read your question another way, then consider that those two cells you mention for the 50° orientation are partly shared with the inputs of two other orientations (48° and 49°). This is necessary to produce a sparse data representation where each bit has semantic meaning. The 50° state has common inputs with other orientations that the 48° and 49° orientations do not. That’s what resolves the ambiguity.
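A rough sketch of that overlap argument (my own code, reusing the range-cell idea from earlier in the thread): nearby orientations share many active bits, but each also has bits the others lack, and those are what disambiguate.

def sdr(angle, widths=(1, 2, 10)):
    # The set of (width, start) range cells active for an angle.
    bits = set()
    for w in widths:
        bits |= {(w, s) for s in range(360 - w) if s <= angle <= s + w}
    return bits

a48, a49, a50 = sdr(48), sdr(49), sdr(50)
print(len(a50 & a49), "bits shared between 50 and 49 degrees")
print(len(a50 & a48), "bits shared between 50 and 48 degrees")
print(len(a50 - a49 - a48), "bits unique to 50 degrees")   # these resolve the ambiguity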

2 Likes

How can head-direction cells represent all possible orientations when they have preferred directions?

The way I conceptualize this is: a population of cells that each fire at a preferred angle would have a bit pattern that changes in a predictable order and repeats itself after going 360 degrees around. If that bit pattern is included in memory addressing of any kind, then there are separate memories depending on the angle the critter was at when something out of the ordinary, like bashing into a wall, occurred.
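A loose sketch of that idea (my own toy code, not from any published model): fold a repeating head-direction bit pattern into a memory key, and the same event gets stored separately per heading, while headings 360 degrees apart map to the same memory.

def hd_bits(angle, n_cells=8):
    # Which of n_cells broadly tuned cells fire for this heading
    # (45-degree bins; neighbors also fire to mimic broad tuning).
    center = int((angle % 360) // (360 / n_cells))
    return frozenset({(center - 1) % n_cells, center, (center + 1) % n_cells})

memories = {}
memories[("hit wall", hd_bits(50))] = "wall ahead when facing ~50 deg"
memories[("hit wall", hd_bits(230))] = "wall ahead when facing ~230 deg"

print(memories.get(("hit wall", hd_bits(55))))    # same bins as 50 deg -> recalled
print(memories.get(("hit wall", hd_bits(410))))   # 410 = 50 + 360 -> same memory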

I found this interesting information and a model for where the HD signal originates:

3 Likes

The 50° state has common inputs with other orientations that the 48° and 49° orientations do not. That’s what resolves the ambiguity.

@Falco You are right. The single 50° angle should emerge as a common feature rather than a single activation of a cell representing a single value.

Location: In a 1D grid cell location module a period is a finite space (because it’s represented by a finite number of cells) that repeats to cover all space.

Orientation: In a 1D grid cell orientation module a period is a finite angle that repeats to cover all 360°.

Let’s say I’m trying to build a grid cell orientation region with 10 modules, each having 10 cells.

R: Region
O: Orientation module.
C: Cell inside the module.

R = [O1, O2, ..., O10]

O1 = [C1.1, C1.2, ..., C1.10]
O2 = [C2.1, C2.2, ..., C2.10]
... 
O10 = [C10.1, C10.2, ..., C10.10]

For the orientation module O1, the period that cycles is 10° to cover 360°. Each cell represents a change by 1°.

C1.1 = [0°-1°, 10°-11°, ..., 350°-351°]
C1.2 = [1°-2°, 11°-12°, ..., 351°-352°]
...
C1.10 = [9°-10°, 19°-20°, ..., 359°-360°]

For the orientation module O2, the period that cycles is 20° to cover 360°. Each cell represents a change by 2°.

C2.1 = [0°-2°, 20°-22°, ..., 340°-342°]
C2.2 = [2°-4°, 22°-24°, ..., 342°-344°]
...
C2.10 = [18°-20°, 38°-40°, ..., 358°-360°]

The 50° angle will have a cell active in each 10-cell module.

RegionSDR = 10/100 (10% sparsity, or maybe more than 1 cell active per module?)
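Here is the same interpretation as a small Python sketch (my own code; I’m extrapolating the period of module k as 10 × k degrees from the O1/O2 pattern above):

N_MODULES = 10
CELLS_PER_MODULE = 10

def region_sdr(angle):
    # Index of the single active cell in each module for a heading.
    active = []
    for k in range(1, N_MODULES + 1):
        period = 10 * k                          # module k cycles every 10*k degrees
        cell_width = period / CELLS_PER_MODULE   # so each cell spans k degrees
        active.append(int((angle % period) // cell_width))
    return active

print(region_sdr(50))                      # one active cell per module: 10/100 bits
print(region_sdr(50) == region_sdr(51))    # False: nearby headings differ somewhere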

@rhyolight Is my interpretation correct?

1 Like

Close. The scaling of the modules is not in multiples of the smallest unit; they seem to increase by a ratio closer to 1:1.4. The exact ratio is under some debate.
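So instead of periods like 10°, 20°, 30°, ..., something closer to a geometric series (the 10° base here is just a placeholder value):

base, ratio = 10.0, 1.4
periods = [round(base * ratio ** k, 1) for k in range(5)]
print(periods)   # [10.0, 14.0, 19.6, 27.4, 38.4] -- geometric, not arithmetic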

Please check out these papers:


2 Likes

Nice, thanks!

@Bitking Is there a similar scaling for grid cell modules that deal with location?

1 Like

Yes, this seems to be the rule in all grid modules.
I have a collection of papers on this here:

2 Likes