A proposal for how orientation cells learn the whole room by radially observing it from a single point

The problem is described at 38:03

@jhawkins @mrcslws

The orientation of an object is stored in relationship with the orientations of nearby objects, temporally, using displacement cells. At t=0 the orientation of one object inside the room is active, and at t=1 the orientation of another nearby object inside the same room is active. Displacement cells learn the orientation transitions between objects, the room, and their sub-objects.
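To make that temporal pairing concrete, here is a minimal sketch in Python. It assumes orientations can be reduced to plain angles in degrees and that a displacement is just their signed difference; `DisplacementMemory` and its method names are hypothetical, not anything from Numenta's code.

```python
# A minimal sketch of learning orientation transitions across time steps.
# Orientations are simplified to angles in degrees; a "displacement" is the
# signed angular difference between the orientation active at t=0 and at t=1.
# DisplacementMemory and its method names are hypothetical.

class DisplacementMemory:
    def __init__(self):
        self.transitions = {}  # (object_a, object_b) -> learned displacement

    def learn(self, obj_a, angle_a, obj_b, angle_b):
        # t=0: obj_a's orientation is active; t=1: obj_b's orientation is active.
        self.transitions[(obj_a, obj_b)] = (angle_b - angle_a) % 360

    def predict(self, obj_a, angle_a, obj_b):
        # Given obj_a's current orientation, predict obj_b's orientation.
        return (angle_a + self.transitions[(obj_a, obj_b)]) % 360

mem = DisplacementMemory()
mem.learn("chair", 30.0, "table", 120.0)    # attend chair, then table
print(mem.predict("chair", 75.0, "table"))  # 165.0 -- the relation is preserved
```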

Location: Displacement cells may represent two different grid cell module activations corresponding to the same physical point in space. Each location is unique to the object and the room.

Orientation: Displacement cells may represent two different grid cell module activations corresponding to the same physical orientation in the room. Each orientation is unique to the object and the room.
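One way to picture the representation above, as a sketch only: reduce each grid cell module activation to a 2D phase on a unit torus, and let a displacement cell module encode the phase difference between two activations. The numbers and frame names below are made up, but they show why the displacement is unique to the object/room pair: it comes out the same no matter which physical point is sensed.

```python
import numpy as np

# Each grid cell module activation is reduced to a 2D phase on a unit torus.
# A displacement cell module encodes the phase difference between two
# activations. If an object sits at a fixed offset inside the room, the
# displacement between the object-frame phase and the room-frame phase is the
# same for every sensed point, so it identifies the object/room relationship.

def displacement(phase_a, phase_b):
    return (np.asarray(phase_b) - np.asarray(phase_a)) % 1.0

offset = np.array([0.3, 0.1])           # object's anchoring offset in the room
for point in [np.array([0.2, 0.7]), np.array([0.9, 0.4])]:
    room_phase = point % 1.0            # phase of the point in the room's frame
    obj_phase = (point - offset) % 1.0  # phase of the same point, object frame
    print(displacement(obj_phase, room_phase))  # [0.3 0.1] both times
```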

Below, the problem is illustrated in figures. In each figure the observer leaves the room and comes back in, standing at a different location each time; the room stays the same.

Figure A

Two examples of orientation relationships. (1) is between the sub-objects of the object and (2) is between a sub-object and the room.

Figure B

The second time you open your eyes you are able to recognize the same objects from a different location, because their orientations relative to each other (and the room) and the orientations of their sub-objects relative to each other stay the same (see the sketch after the list below).

  • Orientation relationship of the head/eyes relative to the object changes.
  • Orientation relationships of the objects relative to each other (and the room) stay the same.
  • Orientation relationships of their sub-objects stay the same.
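Here is a minimal sketch of that invariance, with orientations reduced to world-frame angles in degrees (the objects and numbers are made up). Moving the observer shifts every head/eye-relative angle by the same amount, so pairwise differences are unchanged:

```python
# Figure B as arithmetic: two visits with different observer headings.
# Every egocentric angle shifts by the observer's heading, so the pairwise
# orientation relationships between objects are identical on both visits.

objects = {"chair": 30.0, "table": 120.0, "lamp": 200.0}  # world-frame angles

def relative(angles, a, b):
    return (angles[b] - angles[a]) % 360

for observer_heading in (0.0, 90.0):  # two visits, facing different ways
    seen = {k: (v - observer_heading) % 360 for k, v in objects.items()}
    print(relative(seen, "chair", "table"),  # 90.0 on both visits
          relative(seen, "table", "lamp"))   # 80.0 on both visits
```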

Figure C

Even if the objects had been rotated before you entered the same room at a different location, you'd still be able to recognize them, because the orientation relationships of their sub-objects (which had been partially observed previously) stay the same (see the sketch after the list below).

  • Orientation relationship of the head/eyes relative to the object changes.
  • Orientation relationships of the objects relative to each other (and the room) change.
  • Orientation relationships of their sub-objects stay the same.
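Extending the same arithmetic to Figure C: rotating one object changes its relationship to the other objects, but its sub-objects rotate together, so their mutual relationships are preserved (angles again made up):

```python
# Figure C as arithmetic: the table is rotated between the two visits.
# Its relation to the chair changes, but its legs rotate together, so the
# leg-to-leg orientation relationship stays constant and still identifies it.

table_legs = {"leg1": 120.0, "leg2": 210.0}  # sub-object orientations
chair = 30.0

for table_rotation in (0.0, 45.0):           # table rotated before visit two
    legs = {k: (v + table_rotation) % 360 for k, v in table_legs.items()}
    table_to_chair = (legs["leg1"] - chair) % 360     # changes: 90.0 -> 135.0
    leg_to_leg = (legs["leg2"] - legs["leg1"]) % 360  # stays 90.0
    print(table_to_chair, leg_to_leg)
```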

In displacement cell modules that deal with location, the association between two objects being at the same physical point in the room occurs in two separate time steps. The way I see this is like a frame on a 60 Hz screen: we perceive a new frame every 16.67 ms, when in fact drawing it takes slightly longer, because every pixel in the frame has to be colored and that can't happen instantaneously. Similarly, a moment requires more than a single time step to fully draw the picture of the location and orientation of objects.
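As a toy illustration of that last point, assuming a "moment" is just the union of activations accumulated over a short window (the class and window size are hypothetical):

```python
# A "moment" drawn over several time steps, like a frame rasterized pixel by
# pixel: activations arriving at t=0 and t=1 are accumulated into one snapshot
# before any association between them is read out.

class Moment:
    def __init__(self, window=2):
        self.window = window
        self.activations = []

    def step(self, activation):
        self.activations.append(activation)
        if len(self.activations) == self.window:
            snapshot = list(self.activations)  # the fully "drawn" moment
            self.activations.clear()
            return snapshot
        return None  # moment still being drawn

m = Moment()
print(m.step("grid phase of object A"))  # None -- the moment is half-drawn
print(m.step("grid phase of object B"))  # both activations as one moment
```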

I’m not sure if I’ve made any progress on this.
I’d like to hear your thoughts.


Thanks for the summary, it definitely helps me understand a bit.

This still seems like SIFT or ORB to me, which would probably really improve neural-network image recognizers if anyone encoded those algorithms into a neural network.
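For reference, here is roughly what I mean, using OpenCV's ORB (the image path is a placeholder): each keypoint carries its own orientation, which is what makes the descriptors rotation-invariant, loosely like the sub-object orientation relationships above.

```python
import cv2

# ORB assigns every keypoint its own orientation (kp.angle, in degrees) and
# computes the descriptor relative to it -- that is what makes matching
# rotation-invariant. "room.png" is a placeholder for any test image.
img = cv2.imread("room.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "provide any grayscale test image"

orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)
for kp in keypoints[:5]:
    print(f"pos=({kp.pt[0]:.0f}, {kp.pt[1]:.0f})  angle={kp.angle:.1f}")
```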

However, I think it’s missing an important point in that some features need to be ignored, or else you get a jumbled mess of everything relating to everything. That may seem obvious, but how you do that is very important in terms of designing something. Perhaps it’s related to saliency?

Also, would any of this be done in one neocortex region, like V1? I thought you'd need a barrel cortex for that. Or does it take more regions?