Understanding displacement modules in Layer 5

Before asking for your help in understanding how displacement modules work, I will try to analyze what happens in the example shown in this recent slide from the CNS 2018 event.

  1. A sensation (an edge) arrives in Layer 4, activating all columns that correspond to this sensation in association with the orientation and allocentric location of the sensor, represented in Layer 6a.

Recognizes: Both red & blue objects because at that location/orientation the sensor recognizes the exact same feature f1 (edge @ location/orientation).

  2. The sensor changes its allocentric location and orientation in Layer 6a to a point where the two objects have different features. The associative memory then predicts the corresponding features in Layer 4: it has been learned/predicted that the red object has an edge @ that location/orientation (f2), while the blue object has a curved line @ that location/orientation (f3).

Recognizes: Both red & blue objects because even though the sensor has moved to a location/orientation where the features differ (f2 vs. f3), which would be enough to make the distinction, the sensor hasn’t yet sensed which feature is actually there.

  3. A unique sensation f2 makes the actual recognition by activating the corresponding columns in Layer 4 and the associated cells in Layer 6a.

Recognizes: Red object

  4. The sensor moves back to the initial location/orientation, where both objects share the same feature f1, but since the ambiguity has already been resolved, this time Layer 4 predicts only the cells that correspond to the red object.

Recognizes: Red object
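The narrowing across the four steps can be sketched as a toy simulation. This is my own minimal simplification, not the actual HTM circuitry: each object is a plain dictionary from a pose to a feature, `sense` plays the role of Layer 4 intersecting predictions with the sensed input, and the pose names (`poseA`, `poseB`) are made up.

```python
# Toy model of the four steps above (a simplification, not HTM itself).
# f1/f2/f3 follow the naming in the steps; poseA/poseB are hypothetical.
red  = {"poseA": "f1", "poseB": "f2"}   # edge at A, edge at B
blue = {"poseA": "f1", "poseB": "f3"}   # edge at A, curved line at B
objects = {"red": red, "blue": blue}

def sense(candidates, pose, feature):
    """Keep only the candidates consistent with sensing `feature` at `pose`."""
    return {name for name in candidates if objects[name].get(pose) == feature}

candidates = set(objects)                      # initially everything is possible
candidates = sense(candidates, "poseA", "f1")  # step 1: both match -> ambiguous
print(sorted(candidates))                      # ['blue', 'red']
candidates = sense(candidates, "poseB", "f2")  # step 3: only red has f2 there
print(sorted(candidates))                      # ['red']
candidates = sense(candidates, "poseA", "f1")  # step 4: ambiguity stays resolved
print(sorted(candidates))                      # ['red']
```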

In this poster, displacement modules provide an additional step for recognition. What exactly do they do? How do they extend this step-by-step frame of understanding?


Hi @nick! Without the displacement modules, the network can learn a set of unique objects and recognize objects that have been previously learned.

Displacements add the following:

  • They can represent compositions of objects, which capture the spatial relationships of sub-components without relearning the subcomponents. Without displacements, a composition would have to be learned as a new object and the subcomponents would have to be fully relearned as a part of the whole.
  • Displacements can represent novel configurations immediately, before you have had a chance to learn them.
  • Displacements provide generalization because the representations of two environments or objects that share sub-components will have overlapping sets of displacements.
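The generalization bullet can be illustrated with a toy calculation (my own simplification, not Numenta code): model each sub-component's location as a 2-D point and a displacement as the vector between a pair of locations. Two arrangements that share local structure then share displacements regardless of where they sit in absolute coordinates.

```python
# Sketch of the generalization property (a simplification): a "displacement"
# here is just the vector between a pair of sub-component locations.
def displacements(locations):
    """Set of offset vectors between every ordered pair of distinct locations."""
    return {(b[0] - a[0], b[1] - a[1])
            for a in locations for b in locations if a != b}

scene1 = [(0, 0), (2, 0), (2, 1)]      # an L-shaped arrangement at the origin
scene2 = [(10, 5), (12, 5), (12, 6)]   # the same arrangement, shifted
scene3 = [(0, 0), (5, 5), (1, 7)]      # an unrelated arrangement

print(displacements(scene1) == displacements(scene2))  # True: full overlap
print(displacements(scene1) & displacements(scene3))   # set(): no overlap
```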

One of the most compelling aspects to me is the notion of similarity. For a while, I struggled to figure out how two objects could be identified as having similar subcomponents, even if the subcomponents weren’t in the same relative position on the object. How can you “line up” two different object or environment representations so that their shared parts are overlapping? I couldn’t get it to work. Displacements capture these local configurations independently of their location and orientation, which provides a nice form of generalization.

Let me know if you have questions.

One thing to note is that Marcus originally came up with the idea of displacements (then called transforms) as learning the relative position between representations in different populations of cells. So in addition to the layers in the circuit above, there would be another location layer for “composite objects.” The displacement would encode the relationship between an object’s position and its position in the composite-object space. This is still an interesting idea with some nice properties (composites without relearning), but different from my variant, which encodes relationships between location representations in the same population. It requires fewer transforms (just one per sub-object rather than one for every combination of pairs of sub-objects) but requires learning and doesn’t have the generalization property. Perhaps both are used.

Edit: You also asked how they work!

Displacements encode the relative positions of two grid cells. In the image you showed, there is one displacement module for each grid module. There is a displacement cell for each possible relative position of two grid cells in one module (e.g. “up 1 + right 1”, or “down 2”). While the displacement cell in a single module is very ambiguous (just like the location in one grid module), the set of displacement cells across all the modules is unique to both the objects and their relative positions. The connections between the grid cells and displacement cells can be hard-coded or learned up front and then don’t need to change - the connections work for any objects! We will have more rigorous descriptions of these mechanisms published at some point.
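The modular arithmetic described here can be sketched under simplifying assumptions: each grid module below encodes a 1-D phase modulo its size (real grid modules are 2-D), and the module sizes are made up. The sketch shows the two key properties: the displacement code depends only on the offset between two locations, not on where they sit, and a single module is ambiguous while the combination across modules is far less so.

```python
# Sketch of per-module displacement coding (1-D simplification; sizes made up).
MODULE_SIZES = [5, 7, 11]   # coprime sizes -> combined code is unique up to 385

def encode(x):
    """Grid-cell phases for an absolute position x, one phase per module."""
    return tuple(x % n for n in MODULE_SIZES)

def displacement_code(loc_a, loc_b):
    """Active displacement cell per module: phase difference mod module size."""
    return tuple((b - a) % n for a, b, n in zip(loc_a, loc_b, MODULE_SIZES))

# Same offset (7) at different absolute positions gives the same code:
print(displacement_code(encode(3), encode(10)))    # (2, 0, 7)
print(displacement_code(encode(53), encode(60)))   # (2, 0, 7)
# A single module is ambiguous (offsets 7 and 2 both give phase 2 mod 5),
# but the tuple across modules disambiguates them:
print(displacement_code(encode(0), encode(2)))     # (2, 2, 2)
```

Note that `displacement_code` is fixed arithmetic over phases, which mirrors the point above that the grid-to-displacement connections can be hard-coded once and then work for any objects.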


Hi Scott, I’m looking for any published papers on displacement cells. I believe Marcus was the one who came up with the idea; did he publish anything?

@barnettjv From what I know, I don’t think there are any detailed public write-ups on displacement cells in HTM theory yet. I saw @mrcslws’ original mentions in the Location paper, and have seen a few threads like this one, but I think that’s it. (Glad to be wrong!)

Also, I don’t think @scott is around anymore.

Hi brev, I was able to find this paper by Marcus: https://numenta.com/neuroscience-research/research-publications/papers/a-framework-for-intelligence-and-cortical-function-based-on-grid-cells-in-the-neocortex/
