How to generate place fields in grid cells

This started with some questions I posed over in htm school: grid cells about how grid cells initially learn a hexagonal grid of place fields. In that thread, my questions can be summarized like this:

This is apparently an area with little information. I figured it was time to start a new thread about how to possibly implement this.

So, I came up with a super simple starter algorithm for generating place fields to learn about allocentric space over time. This algorithm substitutes objects for an external environment such as a room, so the grid cells represent allocentric locations of features on objects.

Go easy on me if this has major/obvious flaws :smiley:. I would appreciate any feedback on its sanity. Maybe other people have given this a try. Maybe Numenta has this already implemented in a better way. If so, I could just throw this out and see how they did it.

NOTE: In this algorithm, I use some concepts called a “displacement radius” and “inhibition radius”.

The displacement radius is basically the size of the place field. It’s a radius around each vertex of the equilateral triangles in the hexagonal lattice.

The inhibition radius is an area outside the displacement radius that “contains” each place field and basically adds spacing between place fields; within that buffer, other grid cells in the same module can set their own place fields.

Hopefully those make sense.
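To make the two radii concrete, here is a toy sketch (not anyone's actual implementation; the radius values and function names are hypothetical) of how a displacement could be classified against them:

```python
import math

# Hypothetical radii, in arbitrary environment units
DISPLACEMENT_RADIUS = 1.0   # size of a place field
INHIBITION_RADIUS = 2.5     # buffer zone surrounding the place field

def classify_displacement(prev_loc, new_loc):
    """Classify a movement relative to the two radii.

    Returns "same_field" (still inside the current place field),
    "inhibited" (inside the buffer, another cell should take over),
    or "new_field" (outside both, the current cell adds a field).
    """
    d = math.hypot(new_loc[0] - prev_loc[0], new_loc[1] - prev_loc[1])
    if d < DISPLACEMENT_RADIUS:
        return "same_field"
    elif d < INHIBITION_RADIUS:
        return "inhibited"
    else:
        return "new_field"
```

For example, with these radii, a move of 0.5 units stays in the same field, 2.0 units lands in the inhibition buffer, and 3.0 units is far enough to warrant a new field.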


  1. Start with a set of grid cell modules (GCMs) containing some
     grid cells that so far have zero place fields.
  2. The first sensory input comes in from a location on a novel
     object as the starting point (no temporal context at this point).
  3. Pick a random cell from each GCM to represent this input.
  4. Use the sensory input itself to identify the current location
     in the context of the previous sensory pattern plus motor
     command (no context for the first pattern, though).
  5. Map these patterns to a place field VERTEX of the currently
     active cells.
  6. Execute a motor command.
  7. Measure the spatial displacement from the previous
     location (see below).
  8. Branch on that displacement:
     8a. If the displacement is less than the “displacement radius”,
         map these additional patterns to the current place field.
     8b. If the displacement is more than the “displacement radius”
         but less than the “inhibition radius”, choose a
         different cell in each GCM.
     8c. If the displacement is more than both the “displacement radius”
         and the “inhibition radius”, add a new place field to the
         current cell.
  9. Go to step 5.
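The steps above can be sketched in a few lines of Python. This is a minimal, speculative rendering of the starter algorithm, assuming 2-D (x, y) locations stand in for sensory patterns and the radii from the note above; all class and function names are made up for illustration:

```python
import math
import random

DISPLACEMENT_RADIUS = 1.0   # hypothetical place-field size
INHIBITION_RADIUS = 2.5     # hypothetical buffer around each field

class GridCell:
    def __init__(self):
        self.place_fields = []   # list of (x, y) field centers

class GridCellModule:
    def __init__(self, num_cells):
        self.cells = [GridCell() for _ in range(num_cells)]
        self.active = None

    def pick_random_cell(self):                 # step 3
        self.active = random.choice(self.cells)

    def pick_different_cell(self):              # step 8b
        others = [c for c in self.cells if c is not self.active]
        self.active = random.choice(others)

def learn_object(modules, sensed_locations):
    """sensed_locations: sequence of (x, y) points visited on the object."""
    prev = sensed_locations[0]
    for m in modules:                           # steps 1-5: starting point
        m.pick_random_cell()
        m.active.place_fields.append(prev)
    for loc in sensed_locations[1:]:            # step 6: each motor command
        d = math.dist(prev, loc)                # step 7: displacement
        for m in modules:
            if d < DISPLACEMENT_RADIUS:
                pass                            # 8a: same field, nothing new
            elif d < INHIBITION_RADIUS:
                m.pick_different_cell()         # 8b: hand off to another cell
                m.active.place_fields.append(loc)
            else:
                m.active.place_fields.append(loc)  # 8c: new field, same cell
        prev = loc
```

This collapses the "goto step 5" loop into the `for` over sensed locations; each iteration maps the current pattern to whichever cell ends up active.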

How to compute the spatial displacement after a motor command?

Well, this will just depend on how motor commands are executed in
the environment. The motor command needs to yield some value that
can be encoded and used as an additional temporal context pattern.
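As one concrete (and hypothetical) choice of motor-command encoding: if a command is a (turn, forward) pair applied to a pose, the displacement falls out of simple dead reckoning. This is only a sketch of one possible encoding, not the proposed one:

```python
import math

def apply_motor_command(pose, command):
    """pose: (x, y, heading in radians); command: (turn, forward).

    A hypothetical motor-command encoding: turn first, then move
    `forward` units along the new heading.
    """
    x, y, h = pose
    turn, forward = command
    h = (h + turn) % (2 * math.pi)
    return (x + forward * math.cos(h), y + forward * math.sin(h), h)

def displacement(pose_a, pose_b):
    """Euclidean distance between two poses, ignoring heading."""
    return math.hypot(pose_b[0] - pose_a[0], pose_b[1] - pose_a[1])
```

The displacement value is then what gets compared against the two radii in step 8.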

Like I said, this is a starter algorithm, so I know there are steps missing. For example, once the entire object/environment has been learned, it can start doing inference using the learned place fields, I think. I’m also most unsure about how step #4 works, but I have an idea.

I haven’t implemented any of this in code yet, so it’s completely theoretical at this point.

I thought the displacement modules poster might have been useful for this topic.

But I didn’t find anything too insightful unfortunately. It seems the grid place fields are pre-established there.

Not to tout this post as a cure for everything, but it does describe some of the low-level mechanisms of hex-grid coding and the front-running theories of how they form. Around the middle, it describes how maps can “stack up” or overlay projections to make a form of voting that builds up to a peak of activation. This is the same as driving a place cell.

We know from various papers that where highly processed data terminates in the hippocampus, the flow is reversible. That implies that a cortical state can be recalled, bringing a memory representation from the hippocampus back to the cortex.

Yes, that’s an intriguing post that generated a lot of interest. If I understood half of it, it seems to propose physical grids literally forming in L2/3 of cortex by recruiting horizontal cells into shared activations.

This is very different from grid cells from what I can tell. Grid cells in mEC don’t actually participate in a physical grid formation, right? I’m not sure that the algorithm I’m looking for relates to the hex-grid forming that you wrote about.


It sounds like you do have the basic concepts.

As the post starts out - there are two separate things going on, both related to “grids.”

One codes for location; one binds columns into patterns that happen to be a sparse hex-shaped pattern. It is possible (and likely) for both to be going on at the same time in the EC. We have recordings via several different methods that all show the hex-grid formations happening in the EC.

If you search this site for hex-grid there are a few supporting posts and pointers to papers supporting this concept and implications of the concept.

And to be absolutely clear - nobody knows how the senses are processed to make the spatial coding that Moser calls grids. We can get about two or three levels up the sensory processing stream and then lose any rational description of the spatial processing that ends up informing the EC as to where the critter is.

This is driven home in that Moser stuck wires in critters and saw this pattern. H&W stuck wires in kitties and were able to make sense of the transformations up to a certain level - then - not so much. We can describe map-to-map connections, so we know the paths. We just have no idea what they are doing. We do think they are doing the column computations just like everywhere else.


This shows a graphic example of how my proposed system works.

There is something at work to form the cluster of features that we can identify as a Moser grid cell being active when certain inputs are present. I think that in time Numenta will show how the TBT builds up to doing this.

I think that the Calvin tiles method (hex-grid coding) shows great promise but until I get demo code running to show this in action it is just another pretty theory.

BTW: to clarify a relationship - (mini-columns made up of HTM cells) group together to form (Hex-grid coding) to interact with hex-grid codes in other maps to display the macro-property of (activation spot in Grid cell array)

Example below:

The black spots are mini-columns. The colored lines are competition-winning lateral connections, the red in one map and the blue in another. The yellow spots are in a third map, where the coincident hex-grid signalling projected from the red and blue maps interacts to form a spot that you call a grid module. I don’t expect the entire hex-grid arrays to be active at the same time.

The recognized input space (the room) sets up a hex-grid in one map: allocentric locations. This is all the sensed cues, such as landmarks or odors.

Your relation to the landmarks, or body senses like moving, sets up a related smaller patch of hex-grid in a different map: YOU in the allocentric location space. Same processing, just add YOU to the mix.

In the map/area with the coincident bits, there is a signal in the EC that could be learned in the HC as a location. A place. Since the YOU patch is smaller, only one intersection goes active: where you are in this area.
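The geometric intuition behind the coincidence of two hex-grid maps can be sketched in a toy way. This is purely illustrative geometry, not the neural mechanism: it builds two triangular lattices at different (hypothetical) spacings and finds the points where they nearly overlap, which stand in for the "voting" peaks:

```python
import math

def hex_lattice(spacing, extent):
    """Vertices of a triangular/hexagonal lattice covering a square extent."""
    pts = []
    rows = int(extent / (spacing * math.sqrt(3) / 2)) + 1
    cols = int(extent / spacing) + 1
    for r in range(rows):
        for c in range(cols):
            x = c * spacing + (spacing / 2 if r % 2 else 0)  # odd rows offset
            y = r * spacing * math.sqrt(3) / 2
            pts.append((x, y))
    return pts

def coincident(map_a, map_b, tol=0.1):
    """Points of map_a that nearly coincide with a point of map_b.

    Stands in for the sparse set of spots where two overlaid
    hex-grid maps reinforce each other.
    """
    return [p for p in map_a
            if any(math.hypot(p[0] - q[0], p[1] - q[1]) < tol for q in map_b)]
```

Because the two spacings differ, only a sparse subset of vertices line up, which is why the coincidence map is much more selective than either lattice alone.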

There is no requirement to go and visit all the spots in the room to form the Moser grid; it is formed automatically by processing the perceived space into hex-grid patterns when you enter the space.

A goal does much the same thing but the GOAL is added to the landmark mix instead of YOU.

All my hex-grid posts are totally based on HTM; they just assume that the output forms these hex-grid patterns, and explain how.


This is about more than place fields - this is vectors pointing at the places:

@subutai - you may find this very interesting!
