**Update:** I recorded a video lecture on this topic:

The slides are at https://github.com/ctrl-z-9000-times/KropffTreves2008_reproduction/blob/master/grid_cells_presentation.pdf

@dmac thanks for your presentation.

Could you please share your C++ SP implementation with HTM.core?

I've implemented this several times, but I may not have a code solution ready for you to use:

- Python NuPIC, shown on slide 13: the SP augmented for grid cells is available at https://github.com/ctrl-z-9000-times/KropffTreves2008_reproduction
- Python HTM (implemented from scratch), for slide 17's L2/3 experiment, is available at https://github.com/ctrl-z-9000-times/sdr_algorithms
- C++ HTM.Core has an experimental branch with grid cells, but I've given up trying to merge that branch into master, so use it at your own risk: https://github.com/htm-community/htm.core/pull/285

@dmac thanks for your information. Unfortunately I did not find any branch named GridCells. Maybe it is ColumnPooler2?

Could you please check it again?

Where is your demo with MNIST?

Yes, the branch is named "columnPooler2". I linked to its PR in my previous response.

I have demonstrations using MNIST, but they are not related to this topic.

HTM-scheme/projects/KropffTreves2008_reproduction at master · rogerturner/HTM-scheme · GitHub is an HTM-scheme translation of @dmac's code (so a replication^{2} of Kropff & Treves 2008).

One small change is to increase the coordinate encoder radius and width every 50,000 cycles (see grid_cell_demo.ss lines 193-201). Results after 200,000 training iterations appear similar to @dmac's (trained for 1,000,000 iterations).

bump

@dmac Very nice explanation; I was struggling with a lot of things that were covered by this. Thank you so much for this video!

Great presentation and great work in replicating this research.

Given @jhawkins' recent focus on one-dimensional grid cells, would this still apply? Do you have any insight into how a minicolumn could handle this? Or maybe how different parts of a minicolumn could indeed produce a one-dimensional grid cell with the filters mentioned?

Edit: Rereading what I just wrote a few times, I'm not sure how correct it is. I don't really know whether I should call these one-dimensional grid cells or one-dimensional components of an n-dimensional grid. It'd be nice if someone could help with that too. Sorry for the confusion.

I do not think highly of their most recent work. Their 1D-grid-cell hypothesis opens many more questions than it answers:

- How does a one-dimensional grid cell form? Their idea does not solve the underlying problem of making grid cells. They suggest the oscillatory interference model…
- Are they really restricted to only one dimension? If so, is it possible to have more dimensions than grid cells? What would happen if all grid cells were assigned to their dimensions and then I added a new dimension to the world? For example, by giving you wings or a jet-pack, you gain the ability to travel vertically. Suddenly the grid cells which worked in your flat 2-D world need to also work with a third dimension.
- What about all of the strange dimensions? It's easy to think about moving in ideal 2-D Euclidean spaces, but what about moving your arm into the sleeve of your coat? Each of your arms/legs has about 6 degrees of freedom, including several rotational joints. I don't see how 1-D grid cells could deal with inverse kinematics problems such as arm or leg movement.

Their 1-D grid cell idea does not seem to be compatible with (Kropff & Treves, 2008), but they have not offered compelling evidence for their theory or against the K&T theory.

Ok. That's a shame. :-7

To be honest, @jhawkins didn't call it one-dimensional grid cells. *I did*, and I seriously think I misunderstood what Jeff means.

The central idea (I think) is that each minicolumn *encodes* a changing input in a **one-dimensional gradient**, or a range, or some kind of register. (I'm not sure if it has a minimum and a maximum, like a start and an end, or if it is a wrap-around value. At some point I think Jeff mentioned it as some sort of frequency. The sound of a metal detector comes to mind; you know, the kind that goes up in tone when you move nearer to a ferromagnetic object. But again, this is speculation on my part.)

And for the visual region, which he talks about mostly, there are many more directional encodings than the 3 Cartesian axes. (I think of a sea urchin as a mental image, or maybe a pin cushion.)

I guess if you look at a point in space and move across or around it, each of the minicolumns encodes a value as a transformation onto the directional gradient it manages, like a projection onto a line segment. So that one changing point will have a bunch of changing values, one represented in each of the gradients. Another point at a fixed position relative to the first will have a different set of changing values in each of the gradients. Some values change slower or faster; some change over a shorter or larger range; some don't change at all.
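The projection idea above can be sketched numerically. This is a minimal illustration of my speculation, not anything from an HTM codebase: a fixed set of unit direction vectors (the "pin cushion"), each "minicolumn" reporting the dot product of a point's position with its direction. All names and parameters here are mine.

```python
import numpy as np

# A "pin cushion" of 12 illustrative unit directions in 3-D space.
rng = np.random.default_rng(0)
directions = rng.normal(size=(12, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

def gradient_values(point):
    """Project a 3-D point onto every direction: one scalar per 'minicolumn'."""
    return directions @ point

p0 = np.array([1.0, 0.0, 0.0])
p1 = np.array([0.9, 0.2, 0.1])   # the same point after a small movement

# Some projections change a lot, some barely at all, as described above.
delta = gradient_values(p1) - gradient_values(p0)
```

Each direction sees a different one-dimensional slice of the same movement, which is roughly what I imagine the per-minicolumn "gradients" to be.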

By combining each of those sets of values, somehow (sorry, *handwaving*) the neocortex makes sense of how the object with the two points we consider moves in relation to us.

The big advantage of a minicolumn only needing to *manage* one dimension is that it's much more universal than a system that needs to manage a value over an unknown and possibly changing number of dimensions. Remember that the same minicolumn structure encodes all types of information (visual, auditory, tactile, abstract, …). It makes more sense to produce n-dimensional objects from one-dimensional fundamentals than to have a complicated two-dimensional (hexagonal) grid cell and use it sometimes for lower-dimensional and sometimes for higher-dimensional objects.

I think there are minicolumns for all the types of sensory perception we have learned, including higher-level abstractions. It is possible that an experience lacks information for a number of minicolumns that we developed specifically for that (now lacking) information type, in which case those minicolumns remain silent. They don't *help* in the identification and prediction of the current experience.

The K&T model *can also* manage a value over an unknown and possibly changing number of dimensions.

Edit:

The K&T model has a different set of "fundamentals". In the K&T model, grid cells activate at locations in the world. These locations are shaped as spheres, and the diameter of each sphere is defined by the time it takes to traverse it. These spheres can have as many dimensions as there are in the real-world locations which they are modeling. There is no extra cost for having more or fewer dimensions.

The complex hexagonal tiling is actually just an artifact of the way spheres naturally pack in two dimensions.
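The "diameter defined by traversal time" comes from the firing-rate adaptation at the heart of the K&T model: a fast variable tracks the input while a slower fatigue variable subtracts from it, so a cell fires on entering a field and then adapts. Here is a minimal sketch of that dynamic; the time constants are illustrative choices of mine, not the paper's values.

```python
def adapt(inputs, b_fast=0.3, b_slow=0.1):
    """Fast-minus-slow adaptation: output fades under sustained input."""
    act = inact = 0.0
    out = []
    for h in inputs:
        act += b_fast * (h - act)      # fast tracking of the input
        inact += b_slow * (h - inact)  # slow fatigue
        out.append(max(0.0, act - inact))
    return out

# A cell sitting in its firing field: the response peaks shortly after
# entry, then decays as the fatigue variable catches up.
response = adapt([1.0] * 20)
```

Because the output dies away after a fixed time under constant input, the time spent inside a field (hence the running speed) sets the field's effective diameter.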

I've found more prior art:

Learning Invariance from Transformation Sequences

Peter Földiák, 1991

Physiological Laboratory, University of Cambridge

"This temporal low-pass filtering of the activity embodies the assumption that the desired features are stable in the environment."
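That quote describes Földiák's trace rule: each unit keeps a temporal low-pass filtered copy of its own activity and uses that trace, rather than the instantaneous output, in the Hebbian weight update. A minimal sketch, with parameter names of my choosing (delta is the trace decay, lr the learning rate):

```python
def trace_hebb_step(w, x, y_trace, y, delta=0.2, lr=0.02):
    """One trace-rule update: Δw_j = lr * y_trace * (x_j - w_j)."""
    # Temporal low-pass filter of the unit's activity.
    y_trace = (1.0 - delta) * y_trace + delta * y
    # Hebbian update gated by the trace instead of the raw output.
    w = [wi + lr * y_trace * (xi - wi) for wi, xi in zip(w, x)]
    return w, y_trace

# Usage: the trace keeps learning going across a short input sequence,
# which is what makes the learned feature invariant to the transformation.
w, y_trace = [0.0, 0.0], 0.0
for x, y in [([1, 0], 1.0), ([0, 1], 0.0), ([1, 0], 1.0)]:
    w, y_trace = trace_hebb_step(w, x, y_trace, y)
```

The low-pass filtering is the same idea that K&T's slow adaptation variable exploits: features that are stable over short time windows get bound together.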