Numenta Research Meeting - September 21, 2020

In this week’s research meeting, Marcus Lewis presents his ‘Eigen-view’ of grid cells and connects ideas from three underlying papers to Numenta’s research. He discusses describing grid cells in terms of eigenvectors, and compares two ways of arriving at those eigenvectors: the Fourier transform (over continuous space) and its graph counterpart, spectral graph theory (over a 2D graph).

Marcus’s whiteboard presentation:

Papers referenced:
► “Prediction with directed transitions: complex eigen structure, grid cells and phase coding” by Changmin Yu, Timothy E.J. Behrens, Neil Burgess

► “The hippocampus as a predictive map” by Kimberly L. Stachenfeld, Matthew M. Botvinick, Samuel J. Gershman

► “Grid Cells Encode Local Positional Information” by Revekka Ismakov, Omri Barak, Kate Jeffery, Dori Derdikman
https://www.cell.com/current-biology/fulltext/S0960-9822(17)30771-6



As always, thanks for the excellent notes @Bitking


Link to research meeting video: https://www.youtube.com/watch?v=i_V7utn-_XQ

@mrcslws
One part of this view that I still really like:
The distance between two states (i.e. the adjacency in the graph) isn’t simply the physical distance in the world. It is the temporal distance, adjusted for the animal’s movement policy. I think this idea will be important for understanding grid distortions.

Kropff & Treves (2008) use this idea to make grid cells. I explain how in my presentation on the subject: Video Lecture of Kropff & Treves, 2008


I wish I could take much credit for the notes, but they are taken directly from the text of the YouTube posting.
I do some minor format editing and that is all.

It seemed like there was a bit of confusion early on when trying to define eigenvectors and eigenvalues. These concepts are not really that difficult to understand, so the professor in me feels compelled to try to explain things a bit more clearly.

One could interpret the action of a matrix times a vector as a mapping from one linear vector space into another. The eigenvectors are, by definition, the directions that are invariant under this mapping: any vector aligned with an eigenvector will not change its direction (only its magnitude) under the matrix-vector multiplication. The factor by which such a vector’s magnitude is scaled is the eigenvalue associated with that eigenvector.
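To make that concrete, here is a minimal NumPy sketch (my own illustration, not something from Marcus’s talk): applying a matrix to one of its eigenvectors only rescales it, and the scale factor is the associated eigenvalue.

```python
import numpy as np

# A symmetric 2x2 matrix chosen for illustration; its eigenvectors span the plane.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)  # columns of `eigenvectors` are unit eigenvectors

v = eigenvectors[:, 0]       # one eigenvector
print(A @ v)                 # same direction as v ...
print(eigenvalues[0] * v)    # ... scaled by its eigenvalue (these two prints match)
```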

I think this property is what Marcus was trying to explain when he described the effect of repeatedly applying the matrix to a vector. The misunderstanding probably arose from the specific example Marcus chose to illustrate the concept (further confounded by the separate examples that immediately followed it).

I will just add here that linear combinations of eigenvectors are not generally eigenvectors. (For a symmetric matrix, eigenvectors belonging to distinct eigenvalues are mutually orthogonal; in general they are only guaranteed to be linearly independent.) For a vector that is a linear combination of eigenvectors, the matrix-vector multiplication scales the component along each eigenvector by that eigenvector’s eigenvalue. Unless all of the eigenvalues involved are identical (uniform scaling), the resulting vector will not be aligned with the original vector, and hence will not meet the requirement for being an eigenvector.
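A quick numerical check of that point (again a toy example of my own): summing two eigenvectors with different eigenvalues gives a vector that the matrix stretches unevenly, so the output is no longer parallel to the input.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eigh(A)   # eigenvalues here are 1 and 3

v1, v2 = eigenvectors[:, 0], eigenvectors[:, 1]
w = v1 + v2                      # linear combination of two eigenvectors

Aw = A @ w                       # each component gets scaled by a *different* eigenvalue
cosine = (Aw @ w) / (np.linalg.norm(Aw) * np.linalg.norm(w))
print(cosine)                    # not 1.0, so A @ w is not parallel to w: w is not an eigenvector
```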

A repeated eigenvalue indicates that the mapping acts uniformly on an entire subspace of the input space (an eigenspace), sending it to a comparable subspace in the target space. In that case the choice of eigenvectors is somewhat flexible: any set of mutually orthogonal directions that spans that subspace is a valid set of eigenvectors.
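As an illustration of the repeated-eigenvalue case (a toy matrix of my own choosing): a matrix that scales an entire plane uniformly accepts any vector in that plane as an eigenvector, so any orthogonal pair spanning the plane is an equally valid choice.

```python
import numpy as np

# The eigenvalue 2 is repeated: the x-y plane is a two-dimensional eigenspace.
A = np.diag([2.0, 2.0, 5.0])

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.6, 0.8, 0.0])    # any other unit vector in the x-y plane works just as well
print(A @ u, 2 * u)              # both u and v are eigenvectors with eigenvalue 2
print(A @ v, 2 * v)
```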

For a symmetric matrix (the relevant case for the graph Laplacians discussed in this thread), there is always a set of mutually orthogonal eigenvectors that completely spans the input vector space; a non-zero determinant by itself is not enough to guarantee this. A degenerate matrix (one whose determinant is zero) has some redundancy in it (e.g. some rows are linear combinations of other rows), in which case the mapping can be interpreted as projecting vectors from the input space onto a lower-dimensional subspace of the target space.
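Here is a small sketch of the degenerate case (my own example): a matrix whose second row is a multiple of its first has a zero determinant, and it maps the whole plane onto a one-dimensional subspace.

```python
import numpy as np

# The second row is twice the first, so the matrix is degenerate.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(A))              # ~0.0 (up to floating-point error)
print(np.linalg.matrix_rank(A))      # 1: every input lands on the line spanned by [1, 2]
print(A @ np.array([3.0, -1.0]))     # [1.  2.]
print(A @ np.array([0.5,  4.0]))     # [ 8.5 17. ]  -> also a multiple of [1, 2]
```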

Things get a little more fuzzy when you start talking about eigenvectors of function spaces. It’s much more difficult to visualize these entities as vectors, since they are typically functions distributed over space or time, and the mapping is less about preserving specific directions than about preserving the functional relationships between local coordinates. To me it’s usually easier to switch terms and start talking about orthogonal basis functions. But before I dig myself in too deep, I’ll end this post with a traditional academic dodge: discussion of this topic is beyond the scope of this report. If there is sufficient interest, I can continue, possibly in a new thread or sub-forum.
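As a small postscript, and since spectral graph theory came up in the meeting, here is one concrete taste of the “orthogonal basis functions” view (my own NumPy sketch, not from the presentation): the eigenvectors of the Laplacian of a ring graph are sampled sinusoids, i.e. the discrete counterpart of the Fourier basis functions discussed in the meeting.

```python
import numpy as np

n = 64
# Adjacency matrix of a ring graph: each node is connected to its two neighbours.
adjacency = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
laplacian = 2 * np.eye(n) - adjacency    # degree matrix (2 * I) minus adjacency

eigenvalues, eigenvectors = np.linalg.eigh(laplacian)
# The low-frequency eigenvectors vary smoothly around the ring, like sampled
# sines and cosines -- the graph analogue of Fourier basis functions.
print(np.round(eigenvectors[:8, 1], 3))
```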
