During some recent podcasts, Jeff Hawkins has said a few times that there is now evidence that Grid and Place cells exist in the neocortex. Yet I can’t find any published references to this. Can someone please point me to the canonical references from which Jeff is drawing his claim? Thanks.
I am not privy to where JH draws his information, but there are numerous sources that say the same thing.
Example:
A novel somatosensory spatial navigation system outside the hippocampal formation
“In humans, hexadirectional grid-like signals have been observed in many brain regions outside the hippocampal formation including the posterior cingulate cortex, the medial prefrontal cortex, the retrosplenial cortex, the medial parietal cortex and the frontal cortex while grid-cell-like neuronal representations were identified with intracranial electroencephalography recordings on presurgical epilepsy patients and fMRI studies in the human entorhinal cortex.”
Thanks, Mark. This is exactly what I’ve been looking for.
(As an aside, I noticed that you’re almost everywhere on this forum. What do you do for a day job?)
The reason I ask about these references is that I’ve been using Marc Howard’s EC-CA Laplace framework to model a navigation mechanism for a virtual robot. (I noticed you made a post about this last year.) I’m actually presenting a poster on this next week: https://comco2021.uni-osnabrueck.de/abstracts/day1_poster_goldowsky
This work is a baby-step proof of concept that the Laplace framework could be used for navigation through a more general form of conceptual space. Not only does Howard’s model afford navigation, it also affords more general computations (using standard signal-processing procedures such as convolution and cross-correlation) on the Laplace representation and on its inverse (the inverse is the representation for Place cells).
The Laplace framework is a convenient way to input sensory signals and sort of solves the symbol grounding problem. One way to implement Jeff’s Thousand Brains theory is to wire together 1k of these Laplace-inverse circuits and use this macro circuit to build SDRs over sensory input. These SDRs would form memories at the core of a cognitive architecture. This is the vision I have for a thesis.
Since the Laplace-inverse circuit models Grid/Place cell assemblies, there would be no evidence to support wiring together 1k of these Laplace-inverse models if there were no evidence of Grid/Place cells in the neocortex. So this paper inspires me to move forward with this idea.
@howard.goldowsky Can you provide a paper reference that would give an introduction to this framework? I’m having a hard time understanding how the Laplace transform applies to neural networks, since my previous experience with it was signal analysis of circuits and mechanical systems.
I’d say that this is the seminal paper that gives the most thorough background of how the Laplace Transform relates to Grid and Place cells and Marc Howard’s model of working memory. If you look through Howard’s other publications you’ll find a few that are a bit more philosophical and a few that are a bit more applied. Bitking had a post on this paper once, as well. Also linked.
The key mathematical difference from the Laplace transform you’re remembering from circuit analysis and signal processing is that the one used for circuit analysis is the complex-valued transform over the full complex plane, s = sigma + i*omega. The flavor Howard uses is the real Laplace transform, which sets omega = 0, so in Howard’s application the transform exists only on the real line. The basis functions are the set of exponentially decreasing functions, which provide a sort of “clock” for measuring time differences, and this is where all the functionality lies. The equations model any time-differentiated signal input into the EC, e.g. velocity; thus I was able to compute velocity vectors for navigation. The paper linked below shows how the math was used to model a rodent’s navigation through a maze.
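If it helps, here is a toy sketch in Python/NumPy of the two pieces (this is just my own simplification of the math, not the code from the paper): a bank of leaky integrators, one per real decay rate s, forms the Laplace representation, and a crude Post-style inversion over s gives the “time cell” layer whose units peak for events roughly k/s in the past.

```python
import numpy as np
from math import factorial

def laplace_encode(f, t, s):
    """Integrate dF/dt = -s*F + f(t) for each real decay rate in s."""
    dt = t[1] - t[0]
    F = np.zeros(len(s))
    history = []
    for ft in f:
        F = F + dt * (-s * F + ft)   # forward Euler update of the leaky integrators
        history.append(F.copy())
    return np.array(history)         # shape (len(t), len(s))

def laplace_invert(F_s, s, k=4):
    """Crude Post-style approximation of the inverse at one instant.
    Each output element peaks for input that occurred about tau* = k/s
    in the past, i.e. a rough 'time cell' layer."""
    dF = F_s
    for _ in range(k):
        dF = np.gradient(dF, s)      # k-th numerical derivative w.r.t. s
    return ((-1) ** k / factorial(k)) * s ** (k + 1) * dF

# Example: a brief pulse of input, then silence for 10 seconds.
t = np.arange(0, 10, 0.01)
f = (t < 0.5).astype(float)          # stimulus present for the first 0.5 s
s = np.linspace(0.5, 20, 50)         # bank of real decay rates (omega = 0)
F = laplace_encode(f, t, s)
time_cells = laplace_invert(F[-1], s)   # reconstructed history at t = 10 s
```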
Many thanks for the link. I’m finding I have to dust off my calculus knowledge that hasn’t been seriously used in over 20 years!
I’m halfway through the paper, but I find I’ll have to go to one of their previous papers, where they first explain the encoding and decoding of time in more detail, since I’m not picking up a few things with my rusty calculus and Laplace know-how.
https://direct.mit.edu/neco/article/24/1/134/7733/A-Scale-Invariant-Internal-Representation-of-Time
In your work, you might consider using Nengo to do your Laplace-transform space-time encodings, since that framework is a perfect fit for this type of system. I looked in their libraries and forums and no one has tried to implement the Howard LT on their platform, but they have a lot of libraries that do multivariable encoding/decoding that would serve as your t, x, and f cell-array inputs.
https://www.nengo.ai/nengo/examples/advanced/nef-summary.html
Read their NEF summary so you know how to use it properly. I’m sure you can add a Laplace Transform to get the space-time representations.
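For instance, a single real-Laplace coefficient dF/dt = -s*F + u(t) maps onto the standard NEF recipe for linear dynamics (with synaptic time constant tau, the recurrent transform is 1 - tau*s and the input transform is tau). This is just a rough sketch I put together from the NEF summary, not something that exists in the Nengo examples:

```python
import nengo

tau = 0.1                          # synaptic time constant (s)
s_rates = [1.0, 2.0, 4.0, 8.0]     # a small bank of real decay rates

model = nengo.Network()
with model:
    stim = nengo.Node(lambda t: 1.0 if t < 0.5 else 0.0)  # brief input pulse
    probes = []
    for s in s_rates:
        ens = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, ens, transform=tau, synapse=tau)       # B' = tau
        nengo.Connection(ens, ens, transform=1 - tau * s, synapse=tau)  # A' = 1 - tau*s
        probes.append(nengo.Probe(ens, synapse=0.01))

with nengo.Simulator(model) as sim:
    sim.run(2.0)
# sim.data[probes[i]] now holds F(s_i, t): larger s decays faster.
```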
Jacob,
It’s funny, but I didn’t even know about the paper you linked! I had been using the 2014 paper as my reference for writing code, but it appears that the 2012 paper may be even more foundational. And it’s long. Thanks for the hat tip about Nengo. Unfortunately I do not take well to Python. While I can use it, I prefer to do my scientific programming in MATLAB, and I coded my Laplace EC-CA model from scratch. (I have the MATLAB code in a GitHub repo, GitHub - HowardGoldowsky/LaplaceAI: Class project for Robotics, but it is currently in disarray even though it is well commented.) As this may become a cornerstone of thesis work, I prefer to code from scratch so I know what’s going on at a deeper level.
For me, the Laplace EC-CA model is just a means to ingest embodied modal data in a way that sort of solves the symbol grounding problem. The Laplace representation keeps a temporal history and affords all sorts of computation over its representation. For my project I performed a simple navigation task, as a proof of concept that the Laplace representation can act as a mechanism for navigation through a higher-dimensional, multi-modal conceptual space. Beyond that, I hope to explore how it may afford computations over trajectories through conceptual space, as a function of time (or position), which would represent schemas. For example, the act of speaking involves a time sequence of concepts; any thought or action other than just thinking about a static object involves sequential trajectories through conceptual space… Anyway, I digress… this is the framework that works for me. There are a bunch of theories out there on how Grid and Place cells work. Marc Howard’s work satisfies my intuition best.
Have fun learning about this stuff!
One more thought. I have nothing against Nengo, and I plan to possibly incorporate it into my work. Chris Eliasmith does a lot of work with his Semantic Pointer Architecture, which is related to a bunch of work I’m interested in about hypervectors and how they relate to long-term memory and sparse distributed representations. A paper just came out that relates semantic pointers to SDR. SDR is currently my best intuition for how long-term memory should work, a way to save conceptual representations.
@howard.goldowsky can you provide the paper reference?
Consider exploring our HyperGridTransform concept in BrainBlocks for converting arbitrary N-dimensional data into grid-like representations. You can create 1D, 2D, and m-D grids out of n-dimensional data so long as m <= n. In practice, 1D grids are really effective, and grids above 2D become less useful; more research is needed to understand whether they have any place at all.
The code.
Quick example usage.
Plot visualizations
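For a rough feel of the general idea (this is a simplified stand-in I'm sketching here, not the actual HyperGridTransform code or API): project the n-dimensional input onto random 1D axes, wrap each projection onto a periodic interval, and one-hot the resulting phase, so each module acts like a 1D grid population with its own scale and orientation.

```python
import numpy as np

def grid_encode(x, n_modules=8, bins_per_module=16, seed=0):
    """Toy grid-like binary encoding of an n-dimensional point."""
    rng = np.random.default_rng(seed)
    bits = []
    for _ in range(n_modules):
        axis = rng.standard_normal(len(x))
        axis /= np.linalg.norm(axis)            # random unit direction
        period = rng.uniform(0.5, 2.0)          # module-specific scale
        phase = (x @ axis) % period / period    # position within the period
        active = int(phase * bins_per_module) % bins_per_module
        one_hot = np.zeros(bins_per_module, dtype=int)
        one_hot[active] = 1
        bits.append(one_hot)
    return np.concatenate(bits)                 # sparse, grid-like binary code

code = grid_encode(np.array([0.3, -1.2, 0.7])) # 3D input -> 8 * 16 bits
```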
I read all of the detailed documentation for BrainBlocks. Very interesting project. While I don’t really have time to pursue it further, what interested me most was the theory of DBP. Do you guys have any papers? I have not seen much work on, say, “why all SDRs are not created equal.” Intuitively I get why, but I have never seen any papers. In other words, a theory of SDR has not yet been written, other than the few papers put out by Numenta and a few others. A lot of work seems to have been done on computing with dense hypervectors, but not so much on SDRs.
We have a couple papers in the pipeline, but we’re kind of stuck in the revision phase. It’s kind of hard to make a coherent yet comprehensive paper on these topics since they defy easy description and understanding.
“SDR” has been a catch-all for the binary vectors used in HTM, with some claims about their properties that don’t always hold in practice. I’ve defined the term Binary Pattern (BP) as a superset of SDRs: all possible binary vectors, expressed in a way that can be consumed by HTM systems. That is, there is no requirement of sparsity or distributedness, and no complex encoding scheme like two’s complement, base-2, or floating point.
In BPs, a 1-bit indicates the presence of some evidence. A 0-bit indicates the absence of evidence. That is, a 0-bit does not indicate negative evidence. A 0-bit should not have semantic meaning.
The bit itself represents some kind of logical or semantic assertion, such as 1.0 < x < 2.0. These assertions can be defined by an encoder, or they can be learned through an array of binary-activated neurons. Just because a bit is zero does not mean that the assertion is false, only that there is currently no evidence for that assertion.
BPs can be defined in terms of binary vectors or in terms of sets. Equivalent definitions for a BP can be a vector V or a set S.
V is a binary vector definition of a BP where v_i is the i’th element and n = dim(V). S is a set definition of a BP, where S \subset U, and U is the superset, such that U = \{s_i | 0 \leq i < n \} and |U| = n.
Defining a set S from a binary vector V:
S = \{s_i | v_i = 1\}.
Defining a binary vector V from a set S:
V = [v_0, v_1, ..., v_{n-1}], where v_i = 1 if s_i \in S, otherwise v_i = 0.
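In code, the two definitions are just index bookkeeping. A small sketch (my own, using integer indices to stand in for the s_i):

```python
def vector_to_set(V):
    """S = {i : v_i = 1}  (indices stand in for the elements s_i)."""
    return {i for i, v in enumerate(V) if v == 1}

def set_to_vector(S, n):
    """V = [v_0, ..., v_{n-1}] with v_i = 1 iff s_i is in S."""
    return [1 if i in S else 0 for i in range(n)]

V = [0, 1, 0, 0, 1, 0, 0, 0]
S = vector_to_set(V)                   # {1, 4}
assert set_to_vector(S, len(V)) == V   # the two definitions round-trip
```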
An example of something that this new theory can tackle about BPs: what is the effect of a sparsity imbalance between two combined BPs? That is, given two BPs X and Y, where |X| = 10 and |Y| = 100, when they are concatenated (unioned) into Z = X \cup Y, the information from X is nearly washed out by the amount of information from Y. Clearly this is an important concept to understand, since getting this balance right is required for practical applications. For now it’s kind of just left as an “oh, you have to do this” on the forums when people try to wade into this field. Furthermore, in neuroscience, mental disorders can arise from imbalances like this, so this has a direct analogy to the brain.
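A toy illustration of that washout, with made-up sizes (sets of active-bit indices, overlap as the similarity measure):

```python
import random

random.seed(0)
X = set(random.sample(range(200), 10))          # |X| = 10 active bits
Y = set(random.sample(range(200, 2200), 100))   # |Y| = 100 active bits
Z = X | Y                                       # concatenated/unioned pattern, 110 bits

# A pattern that matches Y perfectly but shares nothing with X still
# overlaps Z on 100 of 110 bits; X contributes at most ~9% of the score.
X_other = set(random.sample(range(2200, 2400), 10))  # disjoint from X and Y
Z_without_X = X_other | Y
print(len(Z & Z_without_X) / len(Z))            # ~0.91
```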
Most of the work I’ve done has just been codifying definitions and problems like this. Truly understanding what this all means requires a great deal of experimentation and research.
Does this sate your palate?
Absolutely. This was a great explanation of what you’ve been thinking about. I’m looking forward to those papers; please let us know when they’re ready. You don’t need to fit the kitchen sink into one long paper. A dozen shorter papers that overlap in background but make one point each could also work well.
In our “A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex” paper, there’s a paragraph that references experiments suggesting grid cells exist in the neocortex. You may want to look at those. I’ve pulled the paragraph here:
Recent experimental evidence suggests that grid cells may also be present in the neocortex. Using fMRI, Doeller et al. (2010), Constantinescu et al. (2016), and Julian et al. (2018) have found signatures of grid cell-like firing patterns in prefrontal and parietal areas of the neocortex. Using single cell recording in humans, Jacobs et al. (2013) have found more direct evidence of grid cells in frontal cortex. Long and Zhang (2018), using multiple tetrode recordings, have reported finding cells exhibiting grid cell, place cell, and conjunctive cell responses in rat S1.
Thank you! The hardest things to find are those right in front of you.