Continual Lifelong Learning Paper Review / Jeff Hawkins on Grid Cell Modules - February 10, 2021

Karan Grewal gives an overview of the paper “Continual Lifelong Learning with Neural Networks: A Review” by German Parisi et al. He first explains three main areas of current continual learning approaches. Then, he outlines four research areas that the authors argue will be crucial to developing lifelong learning agents.

In the second part, Jeff Hawkins discusses new ideas and improvements on our previous “Frameworks” paper. He proposes a more refined grid cell module in which each layer of minicolumns contains a 1D voltage-controlled oscillating module that represents movement in a particular direction. Jeff first explains the mechanisms within each column and how anchoring occurs in grid cell modules. He then gives an overview of displacement cells and deduces that if we have 1D grid cell modules, it is very likely that there are 1D displacement cell modules. Furthermore, he makes the case that the mechanisms for orientation cells are analogous to those of grid cells. He argues that each minicolumn is driven by various 1D modules representing orientation and location, which together underlie a classic grid cell / orientation cell module.
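
A rough sketch of that 1D module idea in Python (all names and parameters here are illustrative guesses, not anything from the talk): each module integrates the component of movement along its preferred direction as a phase, like a velocity-controlled oscillator, and a recognized sensory cue can anchor the phase.

```python
import numpy as np

class OneDGridModule:
    """Toy 1D grid cell module: a ring of cells whose activity bump
    advances as movement is integrated along a preferred direction.
    Everything here is an illustrative sketch, not Numenta's code."""

    def __init__(self, n_cells=20, scale=1.0, preferred_direction=0.0):
        self.n_cells = n_cells            # cells around the ring
        self.scale = scale                # spatial period of the module
        self.theta = preferred_direction  # direction this module integrates
        self.phase = 0.0                  # current phase in [0, 1)

    def move(self, speed, direction, dt=1.0):
        # Project movement onto the preferred direction and integrate it
        # as a phase shift: speed controls the oscillation rate, which is
        # the "voltage-controlled oscillator" picture.
        v_along = speed * np.cos(direction - self.theta)
        self.phase = (self.phase + v_along * dt / self.scale) % 1.0

    def active_cell(self):
        # The cell whose tuning best matches the current phase fires.
        return int(self.phase * self.n_cells) % self.n_cells

    def anchor(self, learned_phase):
        # Anchoring: a recognized sensory cue resets the phase to the
        # value previously learned for that feature.
        self.phase = learned_phase % 1.0
```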

“Continual Lifelong Learning with Neural Networks: A Review” by German Parisi et al.: Continual lifelong learning with neural networks: A review - ScienceDirect
“A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex” paper: https://www.frontiersin.org/articles/10.3389/fncir.2018.00121/full


4 Likes

Hello there. I have some thoughts regarding the second part that I’d like to share.

Actions an animal performs during movement are usually stereotyped, so they should produce repetitive sequences of motor commands. Temporal memory is able to learn and predict those sequences, representing them as sequences of pyramidal neuron activations associated with motor command patterns. If those sequences turn out to be looped, which I guess is possible, then such a looped sequence can be treated as the torus Marcus and Jeff are talking about.

The maximum speed of phase shifting is reached when an animal is performing actions that it fully predicts (say, moving forward). When the animal stops moving forward and starts, say, turning around, another sequence comes into play; but the fact that this turning is possible at that phase of the moving-forward sequence suggests that the turning is also predicted as an alternative movement option, so the columns that fire next won’t burst, preserving the context (not just “turning left”, but “turning left after a particular phase of moving forward”).

When an animal doesn’t move forward, this particular sequence is not being looped, but its neurons may still take part in other sequences and can activate occasionally, just less frequently. Some mechanism would be required to force them to activate in that same order to preserve the properties of a torus; this may be a strong argument against this view. Temporal memory networks have complex dynamics, and it is hard for me to develop a proper intuition about them.

When you get an unpredicted input, minicolumns in a temporal memory burst, dropping the old context. If 1D grid cell modules are represented by temporal sequences, this bursting is analogous to reanchoring.
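
To make the analogy concrete, here is a toy version of what I mean (not real temporal memory, just the looped-sequence picture, with made-up names): a predicted motor command advances the loop phase and keeps the context, while an unpredicted one bursts and the context has to be re-established, which is what I mean by reanchoring.

```python
class LoopedSequence:
    """Toy model of a learned, looped motor-command sequence acting
    like a 1D phase. Predicted input advances the phase; unpredicted
    input 'bursts' and drops the old context."""

    def __init__(self, patterns):
        self.patterns = patterns  # learned loop, e.g. ["fwd", "fwd", "turn"]
        self.phase = 0            # index into the loop = current context
        self.anchored = True

    def step(self, observed):
        predicted = self.patterns[(self.phase + 1) % len(self.patterns)]
        if self.anchored and observed == predicted:
            # Predicted transition: advance the phase, context preserved.
            self.phase = (self.phase + 1) % len(self.patterns)
        elif observed in self.patterns:
            # Unpredicted input: burst, then re-lock onto the sequence
            # at the first matching pattern (analogous to reanchoring).
            self.phase = self.patterns.index(observed)
            self.anchored = True
        else:
            # Input not part of the learned sequence: context is lost.
            self.anchored = False
```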

I don’t understand your displacements concept well enough yet, but I’m pretty sure there is a place for it to fit here too.

Hope that reading this wasn’t a waste of time :slight_smile:

1 Like

I did not understand how the displacement module measures the displacement when the location grid cell module “wraps” or cycles. Is the displacement limited to the finite distance of a single cycle through the location grid cell module?
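
A tiny illustration of why I’m asking (numbers made up): a single module only sees position modulo its period, so any displacement it can read out wraps too.

```python
# One location module with period 5: positions 2 and 9 are 7 apart,
# but the module can only report the displacement modulo its period.
period = 5
start, end = 2, 9
true_displacement = end - start                 # 7
readable_displacement = (end - start) % period  # 2: the wrap hides the rest
print(true_displacement, readable_displacement)
```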

5 Likes

Correct me if I am wrong, but at 45:00 in the video, Jeff says that we actually never want the grid cell module to repeat. That, he said, is what was fundamentally wrong with the previous work.

For me it is also a bit confusing, and I need to rearrange my mindset around it, but it seems that a lot of problems will vanish with this.

2 Likes

I agree, I’m kind of lost too. But I think we have to see this in a combinatory way. The system does not measure in a number of units and then need to account for the “carry”. I think that when the active cell activity has traveled to a certain position (call it a distance), the network sends out two signals from two distinct cells: one from the newly activated cell, and the other from a reference cell that is not part of the array.

For me it is the opposite. We have this repeatable experiment that displays a regular, almost magical pattern that we are only beginning to come up with explanations for, and now it is supposed to be all for nothing? It’s not supposed to repeat?

Compare this to Fibonacci numbers in nature. It’s baffling, but we know it has a purpose. From sunflower seeds to snail shells, it displays an efficiency discovered by evolution. My intuition is that grid cells are the result of similar evolution.

3 Likes

Hmm, this is interesting, although you will get stuck, as Jeff did, when you try to find a way this could work with a repeating grid cell module. You would have to wire up all neurons in all combinations. That does not seem to be a generic mechanism, because it would have to exist all over the cortex. And stating that the output of this network is learned doesn’t help either.

The second part that Jeff et al. were fundamentally right about is the combination of several GCMs to get very large spaces, so they are not throwing everything away. That is also quite magical. As I said, I am not very confident on this topic right now, but it comforts me that I am not the only one who struggles.
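
A toy illustration of that combination (the module periods are made up): each module alone wraps quickly, but the joint code of several modules with different periods stays unique over a range equal to the least common multiple of the periods.

```python
from math import lcm

periods = [5, 7, 11]      # hypothetical periods of three small modules
capacity = lcm(*periods)  # 385: range over which the joint code is unique

def joint_code(position):
    # Each module only sees its own residue, but the tuple of residues
    # does not repeat until `capacity` positions have been covered.
    return tuple(position % p for p in periods)

codes = {joint_code(x) for x in range(capacity)}
assert len(codes) == capacity  # no collisions within the range
```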

Ever heard of Stephen Wolfram’s A New Kind of Science? I received it as a gift for Christmas (my wish :smiley: ), though it is also freely available online, and I am impressed. His first basic motivation was exactly this: Rule 30, in other words, how simple programs can lead to very complex behavior. Counterintuitive at first, but true. It touches on a great many things, from Fibonacci numbers and prime numbers up to computational irreducibility. BTW, he is now taking a giant step toward a fundamental theory of physics with the Wolfram Physics Project, joining the theory of relativity with the quantum world.
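
Rule 30 really is tiny; here is a minimal version in Python (the wrap-around boundary is my own simplification):

```python
def rule30_step(cells):
    """One Rule 30 update: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# Start from a single black cell and watch the complexity unfold.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```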

3 Likes

Could you please summarise that concept?

3 Likes

The best example of the concept is, I think, the prime numbers.
There are infinitely many primes.

One would think that we have a formula for calculating prime numbers, but we don’t.
We can only check whether a given number is prime or not. We can also calculate the probability that some number is prime, but that’s all.

That is pretty weird.

But we can write a very simple program that will generate this infinite sequence.

This means there is no other way to get them except to compute them step by step.
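
For example, one such “very simple program” in Python (plain trial division):

```python
from itertools import islice

def primes():
    """Yield primes one by one by trial division: a tiny rule whose
    output sequence never settles into a simple pattern."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found if p * p <= n):
            found.append(n)
            yield n
        n += 1

print(list(islice(primes(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```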

For more info and a picture of the cellular automaton, see this page from the book I mentioned:
sequence of primes

4 Likes

The torus seems to be the answer to everything. Intelligence is the flowing torus; living organisms are resonators that cohere to that flow.

1 Like