Should grid cell or displacement cell modules include minicolumns?

Intuitively, GC and SP share the same goal: to generate an SDR or, loosely, a sparse encoding of the input. If we set aside the concept of location in GC (which IMO is acceptable, since it needs to be agnostic to the type of information anyway), then they are even more similar.

Correct me if I’m wrong, but the advantage of a grid-cell-based model is that it can use multiple modules at the same time, so both a local consensus (among bits) and a global consensus (among modules) are possible, whereas the SP covers only a certain region and its consensus is local only. I have always wished that the SP were processed/utilized in a stateful way, with these states voting with each other and potentially reaching a global consensus. However, this may be an irrelevant intuition for biology.
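To make the "global consensus among modules" idea concrete, here is a minimal sketch (my own toy illustration, not code from any grid cell implementation): each module holds a local, possibly ambiguous opinion as a set of candidates, and the global answer is whatever survives the intersection of all local votes.

```python
# Hypothetical sketch: several independent "modules" each narrow down a set
# of candidates, and the global consensus is their intersection.

def module_vote(candidates, evidence):
    """Each module keeps only the candidates consistent with its evidence."""
    return candidates & evidence

def global_consensus(modules_evidence, all_candidates):
    consensus = set(all_candidates)
    for evidence in modules_evidence:
        consensus = module_vote(consensus, evidence)
    return consensus

# Three modules, each with a locally ambiguous opinion:
votes = [{"A", "B", "C"}, {"B", "C", "D"}, {"B", "E"}]
print(global_consensus(votes, {"A", "B", "C", "D", "E"}))  # {'B'}
```

The point is only that each module's vote is cheap and local, yet the intersection is a global decision no single module could make on its own.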


My personal belief on this (differs from Numenta) is that the SP algorithm is not happening in the brain. Instead the functions of sparsification, topology, and voting happen via L2/3 forming hex grids (not the same thing as “grid cells”). This function would be a combination of SP and TP (not TM).


I think I remember one of your posts about your idea.

My ideas/observations are best treated as purely algorithmic. I’m not a biologist/neuroscientist, but I like to study computational machines and determine their equivalence before even implementing them. In my experience, algorithms that try to solve the same problem are more likely to be in the same family of algorithms. I believe that the grid-cell-based model is a generalization of an SP.


Maybe I don’t understand what you are saying, but the SP does this when local inhibition is enabled and minicolumn competition happens in local neighborhoods. This competition is not voting, but a global consensus is reached through local competitions.
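For anyone unfamiliar with the mechanism, here is a toy sketch of SP-style local inhibition (my own simplification, not the Numenta implementation): each minicolumn wins only if it is among the top-k overlaps within its local neighborhood, yet the resulting activity is globally sparse.

```python
import numpy as np

def local_inhibition(overlaps, radius, num_active_per_area):
    """Each column competes only with its neighbors within `radius`."""
    n = len(overlaps)
    active = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        neighborhood = overlaps[lo:hi]
        # Column i wins if it is among the top-k in its own neighborhood.
        kth_largest = np.sort(neighborhood)[-num_active_per_area]
        active[i] = overlaps[i] >= kth_largest and overlaps[i] > 0
    return active

rng = np.random.default_rng(0)
overlaps = rng.integers(0, 20, size=50)
active = local_inhibition(overlaps, radius=5, num_active_per_area=2)
print(active.sum(), "of", len(active), "columns active")  # globally sparse
```

No column ever sees the whole layer, but the union of local winners is a sparse global code, which is the "global consensus through local competitions" being described.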


Just to be sure I understand: you are talking about the physical manifestation of hexagonal patterns across the cortex arising from the dendritic topology of lateral projections?


If the hex-grid resonates after firing, it would act as temporal pooling in addition to spatial pooling / sparsification, as described in my hex-grid post. For this to work correctly, the model may have to include habituation.
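The resonance-plus-habituation idea can be sketched numerically; all names and constants here are my own illustration, not taken from the hex-grid post. Activity persists after the input ends (resonance, i.e. temporal pooling) but an accumulating habituation term eventually shuts it off.

```python
# Toy single-cell trace: resonant activity with habituation.
def step(activity, habituation, inp,
         resonance=0.9, fatigue=0.1, recovery=0.05):
    activity = resonance * activity + inp - habituation
    habituation = habituation + fatigue * activity - recovery * habituation
    return max(activity, 0.0), max(habituation, 0.0)

a, h = 0.0, 0.0
trace = []
for t in range(20):
    inp = 1.0 if t < 3 else 0.0   # brief input burst, then silence
    a, h = step(a, h, inp)
    trace.append(round(a, 3))
print(trace)  # activity outlasts the input (pooling), then habituation kills it
```

Without the habituation term the activity would ring indefinitely; with it, the cell pools over a short window and then releases, which is roughly the behavior being argued for.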

BTW: it’s axonal lateral projections.


Yes, this is based on some of Bitking’s previous posts. The hex-grid post describes the spatial pooling and topology aspects in detail.

Additionally, if we assume the activity in this layer is more temporally stable (as is the “object layer” described by TBT), that implies it is also performing temporal pooling due to the temporal differential between it and the less stable layers it connects with.


Forgive my inaccurate use of terms. The local inhibition you have mentioned here, and its result, is what I mean by local consensus. The global consensus I’m talking about is a bit different, at least in my intuition and perspective.

When the SP does online learning, it can be intuited as replicating instances of itself (states). These states, by intuition, prefer certain inputs. For example, if we treat this preference as a spectrum from 0 to 1, then for a particular input A a state can sit anywhere on that spectrum: 0, 0.1, 0.50, 0.99, etc. The higher the value, the better.

Now the big question, at least for me, is: which of these states are the “healthiest”? By healthiest I mean that a state has significantly learned something. These states can be mutually exclusive, mutually inclusive, or a mix of both, so we cannot easily ignore some of them. Also, unlike backpropagation, where we can safely discard the previous configuration of the parameters (state) because it improves through error adjustment, the SP cannot easily guarantee that state 1000 is healthier than state 99.

Another way of imagining these states is as vertices of a graph, where each vertex is a partial solution. If there were a way to form a consensus over these solutions, I strongly believe the SP could improve its capabilities. Today, at least to my understanding, we only use the latest state/vertex to test things (e.g. classify, cluster, or encode). I believe a subset of the previous states is as important as the latest state. If utilized, these states would, I believe, be analogous to the multiple parallel modules of grid cells.
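Purely as an illustration of my reading of this "states as vertices" idea (not any existing SP API): keep snapshots of an online learner at several points during training, then let the snapshots vote on the encoding of a new input instead of trusting only the latest state.

```python
from collections import Counter

def encode(state, inp):
    """Stand-in for an SP state encoding an input; here, a trivial mapping."""
    return frozenset((inp + offset) % 10 for offset in state)

# Hypothetical snapshots taken at different points during online learning.
states = [{1, 3}, {1, 4}, {1, 3, 5}]

def ensemble_encode(states, inp, min_votes=2):
    """Keep only the bits that a majority of snapshots agree on."""
    votes = Counter()
    for state in states:
        votes.update(encode(state, inp))
    return {bit for bit, count in votes.items() if count >= min_votes}

print(ensemble_encode(states, 7))
```

Bits that only one snapshot produces (possibly noise from an "unhealthy" state) are dropped, while bits that several states agree on survive, which is the cross-state consensus being proposed.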

You really need to read my hex-grid post, with its cooperation in voting on a global state via local states.
I am pretty sure this is what you are asking about.


I have started reading it. To be honest, I easily get lost imagining neuro/biology structures, but thanks for mentioning it; I will certainly try to understand and contemplate it at some point.
