Hex Grids & 1000 Brains Theory

You need them because they provide a powerful mechanism for lateral binding between columns. The talk is about how columns vote and settle on patterns; this is a (perhaps the) mechanism that does this. The resulting hexagonal pattern has exactly the signaling properties you described in your grid video (scale/phase/angle).

As a side benefit, it produces the exact pattern observed in biology. This was predicted by Calvin before it was observed in vivo. The prediction was for these exact dimensions and properties; I consider that a very powerful confirmation.

@bitking and I had a chat Friday, so I'm just continuing my thoughts from that conversation. The one thing that bothers me is how you pick the minicolumn that represents the beginning of a columnar hex-grid. You said it could be done during spatial pooling, by choosing one of the winning columns? I assume this would be the column that most overlaps the input, but usually there are a bunch of columns that tie for first. How do you pick it then?

You and I are very close together on this.

I see a lot of what I am thinking in your visualization video above. I won’t comment on levels 1 and 2 - only on level 3 in your whiteboard sketch.

Yes, the great sea of mini-columns is all matching to a greater or lesser degree: a constant white noise of activity as the buzzing, blooming, moving world roars on around us.

We know that SDRs are like keys that fit a given (learned) lock. The thing is, random patterns close to what was learned produce partial matches all the time - there are constant “almost” hits everywhere in the sensory fields: weak depolarizations that are not enough to trigger firing by themselves.
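
As a toy illustration of the key/lock idea (the population size, SDR width, and firing threshold below are assumed values for illustration, not biological figures), a sketch in Python:

```python
import random

random.seed(42)

N = 2048        # cells in the population (illustrative)
W = 40          # active bits per SDR (illustrative)
THRESHOLD = 25  # overlap needed to actually fire (assumed)

def make_sdr():
    return set(random.sample(range(N), W))

learned = make_sdr()  # the stored "lock"

# A perfect key, and a near miss that shares only 15 of the learned bits.
exact_key = set(learned)
near_miss = set(list(learned)[:15]) | make_sdr()

def overlap(a, b):
    return len(a & b)

print(overlap(exact_key, learned))   # 40: clears the threshold and fires
print(overlap(near_miss, learned))   # partial: depolarizes, may not fire
```

The point is the asymmetry: a true match is unambiguous, while the sea of near misses produces exactly the sub-threshold “almost” hits described above.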

You allude to lateral connections helping to recognize that the sensed object is part of a larger pattern.

Let's work through this. Of all the partial hits between pairs of mini-columns, some add, some cancel, and the white noise of matches creates some constant low level of activation. This is met with a tonic balance of inhibitory interneurons to prevent all the excitatory pyramidal cells from firing at once.

But for some subset of mini-columns, they are ALL hitting on the pattern because they have learned it before. Due to the nature of the lateral connections - about 300 to 500 um spacing - trios of mini-columns are mutually reinforcing and getting stronger in a voting system. (More on this in a minute.)

Just for fun, say it takes at least 3 mutually reinforcing links (or a VERY strong local match) to start to override the inhibitory interneurons. These reciprocal connections give these neurons an “unfair advantage” over their lesser neighbors that are not seeing something they know. This stronger ON signal pushes these neurons to more strongly trigger the attached inhibitory OFF interneurons. There is still balance, but the balance has shifted from weak white noise distributed across all mini-columns to the much stronger hex-grid pattern that is forming; I will guess that the overall level of activity may well remain constant.
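
A minimal sketch of the “three links to beat inhibition” idea. The drive values, link gain, and inhibition level are made-up numbers chosen only to show the mechanism:

```python
# Each mini-column gets weak feedforward drive plus a bonus for each
# reciprocal lateral link to another driven mini-column.  Only columns
# whose total exceeds the tonic inhibition keep firing.
INHIBITION = 2.5   # tonic inhibitory level (assumed units)
LINK_GAIN = 1.0    # contribution of one reciprocal link (assumed)

drive = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0, "e": 1.0}
# a, b, c form a mutually reinforcing trio; d-e is only a pair.
links = {("a", "b"), ("b", "c"), ("a", "c"), ("d", "e")}

def total_input(col):
    n_links = sum(1 for pair in links if col in pair)
    return drive[col] + LINK_GAIN * n_links

winners = {c for c in drive if total_input(c) > INHIBITION}
print(sorted(winners))   # ['a', 'b', 'c'] -- the trio overrides inhibition
```

With these numbers, two reciprocal links are not enough; three are, which is the “unfair advantage” of the trio.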

As these strongly excited mini-columns work, just as you suggested in your video, they influence any mini-columns that are on the fence to join the pattern. This spreads across the extent of the perceiving mini-columns like wildfire sweeping through dry timber. As they mutually resonate, they trigger the adjacent inhibitory interneurons, suppressing the weakly responding background noise and leaving only the strongly resonating hex standing alone to signal the grid pattern to distant maps and back down the hierarchy. These are the ones that have learned the pattern we are perceiving now.

Back to that voting system: note that the columns are active at the alpha rate - about 10 Hz. Many different lines of evidence point to that being the basic column processing rate.

I posit that the voting to select winners in this voting/suppression system runs at the gamma rate - about 40 Hz. That allows 4 rounds of reinforcement/suppression for each alpha cycle. The strong get stronger, and the weakly responding get voted off the island. (Lame, but I had to say it!)
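
A toy version of that gamma-rate voting loop. The gains, suppression factor, and starting overlaps are arbitrary; only the winner-take-most dynamic is the point:

```python
# Initial overlap scores for seven mini-columns; the first three have
# learned the pattern and start slightly ahead of the background noise.
activity = [0.9, 0.8, 0.85, 0.3, 0.25, 0.4, 0.2]

for _ in range(4):  # ~4 gamma rounds (40 Hz) per alpha cycle (10 Hz)
    mean = sum(activity) / len(activity)
    # Above-average columns are reinforced; the rest are suppressed.
    activity = [min(1.0, a * 1.3) if a > mean else a * 0.5 for a in activity]

winners = [i for i, a in enumerate(activity) if a > 0.5]
print(winners)   # [0, 1, 2] -- the strong got stronger
```

After four rounds the small initial advantage has been driven to saturation and the background has been voted off the island.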

The cool thing here is that the output is exactly the hex pattern that is observed in nature - a stable output pattern that signals some recognized input pattern. Remember - this is not a goal - it is a happy side-effect of the fixed-size lateral connections.

A huge and very important detail: this is the long-sought pattern-to-pattern behavior that the deep learning people have been lording over HTM for so long. Some input pattern/sequence is turned into a unique and repeatable output pattern. This is formed without backprop, without a teacher, with a small number of presentations. We can do it too!

This still allows sequential recognition to function - it is just a whole patch of mini-columns blinking along in synchronization, stepping through a 2D pattern rather than as isolated twinkle lights, all yoked together like a team of horses.

Note that even as the input field steps through its sequence of this learned 2D pattern, the output hex-grid code stays stable, as this set of individual mini-columns are all stepping through the pattern they have learned.

Over time they will add slight variations, like different viewing angles, as the central pattern enlists other nearby mini-columns to sing along when the main pattern is playing.

This is the mechanism I propose that binds sequential saccades into perceived visual objects. Since the entire cortex depends on motion for recognition, this should work as the highest level in all sensory hierarchies.

I am working on a post to show this now. I still have several images to make and a more detailed explanation to convey this in an approachable way.


I’m visualizing this. You’re saying each one of these activating minicolumns has an associated hex grid that might echo / repeat / transmit the signal across cortical columns?

What do you mean “links”? Temporal links through time? If so then one can be represented by a set of discrete distal dendritic segments in cortex, right? If not (if spatial), how is a link represented?

Continuing reading… will keep posting questions as I have them.

Let’s say there are 1000 minicolumns in a population of cells. At any point, each one has an overlap with an input that is updating over time. We could define “on the fence” as any minicolumns in the 10-20% overlap band, where the top 10% are activated from feedforward input. Does this make sense to you as a definition of “on the fence”? (Or something similar anyway?)
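
That definition is easy to make concrete. Here is one way to carve a population of 1000 minicolumns into “active” (top 10% overlap) and “on the fence” (next 10%), using random stand-in overlap scores:

```python
import random

random.seed(0)
N_COLS = 1000
overlaps = [random.random() for _ in range(N_COLS)]  # stand-in overlap scores

ranked = sorted(range(N_COLS), key=lambda i: overlaps[i], reverse=True)
active = set(ranked[: N_COLS // 10])                     # top 10%: fire
on_the_fence = set(ranked[N_COLS // 10 : N_COLS // 5])   # 10-20% band

print(len(active), len(on_the_fence))   # 100 100
```

The on-the-fence set is exactly the population that lateral reinforcement could tip into joining the pattern.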

These are the 300-500um lateral connections, roughly the width of what we think of as a cortical column. Can you define these lateral connections in your terms a bit more? In our model, they are axonal output from a layer in the cortical column, so they would be pyramidal axons. I think this matches your model.

In this post, I laid out a lot of the basic geometry and spacing of the relationship between mini-columns and sub-components.

I made this top-view picture to show the relative spacing between the reach of mini-column dendrites and the size of the mini-columns. The blue circles are mini-columns, and the black circle is the reach of the input dendrites of the “center” mini-column.


In the HTM into hex grid post I showed a side view of the pyramidal neurons with the inputs in red and the output in blue:

The 0.5 mm reach of the dendrites is the same as the black circle above, and the larger blue output axonal arbor shows the mutual axonal output links between mini-columns. Each reaches out to the others in this band of influence.

Let's go back and look down from the top again; here is a picture of the distant minicolumns, each surrounded by a circle that shows the range of possible influence of its output axons.

The red circles mark the mini-columns that are sensing some pattern. The highlighted lines are the axonal outputs that happen to be mutually reinforcing: what I have been calling links.
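
To make the geometry concrete, here is a sketch that places mini-columns on a lattice and counts the pairs whose separation falls inside the 300-500 um band where the output arbors can reach each other. The lattice spacing is an assumption for illustration:

```python
import itertools
import math

SPACING = 100       # um between neighboring mini-columns (assumed)
BAND = (300, 500)   # um: separations where output arbors reach each other

# Mini-column centers on a small square lattice, positions in micrometers.
cols = [(x * SPACING, y * SPACING) for x in range(6) for y in range(6)]

def in_band(p, q):
    return BAND[0] <= math.dist(p, q) <= BAND[1]

# "Links": pairs of mini-columns whose spacing allows mutual reinforcement.
links = [(p, q) for p, q in itertools.combinations(cols, 2) if in_band(p, q)]
print(len(links), "candidate links")
```

Only pairs inside the band can form the reciprocal links; closer or farther pairs are geometrically excluded, which is what biases the surviving pattern toward a fixed grid spacing.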


Are those blue things inhibitory neurons? is the hex activation a continuous attractor model?


The blue lines are axonal outputs.

No - I am not showing any of the inhibitory cells - this is all pyramidal cells at this point.

Yes, it is an attractor model.

Just checking, because in my model a continuous attractor network generates a grid through inhibitory neurons. Pyramidal grid cells don't connect directly to each other, from what we have seen in entorhinal cortex.


Agreed - the inhibitory interneurons are very important.
These are already VERY busy diagrams.
I was not sure how to work inhibition in for the first presentation.

The inhibitory interneurons are evenly distributed and are being activated by that huge output arbor.
It takes a strong resonant interconnection to counter this and continue to activate.
Like you suggest, a top 10% or 20% maybe.

I have been reading about neurology for 20 years and had a pretty good idea what to expect.
HTM just filled in some blanks and gave me a new way to frame my understanding - I was ready to unlearn a few things and plug this theory in.

The part I was missing was the temporal part - how to go from one state to another. As soon as I saw the predictive mechanism, it was not a big leap to see how it fit in and explained many of the questions I had.

There are a bunch of things related to hierarchy that I had worked out before I ever saw HTM, and the bit I am trying to get across here is the extension needed to fill those holes.

If this really does work this way then lots of things are explained.

I cannot stress enough how important it is to solve the basic problem of how the eye puts sequential saccades together into an object. This model does that.

I am trying to make drawings like these to get this concept across. Once you see how it works and how neatly it dovetails into the basic HTM theory you may wonder why it did not occur to you first. It’s that basic.


This is interesting, but I don’t see it in your other public posts anywhere.

You mean roughly like this, right? http://mrcslws.com/blocks/2017/04/16/grid-cells-CAN-model-visualized.html

The thing I’m struggling with is where are the grid cells? Are all the pyramidal neurons in a minicolumn grid cells? Do we consider the minicolumn itself the grid cell? Is “grid cell” just some behavior we arbitrarily attached to a neuron when it is actually expressed in a local group of neurons?

Emergent behavior of mixing HTM with lateral connections…


It is prominent in my notes at home but it has not come up much here.


Is it fair to say that each minicolumn has griddiness as if it were a grid cell in a grid cell module?

minicolumn is to grid cell
as hex grid is to grid cell module

Would you say hex grids span across cortical columns? or that they define cortical columns?

BTW I know you are at work. I’m not meeting with Marcus until tomorrow afternoon, so no rush to answer right away, I’ll keep reading and asking more questions.


Mini-columns are the hubs of the grids. Any mini-column could be part of a grid.

The hex-grid pattern is just the state of the instantaneous collection of mutually interacting mini-columns across some part of a map. The “griddiness” part is the map of these reinforcing links.

It does NOT explain the repeating nature of cells vs. location. I am hoping that the work Numenta is doing will show how that code ends up being repeated as it is in the hippocampus. I have been trying to extend what I have read of the Numenta work in that direction and so far, no success.

Hex-grids: it could be as little as three mini-columns, or span the entire map if it is a well-learned pattern. When a grid is fully formed it also serves as the sparsification step of HTM, at about 5% sparsity if I remember my calculations correctly. A single grid hex serves the same function as a macro-column does.
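
The sparsity figure follows from geometry alone: if the active nodes of a formed grid sit at spacing D inside a field of mini-columns at spacing s, the active fraction scales roughly as (s/D) squared. The spacings below are assumptions picked to show the calculation, not measured values:

```python
s = 30.0    # um between neighboring mini-columns (commonly cited figure)
D = 150.0   # um between active hex nodes (assumed for illustration)

# The density ratio of two similar lattices scales with the square of
# the spacing ratio, so the active fraction is roughly:
sparsity = (s / D) ** 2
print(f"{sparsity:.1%}")   # 4.0% -- the same ballpark as the ~5% above
```

The exact figure depends entirely on the assumed spacings, but any plausible choice lands in the few-percent range that HTM sparsification targets.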


I’m going to ask a couple of basic questions again just to be sure:

I’m trying really hard to relate this to how I understand grid cell modules to work. I think you are describing minicolumns as behaving exactly as grid cells behave inside a grid cell module. You’re hesitating to say that a hex grid is acting like a grid cell module. Why? Am I missing something?

This feeds into the larger question of how a hex-grid is learned and how the learning extends.

As I see it, learning starts with some random best fit on a single mini-column.

As others learn some pattern at the same time, they riff off each other and reinforce when they see their part of a common pattern.

Over time this patch grows as the patch learns this pattern together. Once it is established, it adds details around the edges and learns to discriminate between two similar patterns. The yellow in the picture below is a loose group - say 50 hexes. As the pattern is refined, more hexes are recruited, and by the time we get to the green it might be 500 hexes.

Look at this process in action: the two patterns start out the same, but as details are added the two patterns start to diverge, and I would expect that the phase or scale shifts between the two patterns.

oh… thank you I have to think about this now… so the atomic computation unit we see physiologically localized in sensory cortex is distributed in other parts of cortex… right?

So the cortical column is distributed into a learned hex grid in higher areas of cortex, each hex grid behaves like a grid cell module, bumps within the module are minicolumns firing / echoing from some recognized sensory input (or something else). Am I still on the right track?

Yes, with the key difference being that in the early stages we are anchored to the sensory feed - the locations have to be fixed since the sense fibers are fixed.

As we move up the hierarchy, the hex-grids are free to form at any mini-column location and shift (phase/spacing/rotation) to collect the sensory information into hex-grid coding of objects.

Then go back and look at the video you made on grid signalling and plug this in. I think you will see that it is describing the same thing from the bottom up.
