Hex Grids & 1000 Brains Theory

This thread started as a series of private messages between @rhyolight and @bitking. We decided to clean it up and post it afterwards. Also, the original video I posted was here, but I replaced it below with one I thought was cleaner and more succinct.

@bitking What do you think about this with respect to the idea of hex grids? Does it make sense of anything? Does it contradict anything you believe?

PS: the video above is just scratch and brainstorming.

2 Likes

I see some agreement with my own work and some differences that lead to further questions.

Please consider my further offering on “well shuffled” as you move up the hierarchy.

This is not counter to the Numenta proposal but builds on it.

I have offered this several times in this forum, but it seems like it is either poorly explained or so far out that nobody takes it seriously. I really do need to make a detailed post explaining this.

That is a shame, as I think this is the first real attempt to explain how grids are formed in that area, and a plausible explanation of how real vision works in a way that matches up with the known properties of the visual system.

Has anyone there ever looked at my hex-grids post and talked about it?

1 Like

Are you talking about grid cells at all here? Or is this about how cortical columns link together in hexagonal structures?

If the latter, what does the structure achieve? Why do we need them?

I have talked to Marcus about it. I did not ask him to read it, but he said he perused it and did not understand it. I also don’t understand it. I don’t want to ask Subutai or Jeff to look at it until I get it. And I just don’t yet. I’m still trying and open to further instruction :wink:

You need them because they provide a powerful mechanism for lateral binding between columns. The talk is about how columns vote and settle on patterns; this is a (the?) mechanism that does this. The resulting hexagonal pattern has the exact signaling properties you described in your grid video (scale/phase/angle).

As a side benefit, it produces the exact pattern observed in biology. This was predicted by Calvin before it was observed in vivo. The prediction was for these exact dimensions and properties; I consider that a very powerful confirmation.

@bitking and I had a chat Friday, so just continuing my thoughts from that conversation. The one thing that bothers me is how you pick the minicolumn that represents the beginning of a columnar hex-grid. You said it could be done during spatial pooling, by choosing one of the winning columns? I assume this would be the column that most overlaps the input, but usually there are a bunch of columns that tie for first. How do you pick it then?
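One common way spatial-pooler implementations break ties among equally overlapping columns is a tiny random tie-breaker added to each overlap score. A minimal sketch (the function name and jitter scheme are my own illustration, not from the thread):

```python
import numpy as np

def pick_anchor_column(overlaps, rng):
    """Pick a single 'anchor' mini-column among spatial pooler winners.

    Ties for the highest overlap are broken with a sub-integer random
    jitter, so distinct integer overlap scores are never reordered.
    """
    overlaps = np.asarray(overlaps, dtype=float)
    jitter = rng.uniform(0.0, 0.5, size=overlaps.shape)
    return int(np.argmax(overlaps + jitter))

rng = np.random.default_rng(42)
overlaps = [3, 7, 7, 2, 7]              # columns 1, 2 and 4 tie for first place
winner = pick_anchor_column(overlaps, rng)
```

Any of the tied columns can win; the jitter just makes the choice reproducible for a fixed seed.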

You and I are very close together on this.

I see a lot of what I am thinking on your visualization video above. I won’t comment on level 1 and 2 - only on level 3 in your whiteboard sketch.

Yes, the great sea of mini-columns is all matching to a greater or lesser degree: a constant white noise of activity as the buzzing, blooming, moving world roars on around us.

We know that the SDRs are like a key that fits a given (learned) lock. The thing is - random patterns that are close to what was learned produce partial matches all the time; there are constant “almost” hits everywhere in the sensory fields: weak depolarizations that are not enough to trigger firing by themselves.

You allude to lateral connections helping to recognize that the sensed object is part of a larger pattern.

Let’s work through this. Of all the partial hits between pairs of mini-columns, some add and some cancel, and this white noise of matches essentially creates a constant low level of activation. This is met with a tonic balance of inhibitory inter-neurons to prevent all the excitatory pyramidal cells from firing at once.

But - for some sub-set of mini-columns - they are ALL hitting on the pattern because they have learned this before. Due to the nature of the lateral connections - about 300 to 500 um spacing - trios of mini-columns are mutually reinforcing and getting stronger in a voting system. (More on this in a minute)

Just for fun - say it takes at least 3 mutually reinforcing links to be strong enough (or a VERY strong local match) to start to override the inhibitory inter-neurons. These reciprocal connections give these neurons an “unfair advantage” over their lesser neighbors that are not seeing something they know. This stronger ON signal pushes these neurons to more strongly trigger the attached inhibitory OFF inter-neurons. There is still balance, but the balance has shifted from weak white noise distributed across all mini-columns to the much stronger hex-grid pattern that is forming; I will guess that the overall level of activity may well remain constant.
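To put numbers on the link geometry: the 300-500 um band and the threshold of three links come from the description above, while the hexagon layout, pitch, and function name are my own illustration. Here a center column plus six neighbors on a 400 um hexagon all clear the threshold:

```python
import math
import numpy as np

BAND_MIN_UM, BAND_MAX_UM = 300.0, 500.0   # lateral-link spacing from the post
LINK_THRESHOLD = 3                        # "at least 3 mutually reinforcing links"

def reinforcing_links(positions):
    """For each mini-column, count the other columns whose distance falls
    inside the 300-500 um band, i.e. its candidate reciprocal links."""
    counts = []
    for i, p in enumerate(positions):
        n = 0
        for j, q in enumerate(positions):
            if i != j:
                d = float(np.hypot(p[0] - q[0], p[1] - q[1]))
                if BAND_MIN_UM <= d <= BAND_MAX_UM:
                    n += 1
        counts.append(n)
    return counts

# Center column at the origin plus a hexagon of six columns at 400 um:
# adjacent hexagon vertices are also 400 um apart, so every column has
# at least three in-band links.
positions = [(0.0, 0.0)] + [
    (400 * math.cos(math.radians(60 * k)), 400 * math.sin(math.radians(60 * k)))
    for k in range(6)
]
counts = reinforcing_links(positions)
```

Every column meets the assumed 3-link threshold (the center gets six links), so the whole hex would, under this model, override the background inhibition together.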

As these strongly excited mini-columns work, just as you suggested in your video, they influence any mini-columns that are on the fence to join in the pattern. This spreads across the extent of the perceiving mini-columns like wildfire sweeping through dry timber. As these mutually resonate they are triggering the adjacent inhibitory inter-neurons suppressing the weakly responding background noise, leaving only the strongly resonating hex standing alone to signal the grid pattern to distant maps and back down the hierarchy. These are the ones that have learned the pattern we are perceiving now.

Back to that voting system - note that the columns are active at the alpha rate - about 10 Hz. Many different lines of evidence point to that being the basic column processing rate.

I posit that the voting to select the winners in the voting/suppression system is running at the gamma rate - about 40 Hz. This allows 4 rounds of reinforcement/suppression for each alpha cycle. The strong get stronger, and the weakly responding get voted off the island. (lame - but I had to say it!)
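The four-rounds-per-alpha structure above can be sketched as a toy voting loop. Only the 4-round gamma/alpha relationship comes from the post; the update rule and the gain/decay constants are my own illustrative choices:

```python
import numpy as np

GAMMA_ROUNDS = 4   # ~40 Hz voting rounds inside one ~10 Hz alpha cycle

def alpha_cycle(drive, links, gain=0.3, decay=0.4):
    """One alpha cycle of the proposed vote: columns with mutually
    reinforcing neighbours gain support each gamma round, the rest decay."""
    a = np.asarray(drive, dtype=float)
    L = links.astype(float)
    for _ in range(GAMMA_ROUNDS):
        active = (a >= a.mean()).astype(float)   # who gets to vote this round
        support = L @ active                     # votes from linked active columns
        a = a * (1 - decay) + gain * support     # suppression vs. reinforcement
    return a

# Columns 0-2 are mutually linked; columns 3 and 4 are isolated.
links = np.zeros((5, 5), dtype=bool)
for i, j in [(0, 1), (1, 2), (0, 2)]:
    links[i, j] = links[j, i] = True
out = alpha_cycle(np.ones(5), links)   # identical feedforward drive everywhere
```

Even with identical drive, the mutually linked trio ends the cycle stronger each round while the isolated columns decay away - voted off the island.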

The cool thing here is that the output is exactly the hex pattern that is observed in nature - a stable output pattern that signals some recognized input pattern. Remember - this is not a goal - it is a happy side-effect of the fixed sized lateral connections.

A huge and very important detail: this is the long-sought pattern-2-pattern behavior that the deep learning people have been lording over HTM for so long. Some input pattern/sequence is turned into some unique and repeatable output pattern. This is formed without back-prop, without a teacher, with a small number of presentations. We can do it too!

This still allows the sequential recognition to function - it is just a whole patch of mini-columns blinking along in synchronization, stepping through a 2D pattern rather than isolated twinkle lights, all yoked together like a team of horses.

Note that even as the input field steps through its sequence of this learned 2D pattern, the output hex-grid code stays stable as this set of individual mini-columns all step through the pattern they have learned.

Over time they will add slight variations like different viewing angles as the central pattern enlists other nearby mini-columns to sing along when the main pattern is playing.

This is the mechanism I propose that binds sequential saccades into perceived visual objects. Since the entire cortex depends on motion for recognition, this should work as the highest level in all sensory hierarchies.

I am working on a post to show this now. I still have several images to make and a more detailed explanation to convey this in an approachable way.

2 Likes

I’m visualizing this. You’re saying each one of these activating minicolumns has an associated hex grid that might echo / repeat / transmit the signal across cortical columns?

What do you mean “links”? Temporal links through time? If so then one can be represented by a set of discrete distal dendritic segments in cortex, right? If not (if spatial), how is a link represented?

Continuing reading… will keep posting questions as I have them.

Let’s say there are 1000 minicolumns in a population of cells. At any point, each one has an overlap with an input that is updating over time. We could define “on the fence” as any minicolumns in the top “10-20%” overlap, where the top 10% are activated from feedforward input. Does this make sense to define “on the fence” to you? (or something similar anyway?)
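That definition is easy to pin down in code. A sketch of the split (the helper name is mine; the 10%/20% percentile bands are the numbers proposed above, applied to toy overlap data):

```python
import numpy as np

def split_columns(overlaps, active_pct=10, fence_pct=20):
    """Split mini-columns into 'active' (top active_pct% of overlap) and
    'on the fence' (the band between fence_pct% and active_pct%)."""
    overlaps = np.asarray(overlaps, dtype=float)
    hi = np.percentile(overlaps, 100 - active_pct)
    lo = np.percentile(overlaps, 100 - fence_pct)
    active = np.where(overlaps >= hi)[0]
    fence = np.where((overlaps >= lo) & (overlaps < hi))[0]
    return active, fence

overlaps = np.arange(1000)        # 1000 mini-columns with distinct toy overlaps
active, fence = split_columns(overlaps)
```

With 1000 columns this yields 100 active columns and a further 100 "on the fence" - the ones a forming hex-grid could recruit.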

These are the 300-500 um lateral connections, roughly the width of what we think of as a cortical column. Can you define these lateral connections in your terms a bit more? In our model, they are axonal output from a layer in the cortical column, so they would be pyramidal axons. I think this matches your model.

In this post, I laid out a lot of the basic geometry and spacing of the relationship between mini-columns and sub-components.

I made this top-view picture to show the relative spacing between the reach of mini-columns dendrites in relation to the size of the mini-columns. The blue circles are mini-columns, and the black circle is the reach of the input dendrites of the “center” mini-column.

[figure: dendrite area, top view]

In the HTM into hex grid post I showed a side view of the pyramidal neurons with the inputs in red and the output in blue:

[figure: side view of pyramidal neurons, inputs in red, output axon in blue]
The 0.5 mm reach of the dendrites is the same as the black circle above, and the larger blue output axonal arbor is the mutual axonal output link between mini-columns. Each reaches out to the others in this band of influence.

Let’s go back and look down from the top again. Here is a picture of the distant mini-columns; each is surrounded by a circle that shows the range of possible influence of these output axons.

The red circles are the mini-columns that are sensing some pattern. The highlighted lines are the axonal outputs that happen to be mutually reinforcing - what I have been calling links.
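To attach rough numbers to these pictures: on a toy square lattice with an assumed 100 um mini-column pitch (purely illustrative; the 0.5 mm reach and 300-500 um band are from the posts above), we can count how many columns fall within the dendritic reach and how many sit in the mutual-link band:

```python
import numpy as np

DENDRITE_REACH_UM = 500.0                  # black circle: 0.5 mm dendrite reach
BAND_MIN_UM, BAND_MAX_UM = 300.0, 500.0    # mutual axonal-link band
PITCH_UM = 100.0                           # assumed mini-column spacing (illustrative)
EPS = 1e-6                                 # tolerance for float rounding at the edges

# Toy square lattice of mini-columns centred on one column at the origin.
xs = np.arange(-10, 11) * PITCH_UM
gx, gy = np.meshgrid(xs, xs)
d = np.hypot(gx, gy).ravel()
d = d[d > 0]                               # exclude the centre column itself

within_reach = int((d <= DENDRITE_REACH_UM + EPS).sum())
in_link_band = int(((d >= BAND_MIN_UM - EPS) & (d <= BAND_MAX_UM + EPS)).sum())
print(within_reach, in_link_band)
```

With these toy numbers, 80 columns sit within dendritic reach of the center column and 56 of them fall in the 300-500 um link band; a real triangular lattice and a different pitch would shift the counts, but the band clearly dominates the reach.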

1 Like

Are those blue things inhibitory neurons? Is the hex activation a continuous attractor model?

1 Like

The blue lines are axonal outputs.

No - I am not showing any of the inhibitory cells - this is all pyramidal cells at this point.

Yes, it is an attractor model.

Just checking, because in my model a continuous attractor generates a grid through inhibitory neurons. Pyramidal grid cells don’t connect directly to each other, from what we have seen in entorhinal cortex.
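For readers following along, here is a minimal sketch of how purely inhibitory coupling can produce evenly spaced activity bumps, in the spirit of inhibition-only attractor models of grid cells. All constants (ring size, inhibition radius, gains, step counts) are illustrative assumptions, not values from either model in this thread:

```python
import numpy as np

def ring_attractor(n=100, radius=10, g=2.0, drive=1.0, dt=0.02, steps=2000, seed=0):
    """Units on a ring that interact ONLY through surround inhibition
    (no direct excitatory links). Uniform drive plus local inhibition
    makes evenly spaced activity bumps emerge from random initial noise."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n)
    gap = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(gap, n - gap)                           # ring distance
    W = -g * ((dist > 0) & (dist <= radius)).astype(float)    # inhibitory surround
    a = rng.uniform(0.0, 1.0, n)
    for _ in range(steps):
        a = (1 - dt) * a + dt * np.maximum(0.0, drive + W @ a)
    return a

a = ring_attractor()
bumps = int((a > 0.5).sum())
```

Starting from random activity, the uniform state is unstable and the ring settles into a handful of evenly spaced bumps - a 1D stand-in for the 2D hex pattern, with no pyramidal-to-pyramidal excitation anywhere in the connectivity.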

1 Like

Agreed - the inhibitory interneurons are very important.
These are already VERY busy diagrams.
I was not sure how to work inhibition in for the first presentation.

The inhibitory inter-neurons are evenly distributed and are being activated by that huge output arbor.
It takes a strong resonant interconnection to counter this and continue to activate.
Like you suggest - a top 10% or 20% maybe.

I have been reading about neurology for 20 years and had a pretty good idea what to expect.
HTM just filled in some blanks and gave me a new way to frame my understanding - I was ready to unlearn a few things and plug this theory in.

The part I was missing was the temporal part - how to go from one state to another. As soon as I saw the predictive mechanism it was not a big leap to see how it fit in and explained many of the questions I had.

There are a bunch of things related to hierarchy that I had worked out before I ever saw HTM, and the bit I am trying to get across here is the extension needed to fill those holes.

If this really does work this way then lots of things are explained.

I cannot stress enough how important it is to solve the basic problem of how the eye puts sequential saccades together into an object. This model does that.

I am trying to make drawings like these to get this concept across. Once you see how it works and how neatly it dovetails into the basic HTM theory you may wonder why it did not occur to you first. It’s that basic.

2 Likes

This is interesting, but I don’t see it in your other public posts anywhere.

You mean roughly like this, right? Grid cells: Visualizing the CAN model

The thing I’m struggling with is where are the grid cells? Are all the pyramidal neurons in a minicolumn grid cells? Do we consider the minicolumn itself the grid cell? Is “grid cell” just some behavior we arbitrarily attached to a neuron when it is actually expressed in a local group of neurons?

Emergent behavior of mixing HTM with lateral connections…

1 Like

It is prominent in my notes at home but it has not come up much here.

1 Like

Is it fair to say that each minicolumn has griddiness as if it were a grid cell in grid cell module?

minicolumn is to grid cell
as
hex grid is to grid cell module
?

Would you say hex grids span across cortical columns? or that they define cortical columns?

BTW I know you are at work. I’m not meeting with Marcus until tomorrow afternoon, so no rush to answer right away, I’ll keep reading and asking more questions.

1 Like

Mini-columns are the hubs of the grids. Any mini-column could be part of a grid.

The hex-grid pattern is just the state of the instantaneous collection of mutually interacting mini-columns across some part of a map. The “griddiness” part is the map of these reinforcing links.

It does NOT explain the repeating nature of cells vs location. I am hoping that the work Numenta is doing will show how that code ends up being repeated as it is in the hippocampus. I have been trying to extend what I have read of the Numenta work in that direction and so far - no success.

Hex-grids: It could be as little as three mini-columns, or across the entire map if it is a well-learned pattern. When a grid is fully formed it also serves as the sparsification step of HTM, at about 5% sparsity if I remember my calculations correctly. A single grid hex serves the same function as a macro-column does.
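A back-of-the-envelope check on that sparsity figure. The grid spacing comes from the 300-500 um band discussed above; the mini-column pitch here is an illustrative assumption, not a biological measurement, and a square lattice stands in for the true hexagonal one:

```python
import numpy as np

GRID_SPACING_UM = 500.0   # upper end of the 300-500 um lateral-link band
PITCH_UM = 100.0          # assumed mini-column pitch; illustrative only

k = round(GRID_SPACING_UM / PITCH_UM)   # one active column every k positions
cols = np.zeros((40, 40), dtype=bool)   # toy square-lattice map of mini-columns
cols[::k, ::k] = True                   # idealised grid of active columns
sparsity = cols.mean()
print(f"one active column per {k}x{k} patch -> sparsity {sparsity:.1%}")
```

With these toy numbers the active fraction lands at 4%, in the ballpark of the ~5% mentioned; a true triangular lattice and different pitch assumptions would shift it somewhat.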

1 Like

I’m going to ask a couple of basic questions again just to be sure:

I’m trying really hard to relate this to how I understand grid cell modules to work. I think you are describing minicolumns as behaving exactly as grid cells behave inside a grid cell module. You’re hesitating to say that a hex grid is acting like a grid cell module. Why? Am I missing something?