Grids into Maps!

This is a companion to the post HTM Columns into Grids

Sure, Grids are cool - news flash - so are Maps! The reinforcement within and between maps goes a long way towards a general solution of the binding problem.

One of the “breakthroughs” for me is realizing that the cortical.io people form their SOM all in a single batch. While very powerful, this is not biologically plausible; the brain learns online as data is presented to it.

I am thinking that with an attractor model that forms as content is added (forming and shaping pools of attraction), the training set is simply the data you stream at it. A stream encoder to spatially distribute the training would be a key part of making this work.
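To make the contrast with batch training concrete, here is a minimal sketch of what online, streaming SOM-style learning could look like: the map is shaped one sample at a time as the stream arrives. This is not the attractor model itself, just a plain online SOM for contrast; the grid size, learning rate, neighborhood radius, and random input stand-in are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of online SOM-style learning: the map is shaped one sample
# at a time as the stream arrives, rather than in a single batch.
# Grid size, learning rate, and neighborhood radius are illustrative choices.

class OnlineSOM:
    def __init__(self, grid_shape=(20, 20), input_dim=64, lr=0.1, radius=3.0):
        self.weights = np.random.rand(*grid_shape, input_dim)
        self.coords = np.stack(np.meshgrid(
            np.arange(grid_shape[0]), np.arange(grid_shape[1]), indexing="ij"), axis=-1)
        self.lr, self.radius = lr, radius

    def update(self, x):
        # Find the best-matching unit (the current "pool of attraction").
        dists = np.linalg.norm(self.weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Pull the BMU and its neighborhood toward the sample.
        d = np.linalg.norm(self.coords - np.array(bmu), axis=-1)
        h = np.exp(-(d ** 2) / (2 * self.radius ** 2))[..., None]
        self.weights += self.lr * h * (x - self.weights)

# Usage: stream samples at the map as they arrive.
som = OnlineSOM()
for _ in range(1000):
    som.update(np.random.rand(64))   # stand-in for the real encoded stream
```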

The hex-grids signal meaning by the shapes of their activation. But really - what does that look like?

So what does it look like as pattern learning evolves from general to detailed? For this explanation, it’s easier to imagine one slice of the stream carrying the pattern of a leaf or some other element of a picture. This is just an example to aid in visualization - in the brain, the pattern learned is likely not to look like anything you can easily envision. Actual patterns are layers of patterns all jumbled together as palimpsests.

At first, it learns a blob that could be part of any pattern. As time goes by and it sees two different patterns, there is some disagreement at the edges of the blob and details start to fill in. This process is driven by learning what differs from what has already been learned, until detailed patterns build up for each type of sensed pattern. It may take some time, since learning is hard and only a little bit gets learned in each session. Then you have to sleep on it to consolidate this new learning.

This is how I see the pools forming in my mind’s eye. Again, the data at higher levels of representation would not look like a picture of an object.

These shapes are the unit of information that is passed from map to map.

But what do these shapes say?

One of the areas I spend some time thinking about is how grid fragments group in certain areas of the brain. Lesion studies are where someone has some mental deficit and, post-mortem, the brain is examined to see what was damaged. Over the years researchers have developed some good ideas on what is done where in the brain. It also points to units of mental processing: what does it look like if this or that part stops working? This has progressed to some workable theories as to what is parsed and stored, and where. If this interests you at all you should read this paper. I would love to go through it, as it dovetails wonderfully with the material I am presenting, but in the end there are so many good things in it that I would just end up reprinting the paper.

In the paper below, four semantic mechanisms are proposed and spelled out at the level of neuronal circuits:

  • referential semantics, which establishes links between symbols and the objects and actions they are used to speak about;
  • combinatorial semantics, which enables the learning of symbolic meaning from context;
  • emotional-affective semantics, which establishes links between signs and internal states of the body;
  • abstraction mechanisms for generalizing over a range of instances of semantic meaning.

Referential, combinatorial, emotional-affective, and abstract semantics are complementary mechanisms, each necessary for processing meaning in mind and brain.

How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics - Friedemann Pulvermüller

I have been noodling on how to form both the grammar and semantic content with the same training process.

The latest frisson of excitement to hit me on this is the post about a chatbot on another thread. In it, I referenced the “frames organization model” of world information; I don’t see any reason that this semantic representation could not be formed using the same process.

As the patterns are learned and distributed through the maps I expect them to cluster in some semantically useful natural classes. Studies show that this seems to be the case. This graphic is a small clue to how the connections in the connectogram group correlated things together.

From an infographics view, the populated semantic landscape looks something like this:

Maps are hard
All the maps - perhaps 100+ - are connected together with a complex system of loops of fibers. At first blush, this is a hideously complicated system that defies casual analysis. How will anyone ever map what is connected to what?


https://ac.els-cdn.com/S1053811913004709/1-s2.0-S1053811913004709-main.pdf?_tid=e5cebaf6-d552-4a39-a0dd-a10cccd2e8b6&acdnat=1521258659_b94d5522a452cc114b31c4ae422e3b64

The good people at the human connectome project have been working to tease out what is connected to what.
http://www.humanconnectomeproject.org/
More about the mapping project:
https://www.nature.com/nature/videoarchive/brain-map/index.html
If you want to use their tools to view the brain you can install the Connectome Workbench from here:
https://www.humanconnectome.org/software
Do work the tutorials first - this is about the most user-hostile software there is for just clicking on stuff to play with it. If you do everything right it can look like this:

Good tools to the rescue
One of the most compact forms of representation of these fiber-loop connections is the handy Connectogram; this is the same thing in a more digestible format with the endpoints labeled. I suggest that this will become a standard tool of the AI experimenter. I envision it as one of the standard data-flow views.


More on this tool:

And more on the people making and using this tool:

Let’s look a little closer. This shows the intercortical axon projections:


No - closer yet - at the individual local maps of the brain.

Zoom in even closer.
Let’s see the axon projections landing in an area of some map; some are weak or partial signals; they are part of the pattern sensed.

Here is what the local map receives as input:

They may even be changing in time. These patterns will be sniffed by the local grids, which may resonate with some learned pattern if they have seen it before; this resonance indirectly learns and shapes grid formation.
Here is what the local map generates as output:

This is the signal that is fired down the axons to other maps in the connectome.

All of this is the fuel that drives the formation of the Global Workspace. (See Below)

This is the programming that determines which modules will receive samples of this or that map to combine and learn. At this level, a brain theorist can start to show how networks are formed and trace through the contributions that different maps combine to form high-level, multidimensional connections.

I see this example as a different data view in the AI researcher’s toolbox, right alongside the connectogram.

What kind of programming model will the AI researcher be working with?
I have spent some time working out what the Visual Studio-style IDE for an AI researcher might look like. I think it will look something like this. Depending on who is looking and what they are trying to understand - different researchers have focused on different connection paths to work out this or that part of what the brain is doing.
The models will be something like 40% from area A and 10% from area B, combined with certain predefined network types. Each type will have tunable parameters set on the connectogram; perhaps as a popup properties panel. The network graph connections view (above) will allow examination of the activation or training patterns as they form.
I can see that picking a part of the connectogram or graph view will cause the corresponding parts in the other view to be highlighted and perhaps second and third order connections to be indicated in some way.
I don’t expect you to try to understand these pictures here - I am trying to convey that this will be the level of programming necessary to build an AGI. (Using Hexagons @Paul_Lamb )
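To give a flavor of what such a connectogram-driven specification might boil down to, here is a rough sketch in Python. Every module name, area name, network type, and parameter below is invented for illustration; it is only meant to show a module sampling stated fractions of its input from named maps, with tunable parameters attached as in the "popup properties panel" idea above.

```python
# Hypothetical sketch of a connectogram-driven model specification:
# a module samples a stated fraction of its input from named source maps
# and instantiates a predefined network type with tunable parameters.
# Every name and field here is invented for illustration.

model_spec = {
    "module": "temporal_pooler_1",
    "network_type": "hex_grid",          # one of the predefined network types
    "inputs": [
        {"source_map": "V1", "fraction": 0.40},   # ~40% of input from area A
        {"source_map": "V2", "fraction": 0.10},   # ~10% of input from area B
    ],
    "parameters": {                       # the "popup properties panel"
        "columns": 2048,
        "cells_per_column": 32,
        "permanence_increment": 0.05,
    },
}

def validate(spec):
    """Basic sanity check: input fractions should not exceed 1.0."""
    total = sum(i["fraction"] for i in spec["inputs"])
    assert total <= 1.0, f"input fractions sum to {total}"

validate(model_spec)
```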

Progress is being made!
There are some high-level paths that have been identified as a “backbone” for much of what goes on in the brain. This is called the “default network” and seems to be the “idle hum” of the brain engine.


If this interests you, more can be found here:

When the brain is running, patterns are combined and processed by directing a flow of attention (activation) that drives the cortex to examine and process the various streams of the external signal and to select motor programs as needed. These motor programs may unfold to drive further attention and processing.
When it’s working, it looks something like this:

As you may already know, I postulate that there is a loop of recognition using these mechanisms that let you recognize your own internal states.

In figure 2 of this paper, they outline the major brain centers as being involved in an ignition of activity - and the global workspace paths of activity follow the loop of self-experience I outlined above.

The contents of this perception are combined with internal need states in the frontal lobe. This unfolds into motor planning of some sort. This can be as little as adding emphasis to your perceived state and starting “ignition in the global workspace,” or simple unconscious attention. This shapes the contents of consciousness - pushing it dynamically. As you read this you can think of your left nipple; until you thought of it you were not aware of it, but you can direct attention to it to bring it into your global workspace. This is the flow-down of activation I was mentioning earlier. There are many things that you are not paying any attention to at all times. An insect biting you on the left nipple can also draw your attention to the area, showing that this process is quite fluid.

Some parts of your environment may match up with part of what your prefrontal cortex knows your limbic system needs, below the level of awareness (even if that need is just exploring), and this match starts a motor program - the eyes are directed to look at it, bringing it into greater awareness in the temporal lobe. One of your learned motor programs (one of the first a baby learns) is scanning an object with its eyes. Some say they learn to do that inside the womb. As you grow older you learn to play the old game of 20 questions with your eyes to identify what it is you are looking at; this is all a learned motor pattern in the forebrain.

If your perception matches up with one of the internal needs the motor program may even unfold to actions involving more of the body like your limbs, grasping, walking, eating, talking.

Naturally - there is a lot more to this but these are some of the key bits.

I’ve covered a huge swath of material here. If you have made it this far your head may be spinning trying to sort out what bits go where in all this.

Let’s set everything in its place:

  • The location of the columns that form the SDR processing nodes is fixed in space.
  • The loops of axons that connect the columns in one map to the next are likewise fixed.
  • The 0.5 mm-range interconnections between neurons in the same area of the cortex are fixed.
  • What the columns learn - using the proximal and distal dendrites - to recognize a bit of a spatial or temporal pattern is what changes in this system. This learning is stored in learned connections/synapses along the proximal and distal dendrites, which change as learning progresses; the dendrites may also change and grow. (See the sketch after this list.)
  • The learned pattern is enhanced by connections going “the other way” from the forebrain to form a global pattern in the global workspace. This highlights and elevates some global pattern into awareness.
  • These columns may interact with other columns via learned connections to organize into larger assemblies that take on the characteristics of grids.
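One way to picture the split between what is fixed and what is learned is the sketch below, loosely in the spirit of HTM-style permanence learning. The wiring (which input bits a column can ever see) is fixed; only the synapse permanences change with learning. The thresholds, increments, and sizes are illustrative values of my own, not anything taken from the sources above.

```python
import numpy as np

# Illustrative split between fixed wiring and learned state, loosely in the
# spirit of HTM permanence learning. The topology (which input bits this
# column can ever see) never changes; only the permanences are learned.

class Column:
    def __init__(self, potential_inputs, threshold=0.5):
        self.potential_inputs = np.asarray(potential_inputs)   # fixed in space
        self.permanences = np.random.rand(len(potential_inputs)) * 0.3
        self.threshold = threshold   # permanence needed to count as "connected"

    def overlap(self, input_bits):
        # How many connected synapses line up with active input bits.
        connected = self.permanences >= self.threshold
        return int(np.sum(input_bits[self.potential_inputs] & connected))

    def learn(self, input_bits, increment=0.05, decrement=0.02):
        # Strengthen synapses on active inputs, weaken the rest (clipped to [0, 1]).
        active = input_bits[self.potential_inputs].astype(bool)
        self.permanences = np.clip(
            self.permanences + np.where(active, increment, -decrement), 0.0, 1.0)

# Usage: a column watching 16 of 128 input bits.
col = Column(potential_inputs=np.random.choice(128, 16, replace=False))
bits = np.random.rand(128) > 0.9      # a sparse boolean input pattern
col.learn(bits)
print(col.overlap(bits))
```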

All the bits and bobs are fixed in space, but a kiss from the forebrain unites them into action; the forebrain gets its marching orders from the older lizard brain.

The lizard brain gets its view of the world both directly from the sensory streams and from the digested versions of the world projected back from the cortex.

Around and around it goes.

I call it a dumb boss/smart advisor model.
As I stated above, the lizard brain does instinctually what it can in the beginning, dragging the cortex along for the ride. As the cortex learns about the world, the projections back from the cortex learn to shape the lizard brain's actions to make it seem smarter. Add some fairy dust of external memory in the form of maternal nurturing and cultural knowledge (or herd knowledge for other critters) and you pretty much tie a pretty ribbon around the whole package.

Very much more on this later.

If you are eager to roll your own connectograms the code is out there.
A Perl version is at:
http://circos.ca/tutorials/lessons/recipes/cortical_maps/
I have been doing Perl for a long time so this was pretty intuitive to me. This is what it looks like if you get it right:

I understand that getting started with Perl can be daunting; if you are not up on Perl you may like other languages better. I see references to this being done in Python, but I don’t do much with OOP languages, so I never checked it out.
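If you would rather stay in Python, a rough stand-in for a circos-style connectogram can be put together with networkx and matplotlib: areas laid out on a circle, edge width scaled by connection strength. The area names and weights below are placeholders, not real connectome data.

```python
import matplotlib.pyplot as plt
import networkx as nx

# Rough Python stand-in for a circos-style connectogram: cortical areas
# arranged on a circle, edge width proportional to connection strength.
# Area names and weights below are placeholders, not real connectome data.

edges = [
    ("V1", "V2", 0.9), ("V2", "V4", 0.6), ("V4", "IT", 0.5),
    ("IT", "PFC", 0.4), ("PFC", "M1", 0.7), ("V1", "LGN", 0.8),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

pos = nx.circular_layout(G)          # place areas around a circle
widths = [3 * G[u][v]["weight"] for u, v in G.edges()]

nx.draw_networkx_nodes(G, pos, node_size=600, node_color="lightsteelblue")
nx.draw_networkx_labels(G, pos, font_size=8)
nx.draw_networkx_edges(G, pos, width=widths, edge_color="gray")
plt.axis("off")
plt.show()
```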
A Matlab tool:

And more Matlab

As always; I welcome your comments and ideas!

6 Likes

A localised Global Workspace?
Cortex or not?

These are only three neurons in the Claustrum:


source: nature

2 Likes

If you have been reading my posts you know that I am a firm believer in system-level mechanisms.
Something system-wide is surely going to involve all the related systems working together.

A component like the Claustrum, centrally located, and with connective reach across wide swaths of brain geography, is very likely to play roles in one or more system-wide functions. The “global workspace” is one of those system-wide functions.

From my reading, I am more inclined to take the thalamocortical interactions as a key starting point.

2 Likes

@Bitking: Thanks for all these materials about long-range cortico-cortical connections. I understand that your main focus was the inter-area connections between L2/3 cells.

It makes me think about the other long-range connections between areas. I have read some random papers on the topic, but didn’t find precise answers. I guess that this precise mapping is not well-known yet.

More specifically, I am looking for info on the differences between long-range:

  • Cortico-cortical connections in superficial vs deep layers,
  • and cortico-cortical vs cortico-thalamo-cortical connections (direct vs indirect pathways).

Do they project to the same areas? Are they grouped in the same axon bundles? Are they bidirectional?
Do you recommend specific papers on the subject?


PS: I came across this surprising figure from a thesis from 2010 and I am very suspicious about it:

https://tel.archives-ouvertes.fr/tel-00863803/document

They segregate 4 different types of cortico-cortical projecting neurons (in the visual cortex):

  • L3A: Short-distance feedback
  • L3B: Short and long-distance feedforward
  • L5: Short-distance feedforward
  • L6: Long-distance feedback

It would not fit with the hex-grid theory… but it was 8 years ago and maybe it doesn’t reflect the current understanding.

I like to conceptualize the cortical processing of a given area as agnostic of the feedforward/feedback nature of its inputs. It would make more sense!

Having read that a substantial proportion of pyramidal cells send axon collaterals to both lower and higher areas, I am more inclined to think that the previous figure doesn’t reflect the inter-area connections well. Agree?

3 Likes

It is more than a little disturbing that I have an extensive collection of papers on neural circuitry and mostly - they don’t show exactly the same thing.

I have often wondered if this is due to the researcher seeing what they wanted to see or if the different methods of staining show different organizations.

3 Likes

Having read a lot since the last post, I think I have now a better view of what is going on with inter and intra-area cortical connections.

I am beginning to put some order into my notes, and I chose to formalize them in a visual way to make them easier to digest, and to digitize them to share with you. The first slides are about those cortical connections. I am planning to do the same for a bunch of other subjects I came across…

I welcome any comment on my view of the functionality behind the inter- & intra-cortical connections, the speculations about the nature of representation in superficial vs deep layers, and the anatomical & developmental interpretation of feedforward/feedback.

8 Likes

Your drawings are prettier than mine - nice work!

Referring to some very crude drawings I made on another thread, how do you feel this fits into your proposed wrap-up?

I answered directly in the other thread:

I haven’t put the emphasis on inter and intra cortical connections in my response, so it is not very related to my drawing presented above.

1 Like

This feedforward is up the hierarchy through the thalamus, right? Can you explain the feedback connection in more detail?

1 Like

These feedforward and feedback connections are corticocortical connections. No thalamus involved.

The following illustration shows corticocortical connections between areas of similar cytoarchitecture (we can say similar hierarchical level to simplify):

If the represented cortical areas have different hierarchical levels, the connections are not symmetrical:

  • Corticocortical connections from granular to agranular tend to stop earlier in deep layers (commonly referred to as feedforward, but this term is sometimes confusing because we like to think of the prefrontal cortex as the higher level when, in fact, the main direction of flow is towards the motor cortex; see the next illustration).
  • Corticocortical connections from agranular to granular tend to finish more in L1 (commonly referred to as feedback).

Those different connection patterns come from temporal differences in cortical development between agranular (early) and granular (late) areas.


The feedforward pathway through the thalamus is an additional pathway completely different from this one (I am currently working on this slide).

You can have a look at this good illustration from Sherman (red arrows are the “ground truth” signal in the 3VS paper, but the predictions from L6 CT are not represented here)


6 Likes

Here it is:

7 Likes

Thanks, I guess CT cells are inhibitory?

All cells in the previous diagram are excitatory (including CT cells).

They are locally surrounded by inhibitory interneurons and it is not yet clear if excitatory cells project directly to other excitatory cells or to inhibitory interneurons. Both cases probably exist.

3 Likes

Would it be accurate for me to say that, by outputting the exact opposite of any input signal received (as in my vesicle membrane project), “I have been experimenting with the fundamental wave-generating behavior of reciprocal excitatory connections found in intra-area coupling”?

I can’t help but see what looks to me like the exact same thing drawn on the surface of your illustration:

Gary, if I understand this correctly, the wave action is coming from the thalamus to act as a coordinating control function, with the cortical connections carrying the data that is being synchronized.

I am not absolutely certain of this, as I have not spent enough time studying the thalamus, but it looks to me as if your work on waves would match up most closely with some inner working of the thalamus.

1 Like

I’m trying to picture how that would work. Unfortunately, the inner workings of the thalamus are still a complicated problem for me.

In the illustration the connections I’m looking at would only be the blue ones on the very surface. Everything else connected to it from below would change the behavior at that location, which in turn past that point changes the pattern of the traveling wave.

Although it’s hard to say whether it’s used as such: at the far end of each area, a 2D map would become 1D signals over time, sort of a unique address that depends on what was mapped onto the 2D area. How the whiskers of an animal were brushed would show up in the complex pattern that the barrel cortex cells end up propagating outward to others, a way for each area in the network to sense unique experiences that happen in the external 3D environment.

1 Like

A thought experiment for you to consider - how do the completely intermixed learned patterns embedded in the cortex all play out in the same wave pattern during experience and recall? What distinguishes them from one another?
In the scheme I am proposing, the wave interrogates the contents and coordinates the sender and receiver between maps with no regard to the contents, so the wave shape can be the same for both.

I can’t see how the contents form the wave; unless you can offer some explanation to tie the two mechanisms together, I can’t see how it would work.

1 Like

In this case I’m thinking more along the lines of: what does a single cell, which already has a good ability to predict and respond to events, get out of helping to propagate 2D (stadium) waves whose pattern resembles what is being sensed in the external environment?

I also see it as a “wave interrogates the contents.” In the ID Lab-6, the contents would be walls bashed into and prior memories of shock zone locations at that time, which stop propagating the waves being started by the location with food in it. The waves that bounce off or are absorbed by the mind theater’s walls and avoided locations produce a vector map showing all the safe places to travel, paths towards safety.

Which cells are you referring to? E.g. L5 CT and interneurons surrounding them, TRN, interneurons in other thalamic nuclei, etc.

You see that this is where the facts don’t support the concept. The videos that I have seen show the waves sweeping over the cortex without any diversions.

Thinking about this a little more deeply - the contents of individual maps/areas are a distributed representation. It is fair to say that the representation is really distributed up and down the entire hierarchy, but I will restrict that to a single map/area for this post. There is not a local “wall” to bounce off of. The wave has to sweep across the entire map/area to do whatever processing happens in that area of cortex. I expect that the idea of a wall is not fully formed until the wave completes a pass across the entire map/area.

I also see the processing of things like paths and goals as being far more distributed in time than a single map/area. I strongly suspect that going from perception to action involves the entire hierarchy up to the temporal lobe, then a pass through the subcortical structures out to the forebrain to be elaborated into action.
The various maps/areas have to decompose the sensations into feature clouds for recognition. This model does not do things like goal selection and object avoidance in single maps/areas.

I can easily see how a very much simpler life form could do the kind of processing the slime mold example offers, but by the time you get up to worms and insects, most brains have evolved far past this simple model.

1 Like