This is a companion to the post HTM Columns into Grids
Sure, grids are cool - news flash - so are maps! The reinforcement within and between maps goes a long way toward a general solution of the binding problem.
One of the “breakthroughs” for me is that the cortical.io people have formed the SOM all in a single batch. While very powerful, this is not biologically plausible; the brain learns online as data is presented to it.
I am thinking that with an attractor model formed as the content is added (forming and shaping pools of attraction), the training set is simply the data you stream at it. A stream encoder that spatially distributes the training would be a key part of making this work.
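To make the online-versus-batch distinction concrete, here is a minimal sketch of the kind of one-sample-at-a-time SOM update I have in mind. This is a plain Kohonen-style update in Python, not cortical.io's method; the grid size, learning rate, and neighborhood radius are illustrative values I picked for the example.

```python
import numpy as np

# Minimal online (one-sample-at-a-time) SOM update - a sketch of the idea,
# not cortical.io's batch method. Grid size, learning rate, and neighborhood
# radius are illustrative values chosen for this example.
class OnlineSOM:
    def __init__(self, rows, cols, dim, lr=0.1, radius=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((rows, cols, dim))        # the "pools of attraction"
        self.coords = np.dstack(np.mgrid[0:rows, 0:cols])   # fixed map coordinates
        self.lr, self.radius = lr, radius

    def train_step(self, x):
        """Fold one streamed input vector into the map as it arrives."""
        dists = np.linalg.norm(self.weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best matching unit
        # Neighborhood: units near the winner move toward the input,
        # shaping a local pool of attraction around repeated patterns.
        grid_dist = np.linalg.norm(self.coords - np.array(bmu), axis=2)
        influence = np.exp(-(grid_dist ** 2) / (2 * self.radius ** 2))
        self.weights += self.lr * influence[..., None] * (x - self.weights)

# Usage: stream encoded samples one at a time instead of training in one batch.
som = OnlineSOM(rows=20, cols=20, dim=64)
for _ in range(1000):
    som.train_step(np.random.random(64))   # stand-in for the encoded stream
```

The point is only that nothing here requires seeing the whole corpus up front; the map shapes itself as the stream goes by.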
The hex-grid signals meaning by the shapes of its activation. But really - what does that look like?
So what does it look like as pattern learning evolves from general to detailed? For this explanation, it’s easier to imagine one slice of the stream of a pattern of a leaf or some other element of a picture. This is just an example to aid in visualization - in the brain, the pattern learned is likely to not look like anything you can easily envision. Actual patterns are layers of patterns all jumbled together, like palimpsests.
At first, it learns a blob that could be part of any pattern. As time goes by and it sees two patterns, there is some disagreement on the edges of the blob and details start to fill in. This process is driven by learning what is different from what has already been learned, until detailed patterns build up for each type of sensed pattern. It may take some time, since learning is hard and only a little bit gets learned in each session. Then you have to sleep on it to consolidate the new learning.
This is how I see the pools forming in my mind’s eye. Again, the data at higher levels of representation would not look like a picture of an object.
These shapes are the unit of information that is passed from map to map.
But what do these shapes say?
One of the areas I spend some time thinking about is how grid fragments group in certain areas of the brain. Lesion studies are where someone has some mental deficit and, post-mortem, the brain is examined to see what was damaged. Over the years these studies have produced some good ideas on what is done where in the brain. They also point to units of mental processing. What does it look like if this or that part stops working? This has progressed to some workable theories as to what is parsed and stored, and where. If this interests you at all, you should read this paper. I would love to go through it here, as it dovetails wonderfully with the material I am presenting, but in the end there are so many good things in it that I would just end up reprinting the paper.
In the paper below, four semantic mechanisms are proposed and spelled out at the level of neuronal circuits:
- referential semantics, which establishes links between symbols and the objects and actions they are used to speak about;
- combinatorial semantics, which enables the learning of symbolic meaning from context;
- emotional-affective semantics, which establishes links between signs and internal states of the body;
- abstraction mechanisms for generalizing over a range of instances of semantic meaning.
Referential, combinatorial, emotional-affective, and abstract semantics are complementary mechanisms, each necessary for processing meaning in mind and brain.
How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics - Friedemann Pulvermüller
I have been noodling on how to form both the grammar and semantic content with the same training process.
The latest frisson of excitement to hit me on this is the post about a chatbot on another thread. In it, I referenced the “frames organization model” of world information; I don’t see any reason that this semantic representation could not be formed using the same process.
As the patterns are learned and distributed through the maps, I expect them to cluster in some semantically useful natural classes. Studies show that this seems to be the case. This graphic is a small clue to how the connections in the connectogram group correlated things together.
From an info-graphics view the populated semantic landscape looks something like this:
Maps are hard
All the maps - perhaps 100+ - are connected together with a complex system of loops of fibers. At first blush, this is a hideously complicated system that defies casual analysis. How will anyone ever map what is connected to what?
https://ac.els-cdn.com/S1053811913004709/1-s2.0-S1053811913004709-main.pdf?_tid=e5cebaf6-d552-4a39-a0dd-a10cccd2e8b6&acdnat=1521258659_b94d5522a452cc114b31c4ae422e3b64
The good people at the human connectome project have been working to tease out what is connected to what.
http://www.humanconnectomeproject.org/
More about the mapping project:
https://www.nature.com/nature/videoarchive/brain-map/index.html
If you want to use their tools to view the brain you can install the Connectome Workbench from here:
https://www.humanconnectome.org/software
Do work through the tutorials first - this is about the most user-hostile software there is for just clicking on stuff to play with. If you do everything right, it can look like this:
Good tools to the rescue
One of the most compact representations of these fiber-loop connections is the handy connectogram; this is the same thing in a more digestible format with the endpoints labeled. I suggest that this will become a standard tool of the AI experimenter. I envision it as one of the standard data-flow views.
And more on the people making and using this tool:
Let’s look a little closer. This shows the intercortical axon projections:
No - closer yet - at the individual local maps of the brain.
Zoom in even closer.
Let’s see the axon projections landing in an area of some map; some are weak or partial signals; they are part of the pattern sensed.
Here is what the local map receives as input:
These patterns may even be changing in time. They will be sniffed by the local grids and may resonate with some learned pattern if it has been seen before; this resonance indirectly drives learning and shapes grid formation.
Here is what the local map generates as output:
This is the signal that is fired down the axons to other maps in the connectome.
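To give a rough feel for that “sniffing,” here is a hedged sketch: compare the arriving bit pattern against the map’s stored patterns by SDR overlap, strengthen the best match a little, and pass the result down the axons. The overlap threshold and the size of the nudge are my own illustrative choices, not values from any particular HTM implementation.

```python
import numpy as np

# Sketch: a local map receives a (possibly weak or partial) input pattern,
# checks it against the patterns it has already learned by SDR overlap,
# and nudges the best match toward the input. Threshold and nudge size
# are illustrative placeholders.
def resonate(input_bits, learned_patterns, threshold=20, nudge=0.1):
    """Return index of the resonating pattern (or None) and the map's output."""
    best_idx, best_overlap = None, 0
    for i, p in enumerate(learned_patterns):
        overlap = int(np.sum(input_bits & (p > 0.5)))   # shared active bits
        if overlap > best_overlap:
            best_idx, best_overlap = i, overlap
    if best_idx is not None and best_overlap >= threshold:
        # Resonance: reinforce the matching pattern and fire it down the axons.
        learned_patterns[best_idx] += nudge * (input_bits - learned_patterns[best_idx])
        return best_idx, (learned_patterns[best_idx] > 0.5).astype(np.uint8)
    # No resonance: pass the raw pattern along (and perhaps start learning it).
    return None, input_bits
```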
All of this is the fuel that drives the formation of the Global Workspace (see below).
This is the programming that determines which modules will receive samples of this or that map to combine and learn. At this level, a brain theorist can start to show how networks are formed and trace through the contributions that different maps combine to form high-level dimensional connections.
I see this example as a different data view in the AI researcher’s toolbox, right alongside the connectogram.
What kind of programming model will the AI researcher be working with?
I have spent some time working out what the Visual Studio-style IDE for an AI researcher might look like. I think it will look something like this. Depending on who is looking and what they are trying to understand, different researchers have focused on different connection paths to work out this or that part of what the brain is doing.
The models will be something like 40% from area A and 10% from area B, combined with certain predefined network types. Each type will have tunable parameters set on the connectogram, perhaps as a popup properties panel. The network graph connections view (above) will allow examination of the activation or training patterns as they form.
I can see that picking a part of the connectogram or graph view will cause the corresponding parts in the other view to be highlighted, and perhaps second- and third-order connections to be indicated in some way.
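As a hedged sketch of what such a model definition might look like under the hood of that IDE - the area names, sampling fractions, and network types below are placeholders I invented for illustration:

```python
# Hypothetical model definition an AI researcher might edit from the
# connectogram's properties panel. Area names, sampling fractions, and
# network types are invented placeholders for illustration.
module_spec = {
    "name": "object_identity_module",
    "inputs": [
        {"area": "A", "sample_fraction": 0.40},   # "40% from area A"
        {"area": "B", "sample_fraction": 0.10},   # "10% from area B"
    ],
    "network_type": "hex_grid_attractor",          # one of the predefined types
    "parameters": {                                # tunable from the popup panel
        "columns": 2048,
        "cells_per_column": 32,
        "learning_rate": 0.05,
    },
    "outputs": ["workspace_bus"],                  # where its grid signal projects
}
```

The point is that the researcher edits this kind of recipe from the connectogram and graph views rather than wiring anything by hand.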
I don’t expect you to try to understand these pictures here - I am trying to convey that this will be the level of programming necessary to build an AGI. (Using Hexagons @Paul_Lamb )
Progress is being made!
There are some high-level paths that have been identified as a “backbone” for much of what goes on in the brain. This is called the “default network” and seems to be the “idle hum” of the brain engine.
If this interests you, more can be found here:
When the brain is running, patterns are combined and processed, directing a flow of attention (activation) that drives the cortex to examine and process the various streams of the external signal and to select motor programs as needed. These motor programs may unfold to drive further attention and processing.
When it’s working, it looks something like this:
As you may already know, I postulate that there is a loop of recognition, using these mechanisms, that lets you recognize your own internal states.
In figure 2 of this paper, they outline the major brain centers as being involved in an ignition of activity - and the global workspace paths of activity follow the loop of self-experience I outlined above.
The contents of this perception are combined with internal need states in the frontal lobe. This unfolds into motor planning of some sort. This can be as little as adding emphasis to your perceived state and starting “ignition in the global workspace,” or simple unconscious attention. This shapes the contents of consciousness, pushing it dynamically. As you read this you can think of your left nipple; until you thought of it you were not aware of it, but you can direct attention to it to bring it into your global workspace. This is the flow-down of activation I was mentioning earlier. There are many things that you are not paying any attention to at any given time. An insect biting you on the left nipple can also draw your attention to the area, showing that this process is quite fluid.
Some parts of your environment may match up with something your prefrontal cortex knows your limbic system needs, below the level of awareness (even if that need is just exploring), and this match starts a motor program - the eyes are directed to look at it, bringing it into greater awareness in the experience in the temporal lobe. One of your learned motor programs (one of the first a baby learns) is scanning an object with its eyes. Some say babies learn to do that inside the womb. As you grow older you learn to play the old game of 20 questions with your eyes to identify what it is you are looking at; this is all a learned motor pattern in the forebrain.
If your perception matches up with one of the internal needs the motor program may even unfold to actions involving more of the body like your limbs, grasping, walking, eating, talking.
Naturally - there is a lot more to this but these are some of the key bits.
I’ve covered a huge swath of material here. If you have made it this far, your head may be spinning trying to sort out what bits go where in all this.
Let’s set everything in its place:
- The location of the columns that form the SDR processing nodes is fixed in space.
- The loops of axons that connect the columns in one map to the next are likewise fixed.
- The 0.5 mm range interconnections between neurons in the same area of the cortex are fixed.
- What changes in this system is what the columns learn - using the proximal and distal dendrites - to recognize a bit of a spatial or temporal pattern. This learning is stored in learned connections/synapses along the proximal and distal dendrites, which change as learning progresses. The dendrites may also change and grow. (See the sketch after this list.)
- The learned pattern is enhanced by connections going “the other way” from the forebrain to form a global pattern in the global workspace. This highlights and elevates some global pattern into awareness.
- These columns may interact with other columns via learned connections to organize into larger assemblies that take on the characteristics of grids.
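To pin down the fixed-versus-plastic split in the list above, here is a minimal sketch; the field names and permanence numbers are my own, loosely in the spirit of HTM rather than taken from any specific implementation.

```python
from dataclasses import dataclass, field

# Sketch of the fixed-vs-plastic split described in the list above.
# Field names are illustrative; permanence-style learning loosely follows HTM.
@dataclass
class Synapse:
    presynaptic_cell: int          # which cell this synapse listens to
    permanence: float = 0.2        # THIS is what learning changes

@dataclass
class Column:
    position: tuple                                   # fixed location in the map
    neighbors: list = field(default_factory=list)     # fixed ~0.5 mm local links
    map_links: list = field(default_factory=list)     # fixed axon loops to other maps
    proximal: list = field(default_factory=list)      # Synapse objects - plastic
    distal: list = field(default_factory=list)        # Synapse objects - plastic

    def learn(self, active_inputs, increment=0.05, decrement=0.02):
        """Only permanences move; the wiring skeleton above stays fixed."""
        for syn in self.proximal + self.distal:
            if syn.presynaptic_cell in active_inputs:
                syn.permanence = min(1.0, syn.permanence + increment)
            else:
                syn.permanence = max(0.0, syn.permanence - decrement)
```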
All the bits and bobs are fixed in space, but a kiss from the forebrain unites them into action; the forebrain gets its marching orders from the older lizard brain.
The lizard brain gets its view of the world both directly from the sensory streams and from the digested versions of the world projected back from the cortex.
Around and around it goes.
I call it a dumb boss/smart advisor model.
As I stated above, the lizard brain does instinctually what it can in the beginning, dragging the cortex along for the ride. As the cortex learns about the world, the projections back from the cortex learn to shape the lizard brain's actions to make it seem smarter. Add some fairy dust of external memory in the form of maternal nurturing and cultural knowledge (or herd knowledge for other critters) and you pretty much tie a pretty ribbon around the whole package.
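As a toy illustration of the dumb boss/smart advisor loop - the class and method names here are mine, and this is a cartoon of the direction of the projections, not a model of either structure:

```python
# Cartoon of the dumb boss / smart advisor loop. Names are illustrative;
# this only shows the flow of control between the two structures.
class Cortex:
    def __init__(self):
        self.learned = {}                     # stand-in for learned world knowledge
    def project_back(self, sensed):
        return self.learned.get(sensed)       # digested version of the world, if any
    def learn(self, sensed, outcome):
        self.learned[sensed] = outcome        # slowly gets smarter

class LizardBrain:
    def act(self, sensed, advice):
        # The boss acts on instinct unless the advisor has something better.
        return advice if advice is not None else f"instinct response to {sensed}"

cortex, lizard = Cortex(), LizardBrain()
for stimulus in ["bright light", "food smell", "bright light"]:
    advice = cortex.project_back(stimulus)    # cortex -> lizard brain projection
    action = lizard.act(stimulus, advice)     # lizard brain issues marching orders
    cortex.learn(stimulus, action)            # cortex digests what happened
    # Around and around it goes.
```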
Very much more on this later.
If you are eager to roll your own connectograms the code is out there.
A Perl version is at:
http://circos.ca/tutorials/lessons/recipes/cortical_maps/
I have been doing Perl for a long time so this was pretty intuitive to me. This is what it looks like if you get it right:
I understand that getting started with Perl can be daunting; if you are not up on Perl you may like other languages better. I have seen references to the same thing being done in Python, but I don’t do much with OOP languages so I never checked it out.
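If Circos feels like a big first bite, note that the data it consumes boils down to a table of weighted region-to-region links. Here is a hedged Python sketch that builds such a table from a square connectivity matrix; the file names and output columns are illustrative, and you would adapt the writer to whatever link format your plotting tool actually expects.

```python
import csv

# Sketch: turn a square connectivity matrix (CSV with region names in the
# header row and first column) into a weighted edge list that a
# connectogram/chord tool can ingest. File names and output format are
# illustrative - check your tool's docs for the exact link syntax.
def matrix_to_links(matrix_csv, links_out, threshold=0.1):
    with open(matrix_csv, newline="") as f:
        rows = list(csv.reader(f))
    regions = rows[0][1:]                      # header: blank cell, then region names
    with open(links_out, "w", newline="") as out:
        writer = csv.writer(out, delimiter="\t")
        writer.writerow(["region_a", "region_b", "weight"])
        for i, row in enumerate(rows[1:]):
            for j, value in enumerate(row[1:]):
                weight = float(value)
                if j > i and weight >= threshold:   # upper triangle, drop weak links
                    writer.writerow([regions[i], regions[j], f"{weight:.3f}"])

# Example (hypothetical file names):
# matrix_to_links("connectivity.csv", "links.tsv")
```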
A Matlab tool:
And more Matlab
As always; I welcome your comments and ideas!