Mechanism for different neuron modules to agree on common SDR schema?

I just can’t figure this out.

Say moduleA has a designated 1000 neurons and moduleB another 1000 neurons, and somehow they manage to speak the same SDR schema, so that the spiking of 20 neurons in moduleA is interpreted as the same information as the spiking of a specific 20 neurons in moduleB.
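To make the question concrete, here is a toy sketch in Python (purely my own illustration, not any existing HTM/NuPIC API): each module's SDR is just the set of indices of its ~2% active cells, and nothing in the bits themselves says that two codes mean the same thing.

```python
import random

N_CELLS = 1000   # cells per module
N_ACTIVE = 20    # roughly 2% sparsity

def random_sdr(rng):
    """An SDR as the set of indices of the currently active cells."""
    return set(rng.sample(range(N_CELLS), N_ACTIVE))

rng_a, rng_b = random.Random(1), random.Random(2)
cat_in_module_a = random_sdr(rng_a)   # how moduleA happens to encode "cat"
cat_in_module_b = random_sdr(rng_b)   # how moduleB happens to encode "cat"

# The two codes are supposed to mean the same thing, but nothing in the bit
# patterns themselves reveals that; their overlap is only chance-level.
print(len(cat_in_module_a & cat_in_module_b))   # typically 0 or 1
```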

Then, from the point of view of a neuron moduleZ, what algorithm can be used to learn its synaptic configuration, so that the two groups of 20 neurons from moduleA and moduleB respectively eventually come to signal the same information?

Furthermore, perhaps in the same way that moduleA and moduleB came to their agreement, what algorithm could lead a new neuron moduleC, intending to speak the same SDR, to that goal?

Many Spatial Poolers to serve as adapters?

Or does the hippocampus somehow maintain 1000 neurons serving as the input of moduleZ, actively switching between the respective 1000 (though individually different) neurons of moduleA/moduleB/moduleC depending on the task? Then, unless the relative physical positions of those 1000 neurons are fixed in some way, how can it tell which individual neuron should take the place of its counterpart?

2 Likes

Maybe this is really another question: how can abstract concepts conveyed in the form of SDRs flow around concrete neurons and get interpreted correctly?

Have you considered that the connections between the modules are fixed?

That is, when a particular code is signaled, the same connections and synapses convey the same information each time. What is learned on the receiving end is the same each time the particular combination of bits that make up the SDR is presented.

From each sensory stream the same elements are presented with the same pattern in the same position. The cortex may learn many, many related presentations of the same object and learn to group them, but at a local level a particular pattern forms a fixed SDR to be passed up the hierarchy.
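A crude numeric sketch of what I mean (my own toy illustration, not code from any HTM library): give the receiving module a fixed, random wiring to the sender, and a given input pattern then always drives the same downstream cells, so the SDR it forms for that pattern is stable.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT, K = 1000, 1000, 20                   # cells per module, K winners kept sparse
connections = rng.random((N_OUT, N_IN)) < 0.05    # fixed random wiring, never changes

def receive(input_sdr):
    """Feedforward through the fixed wiring, keep the K most-driven cells."""
    x = np.zeros(N_IN)
    x[list(input_sdr)] = 1.0
    drive = connections @ x
    return set(np.argsort(drive)[-K:])

pattern = set(rng.choice(N_IN, size=K, replace=False).tolist())
# Because the wiring is fixed, the same input pattern maps to the same output SDR.
assert receive(pattern) == receive(pattern)
```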

The fact that a given neuron has many synapses that may learn these closely related inputs, and that its output axon branches to present to many different neurons, is part of how this grouping occurs in the brain.

One of the things to think about when considering how information is coded is the palimpsest. Think of the eye. As you look around, the vision processors are presented with different snapshots with each visual saccade, each overlaying the last one that was seen. The low-level mechanism that drives eye pointing uses relatively stereotyped scanning patterns. At each presentation, feature detectors over the entire visual field are firing and voting laterally to form a sparse firing pattern that "matches" the presentation.

The sparse pattern that is formed is presented to the next map/area, which is not only doing pattern matching but is also taking the temporal aspect of which patterns were presented prior to the current pattern into account to form its SDR. The stereotyped scanning patterns ensure that for a given object the sequence of temporal presentations is the same each time. This sparse spatial/temporal pattern is what is fed to the association areas to build up to object recognition.
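To put the stereotyped-scanning point in toy code (a sketch of my own, with made-up feature names, not anything biological or from a library): if the scan path over an object is always the same, the sequence of feature codes it produces is the same, and it is that repeatable sequence that the next map/area can learn as a temporal pattern.

```python
# Hypothetical sparse feature codes produced at each fixation point on a "cup".
FEATURES = {"rim": {3, 57, 310}, "handle": {9, 412, 801}, "base": {44, 92, 655}}

SCAN_PATH = ["rim", "handle", "base", "rim"]   # stereotyped saccade pattern

def scan(object_features):
    """One pass of the scan path: a repeatable sequence of sparse codes."""
    return [frozenset(object_features[f]) for f in SCAN_PATH]

# Because the scan path is fixed, every look at the cup yields the same
# sequence, which is what downstream areas can learn as a temporal pattern.
assert scan(FEATURES) == scan(FEATURES)
```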

In HTM, the Hierarchy and Temporal parts are important elements of framing the SDRs to make sense of what is being signaled.

The counter-flowing signal paths are key to providing context to disambiguate the many possible matches that exist. As you try to wrap your head around how this context thing works, consider the cocktail party effect to see it in action. In vision, you can feel the "flop" between contexts in the Necker cube.

2 Likes

I can roughly grok how the configuration of fixed connections achieves recognition tasks. But what I have in mind is a more advanced task like imagination / mental simulation, e.g.

I first place a circle on the imaginary canvas in my mind, then duplicate it to make 2 circles, and have them move apart from each other.

With this stage set, now I perform this very action: to feel the “roundness” of each circle there.

Mathematically, the "roundness" can be described as every point on the circle being the same radius away from the invisible center of the circle, but I can just "feel" it without any such labor.

My question is: the 2 circles obviously have exactly the same "roundness" in my mind, so how is this "sameness" represented in my mind?

If the 2 equivalent "roundnesses" are represented by 2 different groups of neurons, then I must have allocated the 2nd group of neurons and trained them to be the same as the 1st group when I duplicated the circle. Besides the huge overhead of doing it this way, the conceptual roundness would never settle in my mind: the next time I see the full moon, how could I perceive its "roundness" as being about the same as that of an imaginary circle?

If "roundness" is represented by a Hebbian memory recallable via some SDR, then the 2 imaginary circles in my mind would have to signal the same SDR separately; how is that done?

You can think of it this way: what we "see" as one thought is not encoded as one SDR but as a succession (or loop?) of a few SDRs.
Some represent a circle, one its position, another the "canvas", and maybe some others encode bilateral relationships between "parts".

We can't know if this is actually what happens, yet if you're looking for means to encode complex scenes in a machine, lots of ideas may pop out.
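For instance, one toy way to play with that idea in code (my own illustration only, nothing biological or from HTM): encode the scene as a small loop of (role, SDR) pairs that are replayed in sequence, so "circle", its two locations, and the "canvas" are separate codes visited one after another, with the same "circle" code reused for both instances.

```python
import random

rng = random.Random(42)
sdr = lambda: frozenset(rng.sample(range(1000), 20))   # a random 2%-sparse code

# The imagined scene as a loop of (role, code) pairs rather than one big SDR.
CIRCLE = sdr()               # the concept "circle", reused for both instances
scene_loop = [
    ("canvas",   sdr()),
    ("object",   CIRCLE),
    ("location", sdr()),     # where circle 1 sits
    ("object",   CIRCLE),    # the same "roundness" code, visited again
    ("location", sdr()),     # where circle 2 sits
]

for role, code in scene_loop:    # replaying the loop = attending to each part in turn
    print(role, sorted(code)[:3], "...")
```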

1 Like

I think if you imagine one circle at a time, it’s probably the same group of neurons. To imagine both circles at the same time, I think you need to imagine a scene with two circles.

For example, the bigger circle might be down and to the left relative to the smaller circle. I think that’s the same as seeing two instances of the same object in real life. So the question could be, how can you perceive two instances of the same object at the same time?

When you see two circles at the same time, they are at different parts of your field of view. Different patches of your retina see each circle. For example, one circle might be down and to the left relative to the other circle.

Patches of the retina map to patches of the cortex, so different patches of the cortex recognize each circle.

So maybe the question could be, how can you recognize the same object when two different patches of the retina see it? That’s a major topic in the thousand brains theory. It’s probably not fully solved, and I don’t think I can explain it well enough.

1 Like

In considering 2 objects in imagination you are using a serial process. Also, this mental manipulation uses maps/areas very far away from the sensory receptor areas, in the areas by the posterior temporal lobe.

These "high level" areas process things in serial order, using the same cortical areas with other related maps holding parts of the process as placeholders. I maintain that these serial processes are performed at the "loop of consciousness" level and are outside of the scope of the basic operations performed at the individual map/area level.

2 Likes

I think that in biology module C would learn different connections from presynaptic modules A and B, and so it would have a lot of what you consider to be “redundant” connections.

But consider that module A and B receive different sensory inputs, and so they're doing different things. The two modules can still help each other. One module might analyse the visual sensation of an object and the other module might analyse the tactile sensation of that same object. So each module has a very different representation of the same object, and the downstream region forms different synapses to each sensory modality.

Even two patches of the retina are receiving very different inputs due to the log-polar transform that happens in the eye’s lens.
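A minimal sketch of what I mean (purely illustrative, with made-up names and sizes): module C keeps a separate set of synapses for each presynaptic module, but Hebbian-style association ties both sets to the same group of module C cells, so either sensory stream alone can evoke that group.

```python
import numpy as np

N = 1000
rng = np.random.default_rng(0)

visual_sdr   = rng.choice(N, 20, replace=False)   # moduleA's code for the object
tactile_sdr  = rng.choice(N, 20, replace=False)   # moduleB's code for the same object
object_cells = rng.choice(N, 20, replace=False)   # moduleC cells that will represent it

# Separate synapse matrices: "redundant" connections, one set per input module.
syn_from_a = np.zeros((N, N))
syn_from_b = np.zeros((N, N))

# Hebbian-style association: cells that are active together get connected.
syn_from_a[np.ix_(object_cells, visual_sdr)]  = 1.0
syn_from_b[np.ix_(object_cells, tactile_sdr)] = 1.0

def recall(sdr, synapses, threshold=10):
    x = np.zeros(N)
    x[sdr] = 1.0
    return np.flatnonzero(synapses @ x >= threshold)

# Either modality alone now evokes the same moduleC cells.
assert set(recall(visual_sdr, syn_from_a)) == set(object_cells)
assert set(recall(tactile_sdr, syn_from_b)) == set(object_cells)
```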

3 Likes

I've just come to realize that brain waves happen even when the wetware is idle (I'm quite a newbie here and just a neuroscience hobbyist), so yes, static SDRs are very probably not the representation of abstract concepts.

Then it seems even more difficult to understand how concepts are represented in the brain, whether in the form of SDR sequences or the patterns of change thereof.

I'm now clueless about how the "sameness" gets established between different neuron groups (or cortical columns?) if different SDRs are learnt separately.

My naive assumption when I asked the original question was that the same SDR format is shared, but now I'm sure that's absolutely not the case, after learning from your answers.

@Casey you expressed my question much better than I could have. I've bought the "Thousand Brains" book and am actively reading it; a delightful experience, and much easier than grokking the papers alone.

1 Like

My gut feeling now tells me that abstract concepts may much more probably be represented by features embodied in some relevant processes (neural oscillations or brain waves), instead of static data structures (like SDRs).

I'm a software engineer, and in software engineering UML approaches are utterly the way to specify how a system should work. I'm not comfortable with the idea that SDRs are not universal "data types" that could be used for the exchange of information among different components, but obviously I need new thought frameworks for understanding how the brain works.

I'd like to learn more about the "loop of consciousness" beyond the linked post; please share more pointers toward this idea.

@dmac now I know that "redundant" connections could be the norm, with each cortical column possessing "whole object information", after reading the first chapters of "Thousand Brains". But it still bugs me how separate CCs come to realize that they are sensing the "same" object. E.g. take 2 columns receiving sensory input from 2 adjacent patches of the retina: after a perfect eye-move they receive exactly the same optically-produced patterns of sensory input, yet they are still different groups of neurons. In what way can they compare the information and come to the same conclusion?

They don’t see the same thing, they see different parts of the whole image at the same time. This is where the lateral voting comes in to connect the different parts that are being recognized at the same time.

By what means does "lateral voting" mean "same object"? ("Thousand Brains" might have the answer, but I haven't read through it yet.)

The lateral voting is between cortical columns: as each is sensing some part of a total object, they communicate with other CCs to say that they recognize this little bit of an object.
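A highly simplified sketch of how I picture that voting step (my own toy model with made-up object names, not code from Numenta): each column keeps the set of objects that are consistent with its own little sensation, and the lateral connections effectively intersect those candidate sets until one object remains.

```python
# Each column, from its own patch of input, can only narrow things down to a
# set of candidate objects; no single column "knows" the answer on its own.
column_1_candidates = {"coffee cup", "soda can", "pencil"}   # felt a curved surface
column_2_candidates = {"coffee cup", "cereal bowl"}          # felt a rim
column_3_candidates = {"coffee cup", "soda can"}             # saw a vertical edge

# Lateral voting: columns keep only the objects that other columns also support.
consensus = column_1_candidates & column_2_candidates & column_3_candidates
print(consensus)   # {'coffee cup'}: the "same object" is whatever survives the vote
```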

There is no one place where the "loop of consciousness" is detailed. I got this from reading hundreds of papers and connecting the dots.

If CCs are separate groups of neurons, by what mechanism can they agree/disagree on whether they are sensing the "same" object or not?

I learned that there should not exist a single audience in the Cartesian theater to interpret the voting result, but at least there should be some (side) effects serving the interpretation purpose; what are these effects?

Keep in mind that trying to understand all of the brain's activities by focusing on a small part can be confusing.

A steering wheel is clearly important to directional control of a car but it would be difficult to understand how the steering wheel navigates from my house to my work site; there is more to it.

Likewise, I have mentioned the hierarchy and the counter-flowing contextual stream as part of the extraction of meaning from the sensory streams.

As far as the voting aspect goes, I will direct you to one of my older posts that details what I believe is the object identification task as performed in the parietal region.

Please note that several labs have confirmed the hexagonal signature in several cortical areas since I put this post up.

1 Like

The two neurons will form synapses with each other if they are often active at the same time. Neurons use Hebbian learning to look for correlations between their inputs and their own activity.

The two groups of neurons should often activate at the same time or in rapid succession, which should cause synapses to form between the two groups.
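A bare-bones sketch of that rule in code (illustrative only; the sizes and learning rate are made up): every time two cells fire in the same step, the synapse between them is strengthened a little, so two groups that repeatedly co-activate end up strongly interconnected.

```python
import numpy as np

N = 1000
weights = np.zeros((N, N))        # weights[i, j]: synapse from cell j onto cell i
LEARNING_RATE = 0.1

def hebbian_step(pre_active, post_active):
    """Strengthen synapses between cells that fired together on this step."""
    weights[np.ix_(post_active, pre_active)] += LEARNING_RATE

group_a = list(range(0, 20))      # 20 cells in one group
group_b = list(range(500, 520))   # 20 cells in the other group

for _ in range(10):               # the two groups repeatedly co-activate...
    hebbian_step(group_a, group_b)
    hebbian_step(group_b, group_a)

# ...so strong reciprocal connections now exist between them.
print(weights[group_b[0], group_a[0]])   # ~1.0 after ten co-activations
```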

1 Like