What happens in layers 5a and 6a?

Not directly answering your question but adding a different and related question:

The attention I see is focused strictly on “displacement” or “object” cells and how L2/3 fits into this picture.

Why are there no questions about how this fits into map-2-map connections, thalamic-2-cortical connections, attention mechanisms, wave activation, and the other functions that are known to happen in the cortex?

For that matter - what about all the “other” cell types in the cortex? Why is nobody asking how they fit into this picture?

I’ve briefly read some of your posts in other threads addressing that problem in the overall HTM theoretical model that they’ve developed.

Do you think they are missing something in their L2/3 model that would prevent them from extending the thalamic-2-cortical connection for optimal performance between the layers?

2 Likes

Most of the cortical-2-cortical map connections involve L2/3.

As far as the thalamic-2-cortical connections please note that there are multiple circuits.

A major one terminates in L4. Speculation is that this circuit is involved in attention and in synchronizing waves of activity. HTM theory requires a synchronized “before” and “after” to work, and these waves fit the bill perfectly.
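To make that “before”/“after” requirement concrete, here is a minimal transition-learning sketch in Python - my own toy framing, not Numenta’s implementation - which assumes some global wave or timestep boundary cleanly separates the t-1 activity from the t activity:

```python
import numpy as np

N = 128                    # number of cells
W = np.zeros((N, N))       # W[i, j]: synapse strength from cell i to cell j

def learn_transition(before, after, lr=0.05):
    """Hebbian-style update: cells active at t-1 (the "before" wave)
    grow synapses onto cells active at t (the "after" wave)."""
    W[np.ix_(before, after)] += lr

def predict(before, threshold=0.2):
    """Cells receiving enough drive from the "before" set are predicted."""
    drive = W[before].sum(axis=0)
    return np.flatnonzero(drive > threshold)

# Toy sequence A -> B, presented a few times
A = np.array([1, 5, 9])
B = np.array([2, 6, 10])
for _ in range(10):
    learn_transition(A, B)

print(predict(A))   # -> [ 2  6 10 ]: B is predicted to follow A
```

Without a clean boundary between the two waves, the “before” and “after” sets blur together and there is no transition to learn; that is why the synchronization fits so well.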

The other circuit involves the lower layers and seems to be involved in hierarchy, originating from connections from the forebrain and, through that, from the sub-cortical circuits. Mixing in a huge level of speculation would suggest that this is orders/requests/filters from the dumb boss being distributed to the smart cortical adviser. In this model the smart adviser responds by providing a greatly digested version of the senses to the temporal lobe and, through that, back to the EC/hippocampus/amygdala.

Are there any clues that something similar is happening in lower mammalian brains also? Would a brain be capable of “working” if it didn’t have that temporal-lobe highway through the EC/hippocampus/amygdala?

What are your thoughts on why the sensory input passes through the thalamus? Do you think it does something different from the CTC connection to L4?
I see the feedback signal from higher-order thalamus and non-lemniscal thalamus (matrix?) to primary cortex outside L4 (L2/3, L5 ST cells, or septa in L4) as better suited to attention, because it can be facilitating for some of those, like L6 CT signals, which are thought to be involved in attention.

Which connections are you referring to specifically?
I’m starting to think that one type of L6 CT cell corresponds to a thalamic pathway which serves the initiation of the where pathway in primary cortex, tightly tied to motor copy signals, proprioception, or similar. This part of the thalamus might be modulated by the sensory input the same way that L6 CT cells modulate all thalamic cells.

For the primary cortex, maybe the dumb boss is subcortical where/how-related structures and the smart adviser is the sensory input and cortical feedback. A point on a map of saccade targets is a lot simpler/dumber than a representation of lines and whatever else, but needs advising on which points on that map are good targets for saccades.

1 Like

I am only familiar with a limited set of critters (rat, cat, primate, corvid) with regard to the mammal and avian brains. As far as I know, this is a common layout for all of them; the limbic system is well preserved throughout the mammal family.

While we are at it - how about the amygdala?

@gmirey may be able to shed more light on this question.

As far as “working” goes, this is an interesting question. Patient HM lost his hippocampus after forming a working set of memories. He was not able to form new memories, but from what I have read he was able to talk and reason without a functioning hippocampus. I imagine that this would not have worked as well in an infant without working memories to draw on to survive.

  1. The thalamus is an older structure; earlier this was all there was.

  2. One of the properties that I mentioned has a prior post that goes into this in some depth; see numbered references #4 through #8, with a focus on #8 for my statement:

I have trouble pointing to a single reference on the attention thing. I have read a bunch of papers since the original “searchlight of attention” proposal, and most of them have refined it by patching up the missing or wrong bits. Of course, there are the counter-argument papers that point out the problems without proposing any alternatives.

1 Like

I could try to work through the top-down (reverse?) stream in the sub-cortical structures, but Randall O’Reilly does a great job of it in this paper, so I don’t see any need to duplicate the effort:

2 Likes

Sorry about the confusion. The figure on the left is an anatomy diagram. It doesn’t say what the layers do. The figure on the right illustrates what we mean by saying a cortical column has grid cell modules and displacement cell modules. It isn’t meant to be super accurate anatomically.

Notice in the figure on the right I didn’t label L5a vs. L5b and L6a vs. L6b. It is implied by the figure but unintentionally so. I was only trying to show that displacement cell modules are in L5 (somewhere) and grid cell modules are in L6 (somewhere). This figure is from a talk. The main point of the talk was the functionality of a cortical column and less about specific anatomical details. We are working on these details and we hope to write a paper in the coming months where we can be more specific about layers, what they represent, and how they function.
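To give a rough intuition for what those modules could compute, here is a deliberately toy 1-D sketch (just for intuition, not our actual model or the anatomy): each grid cell module encodes location modulo its scale, and a displacement code is the per-module phase difference between two locations.

```python
scales = [5, 7, 11]                    # hypothetical scale per grid module

def grid_code(x):
    """Location as a tuple of per-module phases (x mod scale)."""
    return tuple(x % s for s in scales)

def displacement(code_a, code_b):
    """Displacement code: per-module phase difference between locations."""
    return tuple((b - a) % s for a, b, s in zip(code_a, code_b, scales))

# The same physical displacement yields the same code regardless of origin:
print(displacement(grid_code(3), grid_code(10)))    # from 3 to 10 -> (2, 0, 7)
print(displacement(grid_code(40), grid_code(47)))   # from 40 to 47 -> (2, 0, 7)
```

The point of the toy is only that displacement codes are origin-independent, which is why displacement cell modules can represent relative structure.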

6 Likes

I think for now we can afford to ignore some of the connections, based on their terminations, because the actual output of the human system is a collective activity of multiple neural regions throughout the body, and that isn’t a condition for optimal output in terms of intelligence (the amygdala can hijack the system when a person experiences emotional stress).
There is a need to identify the circuits that make the core algorithms run and to isolate the other circuits that might be responsible for relaying and combining outputs (except cortico-cortical).

I see.

So you are proposing taking the drum apart to see what is making the noise?

Check out this paper:
How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics
Friedemann Pulvermüller
https://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(13)00122-8.pdf
Direct your attention to figure one. When you read this you should see that semantic meaning is distributed to the areas that are related to the motor-sensory loops that are involved in running your own body. This is the physical substrate for your grounding of semantic structures.

Note that a significant part of the semantic structure is these “input/output” parts that you seem to dismiss as unimportant to computational functions. Producing speech is a motor performance task and the perception of speech is an input parsing task linked to the motor system.

In the Dad’s song group we propose that at an early stage the critter hears (and learns) speech and later manipulates the sound generation hardware to produce speech. I will add that much of what we think of as higher level cognition starts from speech. A human without some form of speech has significant cognitive deficits.

I have pointed out that “thinking” is an internal elaboration of motor patterns relayed to other internal brain maps rather than to external motor drive.

In other threads I have pointed out that sequencing of thought patterns is mediated by the cerebellum - a brain area frequently denigrated as “merely coordinating motion.” Humans without a functional cerebellum have significant cognitive deficits.

If you are going to eliminate the messy input/output parts you will be left with nothing to do the computation.

@abshej - (amygdala can hijack the system when a person experiences emotional stress)
As far as “hijacking” the amygdala is an integral part of forming affective weighting in storing memories as “good” or “bad” so useful decision making can occur later. It is a very effective form of predictive processing. Critters generally don’t reason out decisions - they form snap judgments based on the combination of the good/bad weighting of the factors present in the consciousness theater of the here and now. If they waited for the whole “reasoning things out” process to work they would end up as someone’s lunch.
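As a toy illustration of that affective weighting - a hypothetical sketch with made-up names and constants, not a circuit model - store a good/bad tag per factor and let the snap judgment be nothing more than summing the tags of whatever is present:

```python
valence = {}                       # factor -> learned good(+)/bad(-) weight

def store(factor, outcome, lr=0.3):
    """Amygdala-style tagging: nudge a factor's weight toward the outcome."""
    old = valence.get(factor, 0.0)
    valence[factor] = old + lr * (outcome - old)

def snap_judgment(present_factors):
    """No reasoning: just sum the affective weights of what's in view now."""
    return sum(valence.get(f, 0.0) for f in present_factors)

for _ in range(3):
    store("rustling_grass", -1.0)  # repeated bad outcomes
store("berry_smell", +1.0)         # one good outcome

score = snap_judgment({"rustling_grass", "berry_smell"})
print("avoid" if score < 0 else "approach")   # -> avoid, instantly
```

There is no deliberation loop anywhere in this: the decision cost is one lookup and one sum, which is the whole survival advantage.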

Even us “clever humans” seldom make reasoned decisions - much to the delight of the advertising and political industries.

The brain is not a computer and trying to force it into the mold of a “processing engine” is not going to work very well.

I agree we can ignore some connections, or at least save them for understanding later. Layers process things somewhat in parallel and connections can do things like add context.

I see the problem as figuring out what is just a specialization, what is universal, and what is universal but implemented differently. Physical structures like sublayers for different submodalities are probably not universal, but thalamic matrix cells target L5a in rats and L2/3 in primates, and I have a hard time believing both of those projections are specializations. Having two implementations of an algorithm is useful for hypothesis testing. Specializations can give clues to the core algorithms, since they need to make plausible sense in the context of those algorithms. It’s also important to find exceptions to supposed rules and see if there is an explanation.

Some connections are simple but don’t make sense. Getting the ideas right is very, very hard.

For example, layer 5 is usually thought of as just a motor output, but there are many reasons it isn’t. It even targets the first subcortical stop for some sensory inputs (a trigeminal sensory nucleus for whiskers, not the primary thalamus, though in one case you could argue that too). You could still argue it’s just a motor output.

L6 is usually thought of as sending feedback signals to the thalamus, but the deepest part (or in some species, cells in the white matter below) also seems to send feedforward signals to the thalamus up the hierarchy. You could argue that’s just a remnant of development, though. You could also argue those are matrix cells or it has something to do with them receiving direct sensory inputs.

One author speculated that the thalamic nucleus POm drives activity in the septa in barrel cortex, and eventually that became taken as fact maybe for a decade until it was found to be wrong.

Hierarchy is usually thought of as entirely cortical or through the thalamus, but secondary cortex and probably others receive direct sensory inputs in rodents and a little in primates. But the response during those sensory inputs might usually get inhibited by the zona incerta.

It’s an endless rabbit hole of caveats.

2 Likes

This means that an adapted functional circuit is reused for a task, and therefore the input patterns and the desired output are similar for both input types (speech, or some part of speech, and motor). This presents no conflict with breaking apart cortical circuits to get to the algorithms.
These different types of semantics are necessary for different tasks, but from an algorithmic perspective they are only combining different input and output types for different purposes.
Let’s say we figure out the exact circuit for one task and break apart the functional circuits throughout the path. There is a chance we will come across many redundant or unessential circuits, or networks that have evolved alternatives.

My idea of the brain is that it’s just a converter. It’s not a processing engine or computational machine… It’s just a converter that converts electrochemical impulses into sound. Intelligence is a black joke played by reality on us.
The brain takes noise and eliminates some of it to make some sound. It’s arbitrary. The meaning we perceive is ultimately an illusion.

I agree. My understanding is that a core, rudimentary algorithmic understanding alone is sufficient to create better-suited, intelligent networks. BUT, this means that a broad understanding of functional circuits and their connections is required.

Great! Please post your findings as they develop. I wish you the best of luck in your efforts.

Perhaps you could start by taking a marmoset monkey brain and differentiating it from a human brain. The difference would be human intelligence?

I will keep along the (clearly wrong?) path of trying to understand how the only functioning example of intelligence actually works.

At this moment all I am sure of is that nature uses the brain to select actions to move the body. A huge fraction of the brain is involved with perception and processing that into action selection. At the end of the day - driving the nerves to drive muscles in the most useful way is what keeps the animal alive. Evolution favors the critters that do this better than the next critter. There may be some tiny part that adds the intelligence icing on this cake but it seems inseparable to me.

While you are working this all out keep in mind that the sub-cortical structures have been around much longer than the cortex in an evolutionary sense - they are extremely powerful on their own and contribute much to the “thinking” process.

I’m not sure what you are disagreeing with. Wouldn’t this just mean subcortical processes are just essential too? Are you just disagreeing with any of the following?

“A converter” implies the brain doesn’t care about anything except the present, but it clearly does care about what happened a second ago. It could also be taken to mean the brain is just a bunch of arbitrary, chaotic signalling which has evolved to do stuff. I disagree with that, because the cortex is copied (more or less) for a bunch of different things, like different senses or executive control. It’s unlikely this is just for efficiency reasons, because circuits are conserved, in many cases with zero exceptions, unlike what you would expect if it were just for efficiency.
You also say we need to find core algorithms, which I take to mean universal algorithms, so maybe I’m misunderstanding.

I imagine solving intelligence might have two stages. First is figuring out how a single cortical region works and how regions interact with each other. Second is figuring out interesting ways to put them together, possibly alongside very detailed subcortical circuits or just simple uninteresting analogs, for things like decision making.

1 Like

I have to apologize to both of you. This is old baggage for me.

The baggage aspect comes from several prior exchanges with @abshej on this same general topic. I have been seeing this general viewpoint from various AI researchers for years. It revolves around the concept that there is some sort of intelligence function that is separable from running a body, and that if we can just isolate it we can dispense with the messy biological bits like emotion and the body I/O functions. Often this is mixed in with cortex chauvinism - the notion that the older parts of the brain are just relay stations to get the cortex the data it needs to do what it does.

Some sort of general G thing with a sprinkle of the pixie dust of self-awareness.

This runs contrary to what I have been reading to the point where I am wondering why they can’t see that the brain just does not work that way. I am convinced that function follows form here and that trying to extract the “intelligence” from the brain will mostly end up duplicating the functions (if not the actual form) of the brain.

The brain - it’s not a filter. It’s not a computer. It’s not some heightened expression of panpsychism. It’s not an algorithm or a cost reduction function. I really think it’s silly to try to tie in quantum uncertainty. I’m sorry that these are triggers for me, but after 35 years of reading about this stuff these sorts of statements are annoying to me.

I will try to keep this in check but it’s a struggle.

For example:

1 Like

Cost reduction functions annoy me too, a lot, because they’re a reason to focus on preventing AI from killing us when, unless a lot of people are stupid, the real problem is that AI will put everyone out of a job if it isn’t introduced correctly. I understand your annoyance at assumptions.

I do think there is an algorithm for intelligence, in the sense that there is a set of interacting mechanisms or whatnot you could program. I don’t think people usually use the word algorithm in the strictest sense on this forum.

I assumed there is a purely cortical hierarchy (+thalamus) in rats, which in retrospect made it confusing to resolve with primates. I now think the rodent sensory hierarchy is mostly derived from different subcortical inputs (passing through the thalamus), almost like different sensory submodalities, except also signals from L5 of lower cortical levels. L5 happens to be the only cortical output to most of the subcortex, so it’s arguably all subcortical in a way.
It seems like the commonly known purely cortical hierarchy only evolved in primates, and in somatosensory regions it only evolved in simians. It might only be purely cortical in the first couple of levels in primates, because primary cortex is the subject of something like 9 out of 10 studies.

1 Like

Hey,
You seem to be prejudiced. I don’t think that there is an ultimate algorithm that we are looking for here. I do not think that intelligence is something apart from running a body.

I am not saying the two are different.

The neocortex, or the brain, or whatever it is you are trying to reduce, is nothing but a memory database linked to some circuitry: memory collected from noise using certain core algorithms that somehow help in the survival of the body.

And how it does that is the question. There is nothing different we are looking for here.

There is nothing but the present activity in the brain. The sparse activity is all there is. It contains the older semantics but that’s another black joke.

You say that as though there must have been a lot of circuits to choose from. This is a relatively recent popup in the human biosoup.

I hate to butt in like this, but that is exactly what we are looking for. We (Numenta) have been pretty clear about this. Sure, we started focused on the neocortex, but the ultimate algorithm is not a simple thing. It contains other algorithms. It is what defines intelligence; it’s what we’ve always been looking for.

4 Likes

I think there are spatial pattern detection layers and temporal pattern or sequence detection layers (which collect the temporal sequences of said spatial pattern layers). There is a spatiotemporal conversion that turns sequences of patterns into static spatial signals, and a conversion of static spatial signals (from the temporal or sequence detection layers) back into sequences; these sequences in turn trigger further sequences.
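A minimal sketch of those two conversions, under my own toy assumptions (union pooling for sequence-to-static and a replay table for static-to-sequence; neither is a specific published model):

```python
import numpy as np

N = 64  # bits per spatial pattern

def pool(sequence):
    """Sequence -> static: the union of the spatial patterns gives one
    stable code standing for the whole sequence."""
    code = np.zeros(N, dtype=bool)
    for pattern in sequence:
        code[list(pattern)] = True
    return code

# A toy sequence of three sparse spatial patterns
seq = [{1, 7, 12}, {3, 7, 20}, {5, 9, 31}]
static = pool(seq)                       # stable while the sequence plays out

# Static -> sequence: key the stored sequence by its static code, so
# activating that code later replays the sequence (which could in turn
# trigger further sequences).
replay = {static.tobytes(): seq}
print(replay[static.tobytes()])          # -> the original sequence
```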

I think the system can handle and work with such simplicity in part due to the phenomenon of postdiction: we feel we predicted certain things, but the actual conscious sensation of the present is constructive and assembled in a postdictive manner.

Regarding Hebbian learning, metaplasticity, and STDP: I think the existence of a portion of long-range connections allows the smaller, simpler, lower-dimensional patterns from different sensory organs to act in a self-reinforcement loop, positively selecting those patterns that are part of a larger sparse spatiotemporal pattern throughout the multilevel structure - a higher-dimensional model of an external object or causal actor. This internal evolutionary, competitive force causes extremely rapid convergence toward associating patterns with their true underlying predictive causes.
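Here is a hedged toy of that self-reinforcement loop (illustrative names and constants; plain Hebbian co-activity standing in for the full STDP/metaplasticity machinery): two low-dimensional patterns in different modalities keep co-occurring because the same external object causes both, so they end up strongly coupled while the rest of the long-range weights stay near their weak baseline:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                                # cells spanning two "modalities"
W = rng.uniform(0.0, 0.05, (n, n))    # weak random long-range weights

visual = np.arange(0, 5)              # low-dimensional pattern, modality A
sound = np.arange(20, 25)             # pattern for the same object, modality B

def reinforce(a, b, lr=0.02):
    """Co-activity strengthens the long-range coupling in both directions."""
    W[np.ix_(a, b)] += lr
    W[np.ix_(b, a)] += lr

for _ in range(50):                   # the same object keeps causing both
    reinforce(visual, sound)

print(W[np.ix_(visual, sound)].mean())   # ~1.0: bound into one object model
print(W.mean())                          # overall mean stays low by comparison
```

The convergence is fast because every co-occurrence reinforces exactly the pairs that share a common external cause.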

2 Likes