Symbolic semantics might be an accessory rather than a necessity

Borrowing from other threads, I would like to discuss the apparent necessity of symbolic representations and manipulation rules, which I think is what language essentially is, beyond being a form of communication. A lot of people seem to think that language is necessary for higher cognition and that without it, human-level intelligence is not possible.
The Role of Language in Intelligence
Intelligence and Language
Is Language the Key to Human Intelligence?

My view is this:

In light of Hierarchical Temporal Memory, I would like to suggest that the data indicating that absence of language leads to reduced cognitive function and ability reflects a limitation of neocortical capacity to combine patterns and process a large information space. In other words, symbolic representations and manipulation rules are a good tool for reducing the space required to associate objects or other patterns into meaningful sequences (which is partly what the cortex does using high-level representations). I tend to think that a much larger volume of neocortical tissue would be able to perform those high-level cognitive functions currently associated with the need for language, without language.

Symbolic representations can be combined at different levels of the hierarchy, forming distinct but interconnected representations of relations that can be influenced by sensorimotor input from one or more modalities, or manipulated purely in abstract space.
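As a toy illustration of the space-reduction claim above (a sketch with made-up numbers, not anything taken from HTM itself): directly associating every pair of N raw patterns needs on the order of N² links, while routing associations through a small set of shared symbols needs far fewer.

```python
def direct_links(n_patterns: int) -> int:
    """Pairwise links needed to associate every pattern with every other."""
    return n_patterns * (n_patterns - 1) // 2

def symbolic_links(n_patterns: int, n_symbols: int) -> int:
    """Links needed if each pattern maps to one shared symbol and
    associations are stored between symbols rather than raw patterns."""
    return n_patterns + n_symbols * (n_symbols - 1) // 2

# 1000 raw patterns, grouped under 10 symbols:
print(direct_links(1000))        # 499500 pairwise links
print(symbolic_links(1000, 10))  # 1045 links
```

The point of the sketch is only the combinatorics: a symbol layer collapses a quadratic association space into a roughly linear one, which is the sense in which symbols "reduce the space" a cortex would otherwise need.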

I would like to know what everyone thinks about this.

PS: There is evidence, and it also seems obvious, that language shapes how a cortex thinks, but this doesn’t necessarily refute the above argument.


I weigh in with: language production is a learned motor task (imitating speech).
The speaker learns to associate a sound or motion with the perceived learning of symbols and, in the process, progresses to symbolic thinking and semantics in later stages.
We are because we talk!

Although that might be true, and it seems like it is, I would like to focus on the abstract symbols and their semantic representations rather than the motor operations involved, including those pertaining to learning the symbols. A symbol can be conveyed visually, audibly, or in some other way, but its functional significance for high-level cognition in forming sequences does not depend on that.

In this paper, four semantic mechanisms are proposed and spelt out at the level of neuronal circuits: referential semantics, which establishes links between symbols and the objects and actions they are used to speak about; combinatorial semantics, which enables the learning of symbolic meaning from context; emotional-affective semantics, which establishes links between signs and internal states of the body; and abstraction mechanisms for generalizing over a range of instances of semantic meaning. Referential, combinatorial, emotional-affective, and abstract semantics are complementary mechanisms, each necessary for processing meaning in mind and brain.

Please pay special attention to figure 2: it shows the sensorimotor areas of speech and the backbone of the semantic links in figure 1.
http://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(13)00122-8.pdf
Excerpt:
“Theories that rely on a symbolic system functionally detached from sensory and motor mechanisms cannot explain semantic grounding.”

I have read the general mechanism proposed in the paper (you referenced it elsewhere), although I haven’t gone through all the details yet; I will later. It is quite interesting. My thoughts on the embodied and disembodied semantics are that the hierarchical organization of layers contributing to semantic symbolic representations and their relations gives rise to these semantics. It is intuitive to me that the lower layers contribute to the embodied semantics whereas the higher layers contribute to the disembodied semantics. The four separate semantic mechanisms mentioned can be understood as, and fall within, the domain of HTM functions.
Of course, I think the underlying semantic association mechanisms might be an instance of relative mapping.

It is not intuitive to me. I see semantic function distributed across the maps. I see the various layers forming interlocking local functions such as temporal pattern sensing, spatial pattern sensing, grid forming (local binding), and maintaining thalamocortical resonance (spreading activation to support global binding).

In my mind, trying to attribute the various parts of semantics to the different layers is like taking a drum apart to see which part makes the slow beats and which part makes the fast beats; it’s a category error.

Perhaps it’s not that intuitive unless you assign location mapping to a hierarchy of layers similar to the sensory processing layers. Imagine a pair of layers, one processing sensory input and one processing location, as mentioned in the latest HTM theory (if I am not wrong), within a hierarchical network of cortical columns.

Regardless of how drums work, the paper you cited also suggests different symbolic semantics that interact for complete perception. I don’t see why it is a stretch to localise them to layers.

As you read the paper note the co-location of embodied semantic content with the associated sensorimotor cortex and WHAT & WHERE stream processing. These locations are taken directly from fMRI measurements.

They allow that disembodied semantics may be due to different mechanisms and I don’t have any strong opinions on this. I do note that if you subtract the four categories of semantics that are described in the paper, not a lot is left over to be explained.

I am waiting for in vivo measurements from Numenta to validate the “location in the layers” proposal.


Yes, exactly.

I agree. They do potentially explain everything needed to produce and manipulate symbolic semantics.

Me too.


Bitking, thanks for the article reference; it is quite good.


Let’s push this a step further:
Motor cognition–motor semantics - Action perception theory of cognition and communication.


A very intriguing paper, thanks @Bitking. I’ll try to put my initial inferences into symbols for whatever it’s worth.
The paper goes on to highlight that the neocortex is not just important for the processing modalities, but also for the corticocortical connections that link multiple cortical regions which enable elegant learning mechanisms. In particular it describes the connections between sensory perception and motor cortical processing areas that enable language learning and semantic meaning formation. The main parts that got me thinking were that emotional, abstract meanings were linked to the motor cortex and also that truly abstract semantics are processed in higher order association areas that are distinct from the sensorymotor modalities. I tend to think that he we evolved such action-perception circuits in particular because action is essential for survival, but a truly intelligent system can work without such integration where constantly changing sensory data can be given to the system externally, without the system’s complicated interference in the mechanism. What I am trying to say about this can be imagined further- if we think of motor neurons taking part in the representations of emotional meanings as classic neocortical neurons that could function on sensory data, then the motor cortex is a pattern processing entity, which happens to relate action data with emotions; in which case, we can replace the motor cortex with cortical regions that process other sensory patterns and the connections would allow these cortical regions to associate emotional abstractions with that particular sensory information. I can see a lot of neocortical theories converge on classic HTM.
Perhaps everything I mentioned is pretty obvious, but it is important to mention in light of symbolic semantics, particularly because the paper goes on to show that symbols are primarily defined by how the various interconnected regions in the brain form and interpret them (meaning that the interconnections are as important as the regions). If we eliminate the need for heavy social interaction, then we can see that symbols could be formed using different interconnection patterns that process sensory input differently (the paper mentions that most of the input to the connected motor and association regions in the brain is indirect input from the sensory modalities).
Perhaps this isn’t sufficient evidence to suggest that symbolic semantics are just one mechanism to process lots of sensory data(by reducing the space) triggered by the limited size of neocortical tissue, but I would like to know if any evidence suggests otherwise.


I am spending a fair amount of time learning to master connectograms.
http://circos.ca/tutorials/lessons/recipes/cortical_maps/
The Circos program is a clumsy tool (it was created to view genomic data) but it is the most powerful tool I have found for this task. Putting the Circos rendering engine in a pipe is slow but very effective.

I am working to create a tool to explore tractography vs. known functions (mostly from lesion studies) to do some heuristic central tendency analysis. The connectogram seems like the best framework to support this effort.

The end goal remains to do the automatic online creation of the Cortical IO semantic stores with extensions for grammar and whatever else I can parse out along the way. Tract mapping may end up being a useless diversion but it looks like the best way forward at this time.


I have floated the importance of subcortical (particularly the amygdala) emotional modulation of learning rates as a key ingredient of semantic learning. Some of our members with a more theoretical focus have pooh-poohed this as a minor secondary function unimportant for the theoretical understanding of cognition. I see this signaling combined with the sensorimotor loops as sufficient for much of observed behavior.

I understand the goal of making a “pure” cognition engine and replacing the “messy” sensorimotor parts. I think that this is throwing the baby out with the bathwater. It is easy to observe the frontal lobe making a plan and this plan unfolding into motor acts as a unitary activity. What seems to be underappreciated is that the “earlier” planning stages (more abstract?) also have connections to the more abstract sensory areas. An “unfolding motor plan” has effects on the interior mental space. This does not have to be gated to actual exterior action - I think of it as subvocalizing. Flexing your forebrain “mental muscle” may well be the only thing that drives cognition.

Antonio Damasio places very high weighting on the influence of “emotional” signals.

I disagree with your implication that reducing motor connections and influence in the system is a disadvantage. Even though both the sensory and motor areas are active, I think the same planning can be done by abstract action schemas without the motor cortex, focusing on cumulative end result verification.

In the paper you cited earlier, there was mention of studies on learning words by simply listening to their repetitions instead of listening and speaking them aloud, and I think they said that the sensory perception areas were used instead of the motor relay area (arcuate fascicle), though I am not entirely sure.

Crucially, both inferior-frontal activation in speech perception and left-laterality of speech-elicited brain activity would only be expected for types of language learning that provide correlated auditory-articulatory information – for example when new word forms are articulated overtly (articulatory learning), but not if subjects learn new words just by listening to them (perceptual learning).

In the control condition of perceptual learning of novel spoken word forms, the increase of cortical activity was due to bilaterally symmetric superior-temporal sources, without inferior-frontal contribution.

Could you give a reference for the amygdala’s involvement in semantic formation, please?

Yes, theoretically motor actions once learned could be interpreted symbolically to form elaborate sequences without the involvement of motor specialized circuits, just like forming sequences of sensory data. I like the subvocalizing reference.

I get this from passing references in many texts. When people talk about the amygdala, they seem to imply that you have to be running for your life in abject fear, or be screaming angry, for a memory to count as emotional. What is missed is the subtle shading of all memory consolidation - when something is just a little good or a little bad. As the semantic categories are formed, this coloring allows you to choose between different options, as the relative “goodness” or “badness” of things is part of the learned properties. Being afraid of rejection by a mate, or a reduction in social standing among your peers, or the disfavor of the boss is still fear.
Without this grounding from the subcortical structures, the semantic categories in the cortex are just arbitrary facts without any good or bad meaning.

What does it look like if this is missing?
In the Rita Carter book “Mapping the Mind”, chapter four starts out with Elliot, a man who was unable to feel emotion because the corresponding emotional response areas were deactivated by a tumor removal. Without this emotional coloring, he was unable to judge anything as good or bad and was unable to select the actions appropriate to a situation. He was otherwise of normal intelligence.

This lack of judgment learning is often described as “reduction in loss aversion.”
I call it lack of emotional weighting in semantic learning.
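The idea of emotional weighting in learning can be sketched as a toy learning rule whose plasticity is scaled by a salience signal. Everything here - the function name, the valence scale, the constants, and the plain Hebbian form - is an illustrative assumption of mine, not a mechanism taken from the cited papers:

```python
import numpy as np

def hebbian_update(w, pre, post, valence, base_lr=0.01, gain=4.0):
    """Toy Hebbian update whose learning rate is scaled by emotional salience.

    `valence` in [0, 1] stands in for an amygdala-style modulation signal:
    neutral events (valence ~ 0) consolidate slowly, while emotionally
    charged events (valence ~ 1) consolidate much faster. All numbers
    here are illustrative, not fitted to any data.
    """
    lr = base_lr * (1.0 + gain * valence)   # salience scales plasticity
    return w + lr * np.outer(post, pre)     # plain Hebbian co-activation term

# Identical pre/post activity, different emotional salience:
pre = np.array([1.0, 0.0])
post = np.array([0.0, 1.0])
w = np.zeros((2, 2))
w_neutral = hebbian_update(w, pre, post, valence=0.0)  # weak trace
w_charged = hebbian_update(w, pre, post, valence=1.0)  # 5x stronger trace
```

The design point is that the modulation multiplies the learning rate rather than the activity itself, so the same experience leaves a weaker or stronger memory trace depending on its emotional coloring - which is what "lack of emotional weighting in semantic learning" would remove.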

There is copious evidence that the chemical messengers from the amygdala modulate the learning rate due to this “emotional content:”
http://www.pnas.org/content/pnas/93/24/13508.full.pdf

“The amygdala has been associated with enhanced retention of memory. Because of this, it is thought to modulate memory consolidation. The effect is most pronounced in emotionally charged events.”
https://courses.lumenlearning.com/boundless-psychology/chapter/memory-and-the-brain/

In a symbolic form in speech:
http://www.pnas.org/content/pnas/96/18/10456.full.pdf

And more to the point in symbolic access:

The amygdala’s modulation of consolidation
The second stage of hippocampal memory formation is retention or storage. There is also evidence that the amygdala can influence the storage of memory. Hippocampal-dependent memories are not stored in an all or none fashion. After encoding, there is a period of time in which these memories are somewhat fragile and prone to disruption. It takes time for these memories to become more or less ‘set’, at which point their retrieval is less dependent on the hippocampus. This process is called consolidation. It has been suggested that one reason for this slow consolidation process is to allow an emotional reaction to an event an opportunity to influence the storage of that event.
https://pdfs.semanticscholar.org/2559/1f5d9fde558ce5cd7f49607ef87e48e6287f.pdf

“we speculate that the amygdala likely charges autobiographical memories with emotional, social, and self-relevance.”

“These findings suggested the possibility that endogenous stress-related hormones released by training experiences may play a role in regulating memory storage”
https://pdfs.semanticscholar.org/5414/12539e4557cbdb0f531e6f48e4575dbc21ba.pdf

I have dozens more papers like these but I think this shows the general ideas.
