Language: the base of conscious thought?



Qianli Liao and Tomaso Poggio of the Center for Brains, Minds, and Machines at MIT's McGovern Institute for Brain Research use two of the emerging paradigms in their memo "Object-Oriented Deep Learning": symbols in neural networks, and orientation and scale as supplemental inputs for sensing.

I would call the use of symbols the use of language. In addition to Liao and Poggio, Bengio proposes the use of symbols in his paper “The Consciousness Prior”. The use of supplemental inputs for sensing is also proposed by Hinton and by Hawkins. On the fundamental place of language in conscious thought we have the work of Josef Breuer (1842-1925), Julian Jaynes's 1976 book “The Origin of Consciousness in the Breakdown of the Bicameral Mind”, and now the theory leaders of neural networks.
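To make the pairing of those two ideas concrete, here is a toy sketch of my own (not the architecture from any of the cited papers; every name and number in it is made up): pose values such as orientation and scale are appended to the feature vector as supplemental inputs, and the resulting state is collapsed onto a small discrete symbol vocabulary.

```python
import numpy as np

# Toy sketch only: my own illustration of "supplemental pose inputs" plus
# "symbols", not the models of Liao/Poggio, Bengio, Hinton, or Hawkins.
rng = np.random.default_rng(0)

VOCAB = ["cup", "dog", "tree", "car"]            # made-up symbol vocabulary
D_FEAT, D_HIDDEN = 16, 8
W_in = rng.normal(size=(D_FEAT + 2, D_HIDDEN))   # +2 slots for orientation, scale
W_sym = rng.normal(size=(D_HIDDEN, len(VOCAB)))

def sense(features, orientation, scale):
    """Fuse raw features with the supplemental pose inputs."""
    x = np.concatenate([features, [orientation, scale]])
    return np.tanh(x @ W_in)

def symbolize(hidden):
    """Collapse the continuous state onto one discrete symbol (a 'name')."""
    return VOCAB[int(np.argmax(hidden @ W_sym))]

features = rng.normal(size=D_FEAT)               # stand-in for visual features
print(symbolize(sense(features, orientation=0.3, scale=1.2)))
```

The point is only the shape of the computation: continuous sensing enriched with pose, then a discrete, nameable output.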


If someone solves the language problem, they have solved the AGI problem.


People are starting to try. Exciting times.


Speaking of the language problem, I came across this interesting piece the other day, which effectively highlights one facet of the problem: capturing meaning/context without the benefit of life experiences.


Language is partly innate and partly learned.

We have the necessary mental hardware, but without social training we cannot learn the tricks that come with developing the motor skills we call language. Learning these motor skills comes with a corresponding training of the matching sensory skill set. Part of this learned skill set is symbolic representation.

There are sad cases of people who are raised without learning language past the critical period for language plasticity. It has been reported that these people are not able to do certain things that we take for granted. They are stuck in the here-and-now and are lacking in symbolic representation skills. Psychological testing shows deficits in what most people think of as innate symbolic thinking. They see the world in more concrete terms and are not able to work out complicated cause-and-effect relationships. I don’t know this for sure, but I suspect that they have no theory of mind.

From Calvin, "How Brains Think":
“Joseph saw, distinguished, categorized, used; he had no problems with perceptual categorization or generalization, but he could not, it seemed, go much beyond this, hold abstract ideas in mind, reflect, play, plan. He seemed completely literal — unable to juggle images or hypotheses or possibilities, unable to enter an imaginative or figurative realm… He seemed, like an animal, or an infant, to be stuck in the present, to be confined to literal and immediate perception, though made aware of this by a consciousness that no infant could have.”

A collection of similar cases.

There are socialized humans who do acquire language late in life. They do somewhat better.

Note the story of Ildefonso making the breakthrough that a sound or sign means a thing, and that you can communicate that you want something. I recall reading that Helen Keller had the same realization and was similarly seized by a frenzied desire to know the names of everything. This strongly suggests that naming is not an innate property but a learned skill built on our coincidence detector hardware.
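As a rough picture of what I mean by coincidence detector hardware, here is a toy Hebbian sketch (entirely my own simplification, with made-up names, not a claim about real neurons): a name and an object become associated only because they are repeatedly active at the same time.

```python
import numpy as np

# Toy Hebbian coincidence detector: association strength grows whenever a
# spoken name and an attended object co-occur. My own simplification.
NAMES = ["water", "doll", "key"]
OBJECTS = ["water", "doll", "key"]
assoc = np.zeros((len(NAMES), len(OBJECTS)))     # learned name-object links

def experience(spoken_name, attended_object, rate=0.1):
    """Strengthen the link between whatever name and object occur together."""
    assoc[NAMES.index(spoken_name), OBJECTS.index(attended_object)] += rate

# Repeated co-occurrence, e.g. hearing "water" while water runs over the hand.
for _ in range(20):
    experience("water", "water")
experience("water", "doll")                      # one stray mispairing

print(OBJECTS[int(np.argmax(assoc[NAMES.index("water")]))])   # -> "water"
```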

So, if a human performs at the level of an ape or a dog without learning the symbolic representation skill set, is that consciousness? The literature suggests that such people are aware and frequently quite clever but have never learned some of the mental tricks associated with language.

Once you answer that - what about the dog or ape? Are they conscious?

I find the catalog of these learned mental tricks surprising, and it informs my ideas about what the underlying neural hardware does and does not do.

An even more troubling case is humans who do not learn socialization when they are infants (the social plasticity period). Eastern Europe has orphanages where the children are never held or talked to as babies. These children grow up with fully formed antisocial personality disorder, and adoptions of these children can be very problematic.

As a researcher in the AGI field, I spend considerable time thinking about the various mental defects and wonder if I would consider it a win to create a fully functional profoundly autistic AGI.

Or a fully functional psychotic one.

A response to "Building Machines That Learn and Think Like People"


This is why I think that a simple grammar store is only part of the job.
The concepts of frames, schemas, and scripts are also needed.

The good thing is that this can be learned by feeding your AGI big data, in this case by mining literature for these scripts.
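To make that concrete, here is a minimal sketch of a Schank-style script (a toy encoding of my own, with a made-up restaurant example): an ordered list of expected events with role slots, the kind of structure one would hope to mine from large text corpora.

```python
from dataclasses import dataclass, field

# Toy encoding of a "script" in the Schank/Abelson sense; my own sketch,
# not a standard library or an established format.
@dataclass
class Script:
    name: str
    roles: list[str]
    events: list[str] = field(default_factory=list)

restaurant = Script(
    name="restaurant_visit",
    roles=["customer", "waiter", "cook"],
    events=[
        "customer enters",
        "waiter seats customer",
        "customer orders food",
        "cook prepares food",
        "waiter serves food",
        "customer eats",
        "customer pays and leaves",
    ],
)

def expected_next(script, observed):
    """Given the last observed event, return the script's expected next event."""
    if observed in script.events:
        i = script.events.index(observed)
        if i + 1 < len(script.events):
            return script.events[i + 1]
    return None

print(expected_next(restaurant, "customer orders food"))   # -> "cook prepares food"
```

Mining literature would then amount to inducing many such event sequences and role slots from text, rather than hand-writing them as I did here.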