Language: the base of conscious thought?

Qianli Liao and Tomaso Poggio of the Center for Brains, Minds, and Machines (McGovern Institute for Brain Research, Massachusetts Institute of Technology) use two of the emerging paradigms in their memo "Object-Oriented Deep Learning": symbols in neural networks, and orientation and scale as supplemental inputs for sensing.

I would call the use of symbols the use of language. In addition to Liao and Poggio, Bengio proposes the use of symbols in his paper “The Consciousness Prior”. The use of supplemental inputs for sensing is also proposed by Hinton and Hawkins. On the fundamental place of language in conscious thought we have the work of Josef Breuer (1842-1925), Julian Jaynes’s 1976 book “The Origin of Consciousness in the Breakdown of the Bicameral Mind”, and now the theory leaders of neural networks.

1 Like

If someone solves the language problem, they have solved the AGI problem.

1 Like

People are starting to try. Exciting times.

2 Likes

Speaking of the language problem, I came across this interesting piece the other day, which effectively highlights one facet of the problem: capturing meaning/context without the benefit of life experiences.

Language is partly innate and partly learned.

We have the necessary mental hardware, but without the social training we can’t learn the tricks that come with developing the motor skills that we call language. Learning these motor skills comes with a corresponding training of the matching sensory skill set. Part of this learned skill set is symbolic representation.

There are sad cases of people who are raised without language and pass the critical period for language plasticity without learning it. It has been reported that these people are not able to do certain things that we take for granted. They are stuck in the here-and-now and are lacking in symbolic-representation skills. Psychological testing shows deficits in what most people think of as innate symbolic thinking. They see the world in more concrete terms and are not able to work out complicated cause-and-effect relationships. I don’t know this for sure, but I suspect that they have no theory of mind.

From Calvin, "HOW BRAINS THINK "
http://williamcalvin.com/bk8/bk8ch5.htm
“Joseph saw, distinguished, categorized, used; he had no problems with perceptual categorization or generalization, but he could not, it seemed, go much beyond this, hold abstract ideas in mind, reflect, play, plan. He seemed completely literal — unable to juggle images or hypotheses or possibilities, unable to enter an imaginative or figurative realm… He seemed, like an animal, or an infant, to be stuck in the present, to be confined to literal and immediate perception, though made aware of this by a consciousness that no infant could have.”

A collection of similar cases.

There are socialized humans who do acquire language late in life. They do somewhat better.


Note the story of Ildefonso making the breakthrough that a sound or sign means a thing; that you can communicate that you want something. I recall reading that Helen Keller had the same realization and was similarly seized by a frenzy of wanting to know the names of everything. This strongly suggests that naming is not an innate property but a learned thing, built on our coincidence-detector hardware.

So - if a human performs at the level of an ape or a dog without learning the symbolic-representation skill set - is that consciousness? The literature suggests that they are aware and frequently quite clever but have never learned some of the mental tricks associated with language.

Once you answer that - what about the dog or ape? Are they conscious?

I find the catalog of these learned mental tricks surprising, and it does inform my ideas about what the underlying neural hardware does and does not do.

An even more troubling case is humans who do not learn socialization when they are infants (during the social plasticity period). Eastern Europe has orphanages where the children are never held or talked to as babies. These children grow up with a fully formed psychopathic personality disorder. Adoptions of these children can be very problematic.

As a researcher in the AGI field, I spend considerable time thinking about the various mental defects and wondering whether I would consider it a win to create a fully functional, profoundly autistic AGI.

Or a fully functional psychotic one.

8 Likes

Paul,

This is why I think that a simple grammar store is only part of the job.
The concepts of frames, schemas, and scripts are also needed.

The good thing is that this can be learned by feeding your AGI big data - in this case, mining literature for these scripts.
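To make “scripts” a bit more concrete, here is a minimal sketch (hypothetical structure and field names, not any established library) of the kind of Schank-style script a text-mining pipeline might try to recover from literature:

```python
# A hypothetical Schank-style "script": an ordered sequence of expected events
# plus the roles and props they involve. Structure and field names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Script:
    name: str
    roles: List[str]    # participants the script expects
    props: List[str]    # objects the script expects
    scenes: List[str]   # ordered sequence of stereotyped events

restaurant = Script(
    name="restaurant",
    roles=["customer", "waiter", "cook"],
    props=["menu", "table", "food", "bill"],
    scenes=["enter", "be seated", "order", "eat", "pay", "leave"],
)

# A partial narrative ("Alice ordered the soup and left a tip") can then be
# matched against the scenes list to infer the unstated events: she was
# seated, she ate, she paid.
print(restaurant.scenes)
```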

3 Likes

Some interesting insights:

3 Likes

Lera Boroditsky is fantastic and that TED talk is a very nice summary of the Sapir-Whorf hypothesis.

Another analogy can be made with programming languages: depending on the programming language you are using, you may have vastly different ways of perceiving and solving problems. For example, Erlang programmers may think very differently from C programmers.

I can extend similar analogies outside of linguistics. For example, my friends in Japan tell me that in some daycare centers the kids are completely confused by the concept of “turning a page in a book” because they are only ever exposed to tablets; when given glossy printed books they are often found trying to “swipe” an index finger across the page, to no avail. I can imagine that within a few decades the use of pens/pencils may become completely obsolete, in favour of tapping, typing, and finger and hand gestures.

To me all of this falls in the same category. Early conditioning (e.g. in childhood) on a specific set of higher-level abstractions (e.g. language, fork vs chopsticks, finger gestures vs holding a pencil) influences our perception and reasoning in the future.

On the subject of linguistics, Neal Stephenson has often used language as a theme in his work, most notably in Snow Crash, which presents the idea that language (ancient Sumerian in this case) is encoded in our “firmware” and that a “linguistic virus” can be created that causes a crash in our cognitive functions. A fascinating piece of fiction!

2 Likes

Please explain what the language problem associated with AI is. I have some ideas… I’m working on my own theory 😎.

The language problem is the AI problem: how to make something that understands the world. An entity cannot engage in communication via language without an understanding of the concepts the language represents - the symbols that exist in the consensus reality the language describes.

2 Likes

Nail, meet head. BAM!

I believe it was Wittgenstein who said,

"Observation (read: Knowing) is a phenomenon of distinction, in the domain of language."

Packed within that statement are a slew of fundamental truths, such as the fact that a thing has no existence until it is observed. It requires the existence of things other than itself, things which have the capacity to “know” it - to then “observe” it - in order for it to be brought into existence. The ontological (relating to being) existence of a thing is brought about via an ACT of codification/representation/conceptualization. It’s a Lorentz transformation in the domain of language, where one thing is represented in a separate system by noting the boundaries of a given thing and what separates that thing from other things.

Which means, things are dependent on their “boundaries” (what is not that thing), in order to exist!

Another odd thing is the immense power human beings have to call things into being through language - and then that very capacity is the substrate of their difficulties.

That is, human beings collapse the representation of a thing and the thing itself into one! We think, therefore, that we can actually impact the observed thing or observed state of a thing - when nothing could be further from the truth! By the time a thing has been observed, it is in the past and gone, and what we’re left with is the memory of a thing - not the thing itself! We can trace most of the problems in modern society to that very phenomenon. We argue until we are blue in the face about the nature of reality, not even realizing that what we’re arguing about is the representation and not the thing itself. Werner Erhard called it “…eating the menu in life, struggling valiantly to get some of the meal”.

We sit down, open the menu and envision what it would be like to eat the boeuf bourguignon - and somehow we’re under the misconception that the memory of what something tastes like is the same thing as the experience of eating that thing! The two things are different!

So we go to war over our concepts of God and righteous living - when being God-like and righteous is phenomenologically a totally different thing… I think most of us would agree. We confuse the people “over there” and their ontological presence - the actual experience of that person - with our concepts of what that type of person might be - and we think the two things are the same! :slight_smile:

Anyway, everything lives in the domain of language, without which there can be nothing - no love, no hate, no personal history, no religion, no topology, no texture - no “no”. It all requires language.

Doesn’t HTM address this problem with unions of SDRs?


Furthermore I think the Numenta website should have a merchandise section offering the Numenta cup.

2 Likes

I second this motion.

2 Likes

I have read about SDR theory and a question presents itself.

If an SDR is just synapse connections along a dendrite, and dendrites follow fixed paths around a cell body - how can they be joined to do math like a union?

I see how that might work in a computer program, but I do not see a way that this could happen in the biology.
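For what it’s worth, in software the “union” is nothing more exotic than an element-wise OR over sparse binary vectors; whether and how dendrites could implement something equivalent is exactly the question above. A minimal sketch (sizes and sparsity are illustrative, and this is not NuPIC’s actual API):

```python
import numpy as np

N = 2048       # cells in the population (illustrative)
ACTIVE = 40    # ~2% of cells active per SDR (illustrative)

def random_sdr(seed):
    """A random sparse binary vector standing in for an SDR."""
    rng = np.random.default_rng(seed)
    sdr = np.zeros(N, dtype=bool)
    sdr[rng.choice(N, ACTIVE, replace=False)] = True
    return sdr

a = random_sdr(1)
b = random_sdr(2)

union = a | b   # the "union" of two SDRs is just element-wise OR

print(union.sum())                          # close to 80: the union is denser than either input
print(np.all(union[a]), np.all(union[b]))   # True, True: both inputs are contained in the union
```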

1 Like

I third this motion!

2 Likes

It seems the current trend (with pretty amazing results) is unsupervised language modelling on a MASSIVE scale - given tens of gigabytes of text, preferably of high quality and from different sources, a massive transformer-based model with on the order of a billion parameters, and a week of training time on a dozen TPUs, you get something like this: https://arxiv.org/abs/1810.04805 and https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf
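For anyone who wants to poke at one of these models directly, here is a minimal sketch using the smallest publicly released GPT-2 checkpoint through the Hugging Face `transformers` library (assumes `transformers` and a backend such as PyTorch are installed; the prompt is arbitrary):

```python
# pip install transformers torch
from transformers import pipeline

# Load the smallest public GPT-2 checkpoint for text generation.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Language is the base of conscious thought because",
    max_length=60,            # total length of prompt plus continuation, in tokens
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```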

Yes, but this is really just an extremely large Deep Learning network. It does not understand anything it is creating.

1 Like

What’s the meaning of understanding?
What exactly differentiates us from that?

Ok let’s get philosophical.

To understand something, you must experience it. To experience something, you must explore it. Exploration implies action. Action implies direction. Direction implies a director.

The OpenAI system did not explore its text corpus. It was fed the input and trained its 1.5 billion parameters to come to its understanding of the text. But it cannot tell you what the text means. The only lessons it will construct from it are those it parcels together from the thousands of similar texts it has seen during training. Every word transition it emits has been seen before in some context, in some way, or else it statistically would not have emerged.

You, however, can easily put together new concepts, and this is because you’ve always had the creative power to direct your movements through space as you explored your world.

1 Like