Yann LeCun on GI and the much-ballyhooed "consciousness"

"
But I have a speculation about what causes the illusion of consciousness. My hypothesis is that we have a single world model “engine” in our prefrontal cortex. That world model is configurable to the situation at hand. We are at the helm of a sailboat; our world model simulates the flow of air and water around our boat. We build a wooden table; our world model imagines the result of cutting pieces of wood and assembling them, etc. There needs to be a module in our brains, that I call the configurator, that sets goals and subgoals for us, configures our world model to simulate the situation at hand, and primes our perceptual system to extract the relevant information and discard the rest. The existence of an overseeing configurator might be what gives us the illusion of consciousness. But here is the funny thing: We need this configurator because we only have a single world model engine. If our brains were large enough to contain many world models, we wouldn’t need consciousness. So, in that sense, consciousness is an effect of the limitation of our brain!
"

3 Likes

I see the function of consciousness as roughly that of a stored-program computer as opposed to a hard-wired plugboard computer.

Current DL models are the plugboard computers in this comparison.
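
A toy illustration of the analogy, assuming nothing about how DL models are actually implemented: a plugboard machine's function is frozen into its wiring, while a stored-program machine treats instructions as data it can load and swap.

```python
# "Plugboard": the computation is fixed at build time, like frozen weights.
def hardwired(x):
    return 3 * x + 1

# "Stored program": instructions are data; one machine runs any program.
def run(program, x):
    for op, arg in program:  # each instruction is inspected at run time
        if op == "mul":
            x *= arg
        elif op == "add":
            x += arg
    return x

program = [("mul", 3), ("add", 1)]  # the same computation, now swappable
assert hardwired(5) == run(program, 5) == 16
```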

Awareness of the contents of sensory processing, planning, and memory: these are all elements of consciousness.

Then you leave no room for animal intelligence, unless you say animals have consciousness.

Intelligent animals can acquire a vast range of new behaviours, like a self-programmable computer.

Who said they don’t have consciousness?
If you have a cat or dog, surely you have seen them play and solve problems. I know that I have.

They don’t have speech and all the skills that come with that. Please try to separate that out from consciousness.

4 Likes

LeCun: I think significant progress in AI will come once we figure out how to get machines to learn how the world works like humans and animals do: […]

Based, but then he goes off and tries to explain how animals learn …

I think he was talking about a hierarchical pipeline of learning, sensory-motor first. It doesn’t need to copy the mechanics of a BNN; that would obviously be grossly suboptimal. But yeah, that brain map he showed is way off. I need to go through his paper, linked above.

2 Likes

On the one hand: I know and I’m just poking fun at him.
But on the other hand, he’s advocating for a specific type of learning algorithm: self-supervised learning. I’ve argued elsewhere that animals use all of the types of learning algorithms.
He shows a glimmer of awareness of the fact that AI can copy from animal intelligence, but then he doesn’t follow through with any real facts about how animals work.
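
For reference, the defining trait of self-supervised learning is that the training signal is carved out of the raw input itself, with no external labels. A minimal next-step-prediction sketch in Python; the data and the one-parameter model are invented for the example:

```python
import random

# Unlabelled sequence; the "label" for each step is simply the next value.
data = [0.0]
for _ in range(200):
    data.append(0.9 * data[-1] + random.gauss(0.0, 0.1))

w = 0.0    # one-parameter model: predict x[t+1] = w * x[t]
lr = 0.01
for _ in range(50):
    for t in range(len(data) - 1):
        err = w * data[t] - data[t + 1]  # target comes from the data itself
        w -= lr * err * data[t]          # gradient step on squared error

print(f"learned w = {w:.2f} (generating coefficient was 0.9)")
```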

2 Likes

They do, but that doesn’t mean we should. All their supervision mechanisms are biologically specific, and the biological version of RL primarily mediates and translates those animalistic urges, which are irrelevant to either GI or any useful application of it. GI per se, the general part of the “I”, is unsupervised learning.

1 Like

Animals are not conscious. They are aware and they certainly can learn things. They have internal maps but are largely unaware of time. They do not experience time as a timeline and they absolutely cannot traverse it forward and backward. They communicate, but their languages are not recursive. To really see this you have to look to primitive human tribes that have non-recursive language.
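
“Recursive” here is a formal property: a rule can embed a phrase of the same type inside itself, so there is no fixed bound on depth. A toy illustration in Python; the grammar fragment is made up for the example:

```python
# A recursive rule: a sentence may contain another sentence as a clause.
def sentence(depth):
    if depth == 0:
        return "the dog barked"
    return f"she said that {sentence(depth - 1)}"

for d in range(3):
    print(sentence(d))
# the dog barked
# she said that the dog barked
# she said that she said that the dog barked
# A non-recursive language lacks such a rule, so embedding depth is capped.
```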

Are you certain that you are not conflating skills acquired while learning speech (mental time travel and certain kinds of symbol manipulation) with consciousness?

The concept of attaching labels to things can be extended to time as a thing.

3 Likes

His meeting with Yoshua Bengio and Lex Fridman should be interesting.
Here’s a recent interview with Bengio about unsupervised GFlowNets in which he also shares his view on consciousness.

2 Likes

It depends on your definition of consciousness, which remains one of the most confusing terms of all time.

I go with the definition given by Julian Jaynes. It is not reactivity, which is what you lose when you get knocked ‘unconscious’, and it is not awareness, which is what you have when you are not unconscious. Animals have both of those, and some display self-awareness, which is awareness of self.

We don’t really know what consciousness is.

Some of us know exactly what it is.

He doesn’t claim that he or anyone else knows either, but he also observed that the related research has made advances.

PS: as long as one isn’t certain what consciousness is, one can’t be certain about what it is not.

2 Likes

And they do not sequence. And they do not collaborate. Etc. Etc.

But there is no question that:

  • animals generally have evolved complex behaviours (level 1)
  • higher-level animals (presumably via the cortex) learn more complex behaviours (level 2)
  • humans have some unique abilities to learn and model even more complex behaviours (level 3)

So why think that AGI is only level 3? For our purposes we can write code to do most of level 1, but no higher. IMO a software implementation of AGI at level 2 would be an immensely valuable achievement. Do you not agree?
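
To make the levels concrete in code (a toy framing that follows the numbering above, nothing more): level 1 behaviour is fixed by the programmer, while level 2 behaviour is acquired from experience.

```python
import random

# Level 1: an evolved, hard-wired behaviour, fixed before the agent runs.
def reflex(stimulus):
    return "flee" if stimulus == "predator" else "ignore"

# Level 2: the situation-to-action mapping is learned from reward.
values = {}  # (situation, action) -> estimated value

def learned_policy(situation, actions, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(actions)  # explore
    return max(actions, key=lambda a: values.get((situation, a), 0.0))

def update(situation, action, reward, lr=0.5):
    old = values.get((situation, action), 0.0)
    values[(situation, action)] = old + lr * (reward - old)

# The agent converges on pressing the lever without that rule being coded.
for _ in range(100):
    a = learned_policy("lever in cage", ["press", "ignore"])
    update("lever in cage", a, reward=1.0 if a == "press" else 0.0)

print(learned_policy("lever in cage", ["press", "ignore"], epsilon=0.0))
```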

2 Likes

On the topic of unconscious learning:

1 Like

I recall using that technique extensively in class during my undergrad years.

1 Like

Arguing about consciousness is about as productive as trying to drive a chocolate wheel across a scorching hot plain.

Figure out the process of intelligence first and the “consciousness” will be there; it’s just a byproduct of the recursive/looping complexity.

Taking the Sherlock Holmes principle of deduction… consciousness ends up being nothing but an illusion.

2 Likes

Maybe consciousness really is just an illusion?

I’m just kidding; here is an article describing a particular theory of consciousness.
Of particular interest for this conversation is chapter 7: “Imaging States of Conscious Access and Non-Conscious Processing”, which describes some measurable correlates of conscious perception.

The Global Neuronal Workspace Model of Conscious Access: From Neuronal Architectures to Clinical Applications

Stanislas Dehaene, Jean-Pierre Changeux, and Lionel Naccache, 2011
https://www.cs.helsinki.fi/u/ahyvarin/teaching/niseminar4/Dehaene_GlobalNeuronalWorkspace.pdf

Abstract: While a considerable body of experimental data has been accumulated on the differences between conscious and non-conscious processing, a theory is needed to bridge the neuro-psychological gap and establish a causal relationship between objective neurophysiological data and subjective reports. In the present review, we first briefly outline the detailed postulates and predictions of our working hypothesis, referred to as the global neuronal workspace (GNW) model. We then compare these predictions to experimental studies that have attempted to delineate the physiological signatures of conscious sensory perception by contrasting it with subliminal processing, using a variety of methods: behavioral, PET and fMRI imaging, time-resolved imaging with ERP and MEG, and finally single-cell electrophysiology. In a final section, we examine the relevance of these findings for pathologies of consciousness in coma and vegetative states.
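
The paper’s core mechanism is simple enough to caricature in code: many specialized processors run in parallel, and a representation becomes consciously accessible only when its activation crosses a threshold and is broadcast globally (“ignition”). A toy illustration; the threshold and numbers are invented, not taken from the paper:

```python
IGNITION_THRESHOLD = 1.0

processors = {"vision": 0.0, "audition": 0.0, "memory": 0.0}
workspace = None  # the single globally broadcast content

def stimulate(name, strength):
    """Accumulate local activation; broadcast on ignition, else stay subliminal."""
    global workspace
    processors[name] += strength
    if processors[name] >= IGNITION_THRESHOLD:
        workspace = name              # all-or-none global broadcast
        for other in processors:      # content now available to all processors
            processors[other] = 0.0

stimulate("vision", 0.4)  # weak: processed locally, but subliminal
print(workspace)          # None
stimulate("vision", 0.7)  # accumulates past threshold: ignition
print(workspace)          # vision
```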

1 Like