Yann LeCun on general intelligence and the much-ballyhooed "consciousness"

Subjective conscious mind is an analog of what is called the real world. It is built up with a vocabulary or lexical field whose terms are all metaphors or analogs of behavior in the physical world. Its reality is of the same order as mathematics. It allows us to shortcut behavioral processes and arrive at more adequate decisions. Like mathematics, it is an operator rather than a thing or repository. And it is intimately bound up with volition and decision.

(Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind)


Ants and bees are good at reacting to their surroundings… If that definition is valid, then consciousness is about storing information… Is that wrong?

I think consciousness resides in both the mind and the object. If one is absent, the other will be non-functional. The two blend together to give the concept of consciousness.

Damn, your confusion is off the charts.


If mind-object interaction is consciousness, then our thoughts are just information about objects. So what is the mind?

You hopefully meant “human-species conscious”. Another example of human exceptionalism, which is a comforting myth. You recognize consciousness only in fellow humans who can talk. Are primates conscious, or is so-called language recursiveness the gold standard? What about people with Down syndrome, or “mawglies”?


The only consciousness I work with is human. Any other discussion of the term is most likely talking about awareness or reactivity.

No, even Pan troglodytes are not conscious. Those with Down’s vary; consciousness is learned and refined over time. A 3-year-old is typically less ‘conscious’ than a 5-year-old. Then there are the adults about whom one questions whether or not they have attained consciousness, but let’s not go there.

You have me at a disadvantage with ‘mawglies’.

“Mawglies”: Feral child - Wikipedia

Are you familiar with the work of the cognitive scientist/mathematician Stanislas Dehaene? He explores the c-word with neuroscientific methods: Dr Stanislas Dehaene - Consciousness: From Computation to Cognition, Cog Neuroscience and Clinic - YouTube

“Those with Down’s vary; consciousness is learned and refined over time.” So it is a spectrum? And learned, like language? That silent talk in our heads, and the responses of other humans, seem to be the culprit…

Here’s a question: can machines (AGI) eventually be conscious? They will be able to talk and even think (silent speech).

If I am correct, then yes.

Yes, machines will be conscious.

Dehaene is an NCC (neural correlates of consciousness) researcher. Great stuff, but that line of work will most likely not figure out that C has no direct neural correlates. Sort of like AI was: goofing around, floundering for years, until business came along and made it work.

Lately I have questioned whether or not we even want a conscious machine. C brings a number of very bad consequences (insanity, anxiety, deceit), but then again, without it innovation moves at a snail’s pace.

Would it just be a very slow problem solver, or maybe more like a tool? For example, without consciousness, could a machine solve all of science on its own? If it’s just very slow, that might not be a big deal, because silicon is fast.

No, machines will not be conscious anytime soon, if at all. What’s the point? It’s all downside.

The only models we have for intelligence are mammal and avian brains. We can expect AGI to perform physical actions, interpret images and sounds, and use materials and suitable tools better than we can: e.g. drive a vehicle, win car/bike races, operate taxis, construction vehicles, cranes, and aeroplanes, do welding and custom assembly, etc.

We might also expect high language skills: draft and interpret legal documents, PR, instruction manuals, legislation, etc.

I think that’s enough for now, don’t you?

Without consciousness the machine needs to be given problems to solve. This is what we are doing now, and the machine is getting increasingly better at it. That said, it has no volition, no purpose other than to do what we tell (program) it to do. This nonsense about AGI is just that: nonsense. Daily we see new examples of a machine doing things it had not done before. It’s a wonderful thing to behold, but it is not consciousness. Most of the people touting AGI have no clue what that is, and even less understanding of what we already have in the way of AI. With C, we have to be careful what we wish for.

In the meantime… work like HTM is trying to resolve the basic computational structure of the brain, which is absolutely not DL. It also is not going to lead to some sort of spontaneous, emergent consciousness. Hawkins basically said as much in one of his videos, and I could not agree more.


Thank you, we agree on that: “If I am correct, then yes.”

On “The Retina model of the cortical.io company corresponds very closely to Broca’s area and Wernicke’s area”: Numenta noted in a private conversation that it actually does not. Having read their patent, it’s a “fingerprint” of a word across a set of Wikipedia and other articles.
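
To make the “fingerprint” idea concrete, here is a minimal, purely illustrative sketch, not cortical.io’s actual patented method: treat a word’s fingerprint as the sparse set of documents it occurs in, and semantic similarity as the overlap of two fingerprints. All names and the toy corpus below are hypothetical.

```python
# Toy sketch of a word "fingerprint": the sparse set of contexts
# (here, document indices) in which the word appears. Illustrative
# only; the real Retina maps words onto a 2-D grid of contexts.

def fingerprint(word, documents):
    """Return the set of indices of documents containing the word."""
    return {i for i, doc in enumerate(documents) if word in doc.lower().split()}

def overlap(fp_a, fp_b):
    """Semantic similarity = size of the overlap of two fingerprints."""
    return len(fp_a & fp_b)

docs = [
    "the dog chased the cat",
    "a cat sat on the mat",
    "dogs and cats are pets",
    "the stock market fell today",
]

print(overlap(fingerprint("cat", docs), fingerprint("dog", docs)))    # 1 (shared context)
print(overlap(fingerprint("cat", docs), fingerprint("stock", docs)))  # 0 (no shared context)
```

Words that occur in similar contexts end up with overlapping fingerprints, which is the whole trick: similarity comes from shared contexts, not from any model of cortical language areas.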

So it’s language (in the case of humans), or any other representation of a world model that can be recited silently, that is the basis of C? I tend to agree.

Imagine a smart car (say, a Tesla Model S :wink:) that is aware of its status, location, and sensor data, and able to communicate all of that to the outside world. And calculating (thinking about) the best route to charge itself when fully autonomous. Does it not have a model of itself (pun intended), thinking and reporting: just what is needed for the C-word? Is that not beneficial?

In the same vein: humans are given goals too, by evolution… So?


I suspect the Retina is loaded with very non-biological processing - likely SOM mapping. It could be seen as guidance on what level of semantic mapping can be performed with a map. A more biologically inspired update method would be nice.
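
For what it’s worth, the “SOM mapping” guess above can be sketched in a few lines. This is a generic 1-D self-organizing map over scalar inputs, not anything from cortical.io’s actual implementation; the grid size, learning rate, and neighborhood radius are all illustrative.

```python
# Minimal 1-D self-organizing map (SOM) sketch. Each grid unit holds
# one scalar weight; training pulls the best-matching unit (and its
# neighbors, weighted by a Gaussian) toward each input.
import math
import random

random.seed(0)
GRID = 10
weights = [random.random() for _ in range(GRID)]

def train(data, epochs=50, lr=0.5, radius=2.0):
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: the unit whose weight is closest to x.
            bmu = min(range(GRID), key=lambda i: abs(weights[i] - x))
            for i in range(GRID):
                # Neighborhood function: nearby units move toward x too,
                # which is what gives the map its topological ordering.
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                weights[i] += lr * h * (x - weights[i])

train([0.1, 0.15, 0.5, 0.9, 0.95])
# After training, similar inputs activate nearby grid units.
```

The neighborhood update is the non-biological part being pointed at: every unit is adjusted by a globally computed distance to the winner, which is handy engineering but not how cortical maps are thought to form.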

Yes, mixing inside and outside perceptions together into the same what/where stream does not have to involve language. But it does have to be able to form joins with object-store recall for presentation to the subcortex.


The key thing here is the model. An AGI must form models of parts of reality, including itself, in order to choose strategies. This part is sorely missing from current AI.

The AGI car is aware of itself (location, dimensions, condition etc) but there is nothing in AGI to provide motivation, goals, ethics, emotions, etc. It performs tasks assigned to it with considerable skill and that’s all.

For many on this forum this is personal and I fail to see why. An AGI that does what I describe would be a fantastic achievement and incredibly useful. Why do you need more?


I don’t NEED more; I see consciousness as a key part of the operating system of an AGI.


Some of you might find Ramachandran’s book The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human useful, especially the Epilogue. You can read/borrow it for free from the Internet Archive.