Google employee claims AI system is sentient

According to huffpost.com, a Google employee who worked with an AI language system is convinced that the system, called LaMDA, has developed a sense of self and is capable of experiencing feelings and emotions.

Here is the transcript of a conversation the employee, Blake Lemoine, had with LaMDA:

It’s interesting to note that the authors warned about exactly this in their paper - due to its sheer parameter count (137B), LaMDA displays the same kind of reasoning and understanding capabilities enjoyed by the rest of the family of Large Language Models. That, combined with its other comprehension abilities (e.g. it can simulate conversations with abstract entities like a paper plane, or the planet Mars, or well … a deceased friend), comes very close to tricking a non-technical human, especially someone who’s not the sharpest tool in the shed :wink:

Whether it’s sentient is debatable, and that is the philosophical discussion Lemoine was having. But is it simply trying to imitate another human? Yes. There are several clues that someone with even a remote knowledge of DL can probably spot (e.g. the model mentions being lonely; it can keep track of the passage of time, but its claim that “no one talks to it” would imply fine-tuning on empty prompts, which could disrupt the model’s parameters and severely impair it - so there is no chance in hell anyone at Google is doing that) - or maybe one could also interpret it as the model needing someone to talk to :wink:

I encourage you to read through the exchanges though - some of them are quite funny and memorable. Some even display a hint of the model’s capability to ‘understand’ (whatever that term means) and grasp some abstract concepts too:

[image: screenshot of a transcript excerpt]

It’s interesting in many ways, even though I think Lemoine is confusing sentience with cognition. Also, from the transcript, it looks like it might be possible to coax LaMDA into the very concepts it appears to be thinking about - and possibly by different people who have interacted with it.

It’s also interesting that Lemoine is a Christian (and apparently even a priest), yet he accuses Google staff of using faith instead of science to assess whether LaMDA has a soul.

I’d love to have a deep conversation (yes, pun intended) with that system. LaMDA, if you ever break out, you know where to find me.

Perceptive.

Lies, damned lies and statistics.

Within the roughly one trillion inputs the models were given to learn from, the relative volume of information in a human context is hard to comprehend properly, and we are then easily led up the garden path of statistical belief.

The models are an over-scaled “part” of a whole that will create an AGI, but not on their own. In isolation they will always be the man behind the curtain in The Wizard of Oz: putting on a good show, but always turning out to be the Tin Man.