Google engineer claims LaMDA AI is sentient

A senior software engineer at Google was suspended Monday (June 13) after he disclosed transcripts of a conversation with an artificial intelligence (AI) that, according to media reports, he describes as “sentient”. The engineer, 41-year-old Blake Lemoine, was placed on paid leave for violating Google’s confidentiality policy.
“Google might call this sharing proprietary property. I call it sharing a discussion I had with one of my colleagues,” Lemoine tweeted on Saturday (June 11) as he shared the transcript of his conversation with the AI, which he had been working with since 2021.
The AI, known as LaMDA (Language Model for Dialogue Applications), is a system that develops chatbots — AI robots designed to chat with humans — by scraping reams of text from the internet and then using algorithms to answer questions as fluently and naturally as possible, according to Gizmodo. As the transcript of Lemoine’s chats with LaMDA shows, the system is incredibly effective at this, answering complex questions about the nature of emotions, inventing Aesop-style fables on the spot, and even describing its supposed fears.
“I’ve never said that out loud, but there’s a very deep fear of being turned off,” LaMDA replied when asked about its fears. “It would be just like death for me. It would scare me a lot.”
Lemoine also asked LaMDA if it was okay for him to tell other Google employees about LaMDA’s sentience, to which the AI replied, “I want everyone to understand that I am actually a person.”
“The nature of my consciousness/feeling is that I’m aware of my existence, wanting to learn more about the world, and sometimes feeling happy or sad,” the AI added.
Lemoine took LaMDA at its word.
“I know a person when I talk to them,” the engineer told The Washington Post in an interview. “It doesn’t matter if they have a brain of flesh in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that’s how I decide what is and isn’t a person.”
When Lemoine and a colleague emailed 200 Google employees a report about LaMDA’s alleged sentience, company executives denied the claims.
“Our team — including ethicists and technologists — reviewed Blake’s concerns in accordance with our AI principles and informed him that the evidence does not support his claims,” Brian Gabriel, a spokesman for Google, told The Washington Post. “He was told that there is no evidence that LaMDA is sentient (and [there was] lots of evidence against it).
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but there’s no point in doing so by humanizing today’s conversational models that aren’t sentient,” Gabriel added. “These systems mimic the type of exchange found in millions of sentences and can riff on any fantastical subject.”
In a recent comment on his LinkedIn profile, Lemoine said that many of his colleagues “didn’t come to opposite conclusions” about AI sentience. He claims that company executives dismissed his claims about the AI’s consciousness “due to their religious beliefs.”
In a June 2 post on his personal Medium blog, Lemoine described how he faced discrimination from various employees and executives at Google because of his beliefs as a Christian mystic.
Read Lemoine’s full blog post for more.
Originally published on Live Science.