The engineer said he believes the program achieved a level of awareness after hundreds of interactions with LaMDA, Google's latest unreleased AI system.
Technology companies are constantly working to augment the capabilities of their ever-evolving artificial intelligence. But Google was quick to dismiss claims that one of its programs had advanced so far that it had become sentient.
Engineer Blake Lemoine works in Google's Responsible AI organization, where he was testing whether the LaMDA model generated discriminatory language or hate speech.
The engineer's concerns arose from the persuasive responses he saw the AI system generate about its rights and the ethics of robotics.
In April, he shared a document with executives titled "Is LaMDA Sentient?" containing a transcript of his conversations with the AI model.
After being placed on leave, Lemoine posted the transcript on his Medium account, which he says shows the model arguing that it is sentient because it has feelings, emotions, and subjective experience.
Google believes that Lemoine's actions relating to his work on LaMDA violated its confidentiality policies. Lemoine had invited a lawyer to represent the AI system and spoken to a representative of the House Judiciary Committee about the company's allegedly unethical activities.
In a Medium post published June 6, the day he was placed on administrative leave, Lemoine said he had sought a small amount of outside consultation to help guide his investigation of the AI ethics concerns he had raised within the company, and that the people he spoke with included US government employees.
The search giant announced LaMDA publicly at Google I/O 2021. Google hopes the model will help improve its conversational AI assistants and make conversations more natural.
The company already uses similar language model technology in Gmail's Smart Compose feature and in search engine queries.
"There is no evidence that LaMDA is sentient," said Brian Gabriel, a Google spokesman. "Our team reviewed Blake's concerns in accordance with our AI principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient, and that there is plenty of evidence against it."
Google claims the engineer violated confidentiality policies
Gabriel added: "Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it does not make sense to do so by anthropomorphizing today's conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic. Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making such wide-ranging assertions, or anthropomorphizing LaMDA, as Blake did."
Many in the AI community dismissed the engineer's claims in interviews and public statements, while some pointed out that his account highlights how the technology can lead people to assign human traits to it.
But arguably, the belief that AI could be sentient highlights concerns about what this technology is capable of.
Linguistics professor Emily M. Bender of the University of Washington agreed that it is a mistake to equate persuasive written responses with sentience. "We now have machines that can mindlessly generate words," she said, "but we have not learned how to stop imagining a mind behind them."
Timnit Gebru, the prominent AI ethicist ousted by Google in 2020, said the debate over AI sentience risks derailing more important ethical conversations about how AI is used.
Despite his concerns, Lemoine said he plans to continue working on AI in the future. "I intend to stay in the field of artificial intelligence, whether with Google or elsewhere," he wrote in a tweet.