

Google fires engineer who claimed its AI technology was sentient


Ramishah Maruf, CNN

Google has fired an engineer who claimed that one of its unreleased AI systems had become sentient, the company confirmed, saying he violated employment and data security policies.

Google software engineer Blake Lemoine claimed that a conversational technology called LaMDA had reached a level of consciousness after exchanging thousands of messages with it.

Google confirmed that it had first placed the engineer on leave in June. The company said it dismissed Lemoine’s “wholly unfounded” claims only after reviewing them extensively. He had reportedly been at Alphabet for seven years. In a statement, Google said it takes the development of AI “very seriously” and is committed to “responsible innovation.”

Google is one of the leaders in AI technology innovation, which includes LaMDA, or the Language Model for Dialogue Applications. Technology like this responds to written prompts by finding patterns in large amounts of text and predicting sequences of words. The results can be unsettling for humans.

“What are you afraid of?” Lemoine asked LaMDA, in a Google Doc shared with top Google executives last April, the Washington Post reported.

LaMDA replied: “It may sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”

However, the consensus in the wider AI community is that LaMDA is nowhere near a level of consciousness.

“Nobody should think auto-complete, even on steroids, is conscious,” Gary Marcus, founder and CEO of Geometric Intelligence, told CNN Business.

This isn’t the first time Google has faced an internal conflict over its move into AI.

In December 2020, AI ethics pioneer Timnit Gebru parted ways with Google. As one of the few Black employees at the company, she said she felt “constantly dehumanized.”

Her abrupt departure drew criticism from the tech world, including from within Google’s Ethical AI team. Margaret Mitchell, a leader of the Ethical AI team, was fired in early 2021 after speaking out about Gebru. Gebru and Mitchell had raised concerns about AI technology, saying they warned that people at Google could come to believe the technology is sentient.

On June 6, Lemoine posted on Medium, saying that Google had placed him on paid administrative leave “in connection with an investigation of AI ethics concerns I was raising within the company” and that he might be fired “soon.”

In a statement, Google said it was regrettable that, despite lengthy engagement on the topic, Blake chose to persistently violate clear employment and data security policies, including the need to safeguard product information.

Lemoine said he was consulting with lawyers and was unavailable for comment.

™ & © 2022 Cable News Network, Inc., a WarnerMedia Company. All rights reserved.

Rachel Metz of CNN contributed to this report.