Google Sidelines Engineer Who Claims Its A.I. Is Sentient

SAN FRANCISCO – Adding to yet another controversy over the company’s most advanced technology, Google recently placed an engineer on paid leave after dismissing his claim that its artificial intelligence is sentient.

Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, said in an interview that he was put on leave on Monday after the company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google said its systems imitated conversational exchanges and could riff on a variety of topics, but did not have consciousness. “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our A.I. principles and has informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” The Washington Post first reported Mr. Lemoine’s suspension.

For months, Mr. Lemoine had clashed with Google managers, executives and human resources over his startling claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most A.I. experts believe the industry is a very long way from computing sentience.

Some A.I. researchers have long made optimistic claims that these technologies will soon reach sentience, but many others dismiss those claims. “If you used these systems, you would never say such things,” said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.

While chasing the A.I. vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. In March, Google fired a researcher who had sought to publicly disagree with the published work of two of his colleagues. And the dismissals of two A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google’s language models, have continued to cast a shadow over the group.

Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against.

“They said, ‘Have you been checked out by a psychiatrist recently?’” Mr. Lemoine said. In the months before he was placed on administrative leave, the company had suggested he take a mental health leave.

Yann LeCun, the head of A.I. research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems are not powerful enough to attain true intelligence.

The technology Google is using is what scientists call a neural network, a mathematical system that learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
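To make that idea concrete, here is a minimal sketch of the kind of learning loop the article describes, not Google’s actual system: a tiny neural network in PyTorch trained to separate “cat” from “not cat” images, with random stand-in data used in place of real photos.

```python
# Minimal sketch only: a tiny neural network learning a cat / not-cat label
# from (fake) image data. The data, sizes and model are illustrative choices.
import torch
from torch import nn

# Stand-in dataset: 200 small 32x32 RGB "photos" with random pixels and labels.
images = torch.rand(200, 3, 32, 32)
labels = torch.randint(0, 2, (200,))           # 1 = cat, 0 = not cat

model = nn.Sequential(                          # a very small convolutional net
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),                  # scores for the two classes
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The "analyzing large amounts of data" step: repeatedly nudge the weights
# so the network's predictions match the labels a little better each pass.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

With real labeled photos instead of random tensors, the same loop is how such a network comes to recognize a cat: the pattern is extracted from the data rather than programmed in by hand.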

Over the past several years, Google and other leading companies have designed neural networks that learned from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. These “large language models” can be applied to many tasks. They can summarize articles, answer questions, generate tweets and even write blog posts.
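A rough sketch of what “applied to many tasks” looks like in practice, using the open-source Hugging Face transformers library and publicly available models as stand-ins (LaMDA itself is an internal Google tool and is not used here):

```python
# Illustrative only: small pretrained models doing the kinds of tasks the
# article lists. Model choices and outputs are assumptions, not LaMDA.
from transformers import pipeline

article = (
    "Google placed an engineer on paid leave after dismissing his claim "
    "that the company's conversational A.I. system had become sentient."
)

# Summarize an article.
summarizer = pipeline("summarization")
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])

# Answer a question about a passage.
qa = pipeline("question-answering")
print(qa(question="What did the engineer claim?", context=article)["answer"])

# Generate new text from a short prompt (a tweet or the start of a blog post).
generator = pipeline("text-generation", model="gpt2")
print(generator("Large language models can", max_length=30)[0]["generated_text"])
```

The same underlying model family handles all three jobs; only the prompt and task wrapper change, which is what makes these systems so broadly useful.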

But they are extremely flawed. Sometimes they generate perfect prose. Sometimes they generate nonsense. The systems are very good at recreating patterns they have seen in the past, but they cannot reason like a human.
