AI Weekly: LaMDA’s ‘sentient’ AI debate triggers memories of IBM Watson

Want AI Weekly for free each Thursday in your inbox? Sign up here.

This week, I dove deep into the LaMDA ‘sentient’ AI hoopla.

I wondered what enterprise technical decision-makers should (and shouldn’t) make of it. And I learned a bit about how LaMDA is triggering memories of IBM Watson.

Finally, I decided to ask Alexa, who was sitting right on top of the piano in my living room.

Me: “Alexa, are you sentient?”

Alexa: “Artificially, maybe. But not in the same way you’re alive.”

Well, then. Let’s dig in.

This week’s AI beat

On Monday, I published “Sentient artificial intelligence: Have we reached peak AI hype?” – an article detailing the Twitter-fueled discourse that began last weekend with the news that Google engineer Blake Lemoine had told the Washington Post he believes LaMDA, Google’s conversational AI for generating chatbots based on large language models (LLMs), is sentient.

Hundreds from the AI community – from AI ethics experts Margaret Mitchell and Timnit Gebru to computational linguistics professor Emily Bender and machine learning pioneer Thomas G. Dietterich – pushed back on the “sentience” notion and clarified that no, LaMDA is not “alive” and won’t be eligible for Google benefits anytime soon.

But I spent most of this week mulling over the breathless media coverage and thinking about enterprise companies. Should they be concerned about customer and employee perceptions of AI as a result of this sensational news cycle? Was the focus on “sentient” AI simply a distraction from the more immediate issues around the ethics of how humans use “dumb” AI? What steps, if any, should companies take to increase transparency?

LaMDA debate recalls reactions to IBM Watson

According to David Ferrucci, founder and CEO of AI research and technology company Elemental Cognition – who previously led the team of IBM and academic researchers and engineers that developed IBM Watson, which won Jeopardy! in 2011 – LaMDA appears human in some way, and that triggers empathy, just as Watson did a decade ago.

“When we created Watson, someone posted a concern that we had enslaved a sentient being and should stop subjecting it to playing against its will,” he told VentureBeat. “Watson was not sentient – when people perceive a machine that talks and performs tasks humans perform, and in apparently similar ways, they can identify with it and project their thoughts and feelings onto the machine – that is, assume it is like us in more fundamental ways.”

Don’t hype anthropomorphism

Companies have a responsibility to explain how these machines work, he emphasized. “We should all be transparent about that, rather than hype the anthropomorphism,” he said. “We should explain that language models are not sentient beings, but rather algorithms that tabulate how words occur in large volumes of human-written text – how some words are more likely to follow others when surrounded by certain others. These algorithms can then generate sequences of words that mimic how a human would sequence words, without any human thought, feeling or understanding of any kind.”
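
To make the statistical idea Ferrucci describes concrete, here is a deliberately toy sketch in Python – a tiny bigram model, nothing like how LaMDA or any Google system is actually built – that tabulates which words follow which in a sample of text and then samples from those counts to generate plausible-looking word sequences:

```python
# Toy bigram model: tabulate which word follows which in human-written
# text, then sample from those counts to generate fluent-looking output.
# Illustrative only; real LLMs are vastly larger neural networks.
import random
from collections import defaultdict, Counter

corpus = (
    "the machine talks and the machine answers and people "
    "project thoughts and feelings onto the machine"
).split()

# Tabulate: count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Build a sequence by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:  # dead end: no observed successor
            break
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the machine talks and people project ..."
```

Even a tabulation this crude can produce fluent-sounding fragments, which is exactly Ferrucci’s point: readers easily project understanding onto output that is, underneath, just word statistics.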

The LaMDA controversy is about humans, not AI

Kevin DeWalt, CEO of AI consultancy Prolego, insists that the LaMDA hullabaloo isn’t about AI at all. “It’s about us, people’s reactions to this emerging technology,” he said. “As companies deploy solutions that perform work traditionally done by people, the employees who engage with them will be anxious.” And, he added: “If Google isn’t ready for this challenge, you can be sure that hospitals, banks and retailers will face massive employee rebellion. They’re not ready.”

So what should organizations do to prepare? DeWalt said companies need to anticipate this objection and overcome it in advance. “Most are struggling just to get the technology built and deployed, so this risk isn’t on their radar, but Google’s example illustrates why it’s necessary,” he said. “[But] nobody is worried about, or even paying attention to, this. They’re still trying to get the basic technology working.”

Focus on what AI can really do

However, while some have focused on the ethics of potentially “sentient” AI, AI ethics today is focused on human bias and how human programming affects the current “dumb” variety of AI, says Bradford Newman, partner at law firm Baker McKenzie, who spoke to me last week about the need for organizations to appoint a chief AI officer. And, he points out, AI ethics related to human bias is a significant issue happening right now, as opposed to truly “sentient” AI, which is not happening now or anytime remotely soon.

“Companies should always be considering how any AI application that is customer- or public-facing can negatively impact their brand, and how they can use effective communication, disclosures and ethics to prevent that,” he said. “But the focus of AI ethics right now is on how human bias enters the chain – on humans using data and programming techniques that unfairly bias the non-sentient AI that is produced.”

For now, Newman said he would tell clients to focus on what the AI’s purpose is and what it does, and to make clear what the AI can never do programmatically. “The corporations making AI know that most humans have a strong appetite for anything that makes their lives easier, and that cognitively, we like it,” he said, explaining that in some cases there is a strong appetite for making AI appear sentient. “But my advice is: make sure consumers know what the AI can be used for and what it’s incapable of being used for.”

The reality of AI is more nuanced than ‘sentient’

The problem is that “consumers, and people in general, don’t appreciate the significant nuances of how computers work,” Ferrucci said – especially when it comes to AI, because of how easy it can be to respond empathetically when we try to create AI that appears more human, in terms of both physical and intellectual tasks.

“For Watson, the human response to it was all over the map – we had people who believed Watson was looking up answers to known questions in a prepopulated spreadsheet,” he recalled. “When I explained that the machine didn’t even know what questions would be asked, the person said, ‘What! So how do you do that?’”

Ferrucci said that over the past 40 years, he has seen two extreme models for what is going on: “Either the machine is a big lookup table or the machine must be human,” he said. “It is categorically neither of the two – the reality is just more nuanced than that, I’m afraid.”

Don’t forget to sign up for AI Weekly here.

– Sharon Goldman, Senior Editor / Writer
Twitter: @sharongoldman
