HAL in 2001: A Space Odyssey. Ava in Ex Machina. Samantha, voiced by Scarlett Johansson, in Her. For decades, pop culture has promised us a future in which artificial intelligence (AI) develops far enough to build relationships with humans. But in every case, the future predicted by writers, directors and actors has missed the mark.
The first AI-human relationship in pop culture was the brainchild of Mary Shelley, whose Frankenstein appeared in 1818. In it, she gave readers a vision of a day when an empathetic, man-made being could fulfill the human desire for real connection.
Today, thanks to remarkable innovations in the field of artificial intelligence, that day has come. Humans now build relationships with AI on a daily basis, and machine learning is so deeply woven into everyday life that it is nearly impossible to separate our relationships with machines from our relationships with other sentient humans. Whenever we use Google Assistant to call a loved one, lean on facial recognition to upload photos to the internet, or browse online stores for gifts for a friend or family member, we integrate AI into our social, family and romantic lives.
But now that the future is here, it looks nothing like what the movies predicted. When it comes to pop culture and AI-human interactions, Hollywood gets it all wrong. A quick look at recent pop culture interpretations of AI-human relations reveals serious flaws. In 2001: A Space Odyssey, HAL 9000, Discovery One's onboard computer, malfunctions, and when Dave and Frank decide to disconnect it, its computerized error escalates into an AI version of a murderous mental break.
In Her, an operating system called Samantha has such impeccable language and communication skills that her user, Theodore, falls in love with her. She is not just a virtual assistant – she is an invisible temptress.
In Ex Machina, the gynoid Ava has a human face atop her robotic body, and her computer-generated heart harbors a fully human emotion – hatred.
In each of these dramatic interpretations, AI is driven by emotion rather than algorithms. Hatred, love, psychosis – these are all human experiences that the flesh-and-blood writers of these works imposed on their computerized characters, but anyone who watches them hoping to gain a true understanding of AI's potential will come away with very little.
There is no doubt that AI has made impressive leaps in recent years, but despite its evolution, the technology is still in its infancy. The truth about AI has very little to do with the dramatic works provided by popular culture. Here is what we really know.
Yes, people form bonds with AI. But they still know that AI is not human.
ElliQ, a voice-operated care companion, has improved the lives of many seniors by keeping older adults engaged and active in their own homes. She is digital and AI-powered, but seniors who use her nonetheless report feeling less lonely, especially during the long lockdowns of COVID-19. She tells jokes, encourages exercise, reminds people to drink water and serves as an antidote to loneliness.
But despite her skill and sense of humor, ElliQ's users all say they know she is not a real person. The bonds they form with her are different from their relationships with the people in their support circles.
We have observed this distinctly different human-robot relationship in watching how people interact with Jenny, the AI sales coach at the heart of our immersive sales simulations. At companies like Zoom, Jenny is treated as a team member and even given her own HR profile. She holds live conversations with sellers to help them improve their performance.
But while she is as friendly and approachable as a human teammate, our research suggests her appeal actually derives from the fact that she is not human: she can deliver an assessment without embarrassing her practice partner. Because she is AI-powered, her coaching sessions are free of embarrassment and inhibition. The computer evaluates only against defined criteria, so those who use her services can improve with fewer negative emotions.
A caveat: for AI to succeed, humans want to know from the very first moment that they are talking to a computer.
As AI continues to expand its emotional range, businesses should remember that deception is the number one obstacle to AI success. When humans are tricked into thinking they are talking to a person when it is in fact an AI, they eventually feel frustrated, and the emotional bond breaks. But when humans know from the outset that they are talking to a bot, they intuitively adjust their communication – they do not argue, and they do not become overly personal.
In the future, this transparency about AI will open significant avenues for emotional healing, mental health treatment, and social and professional growth. The story of Joshua Barbeau, who conversed with a chatbot simulation of his dead fiancée to help cope with his grief, is a compelling indicator of AI's potential when it is embraced without deception.
Of course, we must proceed with caution. Amid a post-pandemic shortage of clinicians and a deepening mental health crisis, chatbot-based mental health services such as Talkspace are rapidly becoming mainstream. The field carries both extreme danger and extreme potential: AI has shown great promise as a frontline tool for tackling the growing mental health crisis, especially in suicide prevention, but the technology is young and testing data is scarce. Even with the most advanced technology, there are no quick fixes or Hollywood endings.
There is no doubt that the future in which humans and robots communicate and form emotional bonds has arrived. But unlike the dramatic previews in film and literature, it has come with significantly less fanfare. AI technology is still very new, and its ability to help humans grow and develop is promising. But if you think you know anything about AI from watching a movie, think again.
Ariel Hitron is the co-founder and CEO of Second Nature.
This article was published on VentureBeat's DataDecisionMakers, where experts, including the technical people doing data work, share data-related insights and innovations.