Get ready for your evil twin

A chilling academic study was published earlier this year by researchers at Lancaster University and UC Berkeley. Using a sophisticated form of AI known as a generative adversarial network (GAN), they created artificial human faces (i.e., photorealistic fakes) and showed these fakes, mixed in with real faces, to hundreds of human subjects. They discovered that this type of AI technology has become so effective that we humans can no longer tell the difference between real people and virtual people (or, as I call them, veeple).
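To ground the term, here is a minimal sketch of the adversarial training idea behind a GAN, written in PyTorch. It trains on toy random vectors rather than face images; the photorealistic faces in the study came from far larger image-scale generators, so treat this purely as an illustration of the generator-versus-discriminator dynamic, not the model the researchers used.

```python
# Toy GAN sketch: a generator learns to fool a discriminator.
# Operates on random vectors, not images; for illustration only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32

# Generator: maps random latent noise to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, data_dim) + 2.0  # stand-in for "real" data
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: learn to tell real from fake.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) \
           + loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make fakes the discriminator calls real.
    g_loss = loss_fn(D(G(torch.randn(64, latent_dim))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Scaled up from vectors to millions of face images, this same adversarial pressure is what drives the generator toward fakes that humans can no longer distinguish from photographs.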

And that wasn’t even their most frightening finding.

You see, they also asked their test subjects to rate the “trustworthiness” of each face, and found that people rate AI-generated faces as significantly more trustworthy than real ones. As I argued in a recent academic paper, this result makes it highly likely that advertisers will make extensive use of AI-generated people in place of human actors and models. Working with virtual people will be cheaper and faster, and if they are also perceived as more trustworthy, they will be more persuasive as well.

This is a troubling direction for print and video ads, but it becomes terrifying when we consider the new forms of advertising that will soon emerge in the metaverse. As consumers spend more time in virtual and augmented worlds, digital advertising will transform from simple images and videos into AI-driven virtual people that engage us in promotional conversations.

Armed with extensive databases of personal information about our behaviors and interests, these AI-powered conversational agents will be highly effective advocates for whatever third party pays them to deliver messaging. And unless this technology is regulated, these AI agents will also track our emotions in real time, monitoring our facial expressions and vocal inflections so they can adapt their conversational tactics (i.e., their sales pitch) to maximize persuasive impact.
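To make concrete what "adapting the pitch to your emotions" could look like, here is a deliberately toy sketch in Python. Every name in it is hypothetical: read_facial_expression() is a stub standing in for the real-time affect model such an agent would use, and the canned responses stand in for a full dialogue system.

```python
# Hypothetical sketch of an emotion-adaptive sales agent.
# read_facial_expression() is a stub; a real agent would infer
# affect from live camera and microphone input.
import random

def read_facial_expression() -> str:
    """Stub: stands in for a real-time emotion-recognition model."""
    return random.choice(["interested", "skeptical", "bored"])

# Conversational tactics keyed to the detected emotional state.
PITCH_BY_EMOTION = {
    "interested": "Great news: the premium tier is 20% off today.",
    "skeptical": "Here's an independent review you might find reassuring.",
    "bored": "Short version: it saves you an hour a day.",
}

def deliver_pitch(turns: int = 3) -> None:
    for _ in range(turns):
        emotion = read_facial_expression()  # monitor the user in real time
        print(f"[user seems {emotion}] -> {PITCH_BY_EMOTION[emotion]}")

if __name__ == "__main__":
    deliver_pitch()
```

Even this crude feedback loop shows the asymmetry: the agent reads your reactions continuously, while you have no visibility into the strategy it is optimizing.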

As dystopian as this sounds, such AI-powered promotional avatars would be a legitimate use of virtual people in the metaverse. But what about fraudulent uses?

This brings me to the topic of identity theft.

In a recent Microsoft blog post, executive VP Charlie Bell warned that metaverse fraud and phishing attacks could come from “familiar faces – literally – avatars that pretend to be your coworkers.” I agree completely. In fact, I’m concerned that the ability to hijack or duplicate avatars could destabilize our sense of identity, leaving us perpetually unsure whether the people we’re talking to are the individuals we know or convincing fakes.

In the metaverse, an accurate replica of a person’s appearance and voice is often referred to as a “digital twin.” Earlier this year, NVIDIA CEO Jensen Huang delivered a keynote address using a cartoon digital twin of himself. “Fidelity will grow exponentially in the coming years, as will the ability for AI engines to autonomously control your avatar so you can be in multiple places at once,” he said. Yes, digital twins are coming.

That’s why we need to prepare for what I call “evil twins”: accurate virtual replicas of the appearance, voice, and mannerisms of you (or people you know and trust) that are used against you for deceptive purposes. This form of identity theft will happen in the metaverse, as it is a straightforward integration of technologies already developed for deepfakes, voice emulation, digital twinning, and AI-driven avatars.

And the scams could be quite elaborate. As Bell suggests, bad actors could lure you into a fake virtual bank, complete with a fraudulent teller asking for your information. Or fraudsters engaged in corporate espionage could invite you to a fake meeting in a conference room that looks exactly like the virtual conference room you always use. From there, you could unwittingly divulge confidential information to anonymous third parties.

Personally, I doubt fraudsters would even need that level of detail. After all, confronting a familiar face that looks, sounds, and acts like someone you know is a powerful tool in itself. This means metaverse platforms will need equally powerful authentication methods that verify whether we are interacting with the actual person (or their authorized twin) and not an evil twin deceptively deployed to defraud us. If the platforms don’t address this issue quickly, the metaverse could collapse under an avalanche of fraud and identity theft.
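The article doesn't prescribe a mechanism, but one plausible building block for such authentication is a cryptographic challenge-response: the platform holds a registered public key for each identity, and an avatar must sign a fresh challenge to prove it is controlled by the keyholder. Below is a minimal sketch using Python's cryptography package; the protocol details are illustrative, not any real platform's design.

```python
# Minimal challenge-response sketch for verifying an avatar's identity.
# Assumes the platform stores a registered Ed25519 public key per user;
# illustrative only, not an actual metaverse platform's protocol.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the real user generates a keypair and registers the public key.
user_key = Ed25519PrivateKey.generate()
registered_public_key = user_key.public_key()

# Session start: the platform issues a fresh random challenge (prevents replay).
challenge = os.urandom(32)

# The avatar's controller signs the challenge with the private key.
signature = user_key.sign(challenge)

# The platform verifies the signature against the registered key.
try:
    registered_public_key.verify(signature, challenge)
    print("Avatar verified: controlled by the registered keyholder.")
except InvalidSignature:
    print("Evil twin? Signature does not match the registered identity.")
```

The point of the design is that visual and vocal likeness proves nothing: only possession of the registered private key does, which is exactly the property an evil twin cannot replicate.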

Whether you’re excited about the metaverse or not, major platforms are on their way. And because virtual and augmented reality technologies are designed to fool the senses, these platforms will skillfully blur the boundaries between real and fake. In the hands of bad actors, such capabilities can quickly become dangerous. That’s why pushing for strong safeguards is in everyone’s best interest, consumers and corporations alike. The alternative is a metaverse so riddled with fraud that it may never recover.

Louis Rosenberg, PhD, is the CEO of Unanimous AI and a pioneer in the fields of VR, AR, and AI.

