New Technology Helps Celebrities Resist AI Deepfakes

How the new AntiFake tool works (Image: Washington University in St. Louis)

When Scarlett Johansson discovered that her voice and face had been used to promote an artificial intelligence app online without her consent, the actress took legal action against the app’s maker, Lisa AI.

The video has since been removed. But many of these deepfakes can float around online for weeks, like a recent video in which an unauthorized likeness of the social media personality MrBeast appears to sell $2 iPhones.

Artificial intelligence has become so good at imitating people’s appearances and voices that it can be difficult to tell whether they are real or fake. About half of respondents to two recently released surveys on AI — one from Northeastern University and one from Voicebot.ai and Pindrop — said they couldn’t distinguish between synthetic and human-generated content.

This has become a particular problem for celebrities, whose efforts to stay one step ahead of the AI bots have become a game of whack-a-mole.

Now, new tools can make it easier for the public to spot these deepfakes, and more difficult for AI systems to create them.

“Generative AI has become an enabling technology that we believe will change the world,” said Ning Zhang, assistant professor of computer science and engineering at Washington University in St. Louis. “But when it is abused, there has to be a way to build a layer of defense.”

Scrambling signals

Zhang’s research team is developing a new tool, called AntiFake, that may help people protect their voices from deepfake abuse.

“It scrambles the signal so that it prevents the AI-based synthesis engine from generating an effective imitation,” Zhang said.

Zhang said AntiFake was inspired by the University of Chicago’s Glaze, a similar tool aimed at protecting visual artists from having their work copied by generative AI models.

The research is still very new; the team will present the project later this month at a major security conference in Denmark. It’s not yet clear how the tool will scale.

But in essence, before publishing a recording online, you upload your voice track to the AntiFake platform, which can be used as a standalone application or accessed via the web.

AntiFake scrambles the audio signal in a way that confuses the AI model. The edited track still sounds natural to the human ear, but to the synthesis system it is distorted, making it difficult to produce a clean voice clone.
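A rough intuition for this kind of defense: add a perturbation that stays below an audibility budget while pushing the features a cloning model relies on away from the speaker’s real voice. The sketch below is not AntiFake’s actual algorithm — it uses a toy stand-in “speaker embedding” and simple random search, and every function name here is a hypothetical illustration — but it captures the adversarial-perturbation idea in a few lines of Python.

import numpy as np

def speaker_embedding(wave: np.ndarray, n_bands: int = 16) -> np.ndarray:
    # Toy stand-in for a voice-cloning model's speaker encoder:
    # average log-magnitude of the spectrum in coarse frequency bands.
    spectrum = np.abs(np.fft.rfft(wave))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([band.mean() for band in bands]))

def antifake_style_perturb(wave: np.ndarray, eps: float = 0.003,
                           steps: int = 200, seed: int = 0) -> np.ndarray:
    # Random search under an L-infinity budget `eps` (keeps the change quiet):
    # a candidate perturbation is kept only if it pushes the surrogate
    # embedding further away from the original speaker's embedding.
    rng = np.random.default_rng(seed)
    original = speaker_embedding(wave)
    delta = np.zeros_like(wave)
    best = 0.0
    for _ in range(steps):
        candidate = np.clip(delta + rng.normal(0.0, eps / 4, wave.shape), -eps, eps)
        distance = np.linalg.norm(speaker_embedding(wave + candidate) - original)
        if distance > best:
            best, delta = distance, candidate
    return wave + delta

# Usage: one second of a toy 220 Hz "voice" at 16 kHz.
t = np.linspace(0, 1, 16000, endpoint=False)
voice = 0.1 * np.sin(2 * np.pi * 220 * t)
protected = antifake_style_perturb(voice)
print("max sample change:", np.abs(protected - voice).max())  # bounded by eps

The real system must hold up against actual synthesis engines and model perceived loudness far more carefully than this toy does, but the trade-off is the same: small enough to sound natural, large enough to derail the clone.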

The website describing how the tool works includes several examples of real voices transformed by the technology — a real human audio clip followed by its AntiFake-distorted counterpart.

You retain all rights to the track; AntiFake won’t use it for other purposes. But Zhang cautioned that AntiFake can’t protect you if your voice is already widely available online. That’s because AI bots already have access to the voices of all kinds of people, from actors to journalists in public media, and it takes only a few seconds of someone’s speech to produce a high-quality clone.

“All defenses have limits, right?” Zhang said.

But Zhang said that when AntiFake becomes available in a few weeks, it will provide people with a proactive way to protect their speech.

Deepfake detection

In the meantime, there are other solutions, such as deepfake detection.

Some deepfake detection technologies embed digital watermarks in video and audio so that users can determine whether they were made by AI; examples include Google’s SynthID and Meta’s Stable Signature. Other tools, developed by companies such as Pindrop and Veridas, can tell whether something is fake by examining fine details, such as how the sounds of words sync up with a speaker’s mouth.
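As a concept sketch only — real systems like SynthID and Stable Signature are far more sophisticated and robust to editing — the core watermarking idea fits in a few lines: embed a low-amplitude pseudorandom pattern keyed by a secret seed, then detect it later by correlation. All names and thresholds below are illustrative assumptions, not any vendor’s API.

import numpy as np

def embed_watermark(wave: np.ndarray, key: int = 42,
                    strength: float = 0.005) -> np.ndarray:
    # Add a quiet, keyed pseudorandom +/-1 pattern on top of the audio.
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=wave.shape)
    return wave + strength * mark

def detect_watermark(wave: np.ndarray, key: int = 42,
                     z_threshold: float = 4.0) -> tuple[bool, float]:
    # Correlate against the same keyed pattern. The z-score is roughly
    # standard normal on unmarked audio and large when the mark is present.
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=wave.shape)
    z = (wave @ mark) / (wave.std() * np.sqrt(wave.size))
    return z > z_threshold, z

# Usage: the detector fires on marked audio and stays quiet otherwise.
t = np.linspace(0, 1, 16000, endpoint=False)
audio = 0.1 * np.sin(2 * np.pi * 220 * t)
marked = embed_watermark(audio)
print(detect_watermark(marked))  # (True, z well above 4)
print(detect_watermark(audio))   # (False, z near 0)

The catch, which production systems spend most of their effort on, is surviving compression, re-recording and editing — operations that a naive additive mark like this one would not withstand.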

“There are certain things that humans say that are hard for machines to represent,” said Pindrop founder and CEO Vijay Balasubramaniyan.

But the problem with deepfake detection is that it only works on content that has already been posted, said Siwei Lyu, a computer science professor at the University at Buffalo who studies AI system security. Sometimes, unauthorized videos can sit online for days before they are flagged as AI-generated fakes.

“Even if the gap between something appearing on social media and it being determined to be AI-made is only a few minutes, it can cause harm,” Lyu said.

The need for balance

“I think this is the next evolution of how we protect this technology from misuse or abuse,” said Rupal Patel, professor of applied artificial intelligence at Northeastern University and a vice president at Veritone. “I just hope that with this protection, we don’t end up throwing the baby out with the bathwater.”

Patel believes it’s important to remember that generative AI can do amazing things, including helping people who have lost their voices speak again. For example, actor Val Kilmer has relied on a synthetic voice since he lost his real voice to throat cancer.


Developers need large collections of high-quality recordings to produce these results, and they wouldn’t have those recordings if their use was completely restricted, Patel said.

“I think it’s a balance,” Patel said.

Consent is key

When it comes to preventing deepfake abuse, consent is key.

In October, U.S. senators announced that they were discussing a new bipartisan bill — the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2023, or NO FAKES Act of 2023 for short — that would hold deepfake creators liable if they use people’s likenesses without permission.

“The bill would provide a uniform federal law where the right of publicity currently varies from state to state,” said Yael Weitz, an attorney at Kaye Spiegler, an art law firm in New York.

Currently, only half of US states have “right of publicity” laws, which give an individual the exclusive right to license the use of their identity for commercial promotion. They all offer different degrees of protection. But federal law may be years away.

This story was edited by Jennifer Vanasco. Audio produced by Isabella Gomez Sarmiento.
