
Fake Viral Celebrity Ad Warns Against Using AI to “Trick You Not to Vote”

Electoral interference increasingly relies on artificial intelligence and deepfakes. That’s why a viral public service ad uses them as a warning sign.

“Bad actors in this election will use AI to trick you into not voting,” the ad says. “Do not be fooled. This threat is very real.”

The “Don’t Let AI Steal Your Vote” video features Hollywood stars like Rosario Dawson, Amy Schumer, Chris Rock and Michael Douglas. But a lot of them aren’t real. Douglas, Rock and Schumer, for example, are deepfakes.

“The artists involved in this were very excited to do it,” Joshua Graham Lynn, CEO and co-founder of RepresentUs, the national, nonpartisan anti-corruption organization behind the video, told Scripps News.

“Everyone you see there gave us their image or volunteered in person. They were all really excited to do it to help get out the vote because they know it’s a really important election,” Lynn added.

RELATED STORY | Scripps News Spoofed to See How AI Could Impact Elections

The video, which has been viewed more than 6 million times on YouTube, warns voters to pay more attention to what they see and hear online.

“If something seems wrong, it probably is,” the real Rosario Dawson says in the video.

“Right now, it’s very difficult to tell what’s real and what’s fake on the Internet,” Lynn said. “Just watch any new video, and sometimes you can’t tell if it was made entirely by AI.”

“Technology is evolving rapidly and, more importantly, bad actors will always be on the front lines,” he added.

Disinformation experts and community leaders have denounced the use of AI-generated content to sow chaos and confusion around the election. The Department of Homeland Security, ABC News previously reported, warned state election officials that AI tools could be used to “create fake election records; impersonate election staff to gain access to sensitive information; generate fake voter calls to overwhelm call centers; and more convincingly spread false information online.”

“And so what we want is for voters to use their brains,” Lynn said. “Be skeptical if you see something telling you not to participate. If you see something about a candidate you support, question it. Check it out.”

While deepfakes could be used to spread election misinformation, experts warn they could also be used to destroy public trust in official sources, facts or their own instincts.

“We have situations where we all start to doubt the information we receive, especially information related to politics,” Kaylyn Jackson Schiff, a professor at Purdue University, told Scripps News. “And then with the election environment that we’re in, we’ve seen examples of claims that real images are fakes.”

Schiff said this phenomenon, this widespread uncertainty, is part of a concept called “the liar’s dividend.”

That is, “being able to credibly claim that real images or videos are fake, thanks to widespread awareness of deepfakes and manipulated media,” she said.

RELATED STORY | San Francisco sues websites used to create fake nudes of women and girls

Schiff, who is also co-director of Purdue’s Governance and Responsible AI Lab, and Christina Walker, a doctoral student at Purdue University, have tracked political deepfakes since June 2023, capturing more than 500 incidents in their database.

“For a lot of the things we capture in the database, the goal of the communication is actually satire, so it’s almost more similar to a political cartoon,” Walker told Scripps News. “It’s not always that everything is very malicious and intended to cause harm.”

Still, Walker and Schiff say some of the deepfakes are meant to cause “reputational damage,” and even parody videos intended for entertainment can take on new meaning if shared out of context.

“There remains concern that some of these deepfakes initially spread for fun could mislead individuals who are unaware of the original context if that message is then re-shared later,” Schiff said.

Although the deepfakes in the “Don’t Let AI Steal Your Vote” video are difficult to spot, Scripps News took a closer look and found visual artifacts and disappearing shadows. Deepfake technology has improved, but Walker said there are still telltale signs for now.

“This could be extra or missing fingers, blurred faces, writing in the image, things that aren’t quite right or don’t line up. All of these can indicate that something is a deepfake,” Walker said. “As these models improve, it becomes harder and harder to tell. But there are still ways to check the facts.”

Fact-checking a deepfake, or any video that triggers an emotional response, especially around elections, should begin with official sources such as secretaries of state or vote.gov.

“We encourage people to seek out additional sources of information, especially if it involves politics and as an election approaches,” Schiff said. “As well as thinking generally about the source of the information and what motivations they might have for sharing that information.”

“If anything says to you as a voter, ‘Don’t go to the polls. Things have changed. There is trouble. Things have been delayed. You can come back tomorrow,’ check your sources. This is the most important thing right now.”