
Generative misinformation is real: you’re just not the target, warns nonprofit that tracks deepfakes

Many feared that the 2024 election would be affected, perhaps even decided, by AI-generated disinformation. While there was some to be found, it was far less than anticipated. But don’t let that fool you: the threat of disinformation is real; you’re just not the target.

At least that’s the view of Oren Etzioni, a longtime AI researcher whose nonprofit, TrueMedia, has its finger on the pulse of the disinformation being generated.

“There is, for lack of a better word, a diversity of deepfakes,” he told TechCrunch in a recent interview. “Each serves its own purpose, and some are better known than others. Let me put it this way: for every one you actually hear about, there are a hundred that aren’t meant for you. Maybe a thousand. It’s really only the tip of the iceberg that reaches the mainstream press.”

The fact is that most people, and Americans especially, tend to assume that what they are experiencing is what everyone else is experiencing. That isn’t true, for many reasons. But in the case of disinformation campaigns, America is actually a difficult target, given a relatively well-informed population, readily available factual information, and a press that is trusted by at least most of the public most of the time (despite all the noise to the contrary).

We tend to think of deepfakes as something like a video of Taylor Swift doing or saying something she wouldn’t do. But the truly dangerous deepfakes are not those of celebrities or politicians, but those of situations and people who cannot be so easily identified and neutralized.

“The biggest thing people don’t understand is the variety. I saw an Iranian plane over Israel today,” he noted – something that did not happen but that cannot easily be refuted by anyone who isn’t on the ground. “You don’t see it because you’re not on the Telegram channel or in certain WhatsApp groups, but millions are.”

TrueMedia offers a free service (via web and API) for identifying images, videos, audio, and other material as fake or real. It is not a simple task, and it cannot be completely automated, but the organization is slowly building a ground-truth database that feeds the process.
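For illustration, a client for this kind of detection API might look like the sketch below. To be clear, the endpoint, request shape, and response fields here are invented for the example; they are not TrueMedia’s actual interface.

```python
# Hypothetical client for a media-verification API. Illustration only:
# the endpoint and JSON fields below are invented, NOT TrueMedia's real API.
import requests

API_URL = "https://api.example-detector.org/v1/analyze"  # placeholder endpoint

def check_media(media_url: str, api_key: str) -> dict:
    """Submit a media URL for analysis and return the verdict payload."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"url": media_url},
        timeout=30,
    )
    resp.raise_for_status()
    # A plausible response shape: {"verdict": "likely_fake", "confidence": 0.87}
    return resp.json()

if __name__ == "__main__":
    print(check_media("https://example.com/suspect_video.mp4", api_key="..."))
```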

“Our primary mission is detection. The academic benchmarks (for evaluating fake media) have long been obsolete,” Etzioni explained. “We train on things uploaded by people all over the world; we see what the different vendors say about it, what our models say about it, and we generate a conclusion. As a follow-up, we have a forensic team doing a deeper, slower investigation, not on all the items but on a significant fraction, so we have ground truth. We don’t assign a truth value unless we are quite sure of it; we can still be wrong, but we are substantially better than any other single solution.”
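The aggregation step he describes, combining vendor and in-house model scores into a single verdict and abstaining when unsure, might look roughly like the following sketch. The weights and cutoffs are assumptions made up for illustration, not TrueMedia’s actual parameters.

```python
# Illustrative ensemble aggregation: combine several detectors' scores into
# one verdict, and abstain (route to forensics) when the ensemble is unsure.
# Weights and cutoffs are invented for this sketch.
from dataclasses import dataclass

@dataclass
class DetectorScore:
    name: str         # which vendor or in-house model produced the score
    fake_prob: float  # estimated probability the item is fake, in [0, 1]
    weight: float     # how much this detector is trusted

def aggregate(scores: list[DetectorScore],
              fake_cutoff: float = 0.8,
              real_cutoff: float = 0.2) -> str:
    """Weighted average of detector outputs; no verdict unless confident."""
    total = sum(s.weight for s in scores)
    combined = sum(s.fake_prob * s.weight for s in scores) / total
    if combined >= fake_cutoff:
        return "likely fake"
    if combined <= real_cutoff:
        return "likely real"
    return "uncertain: send to forensic team"

print(aggregate([
    DetectorScore("vendor_a", 0.91, 1.0),
    DetectorScore("vendor_b", 0.85, 0.8),
    DetectorScore("in_house", 0.95, 1.2),
]))  # -> likely fake
```

The abstention branch mirrors the policy in the quote above: no truth value is assigned unless the system is sure, with the forensic team supplying ground truth on a fraction of items.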

A big part of the mission is quantifying the problem, in three key ways Etzioni described:

  1. How much is out there? “We don’t know. There’s no Google for this. There are various indications that it’s ubiquitous, but it’s extremely difficult, perhaps impossible, to measure accurately.”
  2. How many people see it? “That’s easier, because when Elon Musk shares something, you see: ‘10 million people saw it.’ So the number of eyeballs is easily in the hundreds of millions. Every week I see items that have been viewed millions of times.”
  3. What impact did it have? “This is perhaps the most important one. How many voters stayed home because of the fake Biden calls? We simply aren’t able to measure that. The Slovak campaign (a disinformation effort targeting a presidential candidate there in February) came at the last minute, and then he lost. That may well have tipped that election.”

All of this work is in progress, and some of it is just beginning, he emphasized. But you have to start somewhere.

“Let me make a bold prediction: over the next four years, we’re going to get much better at measuring this,” he said. “Because we have to. Right now we’re just trying to get by.”

As for the industry and technology efforts to make generated media easier to identify, such as watermarking images and text, they are harmless and perhaps beneficial, but they don’t even begin to solve the problem, he said.

“To put it another way, don’t bring a watermark to a shootout.” These voluntary standards are useful in collaborative ecosystems where everyone has a reason to use them, but they offer little protection against bad actors who want to avoid detection.
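His point about voluntary marks is easy to demonstrate. In the toy example below, a provenance tag stored in image metadata simply vanishes when the image is re-encoded; this uses Pillow and generic PNG text chunks as a stand-in for metadata-based marking in general, not any specific standard such as C2PA.

```python
# Toy demonstration: metadata-based provenance vanishes on re-encode.
# Generic PNG text chunks stand in for provenance metadata here; real
# standards are richer, but a bad actor can strip them just as easily.
from PIL import Image, PngImagePlugin

# Create an image and attach a provenance claim as PNG text metadata.
img = Image.new("RGB", (64, 64), "gray")
meta = PngImagePlugin.PngInfo()
meta.add_text("provenance", "generated-by:example-model")
img.save("marked.png", pnginfo=meta)
print(Image.open("marked.png").text)  # {'provenance': 'generated-by:example-model'}

# "Launder" the image: open it and re-save without carrying metadata over.
Image.open("marked.png").save("laundered.png")
print(Image.open("laundered.png").text)  # {} -- the mark is gone
```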

This all sounds pretty dire, and it is, but the most important election in recent history just took place without much in the way of AI shenanigans. That is not because generative disinformation isn’t commonplace, but because its purveyors did not find it necessary to take part. Whether that scares you more or less than the alternative is up to you.