


The impact of artificial intelligence on the 2024 elections

At a Congressional Internet Caucus Academy briefing this week, experts discussed the impact of artificial intelligence on the 2024 election. That impact was less extreme than expected – but deepfakes and misinformation still played a role.

There were major concerns ahead of the 2024 elections that AI would disrupt them through false information; overall, the impact was less extreme than experts predicted. However, AI still had an effect, as deepfakes like the Biden robocall and AI-powered disinformation chatbots show.

“We have not seen widespread use of AI tools to create deepfakes that could influence elections,” said Jennifer Huddleston, a senior technology policy fellow at the Cato Institute.


And while the “apocalypse” some experts predicted hasn’t come to pass, there’s still a lot of misinformation out there. The Biden robocall was the most notable example of a deepfake this election cycle. But as Tim Harper, senior policy analyst and project manager at the Center for Democracy and Technology, explained, there have been several other cases of AI misuse, including fake websites generated by foreign governments and deepfakes spreading incorrect information about candidates.

In addition to this kind of misinformation, Harper pointed out that a major concern is how AI tools could be used to target people on a more micro level than has been seen before, which he said happened during this election cycle. Examples include intimidating AI-generated texts aimed at Wisconsin students, and non-English disinformation campaigns targeting Spanish-speaking voters that were intended to create confusion. AI’s role in this election, Harper said, affected public trust and perceptions of the truth.

One positive trend seen this year, according to Huddleston, is that the existing information ecosystem has helped combat AI-powered misinformation. For example, with Biden’s robocall, the response was rapid, allowing voters to be more informed and better discern what to believe.

Huddleston said she thinks it’s too early to predict precisely how this technology will evolve and what public perception and adoption of AI might look like. But she said using education as a policy tool can help improve understanding of AI risks and reduce misinformation.

Internet literacy continues to grow, Harper said, and he expects a similarly slow progression in AI knowledge and adoption: “I think public education about these types of threats is really important.”

AI REGULATION AND ELECTIONS

While bipartisan legislation was introduced to combat AI-generated deepfakes, it was not adopted before the elections. However, other regulatory protections exist.

Harper pointed to the Federal Communications Commission’s (FCC) ruling that the Telephone Consumer Protection Act (TCPA) covers robocalls using artificially generated speech. This applied to the Biden robocall, for which the perpetrators were held responsible.

Still, regulatory gaps remain even here. The TCPA does not apply to nonprofit organizations, religious institutions, or calls to landlines. Harper said the FCC has been transparent and has worked to close such “loopholes.”

Regarding legislation to combat AI risks, Huddleston said that in many cases, some protections are already in place, and she argued that the problem is not always the AI technology itself, but rather its inappropriate use. She said those regulating this technology should be careful not to wrongly condemn technologies that may be beneficial, and should instead ask whether the problems are new or are existing problems to which AI adds an additional layer.

Many states have implemented their own AI legislation, and Huddleston warned that this “patchwork” of laws could create barriers to the development and deployment of AI technologies.

Harper pointed out that there are legitimate First Amendment concerns about the overregulation of AI. He argued that more regulation is needed, but it remains to be seen whether this could be done through agency-level regulation or new legislation.

To combat the lack of comprehensive federal legislation addressing the use of AI in elections, many private sector technology companies have attempted to self-regulate. According to Huddleston, this is not only due to government pressure, but also consumer demand.

Huddleston noted that broad definitions of AI in the regulatory world could also inadvertently restrict beneficial applications of AI.

She explained that many of them are innocuous applications, such as text-to-speech software and navigation platforms for finding the best route between campaign events. Using AI for things like captioning can also boost the capabilities of campaigns with limited resources.

AI can help identify potential instances of a campaign being hacked, Huddleston said, helping campaigns be more proactive in the event of a security threat.

“It’s not just campaigns that can benefit from certain uses of this technology,” Harper said, noting that election officials can use it to educate voters, inform planning, conduct post-election analysis and increase voting efficiency.

While this briefing addressed the impact of AI on elections, questions remain about the impact of elections on AI. Notably, the incoming administration’s platform included revoking the Biden administration’s executive order on AI, Huddleston said, adding that it remains to be seen whether the order will be revoked and replaced or revoked without replacement.