
The damage caused by AI: from suicide to deepfake pornography
Ukraine’s First Lady Olena Zelenska, falsely accused of purchasing a Bugatti in an AI-generated smear campaign, on an official visit to London last May. Dylan Martinez (EL PAÍS)

Sewell Setzer, fourteen years old, committed suicide last February after developing a romantic attachment to an artificial intelligence-generated character on the Character.AI platform, according to a lawsuit filed by his family against the company. The late Paul Mohney never saw combat, nor did Olena Zelenska, the wife of the Ukrainian president, buy a rare Bugatti Tourbillon sports car.

In both cases, false information generated by artificial intelligence (AI) was distributed with the aim of profiting from advertisements on obituary pages or promoting Russian propaganda.

In Edinburgh, a school cleaner and single mother of two lost her benefits due to bias in an AI system, as have many women in similar situations. A customer of a payment platform was mistakenly flagged by an algorithm for a transaction that never took place. A lawsuit challenges a vehicle’s safety due to an alleged programming error, and thousands of users have had their data used without consent.

At the end of the AI chain are real people, but responsibility for the damage caused to them remains unclear. “We are faced with an alarming legislative vacuum,” warns Cecilia Danesi, author of Consumer Rights at the Crossroads of Artificial Intelligence.

Profiting from the deaths of strangers

Making money from the deaths of strangers has become easy and inexpensive thanks to artificial intelligence, even if it comes at the cost of spreading lies that accentuate the grief of the deceased’s loved ones. This practice occurs on obituary pages, where AI generates information about the deceased using both real and fabricated details – such as Mohney’s alleged military history – to drive traffic and generate ad revenue.

“There’s a whole new strategy in search rankings,” SEO expert Chris Silver Smith told Fast Company. “It relies on noticing that someone has died, that there is a small surge in traffic for that person’s name, perhaps in a specific region, and on quickly optimizing and publishing articles about the person to capture those drips of search traffic.”

Misinformation and deepfake pornography

The AI Incidents website reports dozens of alerts each month concerning incidents caused by artificial intelligence or cases of abuse, and has already registered more than 800 reports. Among its latest entries are false information about the attempted assassination of Donald Trump, disinformation regarding Democratic presidential candidate Kamala Harris, and realistic deepfake pornography involving British politicians.

Concerns are growing about the impact of these creations, and their potential to go viral, on democratic processes. A survey carried out for the European Technology Outlook Report 2024 by the Center for the Governance of Change (CGC) found that 31% of Europeans believe that AI has already influenced their voting behavior.

“Citizens are increasingly concerned about the role of AI in elections. And while there is still no clear evidence that it led to substantial changes in election results, the emergence of AI has increased concerns about disinformation and deepfake technology around the world,” said Carlos Luca de Tena, executive director of CGC.

“When it comes to creating a fake video or image using generative AI, it is clear that AI serves as a medium, a tool, so the responsibility lies with the creator,” Danesi explained. “The main problem is that in most cases it is impossible to identify the creator. Fake pornography (AI-generated images with pornographic content), for example, has a direct impact on the gender gap, as platforms encourage their use with images of women. The growing volume of these images leads to greater precision in imitating women’s bodies, which results in greater marginalization and stigmatization of women. Therefore, in the age of misinformation and cancel culture, education is extremely important. As users, it is imperative that we double-check the content we come across and verify it before engaging with it.”

Danesi – a member of UNESCO’s Women4Ethical AI platform and co-author of a report on algorithmic audits presented to the G20 in Brazil – also worries about the effects of disinformation: “An algorithm can play a dual role: one in creating fake news through generative AI, and another in amplifying false content via search engines or the social media algorithms that make it go viral. In the latter case, it is clear that platforms cannot be expected to verify every piece of published content; it’s just not feasible.”

Automatic selectivity

Another concern about the misuse of AI is the bias in a Scottish welfare system that harms single-parent families, 90% of which are led by women. “Although the AI Act includes several provisions aimed at preventing bias (including requirements for high-risk systems), its lack of civil liability regulations fails to provide victims with the means to obtain compensation. The same applies to the Digital Services Act, which imposes certain transparency obligations on digital platforms,” explains Danesi.

Defective products

The AI Incidents page features an open legal case regarding a potential defect in a vehicle’s programming that could affect safety. In this context, Danesi explains: “The reform of the Defective Products Directive remains incomplete. The problem lies in the types of damages that can be claimed under the law, as it does not encompass moral damages, for example. Breaches of privacy or cases of discrimination are excluded from the protections offered by the directive.”

According to Danesi, these cases highlight the urgent need for legal reforms regarding civil liability in light of advances in AI. “Consumers are highly exposed to the potential harm that AI can cause. Without clear rules on how to deal with such harm, individuals are left unprotected. But clear rules on civil liability ensure legal certainty, promote innovation and facilitate agreements in the event of harm,” explains the researcher, adding that these rules also allow companies to make more informed investment decisions.

Danesi notes that the European Union is discussing initiatives to address these issues, including the Artificial Intelligence Act, the Digital Services Act – which establishes measures affecting the algorithms of digital platforms, social networks and search engines – the proposed directive on AI liability, and a reform of the Product Liability Directive.

“This directive had become obsolete. There was even debate over whether it applied to AI systems, since the definition of a product was based on something physical rather than digital. The amendment expands the concept of a product to include digitally produced computer files and programs. The regulation focuses on protecting the individual, so it doesn’t matter whether the damage comes from a physical or a digital product,” she explains.
