AI-generated images of child sexual abuse are spreading

A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear nude. A U.S. Army soldier accused of creating images depicting children he knew being sexually abused. A software engineer charged with generating hyperrealistic, sexually explicit images of children.

Law enforcement agencies across the United States are cracking down on a troubling spread of child sexual abuse imagery created using artificial intelligence technology – from manipulated photos of real children to computer-generated graphic depictions of children. Justice Department officials say they are aggressively pursuing offenders who exploit AI tools, while states are racing to ensure that people generating “deepfakes” and other harmful images of children can be prosecuted under their laws.

“We need to signal early and often that this is a crime, and that it will be investigated and prosecuted when the evidence supports it,” Steven Grocki, who heads the Justice Department’s Child Exploitation and Obscenity Section, said in an interview with The Associated Press. “And if you think otherwise, you are fundamentally wrong. And it’s only a matter of time before someone holds you accountable.”

The Justice Department says existing federal laws clearly apply to this type of content and recently brought what is believed to be the first federal case involving purely AI-generated images, meaning the children depicted are not real but virtual. In a separate case, federal authorities in August arrested a U.S. soldier stationed in Alaska who is accused of running innocent photos of real children he knew through an AI chatbot to make the images sexually explicit.

Trying to catch up with technology

The prosecutions come as children’s rights advocates work urgently to curb the misuse of the technology and head off a flood of disturbing images that officials fear could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and trace exploited children who don’t really exist.

Meanwhile, lawmakers are passing a series of laws to ensure local prosecutors can bring charges under state laws for AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered images of child sexual abuse, according to a study by the National Center for Missing and Exploited Children.

“As law enforcement, we’re playing catch-up with technology that, frankly, is evolving much faster than we are,” said Erik Nasarenko, district attorney for Ventura County, California.

Nasarenko pushed for legislation, signed last month by Governor Gavin Newsom, which makes clear that AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California’s law had required prosecutors to prove the images depicted a real child.

AI-generated images of child sexual abuse can be used to groom children, law enforcement officials say. And even if they are not physically abused, children can be deeply affected when their image is morphed to appear sexually explicit.

“I felt like a part of me had been taken away, even though I was not physically violated,” said Kaylin Hayman, 17, who starred on the Disney Channel show “Just Roll with It” and helped push the California bill after becoming a victim of “deepfake” images.

Hayman testified last year at the federal trial of the man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.

Open-source AI models that users can download to their computers are known to be favored by offenders, who can refine or modify the tools to produce explicit depictions of children, experts say. Offenders trade tips in dark web communities about how to manipulate AI tools to create such content, officials say.

A report last year by the Stanford Internet Observatory found that a research dataset that was the source for leading AI image-makers such as Stable Diffusion contained links to sexually explicit images of children, contributing to the ease with which some tools have been able to produce harmful images. The dataset was taken down, and researchers later said they deleted more than 2,000 web links to suspected child sexual abuse imagery.

Leading technology companies, including Google, OpenAI and Stability AI, have agreed to work with anti-child sexual abuse organization Thorn to combat the spread of child sexual abuse images.

But experts say more should have been done up front to prevent misuse before the technology became widely available. And steps companies are now taking to make it harder to abuse future versions of AI tools “won’t do much to stop” offenders from running older versions of models on their computers “without detection,” a Justice Department prosecutor noted in recent court documents.

“There hasn’t been any time spent making products safe, as opposed to effective, and that’s very difficult to do after the fact, as we’ve seen,” said David Thiel, chief technologist of the Stanford Internet Observatory.

AI images become more realistic

Last year, the National Center for Missing and Exploited Children’s CyberTipline received approximately 4,700 reports of content involving AI technology, a small fraction of the more than 36 million total reports of suspected sexual exploitation of children. As of October this year, the group was seeing about 450 reports per month on AI-related content, said Yiota Souras, the group’s legal director.

Those figures may be an undercount, however, because the images are so realistic that it is often difficult to tell whether they were generated by AI, experts say.

“Investigators spend hours trying to determine whether an image actually depicts a real minor or whether it is generated by AI,” said Rikole Kelly, a Ventura County deputy prosecutor who helped write the California bill. “Previously, there were very clear indicators… with the advancement of AI technology, that is no longer the case.”

Justice Department officials say they already have the necessary tools under federal law to prosecute offenders using such images.

The United States Supreme Court in 2002 struck down a federal ban on virtual child sexual abuse material. But a federal law signed the following year bans the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed “obscene.” That law, which the Justice Department says has been used in the past to charge cartoon imagery of child sexual abuse, specifically notes that there is no requirement “that the minor depicted actually exist.”

The Justice Department brought that charge in May against a Wisconsin software engineer accused of using the AI tool Stable Diffusion to create photorealistic images of children engaged in sexually explicit conduct; he was arrested after sending some to a 15-year-old boy through a direct message on Instagram, authorities say. The man’s lawyer, who is pushing to have the charges dismissed on First Amendment grounds, declined further comment on the allegations in an email to the AP.

A Stability AI spokesperson said the man was accused of using an earlier version of the tool released by another company, Runway ML. Stability AI says it has “invested in proactive features to prevent the misuse of AI for the production of harmful content” since taking over exclusive development of the models. A spokesperson for Runway ML did not immediately respond to a request for comment from the AP.

In cases involving “deepfakes,” when a photo of a real child has been digitally altered to make it sexually explicit, the Justice Department is bringing charges under the federal “child pornography” law. In one case, a North Carolina child psychiatrist who used an AI application to digitally “undress” girls posing on the first day of school in a decades-old photo shared on Facebook was convicted on federal charges last year.

“These laws exist. They will be used. We have the will. We have the resources,” Grocki said. “It’s not going to be a low priority that we ignore because no children are actually involved.”