


Cracking down on AI-generated images of child sexual abuse

WASHINGTON — A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear naked. A U.S. Army soldier accused of creating images depicting children he knew being sexually abused. A software engineer charged with generating hyperrealistic, sexually explicit images of children.

Law enforcement agencies across the United States are tackling a disturbing spread of child sexual abuse images created using artificial intelligence technology – from manipulated photos of real children to computer-generated graphic depictions of children. Justice Department officials say they are aggressively pursuing offenders who exploit AI tools, while states rush to ensure that people generating “deepfakes” and other harmful images of children can be prosecuted under their laws.

“We need to signal early and often that this is a crime, and that it will be investigated and prosecuted when the evidence supports it,” said Steven Grocki, who leads the Justice Department’s Child Exploitation and Obscenity Section, in an interview with The Associated Press. “And if you think otherwise, you are fundamentally wrong. And it’s only a matter of time before someone holds you accountable.”

The Justice Department says existing federal laws clearly apply to this type of content, and it recently filed what is believed to be the first federal case involving purely AI-generated images, meaning the children depicted are not real but virtual. In a separate case, federal authorities in August arrested a U.S. soldier stationed in Alaska, accused of running innocent photos of real children he knew through an AI chatbot to make the images sexually explicit.

Trying to catch up with technology

The prosecutions come as children’s advocates work urgently to curb the misuse of the technology and head off a flood of disturbing images that officials fear could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and trace exploited children who do not actually exist.

Meanwhile, lawmakers are passing a series of laws to ensure local prosecutors can bring charges under state laws for AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered images of child sexual abuse, according to a study by the National Center for Missing and Exploited Children.

“We’re playing catch-up as law enforcement with technology that, frankly, is evolving much faster than we are,” said Erik Nasarenko, district attorney for Ventura County, California.

Nasarenko pushed legislation signed last month by Gov. Gavin Newsom that makes clear that AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not pursue eight cases involving AI-generated content between last December and mid-September because California law required prosecutors to prove the images depicted a real child.

AI-generated images of child sexual abuse can be used to groom children, law enforcement officials say. And even if they are not physically abused, children can be deeply affected when their images are morphed to appear sexually explicit.

“I felt like a part of me had been taken away, even though I was not physically violated,” said Kaylin Hayman, 17, who starred on the Disney Channel show “Just Roll with It” and helped push the California bill after falling victim to “deepfake” images.

Hayman testified last year at the federal trial of the man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.

Open-source AI models that users can download to their computers are known to be favored by offenders, who can fine-tune or modify the tools to produce explicit depictions of children, experts say. Offenders trade tips in dark web communities on how to manipulate AI tools to create such content, officials say.

A report released last year by the Stanford Internet Observatory found that a research dataset that was the source for leading AI image generators such as Stable Diffusion contained links to sexually explicit images of children, contributing to the ease with which some tools have been able to produce harmful imagery.