AI-generated images of child sexual abuse are spreading. The police rush to stop them

WASHINGTON (AP) — A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear naked. A U.S. Army soldier accused of creating sexually explicit images of children he knew. A software engineer charged with generating hyper-realistic sexually explicit images of children.

Law enforcement agencies in the U.S. are cracking down on a troubling spread of child sexual abuse imagery created with artificial intelligence technology – from manipulated photos of real children to graphic images of computer-generated children. Justice Department officials say they are aggressively pursuing offenders who misuse AI tools, while states race to ensure that people who generate “deepfakes” and other harmful images of children can be prosecuted under their laws.

“We need to signal early and often that it is a crime, that it will be investigated and prosecuted if the evidence supports it,” said Steven Grocki, chief of the Justice Department’s Child Exploitation and Obscenity Section, in an interview with The Associated Press. “And if you think otherwise, you are fundamentally wrong. And it’s only a matter of time before someone holds you to account.”

The Justice Department says existing federal laws clearly apply to such content, and recently filed what is believed to be the first federal case involving solely AI-generated images — meaning the children depicted are not real but virtual. In another case, federal authorities in August arrested a U.S. soldier stationed in Alaska, accused of running innocent photos of real children he knew through an AI chatbot to make the images sexually explicit.

Trying to catch up with technology

The prosecutions come as child advocates press urgently to curb the misuse of the technology and head off a flood of disturbing images that officials fear could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and track down exploited children who don’t actually exist.

Lawmakers, meanwhile, are passing a raft of legislation to ensure local prosecutors can file charges under state law for AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states this year signed laws cracking down on digitally created or altered child sexual abuse images, according to a study by the National Center for Missing & Exploited Children.

“We are playing catch-up as law enforcement on a technology that, quite frankly, is evolving much faster than we are,” said Ventura County District Attorney Erik Nasarenko.

Nasarenko pushed for legislation, signed last month by Governor Gavin Newsom, that makes clear AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office was unable to prosecute eight cases involving AI-generated content between December and mid-September because California law had required prosecutors to prove the images depicted a real child.

AI-generated images of child sexual abuse could be used to groom children, law enforcement officials say. And even if they are not physically abused, children can be deeply affected if their image is altered to appear sexually explicit.

“It felt like a part of me had been taken away, even though I wasn’t physically abused,” said 17-year-old Kaylin Hayman, who starred on the Disney Channel show “Just Roll with It” and helped push for the California law after falling victim to “deepfake” images.

Kaylin Hayman, 17, poses outside Ventura City Hall in Ventura, California, on October 17, 2024. (AP Photo/Eugene Garcia)

Hayman testified last year at the federal trial of the man who digitally placed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.

Open source AI models that users can download to their computers are known to be favored by offenders, who can further train or modify the tools to produce explicit images of children, experts say. Abusers are trading tips in dark web communities on how to manipulate AI tools to create such content, officials say.

A report last year from the Stanford Internet Observatory found that a research dataset used as a source for leading AI image makers such as Stable Diffusion contained links to sexually explicit images of children, contributing to the ease with which some tools could produce harmful images. The dataset was taken down, and researchers later said they had removed more than 2,000 web links to suspected child sexual abuse images.

Top tech companies including Google, OpenAI and Stability AI have agreed to work with the anti-child-abuse organization Thorn to combat the spread of child sexual abuse images.

But experts say more should have been done from the start to prevent misuse before the technology became widely available. And the steps companies are taking now to make it harder to abuse future versions of AI tools will “do little to prevent” offenders from running older versions of the models on their computers without detection, a Justice Department prosecutor noted in recent court filings.

“No time was spent on making the products safe, as opposed to efficient, and it’s very difficult to do that after the fact – as we’ve seen,” said David Thiel, chief technologist at the Stanford Internet Observatory.

AI images become more realistic

The National Center for Missing & Exploited Children’s CyberTipline received approximately 4,700 reports of content involving AI technology last year – a small portion of the more than 36 million total reports of suspected child sexual exploitation. As of October this year, the group was processing about 450 reports per month on AI-related content, said Yiota Souras, the group’s chief legal officer.

However, these figures may be an undercount because the images are so realistic that it is often difficult to tell whether they were generated by AI, experts say.

“Investigators spend hours trying to determine whether an image actually depicts a real minor or whether it was generated by AI,” said Rikole Kelly, deputy district attorney in Ventura County, who helped draft the California bill. “There used to be some very clear indicators… with advances in AI technology, that’s just not the case anymore.”

Justice Department officials say they already have the tools under federal law to prosecute perpetrators over such images.

The U.S. Supreme Court in 2002 struck down a federal ban on virtual child sexual abuse material. But a federal law signed the following year bans the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed “obscene.” That law, which the Justice Department says has been used in the past to prosecute cartoon depictions of child sexual abuse, specifically notes that there is no requirement “that the minor depicted actually exist.”


The Justice Department brought that case in May against a Wisconsin software engineer accused of using the AI tool Stable Diffusion to create photorealistic images of children engaged in sexually explicit conduct, which he then posted on Instagram, authorities say. The man’s attorney, who is pushing to dismiss the charges on First Amendment grounds, declined further comment on the allegations in an email to the AP.

A spokesperson for Stability AI said the man is accused of using an earlier version of the tool released by another company, Runway ML. Stability AI says it has “invested in proactive features to prevent misuse of AI to produce malicious content” since taking over exclusive development of the models. A spokesperson for Runway ML did not immediately respond to a request for comment from the AP.

In cases involving “deepfakes,” where a real child’s photo has been digitally altered to make it sexually explicit, the Justice Department is filing charges under the federal “child pornography” law. In one case, a North Carolina child psychiatrist who used an AI application to digitally “undress” girls who posed on the first day of school in a decades-old photo shared on Facebook was convicted on federal charges last year.

“These laws exist. They will be used. We have the will. We have the resources,” Grocki said. “This will not be a low priority that we ignore because there is no actual child involved.”

__

The Associated Press receives funding from the Omidyar Network to support reporting on artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with charities, a list of supporters and funded areas of coverage at AP.org


Comments

Leave a Reply

Your email address will not be published. Required fields are marked *