Law Enforcement Braces for Flood of Child Sex Abuse Images Generated by A.I.


Law enforcement officials are bracing for an explosion of material generated by artificial intelligence that realistically depicts children being sexually exploited, deepening the challenge of identifying victims and combating such abuse.

The concerns come as Meta, a primary resource for the authorities in flagging sexually explicit content, has made it tougher to track criminals by encrypting its messaging service. The complication underscores the tricky balance technology companies must strike in weighing privacy rights against children’s safety. And the prospect of prosecuting that type of crime raises thorny questions of whether such images are illegal and what kind of recourse there may be for victims.

Congressional lawmakers have seized on some of those worries to press for more stringent safeguards, including by summoning technology executives on Wednesday to testify about their protections for children. Fake, sexually explicit images of Taylor Swift, likely generated by A.I., that flooded social media last week only highlighted the risks of such technology.

“Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation,” said Steve Grocki, the chief of the Justice Department’s child exploitation and obscenity section.

The ease of A.I. technology means that perpetrators can create scores of images of children being sexually exploited or abused with the click of a button.

Simply entering a prompt spits out realistic images, videos and text in minutes, yielding new images of actual children as well as explicit ones of children who do not actually exist. These may include A.I.-generated material of babies and toddlers being raped; famous young children being sexually abused, according to a recent study from Britain; and routine class photos, adapted so all of the children are naked.

“The horror now before us is that someone can take an image of a child from social media, from a high school page or from a sporting event, and they can engage in what some have called ‘nudification,’” said Dr. Michael Bourke, the former chief psychologist for the U.S. Marshals Service who has worked on sex offenses involving children for decades. Using A.I. to alter photos this way is becoming more common, he said.

The images are indistinguishable from real ones, experts say, making it tougher to identify an actual victim from a fake one. “The investigations are way more challenging,” said Lt. Robin Richards, the commander of the Los Angeles Police Department’s Internet Crimes Against Children task force. “It takes time to investigate, and then once we are knee-deep in the investigation, it’s A.I., and then what do we do with this going forward?”

Law enforcement agencies, understaffed and underfunded, have already struggled to keep pace as rapid advances in technology have allowed child sexual abuse imagery to flourish at a startling rate. Images and videos, enabled by smartphone cameras, the dark web, social media and messaging applications, ricochet across the internet.

Only a fraction of the material that is known to be criminal is getting investigated. John Pizzuro, the head of Raven, a nonprofit that works with lawmakers and businesses to fight the sexual exploitation of children, said that over a recent 90-day period, law enforcement officials had linked nearly 100,000 I.P. addresses across the country to child sex abuse material. (An I.P. address is a unique sequence of numbers assigned to each computer or smartphone connected to the internet.) Of those, fewer than 700 were being investigated, he said, because of a chronic lack of funding dedicated to fighting these crimes.

Although a 2008 federal law authorized $60 million to assist state and local law enforcement officials in investigating and prosecuting such crimes, Congress has never appropriated that much in a given year, said Mr. Pizzuro, a former commander who supervised online child exploitation cases in New Jersey.

The use of artificial intelligence has complicated other aspects of tracking child sex abuse. Typically, known material is randomly assigned a string of numbers that amounts to a digital fingerprint, which is used to detect and remove illicit content. If the known images and videos are modified, the material appears new and is no longer associated with the digital fingerprint.
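(Production systems such as Microsoft’s PhotoDNA use perceptual hashes that tolerate some alteration, but the brittleness described above can be illustrated with an exact cryptographic hash. Below is a minimal Python sketch under that simplifying assumption; the fingerprint database and file contents are hypothetical.)

    import hashlib

    def fingerprint(data: bytes) -> str:
        # An exact digital fingerprint: the SHA-256 digest of the file's bytes.
        return hashlib.sha256(data).hexdigest()

    # Hypothetical database of fingerprints of known illicit files.
    known_fingerprints = {fingerprint(b"bytes of a known file")}

    original = b"bytes of a known file"
    modified = b"bytes of a known file."  # a one-byte edit

    print(fingerprint(original) in known_fingerprints)  # True: an exact copy is flagged
    print(fingerprint(modified) in known_fingerprints)  # False: the edit breaks the match

Perceptual hashing narrows this gap by scoring visual similarity rather than comparing exact bytes, but heavily altered or newly generated material still escapes fingerprint matching, which is the problem investigators now face.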

Adding to those challenges is the fact that while the law requires tech companies to report illegal material if it is discovered, it does not require them to actively seek it out.

The approach of tech companies can vary. Meta has been the authorities’ best partner when it comes to flagging sexually explicit material involving children.

In 2022, out of a total of 32 million tips to the National Center for Missing and Exploited Children, the federally designated clearinghouse for child sex abuse material, Meta referred about 21 million.

But the company is encrypting its messaging platform to compete with other secure services that shield users’ content, essentially turning off the lights for investigators.

Jennifer Dunton, a legal consultant for Raven, warned of the repercussions, saying that the decision could drastically limit the number of crimes the authorities are able to track. “Now you have images that no one has ever seen, and now we’re not even looking for them,” she said.

Tom Tugendhat, Britain’s security minister, said the move would empower child predators around the world.

“Meta’s decision to implement end-to-end encryption without robust safety features makes these images available to millions without fear of getting caught,” Mr. Tugendhat said in a statement.

The social media giant said it would continue providing any tips on child sexual abuse material to the authorities. “We’re focused on finding and reporting this content, while working to prevent abuse in the first place,” Alex Dziedzan, a Meta spokesman, said.

Even though there is only a trickle of current cases involving A.I.-generated child sex abuse material, that number is expected to grow exponentially, raising novel and complex questions of whether existing federal and state laws are adequate to prosecute these crimes.

For one, there is the issue of how to treat entirely A.I.-generated materials.

In 2002, the Supreme Court overturned a federal ban on computer-generated imagery of child sexual abuse, finding that the law was written so broadly that it could potentially also limit political and artistic works. Alan Wilson, the attorney general of South Carolina who spearheaded a letter to Congress urging lawmakers to act swiftly, said in an interview that he anticipated that ruling would be tested, as instances of A.I.-generated child sex abuse material proliferate.

Several federal laws, including an obscenity statute, can be used to prosecute cases involving online child sex abuse materials. Some states are looking at how to criminalize such content generated by A.I., including how to account for minors who produce such images and videos.

For Francesca Mani, a high school student in Westfield, N.J., the lack of legal repercussions for creating and sharing such A.I.-generated images is particularly acute.

In October, Francesca, 14 at the time, discovered that she was among the girls in her class whose likenesses had been manipulated and stripped of their clothes; the result was a nude image of her that she had not consented to, which was then circulated in online group chats.

Francesca has gone from being upset to angry to empowered, her mother, Dorota Mani, said in a recent interview, adding that they were working with state and federal lawmakers to draft new laws that would make such fake nude images illegal. The incident is still under investigation, though at least one male student was briefly suspended.

This month, Francesca spoke in Washington about her experience and called on Congress to pass a bill that would make sharing such material a federal crime.

“What happened to me at 14 could happen to anyone,” she said. “That’s why it’s so important to have laws in place.”


Source: https://www.nytimes.com/2024/01/30/us/politics/ai-child-sex-abuse.html