States Move to Ban Deepfake Nudes to Fight Sexually Explicit Images of Minors


Caroline Mullet, a ninth grader at Issaquah High School near Seattle, went to her first homecoming dance last fall, a James Bond-themed bash with blackjack tables attended by hundreds of girls dressed up in party frocks.

A few weeks later, she and other female students learned that a male classmate was circulating fake nude images of girls who had attended the dance, sexually explicit pictures that he had fabricated using an artificial intelligence app designed to automatically “strip” clothed photos of real girls and women.

Ms. Mullet, 15, alerted her father, Mark, a Democratic Washington State senator. Although she was not among the girls in the pictures, she asked if something could be done to help her friends, who felt “extremely uncomfortable” that male classmates had seen simulated nude images of them. Soon, Senator Mullet and a colleague in the State House proposed legislation to prohibit the sharing of A.I.-generated sexually explicit depictions of real minors.

“I hate the idea that I should have to worry about this happening again to any of my female friends, my sisters or even myself,” Ms. Mullet told state lawmakers during a hearing on the bill in January.

The State Legislature passed the bill without opposition. Gov. Jay Inslee, a Democrat, signed it last month.

States are on the front lines of a rapidly spreading new form of peer sexual exploitation and harassment in schools. Boys across the United States have used widely available “nudification” apps to surreptitiously concoct sexually explicit images of their female classmates and then circulated the simulated nudes via group chats on apps like Snapchat and Instagram.

Now, spurred in part by troubling accounts from teenage girls like Ms. Mullet, federal and state lawmakers are rushing to enact protections in an effort to keep pace with exploitative A.I. apps.

Since early last year, at least two dozen states have introduced bills to combat A.I.-generated sexually explicit images — known as deepfakes — of people under 18, according to data compiled by the National Center for Missing & Exploited Children, a nonprofit organization. And several states have enacted the measures.

Among them, South Dakota this year passed a law that makes it illegal to possess, produce or distribute A.I.-generated sexual abuse material depicting real minors. Last year, Louisiana enacted a deepfake law that criminalizes A.I.-generated sexually explicit depictions of minors.

“I had a sense of urgency hearing about these cases and just how much harm was being done,” said Representative Tina Orwall, a Democrat who drafted Washington State’s explicit-deepfake law after hearing about incidents like the one at Issaquah High.

Some lawmakers and child protection experts say such rules are urgently needed because the easy availability of A.I. nudification apps is enabling the mass production and distribution of false, graphic images that can potentially circulate online for a lifetime, threatening girls’ mental health, reputations and physical safety.

“One boy with his phone in the course of an afternoon can victimize 40 girls, minor girls,” said Yiota Souras, chief legal officer for the National Center for Missing & Exploited Children, “and then their images are out there.”

Over the last two months, deepfake nude incidents have spread in schools — including in Richmond, Ill., and Beverly Hills and Laguna Beach, Calif.

Yet few laws in the United States specifically protect people under 18 from exploitative A.I. apps.

That is because many current statutes that prohibit child sexual abuse material or adult nonconsensual pornography — involving real photos or videos of real people — may not cover A.I.-generated explicit images that use real people’s faces, said U.S. Representative Joseph D. Morelle, a Democrat from New York.

Last year, he introduced a bill that would make it a crime to disclose A.I.-generated intimate images of identifiable adults or minors. It would also give deepfake victims, or parents, the right to sue individual perpetrators for damages.

“We want to make this so painful for anyone to even contemplate doing, because this is harm that you just can’t simply undo,” Mr. Morelle said. “Even if it seems like a prank to a 15-year-old boy, this is deadly serious.”

U.S. Representative Alexandria Ocasio-Cortez, another New York Democrat, recently introduced a similar bill to enable victims to bring civil cases a،nst deepfake perpetrators.

But neither bill would explicitly give victims the right to sue the developers of A.I. nudification apps, a step that trial lawyers say would help disrupt the mass production of sexually explicit deepfakes.

“Legislation is needed to stop commercialization, which is the root of the problem,” said Elizabeth Hanley, a lawyer in Washington who represents victims in sexual assault and harassment cases.

The U.S. legal code prohibits the distribution of computer-generated child sexual abuse material depicting identifiable minors engaged in sexually explicit conduct. Last month, the Federal Bureau of Investigation issued an alert warning that such illegal material included realistic child sexual abuse images generated by A.I.

Yet fake A.I.-generated depictions of real teenage girls without clothes may not constitute “child sexual abuse material,” experts say, unless prosecutors can prove the fake images meet legal standards for sexually explicit conduct or the lewd display of genitalia.

Some defense lawyers have tried to capitalize on the apparent legal ambiguity. A lawyer defending a male high school student in a deepfake lawsuit in New Jersey recently argued that the court should not temporarily restrain his client, who had created nude A.I. images of a female classmate, from viewing or sharing the pictures because they were neither harmful nor illegal. Federal laws, the lawyer argued in a court filing, were not designed to apply “to computer-generated synthetic images that do not even include real human body parts.” (The defendant ultimately agreed not to oppose a restraining order on the images.)

Now states are working to pass laws to halt exploitative A.I. images. This month, California introduced a bill to update a state ban on child sexual abuse material to specifically cover A.I.-generated abusive material.

And Massachusetts lawmakers are wrapping up legislation that would criminalize the nonconsensual sharing of explicit images, including deepfakes. It would also require a state entity to develop a diversion program for minors who shared explicit images to teach them about issues like the “responsible use of generative artificial intelligence.”

Punishments can be severe. Under the new Louisiana law, any person who knowingly creates, distributes, promotes or sells sexually explicit deepfakes of minors can face a minimum prison sentence of five to 10 years.

In December, Miami-Dade County police officers arrested two middle school boys for allegedly making and sharing fake nude A.I. images of two female classmates, ages 12 and 13, according to police documents obtained by The New York Times through a public records request. The boys were charged with third-degree felonies under a 2022 state law prohibiting altered sexual depictions without consent. (The state attorney’s office for Miami-Dade County said it could not comment on an open case.)

The new deepfake law in Wa،ngton State takes a different approach.

After learning of the incident at Issaquah High from his daughter, Senator Mullet reached out to Representative Orwall, an advocate for sexual assault survivors and a former social worker. Ms. Orwall, who had worked on one of the state’s first revenge-porn bills, then drafted a House bill to prohibit the distribution of A.I.-generated intimate, or sexually explicit, images of either minors or adults. (Mr. Mullet, who sponsored the companion Senate bill, is now running for governor.)

Under the resulting law, first offenders could face misdemeanor charges, while people with prior convictions for disclosing sexually explicit images would face felony charges. The new deepfake statute takes effect in June.

“It’s not shocking that we are behind in the protections,” Ms. Orwall said. “That’s why we wanted to move on it so quickly.”


Source: https://www.nytimes.com/2024/04/22/technology/deepfake-ai-nudes-high-school-laws.html