Congress introduces new anti-nonconsensual deepfake law following Taylor Swift controversy

Lisa Battaglia
4 min read · Feb 1, 2024
Photo by Kathryn Riley/Getty Images

In the last week of January 2024, X (formerly Twitter) dealt with a trust and safety crisis that had been a long time coming. Nonconsensual deepfake explicit images of Taylor Swift circulated widely across the platform, finally igniting action from lawmakers.

While actresses like Gal Gadot, Emma Watson, and most recently Bollywood actress Rashmika Mandanna have also been victims of deepfake NCEI (nonconsensual explicit imagery), the images of Taylor Swift were another wake-up call to give victims some form of legal recourse.

Less than a week later, US lawmakers introduced the DEFIANCE Act in response to the AI-generated images. Senators Durbin (D-IL), Graham (R-SC), Klobuchar (D-MN), and Hawley (R-MO) state that the bill “will address nonconsensual deepfake pornography by providing victims a civil right of action to seek justice.” The DEFIANCE Act creates a federal civil remedy for victims depicted in a “digital forgery,” defined as “a visual depiction created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means to falsely appear to be authentic.”

The bill builds on a provision of the Violence Against Women Act Reauthorization Act of 2022, which provides victims of non-faked videos and images with a similar right of action. Congress acknowledged that “the volume of deepfake content available online is increasing exponentially as the technology used to create it has become more accessible to the public,” citing a 2019 study by Sensity that found 96 percent of deepfake videos were nonconsensual pornography.

Congress also noted that, at the federal level, “currently, there are no laws addressing deepfake pornography.” Although 48 states and Washington, D.C. have passed laws prohibiting the distribution or production of nonconsensual pornography, only California, Virginia, and Texas have enacted laws focused on deepfakes. Texas became the first to outlaw political deepfakes, while Virginia and California specifically target deepfake pornography.

But will any of these laws even make an impact?

After studying the law surrounding this issue, I have compiled six main points that you should know about any deepfake legislation Congress tries to implement.

1. Free Speech

Any legislation will raise major questions about artistic expression and free speech. How is using AI to create a deepfake different from an artist drawing a pornographic image of Taylor Swift? What if the artist uses digital illustration software and produces a highly realistic image? Is AI not just another artistic tool?

2. Anonymous Creators

Second, and most importantly, it would be nearly impossible to track down the original creator of an image, let alone everyone who distributes it. People can easily remain relatively anonymous online. MrDeepFakes, one of the most prominent deepfake porn websites, advertises jobs for deepfake creators. By offering to compensate them in cryptocurrency, MrDeepFakes keeps the exchange anonymous and untraceable. The technology itself also protects deepfake creators: “perpetrators go to great lengths to initiate such attacks at such an anonymous level that neither law enforcement nor platforms can identify them.” And if the creator lives outside the United States, US law offers little help.

3. Social Media Immunity

Could Taylor Swift sue X? At this point, no. Section 230 shields social media companies from liability even when deepfakes are distributed on their platforms. Although most platforms have zero-tolerance policies for revenge porn, we saw how easily and quickly the Taylor Swift images circulated on X. The platform’s best solution was to block searches for Taylor Swift altogether (barely a solution at all).

4. Repealing Section 230

If Congress decides to repeal or revise Section 230, Taylor Swift might have standing to sue any social media company that distributes the images. But would that stop creators from going to another platform and distributing them there?

5. Vague Language

The language in this bill remains vague: how will a court determine whether the image looks enough like Taylor Swift to warrant a case?

6. What about the Supreme Court?

The Supreme Court evaluated a similar question in Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002). The Free Speech Coalition challenged the Child Pornography Prevention Act of 1996 (CPPA), which expanded the prohibition on child pornography to include not only pornographic images made using actual children, but also “any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture,” that “is, or appears to be, of a minor engaging in sexually explicit conduct.” The CPPA “bans a range of sexually explicit images, sometimes called ‘virtual child pornography,’ that appear to depict minors but were produced by means other than using real children, such as through the use of youthful-looking adults or computer-imaging technology.” The Supreme Court concluded that computer-generated child pornography not involving actual children was protected speech.

If “virtual child pornography” is protected speech under Ashcroft v. Free Speech Coalition, adult victims might struggle to overcome the First Amendment protections deepfake creators could claim.

With deepfake NCEI on the rise, Congress and social media companies are starting to take the right steps. While I am not glad this happened to Taylor Swift, I am glad it became the catalyst for some action from Congress. Hopefully social media companies will take it as a warning to implement more focused and strategic crisis protocols so that no one else becomes the victim of an explicit deepfake.

For communications consulting and marketing services for trust & safety and tech policy companies, email me at lisa@lisabtag.com

For more tech policy content, follow me here on Medium or subscribe to my podcast The Elevated Podcast wherever you listen to your podcasts.
