How the Supreme Court Could Eliminate Online Sex Work With One Decision

Lisa Battaglia
May 18, 2023

This is my final capstone for my Master of Legal Studies at UCLA Law.

Link To Presentation

Intro, Slides 1–11

2) Ashley Mathews: 3) 31-year-old wife and mother, 4) clothing-line owner, and 5) sex-positive activist. 6) She is better known as Riley Reid, one of the most famous porn performers in the world. 7) Although she has quit mainstream pornography, she still faces backlash, censorship, and scrutiny because of her previous job. 8) After she gained millions of followers across the globe and built a living through her social media presence, Instagram permanently removed a dozen of her accounts, not because she created content that violated community guidelines, but because of her title: commercial sex worker.

9) She is not the only one. Sex workers, porn performers, sex educators, sex artists, and content creators have all faced social media censorship in the last six years. This regulation, aimed not at sexually explicit material but at content made by current and former sex workers, results from social media policies and practices that impose the very harms they were intended to prevent. Social media’s censorship and de-platforming of sex workers stem from the unpredictable outcomes of the law surrounding social media regulation. 10) So does Riley Reid have a right to express herself and share her voice on social media despite her past work? 11) The Supreme Court is currently evaluating a law that allows social media platforms to continue censoring sex workers’ speech as much as they choose, which continues to threaten their safety and rights. 12) This capstone evaluates the laws that lead to the censorship of sex workers’ content on social media and how the Supreme Court could further entrench that censorship with one pending case.

State of the Law Today, Slides 13–18

13) 47 U.S. Code § 230(c)(1) and (c)(2) of the Communications Decency Act of 1996, commonly referred to as Section 230, has been no stranger to the news in the last few years. From Twitter de-platforming the former President to a Supreme Court case evaluating YouTube’s liability for recommending ISIS videos through its algorithms, the Internet’s most important law made its way to the Supreme Court in February 2023 through Gonzalez v. Google, 2 F.4th 871 (9th Cir. 2021). 14) Nohemi Gonzalez, a U.S. citizen, was killed in the Paris terrorist attacks in 2015. The terrorist organization ISIS claimed responsibility for the attacks by issuing a YouTube video. Under the Anti-Terrorism Act, Gonzalez’s family filed an action against YouTube, Facebook, and Twitter for aiding and abetting an act of terrorism by allowing ISIS to use their platforms.

15) Gonzalez v. Google raises the issue of whether interactive computer services’ targeted recommendations fall within the scope of the liability protection provided by Section 230(c)(1). 47 U.S. Code § 230(c)(1) of the Communications Decency Act of 1996 states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The Supreme Court will evaluate whether Section 230(c)(1) protects YouTube from liability for recommending ISIS content through its automated algorithms. Gonzalez will be the Supreme Court’s first examination of the scope of the liability protection Section 230 provides.

16) While the Supreme Court evaluates Google’s liability under Section 230(c)(1) in Gonzalez, this capstone addresses the problems stemming from Section 230(c)(2). 17) Section 230(c)(2) adds: “(2) No provider or user of an interactive computer service shall be held liable on account of — (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).” Section 230(c)(2) shields interactive computer services from liability for removing or restricting content. 18) Section 230 as a whole allows companies to moderate or not moderate without liability for the massive amounts of information on their sites. While the Supreme Court analyzes (c)(1), this presentation examines the effect of (c)(2) on one group of content creators: commercial sex workers.

History of 230, Slides 19–27

19) Section 230 has drawn particular criticism in the last few years. Because no direct historical analog exists, courts and legislators attempt to piece together solutions for the complex issues new technologies raise. Without Congress’s enactment of Section 230, courts would have relied on the following two conflicting cases, which limited the Internet’s growth in its early days.

Cases that Led to Section 230: To Moderate or Not To Moderate?

20) Before Section 230 passed in 1996, Cubby v. CompuServe, 776 F. Supp. 135 (S.D.N.Y. 1991), examined the scope of liability for an internet service provider hosting defamatory material on its bulletin board service. CompuServe provided “computer-related products and services, including electronic bulletin boards, interactive online conferences, and topical databases that subscribers can access with a membership fee.” Plaintiffs Cubby, Inc. and Robert Blanchard “developed a computer database designed to publish and distribute electronic news and gossip.” Separately, a publication named Rumorville used CompuServe to publish statements about the plaintiffs, which the plaintiffs alleged were defamatory. Plaintiffs then sued CompuServe for carrying the defamatory statements. CompuServe argued “that it acted as a distributor, and not a publisher, of the statements, and cannot be held liable for the statements because it did not know and had no reason to know of the statements.”

The District Court for the Southern District of New York granted summary judgment for CompuServe, stating that because CompuServe did not review the content posted, it was not liable. The District Court declared that “CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.” Because CompuServe refrained from editing any content, it was not liable for a user’s defamatory speech. This decision encouraged other internet providers to take a hands-off moderation approach to avoid litigation.

21) Four years later, however, the New York Supreme Court faced a similar issue in Stratton Oakmont, Inc. v. PRODIGY Services Co. (N.Y. Sup. Ct. 1995). PRODIGY, a computer network, had at least two million subscribers who communicated with each other and with the general subscriber population on bulletin boards. On PRODIGY, an unidentified user posted statements claiming that Stratton Oakmont, a securities investment banking firm, committed fraud and criminal acts. Stratton Oakmont commenced an action against PRODIGY for hosting this libelous content. In various publications written by Geoffrey Moore, PRODIGY’s Director of Market Programs and Communications, “PRODIGY held itself out as an online service that exercised editorial control over the content of messages posted on its computer bulletin boards, thereby expressly differentiating itself from its competition and expressly likening itself to a newspaper.” Because PRODIGY monitored and decided to remove content that did not fit its community standards, the court determined it to be a publisher of information and held it liable for defamation claims. The court differentiated Stratton from Cubby, stating that because PRODIGY “utilized technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and bad taste, PRODIGY is clearly making decisions as to content…and such decisions constitute editorial control.” The court concluded that editorial control led to liability.

22) These two cases created a moderator’s dilemma — if a company chose to moderate content to keep its users safe, it was deemed a publisher and therefore liable for any piece of content on its platform. If a company chose not to moderate and allowed all speech, users would be less likely to return given the risk of viewing disturbing or libelous content. 23) The solution came in the Telecommunications Act of 1996, codified in Title 47 of the U.S. Code, which included Section 230, allowing internet service providers to take down content or leave content on their sites without liability.

24) At the time, President Bill Clinton emphasized the Act’s promotion of Internet growth: “Over the past 3 years, my Administration has worked vigorously to produce legislation that would provide consumers greater choices and better quality in their telephone, cable, and information services. This legislation puts us squarely on the road to a brighter, more productive future. In the world of the mass media, this Act seeks to remove unnecessary regulation and open the way for freer markets.”

25) Section 230 stands largely alone today because the rest of the Communications Decency Act was struck down in 1997. The Communications Decency Act made it a crime “for anyone to engage in online speech that is ‘indecent’ or ‘patently offensive’ if the speech could be viewed by a minor.” 26) In Reno v. American Civil Liberties Union, 521 U.S. 844 (1997), the ACLU challenged those provisions, codified at 47 U.S.C. § 223. Although the Communications Decency Act intended to protect minors from “obscene or indecent material,” its broad application infringed on protected speech between consenting adults. 27) Reno concluded that the Communications Decency Act “threatened to torch a large segment of the Internet community” and unconstitutionally restricted free speech. Because of this, the Act’s indecency provisions were struck down, but Section 230 remained.

28) This was not the end of the battle. The Communications Decency Act was not the last attempt to “torch a large segment of the Internet community” by restricting sexual speech.

Moderation of Sex Work: SESTA/FOSTA, Slides 29–34

29) Although Section 230 promoted the Internet’s growth, critics question whether Section 230 is necessary or beneficial to a healthy Internet 26 years after its passage. When legislation has attempted to regulate particular types of online content in the past, such as sex-trafficking content, unfavorable repercussions have ensued. The Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (H.R. 1865), combining the Stop Enabling Sex Traffickers Act (SESTA) and the Fight Online Sex Trafficking Act (FOSTA), was a legislative package passed under the Trump administration that amended 47 U.S.C. § 230 by adding subsection (e)(5) to prevent the online exploitation of trafficked persons.

30) The laws carved out an exception to Section 230, making social media platforms liable for sex trafficking content. FOSTA amends the act “to create an exception for sex trafficking, making it easier to target websites with legal action for enabling such crimes. The Communications Decency Provisions can no longer be construed to impair or limit civil action or criminal prosecution relating to sex trafficking; and those benefiting from ‘participating in a venture,’ knowingly assisting, supporting, or facilitating an act of sex trafficking are now in violation of the Federal criminal code.”

31–32) Despite its seemingly positive intentions, SESTA/FOSTA ended up “pushing sex workers and trafficking victims into more dangerous and exploitative situations.” According to the Decriminalize Sex Work Campaign, SESTA/FOSTA: 1) “endangers survivors and sex workers” by removing “communication and safety networks for sex workers and other at-risk communities,” 2) “impedes law enforcement’s efforts to find victims and prosecute traffickers” by “removing one of the most important avenues that law enforcement used for intelligence gathering and sting operations: the online platforms themselves,” and 3) “censors free speech on the Internet and endangers the livelihood of informal sector service workers” because “websites grow increasingly conservative about the content they allow to be posted.” FOSTA also established criminal penalties for those who promote or facilitate prostitution and sex trafficking through their control of online platforms.

33) The restriction and removal of sex workers from social media heightened the challenges law enforcement already faced in gathering tips and evidence. 34) Because social media companies became liable for sex trafficking content, they changed their policies to reduce all sex-related content and avoid litigation. Additionally, “when sweeping algorithms are applied to comply with overly broad legislation, speech is chilled and so, too, is the activism it drives.”
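The over-removal problem the quote describes can be made concrete with a minimal sketch, assuming a purely hypothetical keyword filter: the blocklist, function name, and example posts below are invented for illustration and do not represent Meta’s, TikTok’s, or any platform’s actual moderation system.

```python
# Hypothetical sketch of a sweeping keyword-based moderation filter.
# Not any platform's actual system; the blocklist and examples are invented
# purely to illustrate how a blunt rule over-removes permitted speech.

FLAGGED_TERMS = {"sex", "escort", "solicit"}  # assumed blocklist, for illustration only

def is_removed(post: str) -> bool:
    """Flag a post if it contains any blocklisted term, regardless of context."""
    text = post.lower()
    return any(term in text for term in FLAGGED_TERMS)

posts = [
    "Join our panel on sex worker rights and advocacy",  # advocacy that policies nominally allow
    "Sex education resources for parents",               # health/education content
    "Book me tonight, rates in my bio",                  # the solicitation such rules target
]

for post in posts:
    status = "REMOVED" if is_removed(post) else "KEPT"
    print(f"{status}: {post}")

# The advocacy and education posts are REMOVED because they contain the
# keyword "sex," while the solicitation post is KEPT because it avoids every
# blocklisted term: the rule is over-inclusive and under-inclusive at once.
```

A rule written broadly enough to satisfy sweeping legislation catches advocacy and education posts that the written policies claim to allow, while missing solicitation phrased without the keywords.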

Social Moderation, Slides 35–44

35) How were social media platforms actually handling content made by sex workers online around this time? 36) Meta introduced its Sexual Solicitation policy in October 2018, six months after SESTA-FOSTA was signed into law. The policy states that Meta allows the discussion of “sex worker rights and advocacy” but does not allow content that “facilitates, encourages or coordinates” commercial sex work. 37) Although the policy appears balanced on paper, its enforcement was not. In 2019, Instagram “announced it will be cracking down on content it deems ‘inappropriate’ following a number of changes within the Facebook company.” The goal was to “manage problematic content across the Facebook family of apps”; the company said in a statement that “a sexually suggestive post still appears in Feed if you follow the account that posts it, but this type of content may not appear for the broader community in Explore or hashtag pages.” 38) TikTok adopted a similar approach, stating that “we may limit distribution by making the content ineligible for recommendation into anyone’s For You Feed.” 39) Instead of removing speech, platforms restricted the distribution of speech, which creates the same problems for sex workers. 40) As complaints surfaced, Instagram tweeted that it does not target sex workers. 41) The data and enforcement show otherwise.

42) Social media enforcement skyrocketed around this time. Appeals rose, and content was later restored in higher numbers in 2019–2020, meaning more content that did not violate community guidelines had been wrongfully removed. These numbers are growing again in Meta’s most recent report.

43) This chain of events exposed the problem with editing Section 230 — creating liability for interactive computer services leads companies to remove more content to avoid litigation, further suppressing speech.

Obscenity Laws & First Amendment, Slides 45–57

44) But does anyone have a First Amendment right on the Internet? It is not clearly defined. The First Amendment provides that “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” 45) It remains undecided whether our First Amendment rights apply on a social media platform. 46) Even if we did have a First Amendment right on the Internet, would it protect sex workers’ speech?

47) To answer this, we must address the history of obscenity law. Material that is deemed obscene is not constitutionally protected, but courts have “struggled to define pornography and obscenity.” 48) While “pornography has generally been used to describe sexually explicit material,” the courts have shifted the meaning of “obscenity” since the U.S. adopted a test from the British case Regina v. Hicklin, L.R. 3 Q.B. 360 (1868). The Hicklin rule provided the following test for obscenity: “whether the tendency of the matter . . . is to deprave and corrupt those whose minds are open to such immoral influences, and into whose hands a publication of this sort may fall.”

49) The Supreme Court then examined the constitutionality of criminal obscenity statutes in Roth v. U.S., 354 U.S. 476 (1957) and determined that “the statutes, applied according to the proper standard for judging obscenity, do not offend constitutional safeguards against convictions based upon protected material, or fail to give adequate notice of what is prohibited.” The Court held that the test to determine obscenity was “whether to the average person, applying contemporary community standards, the dominant theme of the material taken as a whole appeals to prurient interest.”

50) The Court approached obscenity again in Memoirs v. Massachusetts, 383 U.S. 413 (1966), and articulated a new three-part test: “(a) the dominant theme of the material taken as a whole appeals to a prurient interest in sex; (b) the material is patently offensive because it affronts contemporary community standards relating to the description or representation of sexual matters; and (c) the material is utterly without redeeming social value.”

51) Miller v. California, 413 U.S. 15 (1973), now provides the leading test for obscenity, which the Court deemed not protected by the First Amendment: “The basic guidelines for the trier of fact must be: (a) whether the average person applying contemporary community standards would find that the work, taken as a whole, appeals to the prurient interest, (b) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law, and (c) whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.” 52) Since Miller was decided in 1973, the question of “average contemporary community standards” has been left to juries. 53) Leaving these questions to juries leads to inconsistencies depending on geographic area, religious or political background, upbringing, and many other factors. There is no one contemporary community standard.

54) This same concern was addressed in Ashcroft v. American Civil Liberties Union, 535 U.S. 564 (2002). After Reno, Congress attempted to address the concern of exposing children to indecent material online by enacting the Child Online Protection Act (COPA), 47 U.S.C. § 231, which drew upon the Miller test. Internet providers and civil liberties groups sued the United States Attorney General, alleging that COPA violated the First Amendment because relying on “community standards” to identify what material “is harmful to minors” rendered the statute overly broad. The majority determined that COPA’s use of “community standards” to identify “material that is harmful to minors” did not, by itself, violate the First Amendment. In the 8–1 decision, however, concurring justices questioned the legitimacy of the Miller test. 55) Justice Breyer, in his concurrence, stated that “to read the statute as adopting the community standards of every locality in the United States would provide the most puritan of communities with a heckler’s veto affecting the rest of the Nation.” Justice O’Connor, in her concurrence, stated: “I write separately to express my views on the constitutionality and desirability of adopting a national standard for obscenity for regulation of the Internet.” She wrote that “adoption of a national standard is necessary in my view for any reasonable regulation of Internet obscenity.”

After a remand, the Court heard Ashcroft v. American Civil Liberties Union, 542 U.S. 656 (2004), again. In the second hearing, the Court discussed the Miller test with more precision and upheld the preliminary injunction against COPA on the grounds “that the statute was not narrowly tailored to serve a compelling Government interest.” The Court ultimately concluded that COPA was too restrictive and did not pass the strict scrutiny test for speech regulation. Even with the intention of protecting children from indecent material, broad legislation often sweeps up content made for consenting adults and therefore infringes on adults’ First Amendment rights.

56) Ashcroft was not the first case to examine the balance between child safety and adult expression. Sable Communications of California, Inc. v. F.C.C., 492 U.S. 115 (1989) decided that while the government has a legitimate interest in protecting children from exposure to dial-a-porn messages, § 223(b) of the Communications Act of 1934 was not “narrowly drawn to achieve that purpose” and infringed on the First Amendment rights of consenting adults. Sable emphasizes that to protect children from obscene material, the legislation must be narrowly tailored so that it does not infringe on adults’ rights. Social media continues this trend by removing content between consenting adults and evaluating the content with vague “contemporary community standards.”

57) Community standards have changed since Miller was decided in 1973. If someone were sentenced to 20 years in prison for obscenity, the community standards would shift over the course of those 20 years. Social media platforms have absorbed these same inconsistencies into their community guidelines on adult nudity and sexual content.

Social Media Guideline Inconsistencies, Slides 58–61

58) Of the eight major social media platforms (Meta, Twitter, YouTube, TikTok, Snapchat, Pinterest, Reddit, and Tumblr), only two, Twitter and Reddit, allow adult nudity and sexual content with 18+ warnings. But these distinctions do not capture the complexities of each platform’s policy regarding adult nudity and sexual activity. 59) Amazon lists particular moves and positions that are sexually suggestive and not allowed on the platform. While TikTok strictly polices sexual content, removing posts that include the word “sex,” Twitter allows pornographic videos (with age restrictions). 60) Google uniquely specifies strongly restricted and moderately restricted categories. 61) Meta is the most detailed, drawing lines between sexual solicitation and sexual speech. 62) These inconsistencies among the platforms reflect the legal system’s “struggle to define pornography and obscenity.” Sometimes accounts are removed without posting anything at all, as seen in the 2021 purge of sex workers online.

Possible Outcomes of Gonzalez, Slides 62–66

63) So could Gonzalez v. Google provide a solution?

64) Possible Outcomes:

  • Strike down Section 230 entirely

  • Strike down Section 230(c)(1)
  • Find online platforms liable for targeted recommendation
  • Leave Section 230 as is for Congress to amend

65) Likely Outcome: Although the outcome is difficult to predict, the Court demonstrated skepticism and confusion at oral argument, making it likely that the Court will leave Section 230 for Congress to amend.

66) If Section 230 stays as is, social media companies may continue to remove sex workers’ content that does not violate community guidelines, without repercussions. 67) Social media platforms will keep removing sex workers’ speech, leading to many unfavorable outcomes and further endangering sex workers. 68) A study from Baylor University estimated that the rollout of Craigslist’s Erotic Services (ERS) page, which allowed sex workers to communicate, decreased the female homicide rate nationwide by 10–17% on average from 2002 to 2010.

69) Social Media Policy Suggestions

  • Hire Humans: Hire expert content moderators so platforms do not rely solely on algorithms; if we cannot agree on one community standard, an algorithm will not be a one-size-fits-all solution. Enforce wellness standards for human employees to reduce errors.
  • Pro: more conscious evaluation of the content, potentially fewer errors
  • Con: costs more time and money than automated systems, potential inconsistencies, potential mental health harm to employees
  • Hire Sexperts: Hire trust & safety experts whose sole focus is sexuality policy.
  • Pro: instead of relying on general trust & safety employees, hiring experts on sexual content and sexuality law improves the user experience in that area
  • Con: knowledge of sexuality law might be inconsistent because there is no one community standard
  • Policy Enforcement: Enforce policies fairly, keep appeals processes, and communicate non-automated reasoning for takedowns.
  • Pro: elevates user experience, fairer assessment
  • Con: higher cost, additional hiring, not always consistent
  • Partnerships With Stakeholders: Connect with non-profits, sex worker organizations, other social platforms, & sex workers themselves.
  • Pro: creates relationships between the platforms and advocacy groups so that advocacy groups can raise concerns about new policies
  • Con: possible financial cost, additional hiring
  • Policy Adjustments: Make clear distinctions between sex trafficking and sexual health, wellness, and education.
  • Pro: shows the platform has made concerted efforts to separate these issues rather than classifying all sexual content under one umbrella
  • Con: like defining obscenity, creating one community standard around adult nudity and sexual activity proves difficult
  • Social Media Relationships: Build relationships with other social media companies to create consistent policies across platforms.
  • Pro: more consistent guidelines for sex workers
  • Con: difficult to agree on one community standard

Final Thoughts & Q&A, Slides 70–73

70) Although it requires time and investment, devoting focus to this issue leads to healthier conversations around sex work, sexual health, and free speech, inevitably creating a healthier internet.
