Jonas Persson Constitutional Law: Theories of Freedom of Expression Professor Kirtley THE ACCOUNTABLE ALGORITHM: WAIVING SECTION 230 PROTECTION FOR ALGORITHMIC EDITS “[T]he Internet is a rapidly developing technology—today’s problems may soon be obsolete while tomorrow’s challenges are, as yet, unknowable. In this environment, Congress is likely to have reasons and opportunities to revisit the balance struck in [Section 230].” – District Court Judge Ellis in Zoran v. American Online.1 I. INTRODCUCTION Imagine that you are a web analyst at a local hospital. One day you receive an email from a member of the public who informs you that—apparently—your hospital recommends that those undergoing surgery drink alcohol to steel themselves before going under the knife. There is a screenshot attached to the email: 1 958 F. Supp. 1124, 1135 n.24 (E.D. Va. 1997). This prescient footnote cautioning that technology may soon render anachronistic the balance Congress struck in enacting Section 230 is well worth keeping in mind, especially given that the case would soon, on appeal to the Fourth Circuit, enshrine that precise balance as a timeless corollary to the First Amendment. See Zeran v. Am. Online, Inc., 129 F.3d 327, 333 (4th Cir. 1997) (holding that “fears of unjustified liability produce a chilling effect antithetical to First Amendment’s protection of speech”). 1|Page (Source: Author’s Google Search, January 2017) It seems like Google has extracted information from your organization’s page and curated the information in a so-called featured snippet2 that it prominently displayed at the top of the search window, but surely you did not advise your patients to drink alcohol? You click on the link provided and sure enough, your organization’s page features a bolded “DO NOT” followed by a bulleted list of things prospective patients should avoid. You send of an email to your counsel. Is there any legal recourse? Patients may think that we are out of our minds. 
If someone—God forbid—followed the advice we may have a serious lawsuit on our hands. You soon get a phone call. It turns out that there is not much you can do. If a human editor at Google had intentionally cut out the phrase you might have a case for false light invasion3—perhaps even tortious interference with contracts.4 Since the featured snippet is generated by an algorithm, however, Google is most likely protected from tort lawsuits thanks to 47 U.S.C. § 230 (“Section 230”), which not only protects websites from liability for third-party content but also allows them to moderate and edit such content. Instead, you try to get in touch with Google to ask them to remove the snippet, but you find no phone number or email address. You click the “Feedback” link below the snippet but are told that “you typically won’t receive a response.” In the end, you are forced to add the phrase 2 Google defines these as “a special box as the top of your search results with a text description above the link … Google’s automated systems determine whether a page would make a good featured snippet to highlight for a specific search request.” How Google’s featured snippets work, https://support.google.com/websearch/answer/9351707 (last visited April 29, 2020). 3 See Restatement (Second) of Torts § 652E (1977). 4 See id. at § 766. 2|Page “DO NOT” to each sentence on the web page and hope for the best—that Google’s spiders recrawl and update the featured snippet soon.5 Section 230 was enacted with the express purpose of bolstering free speech on the nascent world wide web by protecting the online marketplace6 of ideas—those who, if we may literalize the metaphor, own the land and provide the stalls—from tort liability that may chill7 speech. Section 230 and the seminal Zeran case8 also allowed the marketplace to self-police its stalls for the greater good. 
The result has been remarkable,9 but when wayward attempts to exercise control over the products end up creating more problems—forcing some vendors to mark their wares and hold on tight lest they be tainted and misappropriated—then there is also a chill. This paper will explore how algorithmic edits effectively short-circuit the statute. When the act of moderating itself transmogrifies an unobjectionable user post into something defamatory or otherwise tortious the effect is two-fold. First, the resulting content is in a legal no-man’s-land beyond the reach of the common law. Second, the knowledge that this happens sends a chill down the spine of other users who will try to second-guess themselves to avoid their posts meeting the same fate. Both of these outcomes are directly contrary to the legislative intent 5 Personal communication with Nebraska Medicine by email on January 9, 2017. The aforementioned legal discussion is purely hypothetical, but the organization was forced to change its webpage to avoid the inaccurate featured snippet. 6 47 U.S.C. § 230(b)(2) (It is the policy of the United States “to preserve the vibrant and competitive free market that presently exists for the Internet … unfettered by Federal or State regulation.”). 7 See Zeran, 129 F.3d at 330 (“The specter of tort liability in an area of such prolific speech [as that of online services] would have an obvious chilling effect”). 8 See infra Section II. 9 See generally, e.g., Eric Goldman & Jeff Kosseff, Commemorating the 20th Anniversary of Internet Law’s Most Important Judicial Decision, The Recorder (Nov. 10, 2017). Jeff Kosseff, The Twenty-Six Words that Created the Internet 4 (2019) (arguing that Section 230 and Zoran, by “providing liability immunity even when publishers exercise editorial control over third party content … helped create a trillion-dollar industry centered around user-generated content”). 
3|Page behind Section 230, which was enacted to preserve direct liability10 and foster the free exchange of ideas. Starting with a brief history of Section 230—the animating concerns that led to its enactment and later case law construing the statute as providing broad protection for editorial control—the paper will go on to explore case law hashing out the contours of editorial immunity—both for human and machine changes. The next section will flesh out what is wrong with the current state of affairs, and it will do so by building on the mailman metaphor first introduced by proponents of the legislation in 1995. This will be followed by a section delineating the far-from-obvious relationship between the First Amendment and Section 230. Against the backdrop of current reform proposals, the final discussion will discuss various ways of fixing Section 230 so that it no longer provides a safe harbor for algorithmic edits gone wrong without abridging online freedom of speech. Such a legislative fix, which will incentivize companies to create better software and increase human oversight while at the same time providing a legal avenue for those who have been wronged, will hopefully restore Section 230 to its original vision and thaw the chill that has crept in. II. LEGISLATIVE HISTORY AND JUDICIAL EXPANSION In the seminal press freedom case Smith v. California11, the Supreme Court struck down a local ordinance that made possession of obscene books a criminal strict liability offense—that is, without imposing any scienter requirement; it did not matter, for the purposes of the statute, 10 See Zeran, 129 F.3d at 330 (holding that none of Section 230’s protections for intermediaries means “that the original culpable party who posts defamatory messages would escape accountability”). 11 361 U.S. 147 (1959). 4|Page whether the owner knew that the book contained obscenity or nor. 
The Court held that the distribution of books was guaranteed under the Press Clause of the First Amendment; “By dispensing with any requirement of knowledge,” Justice Brennan wrote, the “ordinance tends to impose a severe limitation on the public’s access to constitutionally protected matter.”12 Fearful of criminal prosecution, the seller will limit the number of books in his store to the ones he has inspected; other constitutionally protected material, which the seller has not time to vet, thus becomes a collateral victim. In the progeny cases of Smith, distributors were generally not found liable for libel or obscenity unless the plaintiff could show that they had reason to know the content of the material or failed to follow up on complaints.13 With the advent of online technology in the early ‘90s, this line of cases at first seemed to fit hand in glove. A plaintiff who sued CompuServe for distributing a defamatory newsletter written by another user, for example, had to prove that CompuServe knew or had reason to know about the statements.14 As more and more websites started moderating posts submitted by users, however, the actual or constructive knowledge requirement turned into a legal fiction. In Stratton Oakmont v. Prodigy Servs. Co.15, a New York state court found Prodigy potentially liable in a $200 million libel suit. In contrast to CompuServe, which had taken a laissez-faire approach to user material, Prodigy advertised itself 12 Id. at 153. 13 Compare Lewis v. Time, Inc., 83 F.R.D. 455, 465 (E.D. Cal. 1979) (“If a distributor can ever bear the burden of liability for libel … detailed allegations must be required in order to insure the unrestricted distribution of newspapers and magazines which is at the heart of the First Amendment”) with Spence v. Flynt, 647 F. Supp. 1266, 1273 (D. Wyo. 1986) (“This was simply not a case of an innocent magazine seller unwittingly disseminating allegedly libelous material. 
Rather, we have a distributor who possessed detailed knowledge … and who, after receiving a complaint about the magazine, failed to investigate and continued to sell it”). 14 Cubby, Inc. v. Compuserve, Inc., 776 F. Supp. 135, 141 (S.D.N.Y. 1991). 15 Stratton Oakmont v. Prodigy Servs. Co., INDEX No. 31063/94, 1995 N.Y. Misc. LEXIS 229 (Sup. Ct. May 24, 1995). 5|Page as the family-friendly option; users had to abide by community guidelines that did not allow nudity or offensive language, and the user forums were moderated—both through so-called board leaders and by automatic means. In the eyes of the court, this made all the difference: it could be said that PRODIGY's current system of automatic scanning, Guidelines and Board Leaders may have a chilling effect on freedom of communication in Cyberspace, and it appears that this chilling effect is exactly what PRODIGY wants, but for the legal liability that attaches to such censorship … PRODIGY’s conscious choice, to gain the benefits of editorial control, has opened it up to greater liability than CompuServe and other computer networks that make no such choice.16 Even though Prodigy had no realistic way of knowing the contents of every single user post,17 the fact that it advertised itself as a family-friendly service and exercised a modicum of “editorial control” by deleting some offensive posts was enough for liability to attach. It was no longer a mere distributor of user content but a publisher.18 Stratton was a significant departure from previous cases in which distribution status was a rebuttable presumption. The plaintiff, effectively, had to demonstrate that the defendant knew or should have known about the specific material in question. Although only a trial-court decision, free-speech advocates and industry interests feared that it would set a perverse precedent and force online services to either vet every single post or take a completely hands-off approach.19 16 Id. at 12. 
17 The court conceded that the automatic filter only scans for offensive words. It would probably never have flagged the post that was the subject of the lawsuit, which referred to the plaintiff as a “criminal fraud” and alleged SEC violations. Similarly, the “Board Leaders” were tasked with upholding the community guidelines in general but had no way of vetting every single post. For the Stratton court, the lack of real or constructive knowledge was immaterial. See id. at 13. 18 Id. at 10. 19 See, e.g., Peter H. Lewis, After Apology From Prodigy, Firm Drops Suit, New York Times (Oct. 25, 1995), https://www.nytimes.com/1995/10/25/business/after-apology-from-prodigy- firm-drops-suit.html 6|Page To prevent this from happening, two members of the House—Christopher Cox (R-CA) and Ron Wyden (D-OR)—introduced an amendment to the omnibus Telecommunications Act, which did two things: First, it defined online service providers as distributors rather than publishers of third-party content.20 Secondly, it turned the clock back on Stratton by ensuring that a service provider could not be held civilly liable for “good faith” moderation.21 The policy objective was to incentivize self-policing and market forces rather than government regulation.22 Section 230 went into effect on February 8, 1996, but for almost two years the scope of the law was unclear. Could service providers become liable once they receive notice of offending material or do they have ironclad immunity from civil liability for third-party content? 23 In Zeran v. American Online, the Fourth Circuit came down decisively in favor of complete immunity. A notice-and-takedown regime would cause service providers to err on the side of caution and remove every message upon notification. 
“[L]iability upon notice,” the court reasoned, “has a chilling effect on the freedom of Internet speech” that is tantamount to strict liability.24 If Section 230 allowed for a notice-and-takedown mechanism, bad-faith actors could both silence speech and exploit the mechanism for future lawsuits by submitting a barrage of notices; the service providers would then “be faced with ceaseless choices of suppressing controversial speech or sustaining prohibitive liability.”25 20 47 § U.S.C. 230(c)(1). 21 47 § U.S.C. 230(c)(2)(A). 22 See 141 Cong. Rec. H8470 (Aug 4, 1995). Reps. Wyden and Cox expressed a hope that the marketplace would “give parents the tool they need while the Federal Communications Commission is out there cranking out rules about proposed rulemaking programs.”. 23 See Jeff Kosseff, The Twenty-Six Words that Created the Internet 76 (2019). 24 Zeran, 129 F.3d at 333. 25 Id. 7|Page Less often noticed is that Zeran construed Section 230’s categorical prohibition on treating a service provider as the publisher of third-party content to include instances in which the provider exercises “a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content” (emphasis added).26 We have come full circle since Stratton—from the idea that deleting some offensive posts turned a service provider into a publisher to the notion that a service provider can (materially?) alter third-party content— potentially in bad faith27—and yet remain a distributor. III. THE PUBLISHER’S PURVIEW Courts, most of which follow Zeran as persuasive authority, have struggled to define the limits of this publisher’s immunity. At what point does a service provider, requesting or editing third-party content, go beyond the traditional editorial functions of the publisher to become a speaker in its own right? 
Slight edits for spelling, grammar, style, or length that do not render the post libelous are clearly well within the publisher’s purview.28 Conversely, attempts to frame or contextualize third-party content through headings, illustrations, and editorial commentary can go beyond this role.29 What about slight edits that nevertheless render the original material 26 Id. at 330. 27 Crucially, the court did not reach the conclusion that providers are free to exercise editorial functions after parsing § 230(c)(2)(A), which provides for immunity for “any action voluntarily taken in good faith to restrict access to or availability” of offensive material. § 230(c)(1) has no such limiting good faith or voluntariness principles. 28 See, e.g., Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009) (“a publisher reviews material submitted for publication, perhaps edits it for style or technical fluency, and then decides whether to publish it.”); Jones v. Dirty World Entm't Recordings LLC, 755 F.3d 398, 416 (6th Cir. 2014) (“editor's changes to the length and spelling of third-party content do not contribute to the libelousness of the message”). 29 Compare Shiamili v. Real Estate Grp. of N.Y., Inc., 17 N.Y.3d 281, 293 (2011) (holding that the defendant’s website was immune when it added a heading and an offensive but satirical illustration to a libelous post as the alterations did not “materially contribute to the defamatory 8|Page libelous? 
The Ninth Circuit has taken a commonsensical approach and asks whether the service provider contributed “materially to the alleged illegality of the conduct.” While this test is most often applied in cases where a website owner actively solicits30 offensive material, the Circuit has drawn a distinction between a website operator “correcting spelling, removing obscenity or trimming for length” and one who edits so as to transform an innocent message into a libelous one, for example by “removing the word ‘not’ from a user’s message reading ‘[Name] did not steal the artwork.’31 In the latter case the operator would not be immune under Section 230. The person-turned-thief hypothetical is a close analogue to Google’s featured snippet, which removed the word “not” from Nebraska Medicine’s website. Perhaps a suit against Google would have survived a motion to dismiss under the less deferential Ninth Circuit test.32 There is no way of knowing as no cases involving Google’s featured snippets have yet to make their way to an appellate court, but a case involving a standard search result—allegedly rendered defamatory through automatic editing—was given short shrift in the Sixth Circuit. nature of the third-party statements”) with Pace v. Baker-White, No. 19-4827, 2020 U.S. Dist. LEXIS 4984, at *24 (E.D. Pa. Jan. 13, 2020) (finding the defendant’s website beyond the scope of Section 230 protection when it published 5,000 unflattering quotes from police officers’ Facebook accounts together with a statement reading in part, “We present these posts and comments because we believe that they could undermine public trust and confidence in our police”). 30 See Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1170, 1168 (9th Cir. 2008) (finding that the questionnaires developed by a roommate matching service, including preference for living with people with different sexual preference, encouraged users to give discriminatory answers in violation of the Fair Housing Act). 31 Id. 
at 1168. 32 This test is an outlier and other courts, even while paying lip service to it, have been reluctant to find that a service provider materially contributed to the alleged illegality. See, e.g., S.C. v. Dirty World, LLC, No. 11-CV-00392-DW, 2012 U.S. Dist. LEXIS 118297, at *9 (W.D. Mo. Mar. 12, 2012) (finding that soliciting “dirt” on people in general was not enough since the website owner did not pay for and was not directly responsible for the particular post that was the subject of the lawsuit). 9|Page In 2012, a Tennessee man, Colin O’Kroley, went to a computer in a public library and Googled his name. The first result linked to the Texas Advance sheet, which summarizes recent state court decisions, and it included the phrase “indecency with a child in Trial Court Cause N ... Colin O'Kroley v. Pringle.”33 O’Kroley had never been charged with indecency, but when displaying the search result, Google removed blank lines and turned the header for the next case, which did involve O’Kroley, into plain text. This, O’Kroley alleged, wrongfully suggested that he had been accused or convicted of indecency with a child.34 The following is an approximation of the search result35 and the original source36 to which Google linked: The magistrate judge deciding on Google’s motion to dismiss was perplexed. He assumed that a reasonable Google user could interpret the search result as defamatory,37 but he “found no case that makes the precise claim … that the underlying information, viewed in its entirety, is not 33 O'Kroley v. Fastcase Inc., No. 3:13-0780, 2014 U.S. Dist. LEXIS 71922, at *3 (M.D. Tenn. May 27, 2014). 34 Id. 35 Google search for “O’Kroley v. Pringle” (May 2, 2020) which displays the same juxtaposition of sentence fragments as cited in the magistrate judge’s R&R in O’Kroley v. Fastcase Inc. 36 Google Books scan of Texas Advance Sheet for March 2012, available at http://tiny.cc/dek9nz. 37 O'Kroley, No. 3:13-0780 at n3. 
10 | P a g e defamatory, but that it has been rendered defamatory by Google’s automatic editing process.”38 Nevertheless, and in light of the “robust” immunity afforded by Section 230, he recommended that the judge grant the motion. On appeal to the Sixth Circuit, the court flatly denied O’Kroley’s defamation claim. Citing Zeran, the court found that the “automated editorial acts … such as removing spaces and altering founts” were well within “a publisher’s traditional editorial functions.”39 Google also did not “materially contribute to the alleged unlawfulness” of the content as it did not add or remove any actual text. 40 The decision, which is high on sarcasm but low on analytical substance,41 leaves unanswered the following question: Does Section 230 give an online service provider carte blanche to wring, by automatic means, defamatory content out of innocent third-party material? IV. THE UNASSAILABLE MAILMAN The O’Kroley decision implies that there is content that it defamatory42—perhaps libelous per se—for which there is no speaker, no publisher, and no cause of action for the victim. This was not the intent behind Section 230. When endorsing the Cox-Wyden amendment in the House, Representative Zoe Lofgren analogized to a mailman who should not be held liable “when he delivers a plain brown envelope for what is inside it.”43 But what if the mailman opens the envelope, removes a court order inside and uses White-Out to remove the word “not” from 38 Id. at 3. 39 O'Kroley v. Fastcase, Inc., 831 F.3d 352, 355 (6th Cir. 2016). 40 Id. 41 O’Kroley himself is partly to blame for this. In addition to the defamation claim against Google, he added a litany of frivolous causes of action, such as violation of the long-since- repealed 18th amendment. He also asked for $19 trillion[!] in damages. Id. at 354, 356. 42 No court ever found that the Google search result, which implicated O’Kroley in indecency with a child, was not defamatory as a threshold matter of law. 
43 141 Cong. Rec. H8470 (Aug 4, 1995) (remarks by Rep. Lofgren). 11 | P a g e “Defendant was found not guilty”? What if he then tacks the order to a bulletin board on the town square? What if, further, a fear of rogue mailmen causes letter-writers to be more circumspect and modify their messages to make them impervious to wanton White-Out attacks? Creating a chill and allowing for unassailable defamatory statements—this was not what Cox and Wyden had in mind when they introduced their amendment. In fact, even when construing the publisher’s prerogative and immunity broadly, the Zeran court emphasized that there would still be liability for the “original culpable party.”44 It is impossible to know the extent of the problem, but we have come a long way since the crude filter that deleted posts with offensive words in Prodigy. With the rise of artificial intelligence, including technologies for natural language processing, more and more user content is automatically edited, truncated, curated, contextualized, and featured in search results, Facebook and Twitter feeds, and on review pages.45 While most of this automatic work is immensely useful, some of the electronic “mailmen” will invariably get it wrong, sow hate and confusion, incite, defame ordinary users or portray them in a false light—go postal, as it were. There is vibrant discussion around recalibrating the balance struck by Congress and the courts for Section 230 and a growing understanding that according service providers blanket immunity is not always conducive to free speech46 and that the deal struck almost 25 years ago, when Internet was in its infancy, have turned sour when $1-trillion behemoths now enjoy the 44 Zeran, 129 F.3d at 330. 45 See generally Tim Wu, Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social- Ordering Systems, 119 Colum. L. Rev. 
2013 (2019) (explaining how the “affirmative speech control” of online platforms, which entails choosing what and how ads other users’ posts are brought to our attention, is typically algorithmic”). 46 See, e.g., Danielle Keats Citron and Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 Fordham L. Rev. 401 (2017) (arguing that Section 230 “gives an irrational degree of free speech benefits to harassers and scofflaws but ignores free speech costs to victims”). 12 | P a g e protections we granted to startups without any obligation to pay back and provide any public good.47 There is equally robust discussion about how AI and automatic decision-making systems perpetuate inequities48 and about the need for proactive regulation49, but the imbrications between Section 230, free speech, and automated systems have not yet received much scrutiny. This is unfortunate. While the infrequent50 algorithmic mishaps described thus far may pale in comparison to the dangers posed by online child or revenge pornography, cyber stalking, or sex trafficking, the creation—by unaccountable algorithms—of “speaker-less” speech that cannot be challenged and creates a chill should give us pause. If we believe in the free marketplace of ideas, the speech of algorithmic edits silences vendors who cannot resort to counter-speech. It distorts the exchange of ideas, replaces bartering with misappropriation and does not tend toward truth.51 Similarly, it silences important necessary for democratic self-government and does little to further the rule-based and moderated town hall meeting Meiklejohn envisioned as the First Amendment’s raison d’etre.52 This type of speech is equally problematic under an individual liberty model of the First Amendment. It does little to advance self-fulfillment and by crowding 47 See, e.g., Adam Candeub, Common Carriage and Section 230, CSAS Working Paper 19-33 (2019). 
48 E.g., Cathy O’Neil, Weapons of Math Destruction 203-04 (“Automated systems … stay stuck in time until engineers dive in to change them … Big data processes codify the past. They do not invent the future.”) 49 E.g., Mark MacCarthy, AI needs more regulation not less, The Brooking Institution’s Artificial Intelligence and Emerging Technology Initiative (March 9, 2020). 50 There is no way to empirically gauge the frequency of problematic automatic edits, but in the context of Google’s featured snippets, the author has counted eleven additional ones that might have been deemed tortious if disseminated offline. Often, third-party content is stripped of negations or vital context. 51 Cf. Abrams v. United States, 250 U.S. 616, 630 (1919) (J. Holmes dissenting) (holding that “the best test of truth is the power of the thought to get itself accepted in the competition of the market”). 52 See Alexander Meiklejohn, Free Speech and Its Relation to Self-Government 25 (1948). 13 | P a g e out or modifying human speech it does not, to speak with Baker, accord “equal respect for persons as autonomous agents.”53 V. SECTION 230 AND THE FIRST AMENDMENT In its current guise, Section 230 is commonly regarded as offering online companies greater protections than the First Amendment traditionally accords publishers. After Zeran, Kosseff argues, it turned into a “nearly impenetrable super-First Amendment.”54 In a similar vein, Eric Goldman sees Section 230 as providing “significant and irreplaceable substantive and procedural benefits beyond the First Amendment’s free speech protections.”55 He sees it as similar to a “speech-enhancing statue,” such as anti-SLAPP laws, that extend First Amendment protections; it is more protective of commercial speech and covers causes of action for which there are no First Amendment defenses, such as negligence. 
Danielle Keats Citron and Benjamin Wittes agree that it is a constitutional add-on but offer a far more pessimistic gloss, arguing that the First Amendment does not require “broad-sweeping immunity for online platforms”; rather, Section 230 is a policy layer built on top of it—the result of a normative calibration between speech rights and harm.56 Against the grain, an unsigned Harvard Law Review note takes a foundationalist view and sees the law as the natural confluence of First Amendment precedent, such as Reno v. ACLU (recognizing the value of internet speech), Smith v. California (recognizing the danger of collateral censorship and New York Times v. Sullivan and Gertz v. Welch (hashing out the proper 53 C. Edwin Baker, Human Liberty and Freedom of Speech 49 (1992). 54 Kosseff, supra at 57. 55 Eric Goldman, Why Section 230 Is Better Than the First Amendment, 95 Notre Dame L. Rev. Online 34 (2019). 56 Keats Citron and Wittes, supra at 419-20. 14 | P a g e standard for defamation).57 This may seem to foreclose any restrictions or carve-outs as presumptively unconstitutional, but it is an untenable position. While Zeran builds on Reno’s understanding of the Internet as a free-speech arena and relies on the spirit of Miller—although it is never cited—when decrying the collateral danger of a notice-and-takedown regime, there is no inexorable path that leads from select First Amendment precedent to blanket immunity for online service providers.58 Section 230 is not coextensive with the First Amendment—past, current, or future. While it does indeed, as Goldman explores at great length, offer additional protections, it is also underinclusive in that it chills some speech, for example that of victims of online abuse59 and of those, such as Nebraska Medicine, who fear that their speech will be misappropriated through automatic edits. VI. 
LEGISLATIVE FIXES It was Circuit Judge Wilkinson who won the plaudits for his decision in Zeran but let us not forget District Court Judge Ellis’s aperçu that with the advent of new technology “today’s problems may soon be obsolete while tomorrow’s challenges are, as yet, unknowable. In this environment, Congress is likely to have reasons and opportunities to revisit the balance” Congress struck in Section 320.60 That time has come. In contrast, however, to most amendments—passed or proposed—which will move the needle away from free speech 57 Note, Section 230 as First Amendment Rule, 131 Harv. L. Rev. 2045 (2018). 58 The idea that you can trace historical developments and current trends in the law to their logical conclusion owes something to Warren and Brandeis, who by following the through-lines of different causes of action under the common law purportedly “discovered” a new tort for the invasion of privacy. See Samuel Warren & Louis Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193 (1890). 59 See Keats Citron and Wittes, supra at 401. 60 Zoran, 958 F. Supp. At n24. 15 | P a g e protection toward rectifying perceived harms61, waiving immunity for tortious edits will hopefully lead to a net increase in speech while providing an accountability incentive for Google and its ilk to do better. By restoring content from the no-man’s land in which there is neither speaker nor liability, Section 320 will also be true to its original intent, which was never to disclaim all civil liability. Neil Chilson has characterized attempts to Reform 320 as either “bargaining chips,” which makes immunity contingent on the service provider meeting certain guidelines, and “carveouts,” which strip protection from some kinds of third-party content.62 The EARN-IT Act63, which was introduced in March of 2020, would create a 15-member commission to develop best practices for how to detect and report pages that feature child exploitation. 
If an internet service provider does not follow those practices, it may be stripped of its Section 230 immunity. Such bills, however, are anathema to Section 230, which was enacted specifically to foster industry self-regulation rather than the government “cranking out rules about proposed rulemaking protocols.”64 It is furthermore unclear what such best practices would consist of in the context of automatic edits and which protections a provider would be stripped of in the event of noncompliance. Perhaps more importantly, best practices would not solve the fundamental problem of “speaker-less” speech that cannot be challenged. Going down the carveout route is also problematic. 61 On April 11, 2018, FOSTA (the Allow States and Victims to Fight Online Sex Trafficking Act) was signed into law. It amended Section 230 and excluded from the safe harbor state criminal and civil law relating to sexual exploitation of children or sex trafficking if the online service provider knew about the crimes or acted in reckless disregard. Free-speech advocates worry that providers will find themselves in the exact same position as Prodigy did before Section 230—forced either to moderate everything, not monitor anything at all, or (if those options are prohibitively expensive or unpalatable) exit the industry. See Eric Goldman, The Complicated Story of FOSTA and Section 230, 17 First Amend. L. Rev. 293 (2019) (arguing that FOSTA might pave the way for more and more piecemeal exceptions that will eviscerate the protections of Section 230). 62 Adi Robertson, Five Lessons From the Justice Department’s Big Debate Over Section 230, The Verge (Feb. 19, 2020), https://www.theverge.com/2020/2/19/21144223/justice-department-section-230-debate-liability-doj. 63 EARN IT Act of 2020, S. 3398, 116th Cong. (2020). 64 141 Cong. Rec. H8470 (Aug. 4, 1995) (remarks of Rep. Wyden).
As Eric Goldman has convincingly argued, it is a slippery slope that may incentivize advocacy groups to argue for their own special exceptions, turning Section 230 into a Swiss cheese and allowing plaintiffs to “easily maneuver into one of the multitudinous exceptions.”65 Carveouts also pose both definitional66 and constitutional67 problems. The problem is not machine edits tout court; the problem is tortious edits for which there is no recourse. A more fruitful approach would be to codify the Ninth Circuit rule into Section 230(c)(1): No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider unless the provider or user, by modifying or editing the information, materially contributes to conduct for which civil or criminal liability attaches. While it is true that other circuits have paid lip service to the rule while construing it narrowly, incorporating it into the statute would ensure a much more robust analysis. In eschewing cumbersome regulation in favor of a private right of action, such an amendment would also be true to the guiding principles behind Section 230. It would, further, provide an incentive for Google and its ilk to get it right. At the same time—and in contrast to other Section 230 amendments, such as FOSTA—it would not disproportionately affect smaller internet service providers since they tend to be less reliant on AI and automatic editing systems. 65 Eric Goldman, The Complicated Story of FOSTA and Section 230, 17 First Amend. L. Rev. 293 (2019). 66 All human speech on online platforms is already modulated by technology. It could be argued that it is not even possible to draw the line between human and machine edits. 67 See, e.g., Rosenberger v. Rector & Visitors of Univ. of Va., 515 U.S. 819, 828 (1995) (analogizing the rule against content discrimination to government regulation favoring one speaker over another). VII.
CONCLUSION By expanding the ambit of what “a publisher’s traditional editorial functions”68 are, Zeran and its progeny have created a loophole in the fabric of Section 230 through which tortious content created by algorithmic edits can pass unchallenged. This creates a category of speech for which there is no speaker, no publisher, and no cause of action. In fact, and as the anecdotal example shows, the only recourse may be to modulate your speech or refrain from some speech acts altogether for fear that your words might be taken out of context or misquoted and blasted from the proverbial rooftops by Google et al. Both of these consequences—regurgitated speech that is impervious to legal challenge and a chilling effect on the original speaker—are directly at odds with the original purpose behind Section 230, which was to foster the growth of online fora with “true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity”69 while preserving tort liability for the original speakers. The solution to this problem is to amend the statute and codify the Ninth Circuit’s commonsensical test of material contribution. Not only will Section 230 be true to its original dual mission, but such an amendment will also provide an accountability incentive for Google et al. to do better and develop more accurate editing technology. 68 Zeran, 129 F.3d at 333. 69 47 U.S.C. § 230(a)(3).