Section 230 of the Communications Decency Act of 1996 gives internet and social media companies legal immunity from lawsuits over content posted by their users. This provision gives companies like Facebook and Twitter a way to dismiss lawsuits, but it also allows them to act with impunity, placing their decisions beyond legal challenge. These companies have, according to their detractors, abused this immunity by suppressing dissident, and specifically conservative, viewpoints and journalism.
Section 230 of the Communications Decency Act has been critical to the development of today’s Internet and Internet services. But the expanding presence of these services in the lives of Americans and a growing political distrust of the companies providing them highlight the need to refine the scope and language of Section 230 to better fit the statute’s original intent and to assuage these concerns. Such refinement is the best way to foster economic freedom and creativity while protecting individual and corporate freedom of speech.
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
These words in Section 230 of the Communications Decency Act (CDA) are at the heart of an increasingly important public debate about technology, economics, and society. They have been called “the 26 words that created the Internet” [Jeff Kosseff, The Twenty-Six Words That Created the Internet (Ithaca, NY: Cornell University Press, 2019)] and an “outlandish power over speech without accountability.” [News release, “Senator Hawley Announces Bill Empowering Americans to Sue Big Tech Companies Acting in Bad Faith,” Josh Hawley, U.S. Senator for Missouri, June 17, 2020 (accessed October 23, 2020)]. There is a large policy gap between these two views, and policymakers on both sides of the aisle are offering proposals to change this law that could fundamentally reshape the American technology industry.
Some believe that large tech companies are not keeping their part of the deal that critics say undergirds Section 230. These companies, it is argued, are politically biased and are exercising editorial judgments about which content they will, and will not, allow on their platforms, and these judgments, critics contend, violate the law’s precondition of platform neutrality.
Others, however, say that this precondition of neutrality never existed and that removing these liability protections will effectively kill the American technology industry that is the beating heart of the U.S. economy.
Still others believe these large Internet companies—especially those that host social media platforms—are sources of social degradation, and those who hold this view are willing to threaten Section 230’s protections as a way of coercing these companies into more acceptable behavior.
All of these perspectives are enabled by ambiguities surrounding the text of the law, the intent behind it, and the relative values and risks posed by large Internet platforms.
What Americans Need to Know About Section 230
The liability protections at issue are in Section 230 of the CDA, which is itself part of the Telecommunications Act of 1996. The intent of Section 230 was made clear by its authors, then-Representatives Christopher Cox (R–CA) and Ron Wyden (D–OR), who said they wanted “to encourage telecommunications and information service providers to deploy new technologies and policies” for filtering or blocking offensive materials online. (Senate Report No. 104-23.) This proposal was in direct response to the court case Stratton Oakmont, Inc. v. Prodigy Services Co. [No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995)]
Prodigy was an online bulletin board in the early days of the Internet that used software to filter profanity from its pages. A Prodigy user posted derogatory comments about the investment firm Stratton Oakmont (the firm made famous by the 2013 movie The Wolf of Wall Street). Stratton Oakmont sued Prodigy for defamation, seeking $200 million, and prevailed: the court ruled that Prodigy’s efforts to remove obscene content made it a publisher, and therefore responsible for failing to remove defamatory information about the firm.
Prodigy lost the case not because it removed material, but because it had—from the court’s perspective—done so incompletely. Representatives Cox and Wyden were concerned that this precedent would disincentivize online service providers from removing offensive content, and also put the brakes on Internet innovation by subjecting companies to endless lawsuits over user-generated content. Cox and Wyden drafted Section 230 and incorporated it as an amendment to the CDA.
Section 230(c) reads as follows:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
Section 230(c)(2)(A) is the main immunity clause and a focus of the current debate, specifically what is meant by “good faith” and “otherwise objectionable.” In law, good faith is an abstract and general term used to describe “sincere belief or motive without any malice or the desire to defraud others.” [TheFreeDictionary.com, “Legal Dictionary: Good Faith,” https://legal-dictionary.thefreedictionary.com/good+faith (accessed October 21, 2020)]
The phrase “otherwise objectionable” is clearly a continuation of the preceding list containing “obscene,” “lewd,” “lascivious,” “filthy,” “excessively violent,” and “harassing.” Typically, “otherwise objectionable” would be interpreted using the legal canon of construction ejusdem generis (“of the same kind”), meaning content that a reasonable person would find objectionable and that is of the same kind as the items in the preceding list.
To put it simply, the main immunity clause intends to protect Internet companies from liability for removing material that a reasonable person would find objectionable, so long as it is done in a manner not intended to harm or to defraud others. Subsequent courts, however, have extended these protections well beyond their intended boundaries.
While the Supreme Court has declined to take up the meaning of Section 230, state and lower courts have consistently ruled that it offers a very broad liability shield. Often citing Section 230’s “findings” and “policy” sections, which call for a “vibrant and competitive free market” and “myriad avenues for intellectual activity,” [See, for example, Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1099 (9th Cir. 2009) (relying on §§ 230(a)(3) and 230(b)(2) to say that free speech values are the foundation of the immunity provisions)] these courts have built a strong First Amendment standard for interpreting the protections afforded to any company’s online presence.
Section 230 does not shield companies from federal laws against crimes such as trafficking in child pornography, drug trafficking, or terrorism; however, the courts’ broad interpretation has allowed websites, such as Backpage.com, to avoid liability for hosting “80 percent of the online advertising for illegal commercial sex in the United States.” [Petition for Writ of Certiorari at 7, Backpage, 137 S. Ct. 622 (No. 16-276), 2016 WL 4610982]
Other examples, as catalogued by Danielle Citron and Benjamin Wittes, [This list is adapted from Danielle Citron and Benjamin Wittes, “The Internet Will Not Break: Denying Bad Samaritans §230 Immunity,” Fordham Law Review, Vol. 86, No. 2, (2017)] include the following:
- A “revenge porn” website devoted to posting nude images without consent [Danielle Keats Citron, “Cyber Civil Rights,” Boston University Law Review, Vol. 89, No. 61, https://www.bu.edu/law/journals-archive/bulr/volume89n1/documents/CITRON.pdf (accessed October 23, 2020)]
- A gossip site collecting and disseminating “dirt” on private individuals [Jones v. Dirty World Entertainment Recordings LLC, 755 F.3d 398, 402-03 (6th Cir. 2014)]
- A message board knowingly facilitating illegal activity and refusing to collect information on that illegal activity [Citron, “Cyber Civil Rights.”]
- A website hosting sex-trade advertisements whose design and technical setup specifically prevented the detection of sex-trafficking [Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 16 (1st Cir. 2016), cert. denied, 137 S. Ct. 622 (2017) (No. 16-276)]
- A “hook-up” site that ignored more than 50 reports that one of its subscribers was impersonating another individual and falsely suggesting that the individual was interested in rough sex as part of a “rape fantasy,” resulting in hundreds of strangers confronting that person for sex at work and home [Herrick v. Grindr, No. 17-CV-932 (VEC), 2017 WL 744605, at *1 (S.D.N.Y. Feb. 24, 2017), and Andy Greenberg, “Spoofed Grindr Accounts Turned One Man’s Life into a Living Hell,” Wired, January 31, 2017]
Regarding the other liability provision, Section 230(c)(2)(B), Congress clearly encourages the removal of objectionable materials by protecting the sharing of “technical means to restrict access to material described” in Section 230(c)(2)(A).
Section 230 is clearly intended to incentivize Internet companies and websites to proactively remove objectionable content by providing them with a liability shield from continuous and presumably frivolous lawsuits from aggrieved users. The statute’s vague language and subsequently broad judicial interpretations have, however, led to a situation where some Internet companies are overly insulated from accountability and are reasonably suspected of not meeting Section 230’s good faith standard.
This is why it is time to refine Section 230.
Why Section 230 Should Be Refined—Now
Section 230’s original intent of incentivizing and protecting the removal of obscene materials online continues to be good policy and a noble objective—thus, the statute should be maintained. But, the evolution of the Internet, and growing concerns about political bias online, require that the statute be clarified and refined. Specific proposed changes are provided in “Policy Recommendations” below; first, it is helpful to briefly explain why these changes are necessary now.
First, the Internet is more central to American life than could have been envisioned when the CDA was passed, and the law should reflect this reality. In 1996, approximately 0.9 percent of the global population (36 million people) was on the Internet. Today, 62 percent of mankind (4.8 billion people) is online. [Internetworldstats.com, https://www.internetworldstats.com/emarketing.htm (accessed October 21, 2020)] In 1996, Americans with an Internet connection spent an average of 30 minutes online per month. Today, it is about 27 hours per month. [Farhad Manjoo, “Jurassic Web: The Internet of 1996 Is Almost Unrecognizable Compared with What We Have Today,” Slate.com] Today, more than 34 percent of Americans prefer to get their news online, with nearly twice as many getting their news from social media than from newspapers. [A. W. Geiger, “Key Findings About the Online News Landscape,” Pew Research Center, September 11, 2019]. These and other Internet trends demonstrate that the World Wide Web is no longer simply a collection of online chats or bulletin boards. It is, instead, a growing public square where Americans’ economic, social, and political lives are expressed, debated, and shaped.
Second, there is growing concern that Internet companies—particularly social media companies—are abusing their influence and Section 230 to skew public debate and to marginalize political speech with which they do not agree. Polling demonstrates this is a bipartisan concern. [Emily A. Vogels, Andrew Perrin, and Monica Anderson, “Most Americans Think Social Media Sites Censor Political Views,” Pew Research Center, August 19, 2020].
For example, three-quarters of U.S. adults believe that social media companies “intentionally censor political viewpoints that they find objectionable,” [Ibid., and Monica Anderson, “Most Americans Say Social Media Companies Have Too Much Power, Influence in Politics,” Pew Research Center, July 22, 2020] and 72 percent say that social media companies “have too much power and influence in politics today.” [Anderson, “Most Americans Say Social Media Companies Have Too Much Power, Influence in Politics.”] And, while 80 percent of Republicans have little or no confidence “in social media companies to determine which posts on their platforms should be labeled as inaccurate or misleading,” 52 percent of Democrats have this view. [Vogels, Perrin, and Anderson, “Most Americans Think Social Media Sites Censor Political Views.”]
The reasons for this widespread mistrust are myriad, and it is impossible to adjudicate all of the claims of online bias and mistreatment. Following are but three examples that illustrate why these companies are hemorrhaging trust:
- In September, a series of pro-conservative political advertisements were labeled as “missing context” and prevented from running as paid advertisements on Facebook. [News release, “Heritage Experts: Facebook Is Allowing Political Partisans to Game ‘Fact-Checking’ Program,” The Heritage Foundation, September 16, 2020] The fact-checker, PolitiFact, justified the label by saying the claims in the advertisement could not be assessed because “we can’t predict the future.” While not ruling the advertisement as “false,” the context label achieved the same outcome: The advertisements were stopped. This gaming of the fact-checking system is now common among left-leaning “fact-checkers.”
- In May, Twitter added a “Get the Facts” label to a tweet by President Donald Trump [Donald Trump (@RealDonaldTrump), “There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone…..,” Twitter, May 26, 2020] concerning mail-in ballots and election fraud—the first time the social media company had ever added such a label to a tweet by an elected official. The company justified the decision by asserting that the President’s post was misleading; however, the issue of mail-in ballots and election integrity is legitimately debated, and Twitter’s label suggests otherwise. Furthermore, similar labels have not been added to outrageous liberal claims. For example, Senator Elizabeth Warren (D–MA) recently tweeted that “Racism isn’t a bug of Donald Trump’s administration—it’s a feature. Racism is built into his platform.” [Elizabeth Warren (@ewarren), “Let’s be clear: Racism isn’t a bug of Donald Trump’s administration—it’s a feature. Racism is built into his platform. And we have the opportunity—the obligation—to vote it out.” Twitter, October 22, 2020]
- In 2019, an internal e-mail from a Google employee referred to conservative commentator Ben Shapiro and others as “Nazis,” saying, “I don’t think correctly identifying far-right content is beyond our capabilities.” The e-mail appears to have been part of discussions within the company’s “transparency and ethics” group. [James Rogers, “Ben Shapiro Slams Google Over Email Describing Him as a ‘Nazi,’” FoxNews.com, June 26, 2019]
Since conservatives are largely thriving online, some may not find the examples above compelling; but, it should be sufficient to simply recognize these companies’ failure to secure public confidence or to conduct themselves in a coherent fashion that would entitle them to the benefit of the doubt. Moreover, by editing or adding labels to content posted by others, these companies are blurring the lines in unacceptable ways between being a mere conduit of content and being a “publisher or speaker” of the revised content.
Some will rightly argue that these are private companies that have no obligation to be “fair” or to provide their services in any other manner than in the one of their choosing. This, of course, is true. But, in the context of Section 230, it is important to remember that liability protection is a benefit, and the lack of this protection is not a penalty. This benefit is only given to online sources; real-world newspapers, bulletin boards, and other similar sources enjoy no such protections. For now, bestowing such a benefit makes sense; however, as one observer has said, “Section 230 immunity is a legal privilege to be earned by compliance with the attendant conditions. If an entity fails to comply, that just means it does not get the privilege; it does not mean the entity is being denied a right or being punished.” [Andrew McCarthy, “How to Put a Stop to Twitter’s Game-Playing on Censorship,” National Review Online, October 21, 2020, https://www.nationalreview.com/2020/10/washington-can-put-a-stop-to-twitters-game-playing-on-censorship/ (accessed October 21, 2020)]
It is high time that the scope and conditions of Section 230 are clarified.
A Word of Warning
While this Issue Brief joins others in calling for changes to Section 230, it does not align with all requested changes or all justifications offered for these changes. Some claim that social media companies should be regulated as “public utilities.” Others argue that federal antitrust actions should be taken against them. The first assertion is difficult to justify under the normal meaning of “public utility” because these companies do not have a government-imposed monopoly, and all of these businesses have multiple competitors in their respective markets. The second assertion is a separate issue altogether. Both arguments are often offered from a position of political grievance rather than strict policy analysis. While it is easy to empathize with such frustrations, this is an unwise approach to engaging an industry that constitutes nearly 7 percent of U.S. gross domestic product [U.S. Bureau of Economic Analysis, “Measuring the Digital Economy: An Update Incorporating Data from the 2018 Comprehensive Update of the Industry Economic Accounts,” April 2019] and nearly 40 percent of the S&P 500. [Amrith Ramkumar, “Tech’s Influence Over Markets Eclipses Dot-Com Bubble Peak,” The Wall Street Journal, October 16, 2020].
Even more fundamentally, conservatives should be especially mindful of the potential unintended consequences of overly aggressive or ill-considered changes. Some social media companies could choose not to moderate any content on their platforms out of fear that, like Prodigy, they would be held liable for content they did not remove. In a world where every minute of every day Facebook users upload 147,000 photos, Twitter gains 319 new users, Instagram users post 347,222 stories, and YouTube creators upload 500 hours of video, [Domo.com, “Data Never Sleeps 8.0”] the fear of missing something is a reasonable concern. This “no moderation” standard could significantly increase the presence of pornography and other objectionable content on these platforms—the exact opposite of Section 230’s intent.
On the other side of the spectrum, online communities could respond to increased liability by ratcheting up their content moderation, adopting a “no mercy” standard that, if past is prologue, could disproportionately impact conservative speech online. How likely is it, for example, that people will sue Facebook because a pro-life advertisement made them feel “unsafe”?
If handled carefully, Section 230 reform need not elicit these extreme responses; but it is important that all parties undertake this reform with eyes wide open.