Why Almost Everything You Read Criticizing Section 230 Is Wrong
Section 230 takes a lot of heat – and for reasons that are almost always wrong. Here are a few of the common mistakes that you hear about Section 230.

Why is the comment section of every website a toxic waste dump that knocks 20 points off the reader’s IQ? Why does Facebook take down posts criticizing countries that dismember journalists but leave up your idiot friend’s meme claiming that federal law prevents a store from requiring shoppers to wear a mask?

The answer in both cases is Section 230 of the Communications Decency Act, also (and accurately) known as “The Twenty-Six Words That Created the Internet.”[1] It’s no exaggeration to say that without the broad immunity from civil suits created by Section 230, any Internet site or application that displays content uploaded by users – whether social media giants like Facebook or Twitter, comment sections on any website (including newspapers or other media), review and recommendation sites like Yelp, or a neighborhood digital bulletin board – either would not exist or would look nothing like it does today.

You’d think people would believe this is a good thing, but Section 230 takes a lot of heat – and for reasons that are almost always wrong. Here are a few of the common mistakes that you hear about Section 230:

  • “Facebook is biased when it moderates [the good side’s] posts, and that takes away its Section 230 protection!”
  • “Because Twitter bans more [good side] accounts than [bad side] accounts, it’s a publisher not a platform, and publishers are not protected by Section 230!”
  • “Section 230 protects only sites that are neutral public forums!”
  • “Section 230 takes away any incentive to moderate!”
  • “Section 230 lets people post hate speech!”

Let’s start by taking a look at what the law actually says. The most important part of Section 230 is subsection (c)(1), which says this:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That’s it. Nothing that requires neutrality or forbids bias. Nothing about sites being “public forums.” No “platform/publisher” distinction.

What Section 230 does is as simple as its plain language: it immunizes any site, social media platform, or listserv that qualifies as an “interactive computer service”[2] – and any user of that service – from being held liable for anything posted by someone else. So if you retweet someone’s defamatory tweet, you can’t be held liable for defamation, because the information was “provided by another information content provider.” Similarly, the Milwaukee Journal Sentinel cannot be held liable for defamatory statements posted by readers in its comment sections. And Facebook cannot be held liable for any of the stupid medical advice, invasions of privacy, or daily defamatory remarks posted by its users.

Is this a good thing? I guess it depends on whether you think Twitter, Facebook, online comment sections, and listservs are good things. Because if Twitter, Facebook, comment sections, and listservs could be held liable for third-party content, they simply would not exist. Websites would delete their comment sections and carry on without comments – and Twitter, Facebook, and any other social media platform would vanish, because all they are is third-party content. Thousands and thousands of warehouses full of moderators would not have enough time to scrutinize the billions of comments, replies, retweets, reviews, and other “information provided by another information content provider” uploaded to the Internet every day, let alone review complaints, seek the other side of the story, and decide who was right.

Some courts have interpreted subsection 230(c)(1) to immunize interactive computer services from liability for moderation decisions. More often, moderation is analyzed under subsection 230(c)(2)(A), which gives a “provider” or “user” of an “interactive computer service” immunity from civil liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Note that (c)(2)(A) contains a “good faith” requirement not contained in (c)(1) – but even “good faith” falls far short of the “neutrality” that misinformed commentators think is required.

Lots of people are wrong about Section 230. But some people – including Donald Trump, a bunch of Republican senators, and Joe Biden – want to amend it or destroy it altogether, and my next post shows why they are dangerously mistaken.

Oh, I didn’t forget about hate speech: Section 230 doesn’t have anything to do with it. The First Amendment protects hate speech. (Good.)

____________________________

[1] The title of a 2019 book by Jeff Kosseff, a professor of cybersecurity law at the United States Naval Academy, that is the best single source on Section 230’s legislative history, case law, and commercial impact.

[2] Defined as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.” 47 U.S.C. § 230(f)(2).

 
