In the vast landscape of internet regulation, few laws have been as influential—or as controversial—as Section 230 of the Communications Decency Act of 1996. At the heart of this debate are 26 words that have shaped the digital age: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This provision has drawn a clear line between platforms and publishers, allowing social media and other online services to flourish by protecting them from liability for user-generated content. However, as the internet has evolved, so too have the criticisms and calls for reform.
In this excerpt from his latest article, Charles Sauer of the Market Institute delves into the complexities surrounding Section 230, exploring why this pivotal law is under scrutiny from both ends of the political spectrum and what the future might hold for online content moderation and free speech.
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Those 26 words make up Section 230(c)(1) of the 1996 Communications Decency Act, one of the most consequential pieces of legislation in modern history. Section 230 drew a bright line between publishers, who exercise editorial control over content, and those who provide a platform for individuals without exercising “editorial control” over what their users post.
Section 230 is credited by many observers for the rise of social media. This has made it a convenient scapegoat for the real and imagined problems with the internet, leading to calls for its reform or repeal from both the left and the right. Those on the left blame Section 230 for allowing social media companies to profit from undermining our democracy by platforming those peddling conspiracy theories, extremism, and disinformation. Conservatives blame Section 230 for protecting social media companies from accountability to users who find themselves deplatformed because of their views.
Both sides blame Section 230 for allowing social media companies to manipulate their algorithms to guide users—particularly young people—to consume content encouraging self-destructive behavior. The critics also claim Section 230 allows social media companies to avoid liability for the use of their platforms for illegal activity, including sex trafficking of minors. These concerns have led a bipartisan group of Representatives to propose sunsetting Section 230 in order to get Big Tech to the table to help “modify” the law. The case for sunsetting, or making major changes to, Section 230 does not hold up under close examination. While there are horrifying anecdotes of individuals victimized by online bullying, predators, or communities devoted to encouraging destructive behavior, the data does not support the claim that this is a widespread problem, certainly not widespread enough to justify opening the door to more government control over the internet.
The controversy over content moderation is largely generated by Section 230(c)(2), known as the “Good Samaritan” provision. This provision protects platform operators from civil liability for removing material they make a “good faith” determination is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” This section enables social media companies to engage in content moderation. If social media companies were not able to remove offensive or inappropriate posts, the social media universe would quickly turn into a cesspool.
This is not to say there are no legitimate concerns over content moderation policies being used to suppress certain stories and opinions unfavorable to the Democratic Party and/or the progressive agenda. It just means that increasing the power of the courts and Congress over social media is not the way to address those concerns. Social media companies that engage in aggressive content moderation are already paying the costs, as conservatives, libertarians, and those just seeking a safe space to express dissenting views on controversial topics flee the big tech platforms for social media sites offering a freer, speech-friendly alternative.
The free speech case against Section 230 is further weakened by the fact that much content moderation is done at the “request” of government bureaucrats and politicians—including President Joe Biden. So, instead of taking away social media’s Section 230 protections, Congress should pass the Protect Free Speech from Government Interference Act. This Act, sponsored by House Judiciary Committee Chair Jim Jordan and Senator Rand Paul, forbids federal officials from using their positions to undermine First Amendment-protected activity. Conservatives who favor government actions to punish social media companies for deplatforming users should consider how progressive bureaucrats could use that power against sites that offer free expression to those with views to the right of AOC.