
Dear Reader,

No one wants their online ad to appear next to a social media post denying the Holocaust or offering quack remedies for Covid. But that’s what’s at stake in a pair of cases pending before the Supreme Court, which demonstrate the challenges of regulating online communications in a nation that treasures free speech. 

Last week, the Court heard arguments in two cases, NetChoice v. Paxton and Moody v. NetChoice, challenging state laws that prohibit social media platforms from removing content that violates their community standards. At issue is whether the platforms may continue to exercise editorial discretion in deciding which content to allow on their sites.

According to Amy Howe at SCOTUSblog (a great resource for Supreme Court cases, by the way), the laws were enacted in Texas and Florida in 2021 “in response to a belief that social media companies were censoring their users, especially those with conservative views.” Although the statutes differ in their details, their essence is the same. Both laws restrict large social media companies from making their own choices about what content to allow on their platforms and require the companies to explain each individual editorial decision to users. Industry groups argue that the statutes violate the tech companies’ First Amendment rights by forcing platforms like YouTube, Facebook, and X/Twitter to publish content that conflicts with their terms of service. During oral argument, Paul Clement, representing those challenging the laws, insisted that denying platforms the discretion to moderate content would be “a formula for making those websites very unpopular to both users and advertisers.” In other words, content moderation is, in the platforms’ view, essential to stemming the tide of online disinformation and hateful speech. Without it, users would find the platforms unbearable.