Dear Reader,
In the avalanche of coverage following the seismic Supreme Court rulings over the last couple of weeks concerning Chevron and presidential immunity, it would be easy to miss a couple of other important decisions involving content moderation on social media. Those cases, Murthy v. Missouri and the NetChoice cases against Texas and Florida, reveal how tricky it is to strike the right First Amendment balance between the government and social media platforms. They also highlight the difficulty of crafting a "neutral" legal standard in a disinformation landscape that is politically asymmetrical.
Let's start with the cases. Murthy v. Missouri involved a lawsuit filed by the Republican attorneys general of Missouri and Louisiana, along with several individuals whose social media posts were moderated or removed, who alleged that the platforms acted under improper pressure from the Biden administration in the course of combating COVID and election misinformation. The question for the justices was whether the government's actions amounted to "jawboning," or informal pressure or coercion that effectively results in censorship. The NetChoice cases involve laws in Texas and Florida that limit and regulate social media platforms' content moderation decisions, in order to curb what the states saw as attempts to censor conservative users following the events of January 6. The social media platforms, represented by internet trade associations, claimed that these laws violated their First Amendment rights. In Murthy, the Court dismissed the case for lack of standing, finding the link between the harm allegedly suffered by the plaintiffs and the government's actions too attenuated. In the NetChoice cases, the Court remanded the cases to the lower courts to apply the proper analysis in assessing the constitutionality of the laws, preventing them from going into effect in the meantime.
The legal issues in these cases haven't been fully resolved yet, but the facts themselves present an interesting juxtaposition: In one case, the federal government was trying to coordinate with social media platforms to police mis- and disinformation. In the other cases, by contrast, state governments were trying to prevent social media platforms from moderating mis- and disinformation. Taking the cases together, where should we stand when it comes to government influence over and involvement with social media platforms? Do we want the government to police these platforms, formally or informally, or do we want it to just butt out?