Last week two American juries found social media companies liable for harming children. The New Mexico verdict, which came first, was framed in early coverage as a prelude to the Los Angeles verdict, widely regarded as the “bellwether” case. But the two cases did different legal work and face distinct appellate threats, which matters enormously for the 2,000 cases still pending. The First Amendment may kill the Los Angeles judgment on appeal while leaving New Mexico’s verdict largely untouched.

In the Los Angeles case, a young woman identified in court documents as Kaley sued Meta and Google’s YouTube (TikTok and Snap were also defendants, but both settled before trial), arguing that Instagram and YouTube had caused her mental health problems. Kaley began using YouTube at age six and Instagram by age nine. By her early teens she had developed severe depression, anxiety, and body dysmorphia. Her lawyers were careful to argue that the platforms were defective products, deliberately and negligently engineered to be addictive through features like infinite scroll, autoplay, variable reward schedules, and algorithmic recommendations designed to maximize time on the app, irrespective of the psychological cost. Importantly, they did not argue that the platforms had shown her harmful content posted by other users. After a trial that included testimony from Mark Zuckerberg himself, and nine days of deliberation, the jury found both Meta and Google liable and awarded Kaley $3 million in compensatory damages and $3 million in punitive damages.

One day earlier, a jury in Santa Fe had reached a verdict against Meta in a very different kind of case. New Mexico’s attorney general sued Meta in 2023 after an undercover operation in which his office created fake Instagram accounts purporting to belong to minors as young as 12. The accounts were flooded with solicitations from predators, and the platform appeared indifferent to stopping them. The jury found that Meta had made false or misleading statements about platform safety and had exploited the vulnerability and inexperience of children, and it awarded $375 million.

Some legal scholars and commentators have argued that social media companies should be held liable the way tobacco companies eventually were. The tobacco analogy is more precise than its critics credit. The turning point in tobacco litigation wasn’t jury verdicts finding that cigarettes caused cancer. Plaintiffs had occasionally won those cases without driving systemic change, because individual causation was always contestable and appellate courts kept finding ways to reverse. But once state attorneys general started using their broad discovery powers to force disclosure of internal industry documents showing that companies had suppressed their own research on addiction and cancer risk for decades, the question was no longer whether smoking caused harm but whether the companies had lied about what they knew.

New Mexico just demonstrated that the same legal tool is available against social media. The 2,000 pending cases can be organized around proving either that the platforms were badly designed or that the companies knew what they were doing and misrepresented it. The design defect theory may be more vulnerable on appeal to both Section 230 and First Amendment challenges; the fraud and misrepresentation theory faces neither to the same degree.

Although intuitively compelling, the design defect theory rests on shakier legal ground. Social media companies have for three decades been shielded by Section 230 of the Communications Decency Act, which protects platforms from liability for content their users post. In the Los Angeles case, Kaley’s lawyers targeted not what appeared on the platforms but how they were built. The trial judge agreed that design was a separate question from content and instructed the jury accordingly. That distinction between product design and content is what made the verdict possible. But it is also what Meta and Google will attack on appeal: there is a legitimate argument that when someone is harmed by becoming addicted to Instagram, what she keeps returning to is content other people posted, and an algorithm that surfaces that content is organizing it rather than creating it, which is exactly the publishing activity Section 230 shields.

The First Amendment threat is potentially even more serious. In 2024, the Supreme Court decided Moody v. NetChoice, a case involving Florida and Texas laws that tried to restrict how platforms moderate content. The Court reasoned that a platform’s algorithmic curation of third-party content is protected editorial discretion under the First Amendment, the same principle that prevents the government from telling a newspaper what to print, and remanded the cases for further proceedings consistent with that framework. If Meta’s recommendation algorithm is protected editorial expression, a court order requiring Meta to redesign it may be compelled speech, which the First Amendment forbids regardless of the government’s interest in protecting children. An appellate court sympathetic to that principle, and there are several, could extend Moody’s logic to tort liability and conclude that holding a platform liable for constitutionally protected design choices is a First Amendment violation dressed up in tort law.

Notice, though, what that argument does and doesn’t do for the New Mexico case. The fraud and misrepresentation theory doesn’t require any court to order Meta to change its algorithm. It punishes Meta for the gap between what its executives said in public and what they knew in private. Fraud doctrine has never been understood to implicate the First Amendment’s protection of editorial discretion, because knowing deception is not protected speech. A company can claim First Amendment protection for its editorial choices; it cannot claim the same protection for lying about them.

Internal Meta documents read aloud in the courtroom showed that the company had estimated in 2015 that 30 percent of American 10-to-12-year-olds were already on Instagram, and that it had set an explicit goal of increasing the time 10-year-olds spent there. A 2018 document was blunter: “If we wanna win big with teens, we must bring them in as tweens.” When the company’s own experts concluded that appearance-enhancing filters contributed to body-image problems in young girls, Zuckerberg declined to remove the filters, calling that response “paternalistic.” On the stand, he told the jury that keeping users safe has always been a priority and that a platform where people feel unsafe is “not sustainable.” The jury’s task was to weigh the distance between what was in those documents and what Meta told parents.

A theory of damages based on insult to mental self-determination runs through both verdicts, even if the law hasn’t named it yet. Cognitive liberty, the right to think, form preferences, and make decisions free from external manipulation, has never had a cleaner test case. Product liability is an imperfect fit, and the negligence doctrine underlying the Los Angeles verdict wasn’t built for these harms. And as Moody shows, the First Amendment may complicate any effort to remedy insults to cognitive liberty through product design mandates. But the fraud theory offers a path to protecting cognitive liberty that doesn’t require the law to develop new doctrine. It simply requires companies to tell the truth about what they know. That obligation is as old as the common law. And the documents showing what these companies knew already exist.