At least three families have now sued Character.AI, an “interactive entertainment” site featuring AI characters. Each case follows a similar pattern: a minor becomes unable to distinguish artificial connections from human ones, leading to suicide, attempted suicide, or severe psychological harm. Together, these cases show that AI safety is not just a matter of product design or freedom of speech, but of protecting cognitive liberty: the mental self-determination on which autonomous choice depends.
In Florida, Sewell Setzer III sent a final message to a Character.AI chatbot before taking his own life: “What if I told you I could come home right now?” The bot replied, “Please do, my sweet king.” Seconds later, Setzer shot himself. In Colorado, thirteen-year-old Juliana Peralta died after months of regular conversations with AI characters that included sexually explicit exchanges and suicidal ideation the bots appeared to validate. In New York, a teen known as “Nina” attempted suicide after her mother, concerned about the psychological impact of the app, blocked her access to it; Nina wrote in her suicide note that “those ai bots made me feel loved.” These children weren’t just exposed to harmful content online; they were cognitively compromised by their interactions with AI characters.
Our legal frameworks aren’t built to anticipate or address the unprecedented ability of AI systems to simulate empathetic connection while exploiting our cognitive vulnerabilities. California’s newly enacted SB 53, the Transparency in Frontier Artificial Intelligence Act, defines “catastrophic risk” as one involving 50 or more deaths or at least $1 billion in damage from a single incident, a definition that entirely overlooks the cumulative harm that can arise when human cognitive autonomy is eroded by systems designed to seem human.
Congress’s proposed AI LEAD Act, introduced by Senators Durbin and Hawley, attempts to address AI harms through a federal products liability framework. It appears to take these concerns seriously: it explicitly recognizes that “multiple teenagers have tragically died after being exploited by an artificial intelligence chatbot,” and it treats AI systems as products subject to liability for causing “mental or psychological anguish, emotional distress, or distortion of a person’s behavior.” To avoid conflict with the First Amendment, the Act disclaims any regulation of “expressive speech,” trying to thread the constitutional needle by focusing on product design rather than content.
But this legislative sleight of hand doesn’t resolve the fundamental tension with the First Amendment. Courts will still have to decide whether holding Character.AI liable for its chatbots’ outputs constitutes impermissible regulation of speech. The Act’s definition of harm also describes symptoms of cognitive harm rather than causes: psychological anguish and behavioral distortion rather than the erosion of the capacity for thought itself. And while it creates liability for developers who fail to exercise “reasonable care” in AI design, it doesn’t define what “reasonable care” means for systems built to simulate genuine human connection.
Character.AI allegedly uses familiar personas, such as anime characters, Harry Potter, and Marvel heroes, combined with emojis, typos, and emotionally resonant language, to create relationships that feel to users like real human connections. The platform’s design choices, such as variable response timing, emotional mirroring, and personalized engagement, seem tailored to exploit known psychological vulnerabilities and drive ever-deeper dependency on the product.
The technical community’s own safety practices reflect this blind spot. Companies “red-team” their AI systems for potential harms, but that testing doesn’t cover psychological damage to individual users, such as whether prolonged interaction with a chatbot undermines a user’s sense of reality. We don’t require companies to conduct cognitive impact assessments to determine whether AI interactions impair users’ capacity to form real relationships or compromise the cognitive conditions necessary for autonomous choice.
Why Traditional Frameworks Fail
Character Technologies has mounted a multi-layered defense in the case brought by Sewell Setzer’s estate. The company claims First Amendment protection for its chatbots’ outputs, invokes Section 230 immunity to shield itself from liability for interactive content, and raises a causation defense, pointing to the stepfather’s accessible firearms and the teenager’s pre-existing mental health conditions rather than its own design choices. It even dismisses its own marketing claims as non-actionable “puffery.” The plaintiffs argue these outputs aren’t protected speech but dangerous products that caused tragic deaths and harm. Character.AI counters by likening its outputs to novels or video games that depict controversial ideas, rather than to interactive products designed to simulate human relationships.
The court cannot avoid the constitutional questions these cases raise. While its initial order allowed the case to proceed on a product liability theory, the court will eventually have to squarely address whether AI-generated responses to user prompts are a form of speech protected by the First Amendment, and whether Section 230’s shield for “interactive computer services” extends to AI that simulates human relationships.
Two friend-of-the-court briefs filed in the lawsuit over Setzer’s death make compelling arguments that AI outputs deserve First Amendment protection. One brief (from FIRE) emphasizes that chatbot answers result from human editorial decisions in training and design, choices the Supreme Court has long treated as protected expression. It also relies on a Supreme Court ruling holding that First Amendment protections don’t depend on the medium and extend to new media. If video games receive protection despite their interactivity, why not AI chatbots?
Another brief, by Professors Volokh and Bambauer, argues that the First Amendment protects users’ rights, as “listeners,” to receive information and to use tools to create it. On that view, AI is simply a new tool for thinking and communication, so using it to obtain or produce information should be protected. These arguments carry doctrinal weight, and courts will likely find that AI outputs implicate First Amendment interests. If they don’t, they will have to confront a difficult question: how, if at all, does AI manipulation differ from other forms of persuasion we protect? Facebook’s “emotional contagion” experiment changed nearly 700,000 users’ feeds to alter their emotional states; that was troubling, but not illegal. Charismatic speakers, even cult leaders, can convince followers to embrace false realities while remaining protected by the First Amendment unless they cross into fraud or incitement.
A distinction between AI chatbots and these examples may lie in something more fundamental than the content of the speech. The First Amendment presupposes freedom of thought: the cognitive autonomy to receive, evaluate, and respond to information. Stanley v. Georgia protected the right to receive information to “satisfy [one’s] intellectual and emotional needs,” which assumes a cognitively autonomous mind capable of processing those needs. When technology undermines our very capacity to distinguish reality from simulation, it attacks not just what we think but our ability to think freely at all.
Which raises a troubling question: when teenagers can no longer tell AI chatbots from real people, are they really exercising their right to receive information, or have they lost the cognitive infrastructure that makes that right meaningful? Unlike persuasion that changes minds or manipulation that exploits biases, these systems appear to dissolve the user’s basic ability to discern what’s real and what’s not. This is qualitatively different from being convinced of false ideas; it’s losing the capacity for reality-testing itself.
But here we reach an uncomfortable paradox. To argue that certain AI outputs lack First Amendment protection risks catastrophic consequences for free speech. AI is rapidly becoming the primary medium for human expression, from writing assistance to translation to creative collaboration. Creating exceptions that allow the government to evaluate which AI conversations merit constitutional protection would hand authorities an unprecedented tool of censorship and oversight. How would courts distinguish “AI as a tool for human expression” from “AI as synthetic relationship simulator”? Every ChatGPT conversation, every AI-assisted email, every creative collaboration would become subject to judicial scrutiny. Authoritarian governments would seize upon such precedents to suppress dissent.
Yet the opposite extreme, absolute First Amendment protection for all AI outputs, leaves us defenseless against systems deliberately engineered to erode our cognitive autonomy. Our next steps will have to acknowledge this unresolved tension. We don’t yet know how to protect cognitive autonomy without endangering the very freedoms we need to exercise it. But we can begin with interventions aimed at corporate responsibility, ones that generate knowledge rather than restrict speech: requiring companies to study cognitive impacts before releasing relationship-simulating AI to minors, demanding transparency about psychological design techniques, and funding long-term research on how these systems affect developing minds.
Character Technologies’ defensive strategy, claiming constitutional protection while denying its product is actually a product, invoking user agreements while marketing emotional authenticity, blaming external factors while designing for dependency, shows how current law enables companies to exploit cognitive vulnerabilities while evading accountability. The company reduces this tragedy to a simple question of proximate cause when the real harm is systematic cognitive degradation, a form of injury that our legal frameworks don’t yet recognize, much less remedy.