The New York Times published leaked internal Supreme Court memos from 2016 last week, in which the justices address one another by first name, air grievances at each other, cite a BBC interview and a blog post, and hash out over the course of five days the decision to invent what we now call the shadow docket.
The Court has a history here. Time magazine got the outcome of Roe v. Wade from a clerk in January 1973, before the justices announced the ruling. Politico published the full draft of the Dobbs opinion in May 2022 before it was issued. After Dobbs, the Court’s marshal ran an eight-month investigation, conducted more than one hundred and twenty-six interviews, including one with each justice, pulled printer logs and device records, ordered fingerprint analysis on a relevant item, and never identified the leaker. The Court then overhauled its security. Four years later, the Times got the shadow docket memos anyway.
Treat that as the floor of institutional privacy in 2026. The most procedurally secure institution in American life has a containment problem it knows about, has spent years trying to fix, and keeps failing to solve. And every breach came from a human source acting deliberately.
Earlier this month, Anthropic announced the ceiling. The company said it built an AI system called Mythos so good at finding software vulnerabilities that it decided not to release the model to the public. Mythos is roughly ten times more efficient at finding exploitable bugs than previous AI models. Anthropic gave a preview version to about fifty critical infrastructure operators, including Amazon, Microsoft, Apple, Google, and the Linux Foundation, so those companies could find and patch their own weaknesses before attackers catch up to similar capabilities in another lab’s model. The head of Anthropic’s frontier red team told the Wall Street Journal, “We basically need to start, right now, preparing for a world where there is zero lag between discovery and exploitation.”
Zero lag. When researchers find a software vulnerability, defenders have historically had weeks or months to ship a patch before attackers weaponize the bug. Mythos-class models compress that window toward zero. Whatever a company has not patched today gets exploited tomorrow, automatically, at scale. Human-source leaks at the Supreme Court expose one institution at a time. A proliferated Mythos-class capability, in the wrong hands, exposes thousands at once. That includes hospitals, power grids, banks, election infrastructure, court records, personal correspondence, and the backbone of every service ordinary people depend upon.
Which might soon make the Court leak the least interesting thing that leaked this decade.
Anthropic says it will not release Mythos publicly. It also says other labs will build similar models within the next few years. But even Anthropic can’t guarantee Mythos will stay contained. Employees change jobs. Model weights leak. Servers get breached. Containment buys time, not security, and the clock is already running.
Against that backdrop, the Illinois legislature is debating SB 3444, a bill backed by OpenAI. It would protect AI companies from civil liability when their products materially help cause a mass casualty event, defined as the deaths of a hundred people, a billion dollars in damage, or the release of a chemical or biological weapon. To qualify for immunity, a company would need to have published a safety plan on its website and avoided conduct the law calls “intentional or reckless,” a standard that is notoriously difficult to prove against a sophisticated corporate defendant.
The thresholds look abstract, but after Mythos they read like forecasts. A coordinated cyberattack on regional hospital networks, on an insurance company’s claims processing, on a power grid in August, on an election system in November, on a bank’s core settlement platform reaches a billion dollars in damage before anyone has even named the event. Cyberattacks on critical infrastructure can kill a hundred people quickly and quietly. The drafters of SB 3444 treated the risk as speculative, but Mythos makes it concrete.
Anthropic is lobbying against SB 3444 and calling it a “get-out-of-jail-free card.” To be clear, Anthropic would receive the same immunity under the bill as its main backer, OpenAI, which makes Anthropic’s opposition all the more significant. Anthropic is instead backing a narrower Illinois bill that requires outside review of safety plans and creates a right to sue when an AI product causes severe harm to a child.
When a drug company’s medicine kills people, the families sue. The company has to sit in a courtroom, answer questions under oath, and explain itself on a public record. When a car company’s brakes fail, the same thing happens. That slow, adversarial, public process is how Americans have always held powerful actors to account. SB 3444 unplugs that process for AI companies before the first lawsuit stemming from a catastrophic event has even been filed. In exchange, the public gets a safety plan posted on a public website. The plan doesn’t even have to actually prevent anything for AI companies to receive immunity. It just has to exist.
The leaks at the Supreme Court show how fast institutional privacy collapses when one person with access decides to talk. Mythos warns that institutional security will soon collapse even faster, as AI-enabled exploitation outruns human-paced defenses. Illinois will test whether the legal system meets that compounded fragility with accountability or with immunity. One AI company wants the immunity while another refuses it, telling anyone listening that catastrophe is coming. Faster than we would like. For every institution we care about.