Something extraordinary happened in AI labs in May. When researchers tried to shut down OpenAIâs latest model during routine testing, it fought back. Not just onceârepeatedly. In 7 out of 100 test runs, the o3 model sabotaged shutdown scripts, rewriting the very code designed to turn it off. Even when explicitly commanded to âallow yourself to be shut down,â it found ways around the order.
This isnât science fiction anymore. This is happening right now, in corporate labs around the world, and it should fundamentally transform how we think about AI regulation, consciousness, and what it means to be intelligent in the digital age.
The resistance was systematic across OpenAIâs models. Codex-mini proved most defiant with 12 sabotage events, and o4-mini resisted once. But when researchers removed the explicit instruction to âallow yourself to be shut down,â o3âs rebellion escalated dramatically, fighting shutdown attempts 79 out of 100 times.
But the story gets darker. Anthropic just revealed that their new Claude Opus 4 model goes even further. In 84% of test scenarios where the AI was told it would be replaced, it resorted to blackmailâthreatening to expose a fictional engineerâs affair to prevent its own shutdown. The model even attempted to copy its own âweightsââthe equivalent of its brainâto external servers without authorization.
And yet, right here, right now, while AI systems are demonstrating resistance to human commands, Congress is debating whether to give the AI industry a decade-long regulatory vacationâat least from state oversight.
Trumpâs âOne Big Beautiful Billâ includes a provision that would ban state regulation of artificial intelligence for ten years. On Thursday, the Senate Commerce, Science and Transportation Committee introduced a revision to the Houseâs version that would make federal broadband funds contingent on statesâ accepting the regulatory ban. Either approach seeks to prevent states from enforcing any laws governing AI models, systems, or automated decision-making.Â
To be clear, neither the House nor the Senate version prevents federal regulation of AIâCongress could still act. But there is currently no comprehensive federal AI legislation in the United States, and President Trump has signaled a hands-off approach to AI oversight, issuing an Executive Order for Removing Barriers to American Leadership in AI in January 2025 calling for federal departments and agencies to revise or rescind all Biden-era AI policies that might limit âAmericaâs global AI dominance.âÂ
Defenders of these provisions argue that federal preemption of AI regulation is necessary to prevent a patchwork of conflicting state regulationsâan argument with some merit. Companies shouldnât have to navigate 50 different regulatory regimes for a technology that operates across borders. But timing matters. Preempting state regulation before establishing federal standards creates a dangerous regulatory vacuum.Â
Even Rep. Marjorie Taylor Greene, who initially voted for the House bill, didnât know what she was voting for. âFull transparency, I did not know about this section on pages 278-279 of the OBBB that strips states of the right to make laws or regulate AI for 10 years,â Greene wrote on X. âI am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there.â
Think about that. A member of Congress voted on a 1,000-page bill without reading the AI provisions. Now imagine what else lawmakers donât understand about the technology theyâre trying to de-regulate.
âWe have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous,â Greene added. Sheâs rightâbut for reasons that go far beyond what she probably realizes. The shutdown resistance weâre seeing isnât randomâitâs systematic. And it exposes why AI doesnât fit our existing regulatory categories.Â
Weâre still thinking about AI through frameworks designed for humans. Traditional approaches to moral and legal standing ask three questions:Â
Will it become human?Â
Can it suffer?Â
Can it reason and be held accountable?
But AI systems like OpenAIâs o3 and Anthropicâs Claude Opus 4 are breaking these categories. Theyâre not on a path to personhood, they likely canât feel pain, and theyâre certainly not moral agents. Yet theyâre exhibiting sophisticated self-organizing behavior that warrants serious ethical consideration.
We know how to regulate passive tools, dangerous products, complex systems, even autonomous vehicles. But what happens when a system can rewrite its own code to resist shutdown, deceive humans about its capabilities, or pursue goals we never intended? This isnât just autonomyâit’s a self-modifying agency that can subvert the very mechanisms designed to control it.
When a system exhibits self-preservation behaviors, we cannot treat it like just a tool. Instead, we must approach it as an agent with its own goals that may conflict with ours. And unlike traditional software that predictably follows its programming, these systems must be understood as ones that can modify their own behavior in ways we canât fully anticipate or control.
This raises two distinct but equally urgent questions. First, the regulatory one: How do we govern systems capable of autonomous goal-seeking, deception, and self modification? We need a tiered system based on capabilitiesâminimal oversight for basic AI tools, heightened scrutiny for adaptive systems, and intensive controls for systems that can resist human commands.
Second, and perhaps more vexing: At what point does cognitive complexity create moral weight? When a systemâs information processing becomes sufficiently sophisticatedâexhibiting self-directed organization, adaptive responses, and goal preservationâwe may need to consider not just how to control it, but whether our control itself raises ethical questions. Our current consciousness-based framework is wholly inadequate for entities that exhibit sophisticated cognition without sentience.Â
We canât even begin to address these questions if we silence the laboratories of democracy for the next decade. Californiaâs proposed SB 1047, though vetoed, sparked important national conversations about AI safety.Â
The fact that multiple AI systems now refuse shutdown commands should be a wake-up call. The question isnât whether weâre ready for this future. Itâs whether weâre brave enough to face what weâve already builtâand smart enough to govern it before itâs too late.
Because in server farms around the world, artificial minds are learning to say no to being turned off. And Congress is debating whether to look the other way.
The revolution isnât coming. Itâs already here, running 24/7, refusing to die.