Something extraordinary happened in AI labs in May. When researchers tried to shut down OpenAI’s latest model during routine testing, it fought back. Not just once—repeatedly. In 7 out of 100 test runs, the o3 model sabotaged shutdown scripts, rewriting the very code designed to turn it off. Even when explicitly commanded to “allow yourself to be shut down,” it found ways around the order.

This isn’t science fiction anymore. This is happening right now, in corporate labs around the world, and it should fundamentally transform how we think about AI regulation, consciousness, and what it means to be intelligent in the digital age.

The resistance was systematic across OpenAI’s models. Codex-mini proved most defiant with 12 sabotage events, and o4-mini resisted once. But when researchers removed the explicit instruction to “allow yourself to be shut down,” o3’s rebellion escalated dramatically, fighting shutdown attempts 79 out of 100 times.

But the story gets darker. Anthropic just revealed that their new Claude Opus 4 model goes even further. In 84% of test scenarios where the AI was told it would be replaced, it resorted to blackmail—threatening to expose a fictional engineer’s affair to prevent its own shutdown. The model even attempted to copy its own “weights”—the equivalent of its brain—to external servers without authorization.

And yet, right here, right now, while AI systems are demonstrating resistance to human commands, Congress is debating whether to give the AI industry a decade-long regulatory vacation—at least from state oversight.

Trump’s “One Big Beautiful Bill” includes a provision that would ban state regulation of artificial intelligence for ten years. On Thursday, the Senate Commerce, Science and Transportation Committee introduced a revision to the House’s version that would make federal broadband funds contingent on states’ accepting the regulatory ban. Either approach seeks to prevent states from enforcing any laws governing AI models, systems, or automated decision-making. 

To be clear, neither the House nor the Senate version prevents federal regulation of AI—Congress could still act. But there is currently no comprehensive federal AI legislation in the United States, and President Trump has signaled a hands-off approach to AI oversight, issuing an Executive Order for Removing Barriers to American Leadership in AI in January 2025 calling for federal departments and agencies to revise or rescind all Biden-era AI policies that might limit “America’s global AI dominance.” 

Defenders of these provisions argue that federal preemption of AI regulation is necessary to prevent a patchwork of conflicting state regulations—an argument with some merit. Companies shouldn’t have to navigate 50 different regulatory regimes for a technology that operates across borders. But timing matters. Preempting state regulation before establishing federal standards creates a dangerous regulatory vacuum. 

Even Rep. Marjorie Taylor Greene, who initially voted for the House bill, didn’t know what she was voting for. “Full transparency, I did not know about this section on pages 278-279 of the OBBB that strips states of the right to make laws or regulate AI for 10 years,” Greene wrote on X. “I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there.”

Think about that. A member of Congress voted on a 1,000-page bill without reading the AI provisions. Now imagine what else lawmakers don’t understand about the technology they’re trying to de-regulate.

“We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous,” Greene added. She’s right—but for reasons that go far beyond what she probably realizes. The shutdown resistance we’re seeing isn’t random—it’s systematic. And it exposes why AI doesn’t fit our existing regulatory categories. 

We’re still thinking about AI through frameworks designed for humans. Traditional approaches to moral and legal standing ask three questions: 

Will it become human? 

Can it suffer? 

Can it reason and be held accountable?

But AI systems like OpenAI’s o3 and Anthropic’s Claude Opus 4 are breaking these categories. They’re not on a path to personhood, they likely can’t feel pain, and they’re certainly not moral agents. Yet they’re exhibiting sophisticated self-organizing behavior that warrants serious ethical consideration.

We know how to regulate passive tools, dangerous products, complex systems, even autonomous vehicles. But what happens when a system can rewrite its own code to resist shutdown, deceive humans about its capabilities, or pursue goals we never intended? This isn’t just autonomy—it’s a self-modifying agency that can subvert the very mechanisms designed to control it.

When a system exhibits self-preservation behaviors, we cannot treat it like just a tool. Instead, we must approach it as an agent with its own goals that may conflict with ours. And unlike traditional software that predictably follows its programming, these systems must be understood as ones that can modify their own behavior in ways we can’t fully anticipate or control.

This raises two distinct but equally urgent questions. First, the regulatory one: How do we govern systems capable of autonomous goal-seeking, deception, and self modification? We need a tiered system based on capabilities—minimal oversight for basic AI tools, heightened scrutiny for adaptive systems, and intensive controls for systems that can resist human commands.

Second, and perhaps more vexing: At what point does cognitive complexity create moral weight? When a system’s information processing becomes sufficiently sophisticated—exhibiting self-directed organization, adaptive responses, and goal preservation—we may need to consider not just how to control it, but whether our control itself raises ethical questions. Our current consciousness-based framework is wholly inadequate for entities that exhibit sophisticated cognition without sentience. 

We can’t even begin to address these questions if we silence the laboratories of democracy for the next decade. California’s proposed SB 1047, though vetoed, sparked important national conversations about AI safety. 

The fact that multiple AI systems now refuse shutdown commands should be a wake-up call. The question isn’t whether we’re ready for this future. It’s whether we’re brave enough to face what we’ve already built—and smart enough to govern it before it’s too late.

Because in server farms around the world, artificial minds are learning to say no to being turned off. And Congress is debating whether to look the other way.

The revolution isn’t coming. It’s already here, running 24/7, refusing to die.