By Nita Farahany

Bradford Smith thinks, and AI speaks for him. Every day.

This isn’t some far-off science fiction scenario; it’s happening right now. Smith, who has ALS and can’t speak, uses his Neuralink brain implant to think what he wants to say. Then Elon Musk’s Grok AI turns those thoughts into polished sentences. It’s amazing and unsettling.

I’m writing from Paris, where I’m participating in UNESCO’s intergovernmental meeting to finalize global standards on neurotechnology ethics. The timing feels surreal. Here we are discussing abstract principles about mental privacy, freedom of thought, and autonomy while Bradford Smith is living the reality back home. The U.S. Constitution protects freedom of speech, but no one imagined we’d one day need to protect our thoughts themselves.

Smith is the third person in Neuralink’s human trial, but the first with ALS and the first who is completely nonverbal. What makes his case extraordinary is his use of AI chatbots to help formulate his responses.

As Smith wrote in his first message on X: “I am the 3rd person in the world to receive the @Neuralink brain implant. 1st with ALS. 1st Nonverbal. I am typing this with my brain. It is my primary communication. Ask me anything! I will answer at least all verified users!”

But here’s where it gets complicated. When users noted the sophisticated wording of his replies—complete with literary devices and perfect punctuation—Smith confirmed he was using Grok AI to help draft his responses. “I asked Grok to use that text to give full answers to the questions,” he explained in a message to MIT Technology Review. “I am responsible for the content, but I used AI to draft.”

This raises profound questions that go beyond the technical marvel of the implant itself: When AI completes your thoughts, whose thoughts are they? Smith controls the cursor with his brain and selects the AI’s suggestions, but the precise wording isn’t fully his own. Yet dismissing this as “not really him” would rob him of the communicative agency he’s fought so hard to regain.

How do we assess authenticity? The AI might introduce subtle biases or stylistic elements that Smith wouldn’t naturally use. But then again, all communication technologies—from email to text messages—shape how we express ourselves. Is this fundamentally different in kind, or merely in degree?

What happens when hallucinations occur? If Smith attempts to communicate his medical preferences, and the AI fabricates treatment details he never mentioned, the consequences could be life-altering.

Who owns and controls his mental data? Smith’s brain signals are being processed through proprietary systems owned by two of Elon Musk’s companies. What rights does he have to his own neural information?

These aren’t just theoretical concerns. Smith’s communication passes through multiple layers of corporate technology before reaching another human: Neuralink’s implant, a MacBook processor, and Musk’s AI chatbot. This creates a novel form of intermediated speech. And currently, the legal status of brain data remains dangerously undefined.
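To make that layering concrete, here is a minimal sketch of the path a single utterance takes, written as illustrative Python. Every name and function below is hypothetical; it mirrors the structure described above, not Neuralink’s or xAI’s actual software.

```python
# Purely illustrative model of intermediated speech: each stage stands in
# for a separate corporate system between a user's neurons and the reader.

from dataclasses import dataclass

@dataclass
class Utterance:
    user_text: str        # terse text composed via the brain-controlled cursor
    ai_draft: str         # chatbot-polished version of that text
    user_approved: bool   # whether the user signed off on the final wording

def decode_neural_signals(signals: list[float]) -> str:
    """Stage 1 (implant + laptop): motor signals -> cursor -> typed text."""
    return "thanks everyone for the questions"  # stand-in for a real decoder

def polish_with_chatbot(user_text: str) -> str:
    """Stage 2 (LLM): expand terse input into fluent prose."""
    return "Thank you, everyone, for your thoughtful questions."  # stand-in

def publish(utterance: Utterance) -> str:
    """Stage 3 (platform): release only wording the user explicitly confirmed."""
    if not utterance.user_approved:
        raise PermissionError("draft was never confirmed by the user")
    return utterance.ai_draft

raw = decode_neural_signals([0.12, -0.4, 0.9])
draft = polish_with_chatbot(raw)
print(publish(Utterance(user_text=raw, ai_draft=draft, user_approved=True)))
```

Even in this toy version the governance gap is visible: the lone authenticity safeguard is the approval flag, and nothing in the pipeline records what the decoder or the chatbot changed along the way, or where the neural data ends up.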

In an interview on Neura Pod, Smith describes being “basically Batman” and “stuck in a dark room” before the implant—dependent on an eye-tracking system that only worked in low light. Now he can communicate in brighter spaces, even outdoors, and with greater speed. For Smith, this technology is truly liberating.

But his interactions on X also point to a troubling future—surveillance capitalism applied to our most intimate domain. Imagine if your most private thoughts became just another data source to be mined, like your clicks and views are today. You think about feeling sad and suddenly see ads for antidepressants. You wonder about a career change, and your insurance rates subtly increase. This sounds paranoid until you realize it’s just the brain-data version of what’s already happening with our online activity.

Smith himself seems aware of these tensions, telling MIT Technology Review that he’d like to develop a more “personal” large language model that “trains on my past writing and answers with my opinions and style.” That future is racing toward us. In March, Synchron unveiled a partnership with NVIDIA to create Chiral, a foundation model of human cognition—think LLMs trained on brain data. The companies also demonstrated how an AI-enabled BCI could pair with Apple Vision Pro, allowing users to control their digital environments using brain signals. These advances hold great promise for restoring autonomy to individuals—if the technologies serve users rather than exploiting them.
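What might a “personal” model like the one Smith describes involve? He envisions fine-tuning on his own writing; a lighter-weight sketch of the same idea, retrieving a user’s past writing as style examples for the prompt, could look like this. Everything here is a hypothetical illustration, not any company’s actual system.

```python
# Illustrative sketch of voice personalization via retrieval rather than
# fine-tuning: pick the user's most relevant past passages and prepend
# them to the prompt as style examples. All names are hypothetical.

def overlap_score(query: str, passage: str) -> int:
    """Crude relevance score: number of words the query and passage share."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def build_personal_prompt(question: str, past_writing: list[str], k: int = 2) -> str:
    """Assemble a prompt asking the model to answer in the user's own voice."""
    examples = sorted(past_writing, key=lambda p: overlap_score(question, p),
                      reverse=True)[:k]
    samples = "\n".join(f"- {e}" for e in examples)
    return (f"Answer in the voice of the author of these samples:\n{samples}\n\n"
            f"Question: {question}\nAnswer:")

past = [
    "I am typing this with my brain. It is my primary communication.",
    "Ask me anything! I will answer at least all verified users!",
]
print(build_personal_prompt("How does the implant change your day?", past))
# In a real system, the assembled prompt would then go to a language model.
```

A model conditioned this way keeps the wording closer to the user’s own voice, which speaks to the authenticity question above; it also concentrates an even more intimate corpus of personal data in a single system.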

The problem? Apart from a handful of new state laws on “neural data,” no legal framework protects our mental states. Mental self-determination slips through gaps in medical privacy laws, consumer protection, constitutional rights, and international human rights. The FDA can clear a brain implant as safe and effective, but it has no say over what happens to your brain data once it’s collected. Courts have yet to grapple with Fourth Amendment safeguards against unreasonable searches when the “search” involves reading brain signals, or to decide whether the First Amendment covers speech produced by melding human thought and AI.

We need new legal approaches for this unprecedented frontier. Building on Jack Balkin’s concept of information fiduciaries, I’ve argued for fiduciary duties for AI models integrated with brain-computer interfaces. Just as doctors have special duties to act in your best interest, companies that can literally decode your mental states should have heightened responsibilities. AI systems connected to our brains should be legally required to serve the person whose brain they’re reading, not shareholders or advertisers.

Here at the UNESCO meeting, we’re working toward similar goals at a global level. We’re negotiating guidelines that would enable people like Smith to enjoy the benefits of neurotechnology while safeguarding against potential misuses. The recommendations balance the right to access technology that enhances autonomy with protections against intrusions into mental privacy and freedom of thought.

Why should you care if you don’t have a brain implant? Because brain-sensing technology isn’t staying in medical labs. It’s headed for your ears, your wrists, and your glasses. The same companies that track your clicks will soon be able to read your brainwaves while you watch videos, listen to music, or play games. The question isn’t if your brain data will be collected, but when, by whom, and with what protections.

For centuries, our minds have been our ultimate sanctuary—the one space where our thoughts remain truly our own. As that boundary erodes, we must decide, collectively and quickly, what rights should protect our cognitive liberty in this new landscape.