By Nita Farahany
Bradford Smith thinks, and AI speaks for him. Every day.
This isn't some far-off science fiction scenario; it's happening right now. Smith, who has ALS and can't speak, uses his Neuralink brain implant to think what he wants to say. Then Elon Musk's Grok AI turns those thoughts into polished sentences. It's amazing and unsettling.
I'm writing from Paris, where I'm participating in UNESCO's intergovernmental meeting to finalize global standards on neurotechnology ethics. The timing feels surreal. Here we are discussing abstract principles about mental privacy, freedom of thought, and autonomy while Bradford Smith is living the reality back home. The U.S. Constitution protects freedom of speech, but no one imagined we'd one day need to protect our thoughts themselves.
Smith is the third person in Neuralink's human trial, but the first with ALS and the first who is completely nonverbal. What makes his case extraordinary is his use of AI chatbots to help formulate his responses.
As Smith wrote in his first message on X: "I am the 3rd person in the world to receive the @Neuralink brain implant. 1st with ALS. 1st Nonverbal. I am typing this with my brain. It is my primary communication. Ask me anything! I will answer at least all verified users!"
But here's where it gets complicated. When users noted the sophisticated wording of his replies, complete with literary devices and perfect punctuation, Smith confirmed he was using Grok AI to help draft his responses. "I asked Grok to use that text to give full answers to the questions," he explained in a message to MIT Technology Review. "I am responsible for the content, but I used AI to draft."
This raises profound questions that go beyond the technical marvel of the implant itself: When AI completes your thoughts, whose thoughts are they? Smith controls the cursor with his brain and selects the AI's suggestions, but the precise wording isn't fully his doing. Yet dismissing this as "not really him" would rob him of the communicative agency he's fought so hard to regain.
How do we assess authenticity? The AI might introduce subtle biases or stylistic elements that Smith wouldn't naturally use. But then again, all communication technologies, from email to text messages, shape how we express ourselves. Is this fundamentally different in kind, or merely in degree?
What happens when hallucinations occur? If Smith attempts to communicate his medical preferences, and the AI fabricates treatment details he never mentioned, the consequences could be life-altering.
Who owns and controls his mental data? Smith's brain signals are being processed through proprietary systems owned by two of Elon Musk's companies. What rights does he have to his own neural information?
These aren't just theoretical concerns. Smith's communication passes through multiple layers of corporate technology before reaching another human: Neuralink's implant, a MacBook processor, and Musk's AI chatbot. This creates a novel form of intermediated speech. And currently, the legal status of brain data remains dangerously undefined.
In an interview on Neura Pod, Smith describes being "basically Batman" and "stuck in a dark room" before the implant: dependent on an eye-tracking system that only worked in low light. Now he can communicate in brighter spaces, even outdoors, and with greater speed. For Smith, this technology is truly liberating.
But his interactions on X also point to a troubling future: surveillance capitalism applied to our most intimate domain. Imagine if your most private thoughts became just another data source to be mined, like your clicks and views are today. You think about feeling sad and suddenly see ads for antidepressants. You wonder about a career change, and your insurance rates subtly increase. This sounds paranoid until you realize it's just the brain-data version of what's already happening with our online activity.
Smith himself seems aware of these tensions, telling MIT Tech Review that he'd like to develop a more "personal" large language model that "trains on my past writing and answers with my opinions and style." His vision is a future racing toward us. In March, Synchron unveiled a partnership with NVIDIA to create Chiral, a foundation model of human cognition: think LLMs trained on brain data. They demonstrated how an AI-enabled BCI could work with the Apple Vision Pro, allowing users to control their digital environments using brain signals. These advances hold great promise for restoring autonomy to individuals, if the technologies serve users rather than exploiting them.
The problem? Apart from a handful of new state laws on "neural data," no legal framework protects our mental states. Mental self-determination slips through gaps in medical privacy laws, consumer protection, constitutional rights, and international human rights. The FDA can clear a brain implant as safe and effective, but it has no say over what happens to your brain data once it's collected. Courts have yet to grapple with Fourth Amendment safeguards against unreasonable searches when the "search" involves reading brain signals, or to decide whether the First Amendment covers speech produced by melding human thought and AI.
We need new legal approaches for this unprecedented frontier. Building on Jack Balkin's information fiduciaries concept, I've argued for fiduciary duties for AI models that are integrated with brain-computer interfaces. Just as doctors have special duties to act in your best interest, companies that can literally decode your mental states should have heightened responsibilities. AI systems connected to our brains should be legally required to serve the person whose brain they're reading, not shareholders or advertisers.
Here at the UNESCO meeting, we're working toward similar goals at a global level. We're negotiating guidelines that would enable people like Smith to enjoy the benefits of neurotechnology while safeguarding against potential misuses. The recommendations balance the right to access technology that enhances autonomy with protections against intrusions into mental privacy and freedom of thought.
Why should you care if you don't have a brain implant? Because brain-sensing technology isn't staying in medical labs. It's headed for your ears, your wrists, and your glasses. The same companies that track your clicks will soon be able to read your brainwaves while you watch videos, listen to music, or play games. The question isn't if your brain data will be collected; it's when, by whom, and with what protections.
For centuries, our minds have been our ultimate sanctuary, the one space where our thoughts remain truly our own. As that boundary erodes, we must decide, collectively and quickly, what rights should protect our cognitive liberty in this new landscape.