The next time you're sitting on that paper-covered exam table, your doctor won't tell you the most important fact about your visit: They're weighing their decades of training against an AI that's four times more accurate, 80% (AI) to their 20% (human) on complex cases. And they can't tell you whether they trusted their own experience or a machine's, with your life.
That four-fold difference isn't a projection or possibility. It's the result of Microsoft's study on 56 of medicine's hardest cases: the diagnostic puzzles regularly published by the New England Journal of Medicine that humble even veteran physicians. When tested head-to-head, the AI didn't just edge out human performance. It exposed a truth that could change everything about health care and the role of medical providers within it.
In fact, it's already happening. According to a new American Medical Association survey, 66% of physicians now use AI, a 78% jump over the prior year. Two out of three doctors are making decisions with technology that dramatically outperforms them. Yet our entire legal system still assumes that the "reasonable physician" standard means a human physician. Should it?
The Paradox
Imagine a patient presenting in a doctor's office with unusual symptoms that the doctor can't quite place. Plugging the information into an AI system, the machine predicts the patient has a rare autoimmune condition with 80% confidence. The doctor, trained to recognize common diagnoses, suspects something simpler: a common cold with lingering post-viral symptoms. Which prediction governs the patient's treatment? And who's liable if either is wrong?
In its April 2024 guidance, the Federation of State Medical Boards (FSMB), the organization that provides legislative advocacy for state medical boards, tried to answer this question. Its proposed solution: physicians remain fully responsible for all AI-assisted decisions and must exercise independent clinical judgment when using AI tools.
Sit with that idea. Does it really make sense for physicians to assume complete liability for technology that outperforms them by a factor of four? And then to defend decisions made by systems so complex that physicians can't even explain how they arrived at their recommendations, let alone why their own clinical judgment fell short?
An MIT Media Lab study this month adds another layer to this conundrum. Regular AI use creates temporary but measurable changes in brain activity. Participants using ChatGPT showed "weaker neural connectivity and under-engagement of alpha and beta networks." If regular AI use measurably alters cognitive patterns, as that study suggests, how can physicians maintain the "independent judgment" the FSMB requires?
At Aspen Ideas Health last week, where I served on a panel about the Artificial Intelligence Revolution, this collision between capability and liability dominated the conversation. When I asked the packed room of healthcare leaders how many had used ChatGPT for medical questions, hands shot up everywhere. When I then asked about uploading medical records, the nervous laughter said everything.
My fellow panelist Micky Tripathi from Mayo Clinic underscored the gridlock. Mayo has AI tools ready to deploy today that could help patients immediately. But they can't use them. Their governance processes can't evaluate technology that evolves monthly while liability questions remain unanswered.
Innovation vs. Regulation
During our panel, Karen DeSalvo from Google captured the fundamental mismatch. Innovation races at Silicon Valley speed. Regulation crawls at government pace. By the time lawmakers and courts answer today's questions, the technology will be three generations ahead.
In the Q&A that followed, a physician asked about using AI to solve healthcare's most maddening problem: fragmented medical records across different systems. The technology exists today to unify your entire medical history instantly. But the regulatory framework? The liability questions? Unanswered. So fragmentation continues, information stays siloed, and patients suffer while experts, policymakers, and lawyers debate.
These unanswered questions multiply daily. Will malpractice plaintiffs argue that failing to use an 80% accurate tool equals negligence? Or will they claim that following AI recommendations, even demonstrably superior ones, breaches the standard of care by abandoning independent judgment?
The FSMB suggests falling back on "established ethical principles," but how can we rely on ethical frameworks designed for human-only medicine to solve a fundamentally new problem in which the numbers show humans are vastly outperformed? It's like trying to apply horse-and-buggy traffic laws to highways full of cars. The old framework simply doesn't fit the new reality.
Some propose making AI companies liable, but these are just tools, sophisticated calculators. Others suggest new insurance frameworks, but you can't price risk that hasn't been defined. The boldest proposals would abandon fault-based malpractice entirely for AI-assisted care, replacing it with no-fault compensation funded by AI's cost savings: healthcare providers and hospitals would pay into a fund, AI companies might contribute based on how their tools are used, insurers could be required to redirect a share of malpractice premiums, and government subsidies could draw on overall health savings.
But none of these proposals address the fact that we're asking physicians to maintain skills that machines have already surpassed while simultaneously warning them against the "dependence and skill degradation" that comes from using those very machines and the superior diagnostic capabilities they provide.
Your next doctor’s visit will happen in this gap between mathematical reality and legal theory. Your physician will choose between human judgment and machine analysis with undefined consequences. Theyâll make decisions that could save your life or end their careerâor both.
Courts havenât yet ruled whether thereâs a duty to use AI when itâs superior to human judgment. They haven’t decided if following AI recommendations while maintaining independent judgment is even possible. These are daily realities facing physicians, where doing the medically right thing and the legally safe thing are starting to conflict.Â
Microsoft didn't intend to expose this crisis. They simply built something that diagnoses complex cases far better than physicians do. But that achievement revealed an uncomfortable truth: Medical malpractice law assumes human judgment defines reasonable care. When machines obliterate that assumption, the question isn't whether the framework will change. It's how many patients will be affected before it does.
The head of Microsoft's AI tech unit, Mustafa Suleyman, predicts that these systems will be "almost error-free in the next 5-10 years." Will the law catch up to the math?
The AI will see you now. Whether the law will protect anyone in the exam room remains dangerously unclear.