Are you coming down off the Olympics hype and eagerly awaiting college basketball’s March Madness? Never fear: STAT Madness is here to fill that adrenaline gap! Take a look at the scientific innovations battling for the STAT crown, and vote for your favorites early and often.
There are already signs that the idea of an “AI doctor” is going to be a big theme of 2026. But there are many serious questions to contend with on the road to that inevitability.
In discussions about the Doctronic Utah AI pilot and the AI Prognosis evaluation of the recent Nature Medicine LLM piece, I’ve seen multiple people raise the same argument: Doctors make almost 800,000 diagnostic errors a year in the U.S. and aren’t held responsible for them, so why should AI have to be perfect before we use it?
In my view, this isn’t quite right. Those advancing this argument are correct that physicians made harmful mistakes before AI, and AI will inevitably produce medical mistakes of its own. But I think the big questions we need to wrestle with are more along the lines of:
(If you disagree, or have other questions to add, reply to this email! I want to hear from you!)
I anticipate approaching these questions from different angles over the course of this year. To begin thinking about how the industry currently assesses the risk of doctors making mistakes, I turned to someone who has a financial stake in this: a medical malpractice insurer.
Jared Kaplan is the CEO of Indigo, a medical malpractice insurer that’s particularly attuned to AI because it uses AI to make its own coverage decisions.
Whereas a traditional malpractice insurer will treat, say, a group of nephrologists in Boise as if they all have the same risk profile, said Kaplan, Indigo matches a provider’s identification number to about 200 data points and calculates their risk based not only on their history, geography, and specialty, but also on the patients they’re seeing, the procedures they’re doing, and the medications they prescribe.
This saves doctors the hassle of filling out pages-long questionnaires and speeds up the time it takes to get a quote to a doctor, he said. Indigo is betting that its more customized algorithm will be better at identifying doctors who will have less-frequent claims, and also that its efficiency will allow it to offer lower premiums. Currently, AI is underwriting the risk and pricing of 25% of Indigo’s submissions, and that number will be 50% by the end of the year, he said.
Kaplan thinks that AI’s effect on the doctor error rate will manifest in two ways. Specific tools may drive better judgment or outcomes more than others; if we could measure that, insurers could reward doctors for using those tools, the way the home and auto insurance industries already do. But also, if doctors really are getting less error-prone as they adopt AI, that will show up over time in overall outcomes no matter how we measure it.
Read more in our conversation below about how the entrance of AI tools and autonomous AI may change how malpractice insurers think about risk.
I am especially interested in malpractice as we’re thinking about the “AI doctor.” Where does that liability go? How are folks thinking about it? Right now the model seems to be that the doctor using the AI takes responsibility.
So you’ve got this AI doc in Utah…Our personal point of view on that is, it is a product liability question in the way they’re pursuing the model today — meaning if there’s a failure there, it’s not medical malpractice per se; it is “A medical device [has] failed.” And there’s an established product liability market that takes care of those types of situations, so it’s relatively a one-off [compared to] where we think the space is going to go.
The space is going to look like what has happened in homeowners’ and what has happened in autonomous driving, to a certain degree. Fundamentally, we believe if you have a burglar alarm, if you have fire suppressant systems, if you have audio and video security, your home is safer; you deserve a discount on your home insurance. At auto, Lemonade just announced they are going to discount auto insurance premiums by 50% if you drive a Tesla and it is on the full self-driving mode.
What you’re going to see in a physician’s office, though, is the beautiful child of both of those. Because physicians are going to be armed with note-taking tools, they’re going to be armed with standard operating procedure tools that help them understand how to diagnose and what the proper care pathways are. They’re going to have diagnostic tools so they can better read results and ensure that they don’t miss anything. We will see who the players are that have the best data that prove that they can reduce errors and omissions with those tools, and we will reward the physician groups with lower insurance premiums.
We’re very nascent right now on who’s got the data at scale with statistical significance that they are providing less error-prone care. But […] Our philosophy at Indigo is that [there’s] no question: Errors and omissions will go down in medical offices and therefore medical malpractice insurance should get less costly over time.
You mentioned the Utah AI pilot — [Doctronic] said specifically they got malpractice insurance for this. I am curious what you think about that proposition — whoever did that, what are they measuring?
I’d love to see the policy. I’m skeptical it really is malpractice insurance… My guess, but I’m completely speculating here, it looks like a product liability [policy], looks [more] like a medical-devices manufacturer’s typical insurance policy, than it does a traditional medical malpractice policy where the care oversight is a big part of assessing the risk. I could be wrong, but we haven’t seen anything like that in the marketplace. And so that’s a lot of marketing, probably, over substance.
…And the use case there is relatively limited too, right? It’s just on refill prescriptions. And I think that they have got human oversight for now […] I mean, it looked amazing as a PR announcement, but the devil is always in the details. Utah is awesome as far as being forward-thinking and having the sandbox to create the areas to do this, but testing it here is a long ways away from a national product that is well-accepted.
As I understand it, in the Utah pilot, a Doctronic doctor’s name is on the script that the AI renews, and Utah has arranged it so that is okay. What do you think about that from a malpractice standpoint, of the doctor’s name being on this thing that the doctor did not do and perhaps does not know is occurring?
Think of that as like a medical director. We see this all the time — a medical director who supervises a group but is not intimately or directly involved in patient care. Those are always much riskier risks, right, because of that gap of information between what that person is doing and who they’re supervising.
Similarly here, if someone is overseeing it — by the way, if there is an individual on the policy, then it very well could be a medical malpractice policy that looks like a medical director policy…that would make a lot more sense to me, actually, if it looked like a more traditional medical directorship medical malpractice policy.
What do you think about what that looks like for the physicians whose names are on the prescriptions? Like for their malpractice insurers — how does that work up the chain if your name is on this but you weren’t really aware that this was happening?
I mean, you wouldn’t insure something if — well, unaware — I mean, you are unaware here, but you would think that there are very clear rules and very clear kick-outs. And so you would go through that process of really understanding what that person is able to influence.
But in a situation where there’s no influence and there is not this feedback loop and there’s real gaps of how that person can direct the “care,” you wouldn’t find many insurers that would put a traditional medical malpractice policy on top of that.
So I’m going to go back to what I originally said, which is “Show us the policy,” because I guarantee you it’s not what we think it looks like.
Song of the Week: “World Within My Room” by the Original Broadway Cast of “Maybe Happy Ending”
I finally saw “Maybe Happy Ending” this past weekend; bummed I didn’t see Helen J. Shen in her role as Claire before she ended her run, but glad to catch Darren Criss as Oliver. My hot take: This musical about two South Korean “helper bots” is actually “Waiting for Godot” wrapped in some “La La Land” aesthetics.
Characters are waiting for a mysterious man! It brings up questions about what it means to be human! There’s ambiguity! There’s cyclicity! There’s jazz! This song from the beginning of the very good musical is a good introduction to its sonic and narrative world. If you’ve seen it, let me know if you agree that the show was influenced by the Theatre of the Absurd.
Image from original article.