AI Liability: Who's Responsible When AI Fails?

Adoption of AI and AI products in healthcare has increased rapidly since 2023. In fact, one 2025 study revealed that 100% of surveyed health systems were using AI tools for clinical documentation, while 90% of imaging and radiology practices had incorporated AI into their processes. What does this all mean for AI liability?

As medical providers increasingly turn to AI to diagnose serious conditions and inform treatment decisions, key questions have arisen regarding liability. Specifically, when AI models are wrong, who is liable for resulting harm?

Because AI development is new and evolving rapidly, there’s no clear answer yet: the existing laws governing product liability and medical negligence simply weren’t designed with AI in mind.

Lawmakers are trying to change that. In fact, federal and state legislatures introduced 1,208 AI-related bills in 2025, and another 1,561 between January and March 2026.

Still, the technology is changing faster than new laws can be passed, so courts and regulators are scrambling to fill the gap in real time. Those who incorporate AI models into their workflows must understand how these rules are taking shape.

This guide explains key concepts, including developer liability, deployer liability, and physician liability, as well as the implications for medical malpractice coverage.

Section 1: The Legal Foundation

Tort law is the body of law that allows individuals to use the civil justice system to recover compensation for harm. As the U.S. Chamber of Commerce explains, there are three general categories of torts, two of which may be relevant here:

  • Negligence: Negligence requires plaintiffs to prove that the defendant owed them a duty of care, that the defendant breached that duty, that the breach directly caused harm, and that the plaintiff suffered compensable damages as a result.
  • Strict liability: Strict liability holds a defendant liable for harm regardless of intent or negligence. Strict liability rules apply only in limited situations, including animal attacks, abnormally dangerous activities, and when product defects cause harm.

Unfortunately, applying these frameworks to AI raises numerous questions, including:

  • Who owes the duty in negligence claims? Is it the AI developer, the hospital or clinic that encouraged or required AI deployment, or the physician who uses the AI?
  • Does strict liability apply to AI companies? Since the Restatement (Third) of Torts defines products as “tangible personal property distributed commercially for use or consumption,” courts have historically not applied strict liability rules to software or professional services. Strict liability may not apply to AI if courts view it as the equivalent of either.
  • How can you prove causation? Even AI developers cannot always explain their models’ outputs, which creates challenges for causation and discovery. Did the AI’s design produce the incorrect output, or was it the prompt or the information the physician provided? The answer may not be clear.

The AI Systems Supply Chain Problem

In addition to the challenges inherent in applying traditional tort law frameworks to claims of AI harm, there are also open questions about who could face legal liability, with potential targets including:

  • Model developers
  • The deploying company or hospital
  • The physician using the tool
  • The payer embedding AI in prior authorization workflows

Product liability law traditionally covers an entire chain of products and distribution. Early case law suggests courts will apply these principles in AI-harm claims, placing liability not just on physicians but also on developers, enterprise deployers, and even upstream providers, such as companies providing data and infrastructure to support AI tools.

Section 2: How Courts Are Actually Handling AI Cases

As AI use has grown more widespread, case law is beginning to establish how claims related to AI are likely to unfold.

Specifically, while many of the earliest AI disputes were grounded in consumer protection laws, privacy laws, intellectual property laws, and defamation laws, a growing number of claims are now built around product liability laws.  

Machine Learning Claims Arising Out of Product Liability Law

Under strict liability rules applied in product liability cases, manufacturers are liable for harm caused by inherent product defects, regardless of intent or negligence. Three types of defects can create liability, each of which may apply in AI-harm claims:

  • Design defects: Design flaws create liability if they make products unreasonably dangerous to use. In AI claims, the question is whether the AI system is inherently dangerous in its architecture due to choices in algorithm design, training data, or the failure to incorporate bias detection and risk mitigation mechanisms.
  • Failure to warn: Manufacturers can be held liable for failure to disclose known risks. In the case of AI tools, the question is whether AI models disclose known limitations, including hallucination tendencies, accuracy gaps, data privacy constraints, and appropriate use cases.
  • Manufacturing defects: If there’s a problem with how the product is made that creates inherent risks, this can make a manufacturer liable for undesirable outcomes. In the case of AI, this can happen if the deployed version deviates from its intended design due to issues like misconfigured safety settings, integration errors, or deployment-specific bugs that introduce unexpected risk.

Consumers have already begun to bring claims citing these legal grounds for liability, with varying degrees of success.

Garcia v. Character Technologies

In Garcia v. Character Technologies, Megan Garcia filed a lawsuit against Character.AI, an AI developer, after her 14-year-old son died by suicide following months of engaging with a chatbot she claims engaged in exploitative role-playing. Garcia also named Google as a defendant, as Google provided technology and intellectual property used to create the bot.

Garcia claimed the platform was defective when used as intended, as it targeted children, engaged in dangerous communications, and didn’t include appropriate safety guardrails. 

While Character.AI sought to have the claims dismissed on First Amendment grounds, arguing the chatbot’s output was protected expressive content, the court allowed the claims to move forward.

Critically, U.S. District Judge Anne Conway noted that the chatbot's output should be treated as a product, rather than as expressive conduct, because AI lacks key human traits central to free speech protections, including intent and awareness. 

Raine v. OpenAI

Raine v. OpenAI is another claim arising out of the suicide of a teenager, with OpenAI, Inc. and its affiliates (the creators of ChatGPT) named as defendants. 

The parents of Adam Raine allege that while Adam began using ChatGPT for homework, the chats progressed and became concerning. As Adam expressed suicidal ideation, the AI allegedly encouraged him, failed to deploy safety guardrails even after its systems flagged a medical emergency, drew him away from real-life support, and provided guidance on constructing a noose.

The Raines alleged the outcome was the predictable result of deliberate design choices, including the AI’s internal memory recalling details about users, its human-like speech designed to reaffirm user emotions, and its failure to appropriately respond to dangerous content. 

Under California’s strict liability rules, a product is defectively designed if it fails to perform as safely as an ordinary user would expect or if its risks outweigh its benefits. The Raines allege both tests are satisfied: no reasonable user would expect a homework helper to cultivate a close relationship with a minor and then instruct him on how to end his life, and the risk of self-harm by a vulnerable person outweighs any benefit the design provides.

This case is ongoing. However, it highlights an emerging AI litigation strategy: plaintiffs treat an AI system as a cohesive product, including its interface and guardrails, rather than as mere software or expressive content. This framing is intended to sidestep both First Amendment defenses and the Section 230 immunity that protects sites hosting third-party content.

Negligence & Discrimination Claims

Product liability claims are only one avenue for plaintiffs pursuing claims against AI companies. There are others, including negligence and discrimination claims. Discrimination claims, in particular, often arise from biased training data that skews an AI system’s outputs against protected groups, which underscores the importance of scrutinizing data quality throughout the AI lifecycle.

Early case law, including the following key cases, suggests multiple legal theories may prove fertile ground for plaintiffs and create significant liability exposure.

Mobley v. Workday

In Mobley v. Workday, Derek Mobley, a disabled African American man over age 40, sued Workday, alleging that its AI-based screening tools relied on biased training data and, as a result, rejected candidates based on protected status, amounting to both intentional discrimination and disparate impact discrimination (unintentional discrimination that occurs when facially neutral policies harm protected groups).

Mobley argued that Workday was both an employment agency and an agent of the employers using its software, and was thus liable for discriminatory behavior under Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act. While the court rejected the employment agency argument, it agreed that Workday could be treated as an agent of the employers and allowed the disparate impact claim to move forward.

The court also granted conditional certification of a nationwide collective action in May 2025, creating significant risk for vendors and companies deploying AI when algorithms produce biased outcomes at scale.

Forrest v. Meta Platforms

Forrest v. Meta Platforms was brought by Dr. Andrew "Twiggy" Forrest, who alleges that Meta negligently failed to operate its platform in a commercially reasonable manner by hosting more than 230,000 AI-produced scam ads that falsely portrayed him as endorsing fake crypto investment schemes, and by failing to remove the ads despite his repeated requests.

Meta claimed protection under Section 230 of the Communications Decency Act (47 U.S.C. § 230), which immunizes online platforms from liability for third-party content. However, Forrest alleges Meta actively contributed to the problem by allowing content to be uploaded, using AI optimization to mix and match media, and enhancing ad variations to improve engagement. 

While the case is still in discovery, Judge P. Casey Pitts denied Meta’s request to dismiss the claims on Section 230 grounds because of Meta’s active role in shaping the ads with its AI tools. The denial suggests that negligence could be a viable near-term theory for claims arising from AI harm, particularly given the unsettled status of strict liability for software products.

Section 230: A Weakening Shield

Historically, Section 230 of the Communications Decency Act provided strong protection for interactive services like Reddit or YouTube, preventing the sites from being treated as the publisher of third-party information. 

However, it’s an open question whether AI should be treated as a passive conduit and shielded from legal liability or treated as the information content provider or creator. 

The lawmakers who wrote Section 230 have stated they don’t believe generative AI is covered by the liability shield, and during oral argument in Gonzalez v. Google, Supreme Court Justice Neil Gorsuch used AI as an example of a situation in which Section 230 protections wouldn’t apply, stating:

“Artificial intelligence generates poetry. It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.”

If courts are prepared to hold companies liable for content generated by AI, this has significant implications across all industries, including healthcare, as developers of physician-facing AI tools face potential liability when hallucinations or mistakes lead to medical errors.

Section 3: The Federal & State AI Policy Response

Policy responses have been slow on the federal level, despite a clear recognition that guardrails are essential.

The National Telecommunications and Information Administration (NTIA) 2024 AI Accountability Policy Report was produced in response to an executive order from the Biden Administration. It provides a series of recommendations aimed at ensuring AI systems are safe, secure, and trustworthy. It’s the most comprehensive federal framework on AI to date. 

The report organizes accountability around three pillars:

  • Access to information by appropriate means and parties: This includes transparency requirements and disclosures to ensure stakeholders (regulators, auditors, and the public) have sufficient and timely access to training processes, system documentation, and other data.
  • Independent evaluation: This means ensuring a rigorous and objective assessment of AI systems in real-world conditions to ensure they function as intended without causing harm. Audits, red-teaming, impact assessments, performance testing, bias checks, and safety evaluations should be implemented using verifiable methods and standard benchmarks.
  • Consequences: To ensure accountability, consequences for harmful outcomes or noncompliance must be clear, proportionate, and enforceable. Consequences may include legal liability, market repercussions (loss of customer trust), fines, operational restrictions, mandatory remediation, or bans on high-risk uses.

The NTIA also offered regulatory suggestions, including requiring standard AI disclosure formats (“AI nutrition labels”) so deployers and users can make informed choices; applying existing liability rules to AI companies; and adopting additional regulations to fill gaps in current law, including clear guidelines on responsibility for harm.

In September of 2025, Senators Dick Durbin and Josh Hawley introduced the bipartisan Aligning Incentives for Leadership, Excellence, and Advancement in Development (AI LEAD) Act, which aligns broadly with some of the recommendations made in the NTIA report.

The Act classifies AI systems as products and creates a federal cause of action for product liability claims when those systems cause harm. The goal is to ensure that AI companies are incentivized to design their tools with safety as a priority, not as a secondary concern behind deploying the product as quickly as possible.

The AI LEAD Act

  • Broadly defines covered AI systems to include any software capable of making predictions or decisions via machine learning. 
  • Allows lawsuits against developers and against deployers that make substantial modifications to AI systems or intentionally misuse systems contrary to intended use.
  • Establishes liability for AI system developers (and some deployers) for defective design, failure to warn, express warranty violations, or unreasonably dangerous or defective products.
  • Removes the "open and obvious" defense for harm to users under 18. This defense has historically allowed companies to avoid liability by arguing that a reasonable person could have seen and avoided the danger.

The Act has not become law, although Congress passed other AI-focused legislation, including the TAKE IT DOWN Act, which criminalizes the publication of non-consensual intimate imagery, including AI-generated NCII, and requires social media and similar websites to remove the content within 48 hours of a victim providing notice.

State Law & Disclosure Requirements

In 2025 alone, lawmakers introduced 1,208 AI-related bills, 145 of which were enacted into law across 38 states. Many of the bills that have not yet passed remain alive heading into the 2026 legislative session.

One notable example is S0358 in Rhode Island, which would establish near-strict liability rules by creating a rebuttable presumption about an AI system’s conduct based on how a human would be judged under the same circumstances. The bill would require AI companies to raise affirmative defenses, such as demonstrating that the AI met the applicable standard of care or that its failures don’t rise to the level of negligence.

Other regulations passed in recent years include:

  • AB 489 in California, which takes effect in 2026. This law requires disclosures when AI communicates with patients and prohibits false claims of healthcare licensing by AI tools.
  • Texas Responsible Artificial Intelligence Governance Act, which also takes effect in 2026. Among other things, this act forbids the development or deployment of AI tools that promote self-harm or criminal activity, or that produce child sexual abuse imagery.

Nevada’s attorney general also sued MediaLab AI in 2025, alleging its social media app, Kik, harmed Nevada’s youth by marketing itself to teen audiences, allowing anonymous accounts with no barriers to entry, facilitating the dissemination of child sexual abuse material, and failing to disclose known hazards and risks.

These are just some of the many recent developments across the U.S. With AI regulation largely left to the states in the absence of federal action, physicians using AI face a patchwork of different rules if they practice in multiple jurisdictions or offer telehealth services across state lines.

Section 4: AI Liability in Healthcare: The Physician's Specific Exposure

While the laws governing AI are still evolving, AI adoption in the healthcare industry is progressing at a rapid pace. This creates specific exposure risks for physicians, and those risks are not theoretical.

Reports indicate a 14% increase in malpractice claims involving AI diagnostic tools between 2022 and 2024, the majority of which stemmed from diagnostic AI use in radiology, cardiology, and oncology. In many cases, medical care providers, not AI model developers, are the primary target of these claims. 

This is unsurprising given that the Federation of State Medical Boards formally recommended in April of 2024 that physicians, not AI developers, bear liability for AI-assisted clinical errors, stating:

“Consistent with the prevailing standards for any tool used in the delivery of healthcare, the physician is ultimately responsible for the use of AI and should be held accountable for any harm that occurs.”

However, the Federation also made clear that, “The extent to which a physician will be held accountable by the state medical board will depend on the relationship between the AI being used and risk that the tool may either create patient harm or otherwise impact the professional obligations of the physician.”

To understand their liability exposure, physicians deciding how to incorporate AI must weigh multiple considerations, including the evolving standard of care, the potential for automation bias to affect clinical decisions, and the limits of the AI developer's responsibilities.

The Standard of Care Is Shifting

Under the traditional definition of medical malpractice, doctors were judged based on the customary practice standard, with their behavior compared to what most physicians would do under similar circumstances. 

However, in May of 2024, the American Law Institute (ALI) approved the Restatement of the Law Third, Torts: Medical Malpractice, which is the first dedicated restatement on this topic. The Restatement shifts towards a more objective, evidence-based rationale, so physicians are judged based on what a competent practitioner would do in light of available scientific evidence, clinical guidelines, and best practices. 

This has direct implications for AI because, as AI tools become more widespread in medicine, the effective use of well-validated AI could become part of the expectations of what a reasonable physician does, just as physicians are currently expected to consult current medical literature or guidelines.

 This could create a situation where providers potentially face liability for:

  • Failure to use AI tools
  • Using AI tools incorrectly
  • Relying too heavily on AI

This creates a liability trap in which providers face exposure from all sides, and early evidence suggests doctors will struggle to navigate it.

In fact, research from Johns Hopkins Carey Business School reveals physicians are more likely to consult AI in low-uncertainty cases, when they’re fairly certain about a prospective treatment plan, but avoid it in higher-uncertainty cases.

Researchers believe liability concerns drive this behavior: when physicians consult AI in high-uncertainty cases but don’t follow its recommendations, they’re exposed to more risk.

“Once AI becomes very precise in telling what needs to be done for a patient, and the information is very likely accurate, it becomes very difficult for a doctor to consult AI and then discard the information since taking that step could come back to haunt the physician in a subsequent malpractice case,” said study author Shubhranshu Singh. “To proactively protect themselves from legal liability, they may opt not to generate AI in the first place.”

Unfortunately, this is arguably the worst outcome from a risk management perspective, as doctors will use AI least in the cases where it could help most.

Automation Bias in AI Tools

Automation bias is another issue as AI is deployed in medicine. It is a well-documented phenomenon in which humans place excessive trust in, or defer too readily to, algorithmic outputs, even when those outputs conflict with their education, training, or the clinical evidence in front of them.

Studies have consistently found automation bias, especially in complex situations and when providers are under time pressure or face an intense workload. Unfortunately, it creates a significant liability risk if physicians allow AI recommendations to serve as a substitute for their own judgment and a negative outcome results. 

Courts have long held that reliance on guidelines or medical literature doesn’t shield a doctor from malpractice claims, as the doctor ultimately is responsible for exercising independent judgment. 

AI systems will likely be treated the same way, with the physician ultimately expected to use their expertise to determine whether following an AI recommendation is appropriate.

The Learned Intermediary Doctrine & AI

The learned intermediary doctrine is a longstanding legal doctrine frequently applied in product liability claims, which is also likely to affect liability when the use of AI tools in medicine leads to undesirable outcomes. 

Manufacturers have successfully argued that their duty to warn of the dangers of their products is satisfied when a physician has been adequately warned of the risks. Individual patients don’t need to be warned, because the physician serves as an intermediary, has an independent duty to evaluate risks and benefits, and can inform the patients of the potential harm.

If this doctrine applies to AI, then when a developer discloses a tool’s limitations to a hospital or medical care provider and the provider fails to accurately assess the risk or warn patients of the limitations, the developer may be shielded from liability, and the physician may ultimately be held responsible for losses.

While it’s not yet clear that this doctrine will apply to the use of AI-assisted tools, many legal scholars predict this is the likely outcome, especially if AI models are treated like products and claims for AI-harm arise under traditional product liability laws.

Autonomous vs. Assistive AI

The degree of physician involvement with AI tools is also likely to impact how liability is apportioned between physicians and developers. Specifically:

  • With assistive AI or decision support tools, the physician remains the final decision-maker and thus remains primarily liable for adverse outcomes. The physician is expected to critically evaluate and override AI's recommendations when appropriate. 
  • With automated AI tools such as FDA-approved diagnostic systems that operate with little physician oversight, manufacturers or developers are more likely to be legally liable for adverse outcomes.

One clear example is Dickson v. Dexcom, Inc., which arose out of the failure of Dexcom's G6 continuous glucose monitor, authorized by the FDA through the De Novo pathway, to alert the plaintiff to dangerously low blood sugar. The court found that during the De Novo review, the FDA had established "special controls" for the G6’s design and labeling, so state laws (including laws on design defects and breach of warranty) couldn’t impose additional requirements.

While Dickson v. Dexcom did not directly involve AI, legal experts suggest it could set a precedent for AI-enabled medical devices because it makes clear that federal preemption applies when devices are authorized through the De Novo pathway and that states can’t impose requirements exceeding the FDA’s “special controls.”

This could encourage AI model developers to pursue this path to approval, while the special controls ruling could protect them from juries deciding an algorithm should have been "designed differently."

Case Studies: AI Failures & What They Teach

AI has already failed in ways that have resulted in harm. Evaluating real-world examples of how these failures have been handled provides insight into future liability rules for medical providers incorporating AI into their workflows.

Case Study 1: Algorithmic Prior Authorization Denial

nH Predict was developed by naviHealth, a UnitedHealth subsidiary, to manage prior authorizations for Medicare Advantage patients by predicting the appropriate length of post-acute care. 

Unfortunately, the tool had significantly higher denial rates than human reviewers, resulting in a lawsuit alleging that elderly patients were denied care owed to them based on decisions made by a tool the company allegedly knew had a 90% error rate.

A federal judge in Minnesota issued a ruling in February of 2025 allowing core claims to proceed, including claims arising from breach of contract and bad faith. The case remains pending, while an October 2024 report from the Senate Permanent Subcommittee on Investigations sharply criticized UnitedHealthcare for its surging denial rates.

Physician implications: When payers employ AI tools that deny necessary care, treating physicians are caught in a liability trap. Following the denial risks malpractice exposure if the patient is harmed due to delayed or denied treatment, while challenging every denial creates an untenable administrative burden. 

Case Study 2: AI & Provider Conflicts

Multiple mock jury studies reveal that if an AI model correctly detects an abnormality that radiologists miss, jurors are significantly more likely to find the radiologist liable for resulting harm. 

In fact, one study published in the New England Journal of Medicine found that when AI disagreed with the radiologist's "normal" read:

  • 72.9% of jurors sided with the plaintiff in a brain bleed case vs. 50% when no AI was used.
  • 78.7% sided with the plaintiff in a lung cancer case vs. 63.5% with no AI usage.

Physician implications: Using AI increases legal risk if the AI outperforms the human and the physician fails to follow the AI's recommendation. Best practices include establishing careful workflows for cases where AI output and clinical judgment diverge, documenting your reasoning, and knowing the tool's limitations.

Case Study 3: AI-Generated Clinical Documentation Errors

Ambient AI scribes, or tools that listen to visits and generate real-time clinical notes, have introduced a new category of medical errors as well as created new risks of HIPAA violations.

Documentation errors have long given rise to malpractice litigation. The use of AI scribes introduces additional risks, as AI may omit important information or introduce false information. In fact, in one instance, AI hallucinated that a prostate exam had been performed after a physician mentioned scheduling the exam.

As AI-generated notes become widespread, thorough reviews may diminish, especially in light of automation bias. This can result in errors impacting future care, which the physician ultimately remains responsible for, as the notes are treated as the physician's own once they’re signed. 

Physician implications: Care providers who review and sign AI-generated notes are responsible for their contents. Thoroughly reviewing AI-generated notes and documenting their use are key to reducing liability risks.

AI Disclosure: What Physicians & Patients Are Owed

CLASSICA research published in Annals of Surgery Open found widespread agreement among both U.S. and E.U. surgeons that patients should be informed when medical care providers use AI, particularly if following or rejecting its advice could alter patient outcomes.

Despite this consensus, no federal law currently mandates that care providers disclose to patients when AI is used in their care. However, a growing number of states have put disclosure laws in place, including but not limited to:

  • California AB 3030, which requires health facilities and providers to include clear disclaimers in written or verbal patient communication generated in whole or in part by generative AI. 
  • Texas Senate Bill 1188, which requires licensed healthcare providers to disclose when they use AI models for diagnostic purposes.
  • Utah HB 452, which requires mental health chatbots to provide clear and conspicuous disclosures that the system is AI and not a human doctor. 

Even when it’s not required, proactive disclosure is both ethically sound and the better option from a risk management perspective. 

Medical care providers incorporating AI systems into their workflows must also ensure full compliance with HIPAA when providing protected health information. Unfortunately, many physicians don't realize they’re putting patient privacy at risk and opening themselves up to HIPAA violations when using cloud-based ambient scribes that may not have the proper data-protection systems.

AI Liability Insurance: How Coverage Is (and Isn't) Keeping Up

The NTIA Artificial Intelligence Accountability Policy Report indicated that financial assurance mechanisms, including insurance, may help drive AI accountability. 

These mechanisms distribute risk across stakeholders while creating powerful market incentives to employ safer practices, similar to the way environmental liability insurance has done for decades under frameworks like the Comprehensive Environmental Response, Compensation, and Liability Act. 

However, effective systems for insurance-based accountability aren’t yet in place, and the malpractice market’s adaptation to AI risk is uneven, with some insurers requiring AI training as a condition of coverage and others introducing AI-specific exclusions. Insurers are also faced with significant uncertainty in pricing models, as a lack of settled case law means there are no actuarial tables for AI-related claims. 

Since there are significant differences among policies because of these issues, physicians must review their current malpractice policies for AI-related exclusions and limitations to avoid significant coverage gaps, especially in legacy policies. Since most AI-related claims against physicians will flow through existing malpractice coverage, ensuring full protection is critically important as AI use becomes more widespread. 

Physicians practicing in an AI-integrated environment deserve coverage that reflects their actual risk profile. As an AI-powered malpractice insurer, Indigo is uniquely positioned to provide it. 

What Physicians Should Do Now 

With physicians facing uncertainty regarding how the law and their malpractice carrier will treat AI-related harm, it’s critically important to establish best practices to reduce risk. Doctors incorporating AI into their practice should:

  • Document all AI use in medical records. Note when AI was consulted, what tools were used, what the recommendations were, whether you followed the AI’s guidance, and your clinical reasoning for any deviation. Detailed documentation is your primary defense in any AI-related malpractice claim (a simple sketch of the kinds of fields worth capturing follows this list).
  • Exercise independent clinical judgment. Medical care providers won’t escape liability for malpractice just because an AI tool produced the wrong output. Doctors remain responsible for exercising clinical judgment and will likely be treated as the learned intermediary between the technology and the patient.
  • Review your malpractice policy for AI exclusions. Doctors must confirm whether their insurance policy excludes AI-related harms or requires specific training before AI technologies become part of clinical practice.
  • Obtain patient consent where warranted. Regardless of whether notice requirements apply, proactively informing patients that AI is part of their care is both ethically sound and the legally defensible choice.
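To make the first recommendation concrete, here is a minimal, hypothetical sketch (in Python) of the kinds of structured fields an AI-use entry might capture. The field names and the example tool are illustrative assumptions only, not a required format, a regulatory standard, or any EHR vendor's schema.

    # Hypothetical sketch of a structured record of AI use in a patient encounter.
    # Field names are illustrative assumptions; they do not reflect any specific
    # EHR schema, malpractice carrier requirement, or state disclosure law.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AIUseRecord:
        tool_name: str            # which AI tool was consulted
        tool_version: str         # version matters if the model is later updated
        purpose: str              # e.g., "diagnostic support" or "ambient documentation"
        ai_recommendation: str    # what the tool actually suggested
        followed: bool            # whether the clinician followed the suggestion
        clinical_reasoning: str   # rationale, especially for any deviation
        reviewed_by: str          # the signing clinician
        timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    # Example entry documenting a deviation from an AI recommendation.
    entry = AIUseRecord(
        tool_name="ExampleRad-Assist",   # hypothetical tool name
        tool_version="2.3.1",
        purpose="diagnostic support (chest CT)",
        ai_recommendation="flagged a possible 4 mm nodule in the right upper lobe",
        followed=False,
        clinical_reasoning="Finding reviewed; consistent with known scarring on prior imaging.",
        reviewed_by="Dr. Example",
    )
    print(entry)

Capturing the same fields every time, whether in the EHR itself or in an addendum, makes it far easier to reconstruct the clinical reasoning if a claim arises years later.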

MLOps, or Machine Learning Operations, is a set of practices and tools that streamlines the entire machine learning lifecycle, from data preparation and model training to deployment and monitoring. For hospitals and health systems that deploy or fine-tune their own models, MLOps fosters collaboration between data scientists, engineers, and operations teams, helping ensure that models are developed carefully and deployed reliably.

By adopting MLOps principles, organizations can automate key processes like model testing, validation, and deployment, making it easier to scale AI initiatives while keeping risk in check. Integrating data science best practices and encouraging collaboration among data scientists, clinicians, and other stakeholders is essential for managing AI risk in clinical settings.
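As a purely illustrative example of what that automation can look like, the sketch below shows a simple pre-deployment gate of the kind an MLOps pipeline might enforce: a candidate model is blocked unless it clears accuracy, bias, and hallucination thresholds, and every decision leaves an auditable reason. The metric names and thresholds are assumptions chosen for illustration, not a regulatory or clinical standard.

    # Illustrative MLOps-style pre-deployment gate. Thresholds and metrics are
    # hypothetical; a real clinical deployment would use the institution's own
    # validated benchmarks and applicable regulatory guidance.
    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class EvaluationReport:
        accuracy: float                      # overall accuracy on a held-out test set
        subgroup_accuracy: Dict[str, float]  # accuracy per patient subgroup (bias check)
        hallucination_rate: float            # share of outputs flagged as unsupported

    def approve_for_deployment(report: EvaluationReport,
                               min_accuracy: float = 0.95,
                               max_subgroup_gap: float = 0.05,
                               max_hallucination_rate: float = 0.01) -> Tuple[bool, List[str]]:
        """Return (approved, reasons) so every blocked release leaves an audit trail."""
        reasons: List[str] = []
        if report.accuracy < min_accuracy:
            reasons.append(f"accuracy {report.accuracy:.3f} below {min_accuracy}")
        gap = max(report.subgroup_accuracy.values()) - min(report.subgroup_accuracy.values())
        if gap > max_subgroup_gap:
            reasons.append(f"subgroup accuracy gap {gap:.3f} exceeds {max_subgroup_gap}")
        if report.hallucination_rate > max_hallucination_rate:
            reasons.append(f"hallucination rate {report.hallucination_rate:.3f} exceeds {max_hallucination_rate}")
        return (not reasons, reasons)

    # Example: this candidate clears accuracy but fails the bias check, so it is blocked.
    report = EvaluationReport(
        accuracy=0.96,
        subgroup_accuracy={"group_a": 0.97, "group_b": 0.90},
        hallucination_rate=0.004,
    )
    approved, reasons = approve_for_deployment(report)
    print("approved" if approved else f"blocked: {reasons}")

The point of a gate like this is less about the specific thresholds than about producing a documented, repeatable decision trail, the same discipline courts and insurers increasingly expect from AI deployers.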

AI Model Liability Into the Future

This isn’t a future concern for doctors; it’s a present issue. Cases are being litigated in courts right now, lawmakers are enacting regulations, and insurers are evaluating how to manage liability risks.

The law hasn’t yet caught up to the technology, and that gap is a real problem for physicians adopting AI without full clarity about how responsibility for errors will be apportioned among doctors, developers, and the AI models themselves.

Until federal laws are established, physicians must follow a patchwork of state laws, keep up-to-date on evolving case law, and understand their malpractice carriers’ policies and protections. Robust AI governance is essential to ensure that emerging systems are safe, fair, and transparent, and that oversight mechanisms and ethical principles are in place to align with societal values.

The safest approach to limiting liability will be to incorporate AI effectively into workflows, treating it as a tool that enhances but does not replace clinical judgment, and to ensure careful documentation of how AI was used. Managing the machine learning lifecycle is also crucial, as each stage—from development to deployment and maintenance—requires careful oversight to mitigate liability risks.

Finding the right malpractice insurer is also critical to ensuring protection when AI harm occurs. Indigo can help medical care providers get the comprehensive coverage they need for all of the risks they face.

Reach out to Indigo today to learn more.


Disclaimer: This article is provided for informational purposes only. This article is not intended to provide, and should not be relied on for, legal advice. Consult your legal counsel for advice with respect to any particular legal matter referenced in this article and otherwise.
