What happens when AI gets it wrong: medico-legal risk in radiology
A 58-year-old man presents with chest pain. The AI triage tool flags his chest X-ray as normal priority. The radiologist, trusting the AI's assessment and managing a backlog of 40 studies, reads the study six hours later. The subtle left hilar mass is there, but the reporting delay contributes to a delayed diagnosis of lung cancer. The patient's family wants answers.
This scenario is not hypothetical. As AI tools become embedded in radiology workflows, the question is not whether AI will contribute to an adverse outcome, but when — and whether your practice can demonstrate it governed that AI responsibly.
The legal landscape in Australia
Australian medico-legal liability for AI-assisted clinical decisions sits at the intersection of several frameworks: professional negligence law, coronial investigation powers, AHPRA regulatory obligations, and medical indemnity insurance requirements. None of these frameworks has been updated specifically for AI, which means existing standards of care are being applied to new technology.
The core legal test remains unchanged: did the practitioner exercise the care and skill expected of a reasonable practitioner in their field? But when AI is involved, the question expands. Did the practice have appropriate governance over the AI tool? Was the tool being used within its intended purpose? Was the radiologist aware of the tool's limitations?
The three questions a coroner will ask
In a coronial investigation involving AI-assisted radiology, three questions will define whether your practice is seen as having acted responsibly:
1. Did you know what the AI tool was designed to do — and not do?
This is about intended use documentation. If your chest X-ray AI is validated for detecting pneumothorax but not hilar masses, and a hilar mass is missed, the coroner will ask whether the practice understood those limitations. If you cannot produce documentation of the tool's intended use boundaries, the inference is that you deployed it without understanding what it could and could not do.
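What does that documentation look like? It can be as lightweight as a structured register entry. Here is a minimal sketch in Python, assuming a practice keeps its register in code or a database; the class, field names, product name, and approval reference are all hypothetical:

```python
from dataclasses import dataclass

# Illustrative sketch of a tool register entry. The class, fields, product
# name, and ARTG reference are hypothetical; the point is that intended use
# and known limitations are recorded somewhere retrievable.
@dataclass
class AIToolRecord:
    name: str
    vendor: str
    regulatory_approval: str        # e.g. an ARTG inclusion reference
    intended_use: list[str]         # findings the tool is validated to detect
    known_limitations: list[str]    # findings explicitly outside its validation
    validation_evidence: str        # where the vendor's validation data is filed

chest_xray_tool = AIToolRecord(
    name="ChestXR-Triage",          # hypothetical product
    vendor="ExampleVendor",
    regulatory_approval="ARTG ref (placeholder)",
    intended_use=["pneumothorax", "pleural effusion"],
    known_limitations=["hilar masses", "subtle interstitial disease"],
    validation_evidence="Vendor validation summary, filed 2024-01",
)
```

The format matters less than retrievability: when the coroner asks what the tool was validated for, this record is the answer you produce.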
2. Were you monitoring whether the AI was performing as expected?
Performance monitoring is the evidence that your practice was actively overseeing the AI, not just passively relying on it. If you can produce concordance data showing the AI's detection rates, false positive rates, and any discordance patterns, you demonstrate active governance. If you have nothing, the coroner may conclude that you treated the AI as infallible.
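To make those metrics concrete, here is a minimal sketch of how they might be derived from a simple audit log, assuming each audited study records the AI's flag alongside the radiologist's final finding; the log structure is hypothetical, and real discordance review is richer than this:

```python
# Minimal sketch of the monitoring metrics described above. Each audited
# study is logged as a pair of booleans: whether the AI flagged a finding,
# and whether the radiologist's final report was positive.
def monitoring_summary(audits: list[dict]) -> dict:
    if not audits:
        return {}
    tp = sum(a["ai_flagged"] and a["radiologist_positive"] for a in audits)
    fp = sum(a["ai_flagged"] and not a["radiologist_positive"] for a in audits)
    fn = sum(not a["ai_flagged"] and a["radiologist_positive"] for a in audits)
    tn = len(audits) - tp - fp - fn
    return {
        "concordance": (tp + tn) / len(audits),                   # AI agrees with radiologist
        "detection_rate": tp / (tp + fn) if (tp + fn) else None,  # sensitivity on audited studies
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else None,
        "discordant_studies": fp + fn,                            # cases worth individual review
    }
```

Run monthly and filed, a summary like this is exactly the concordance evidence the coroner's second question asks for.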
3. When something went wrong, what did you do about it?
Incident documentation is critical. If a previous case involving AI error was identified but not logged, not investigated, and not addressed, the coroner will question whether the practice learned from its mistakes. A documented incident response — even one that concludes the AI performed within acceptable parameters — is materially better than silence.
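The entry itself can be short. A minimal sketch, using the opening scenario as the incident; all names and values are illustrative:

```python
# Illustrative incident log entry. Every field name and value is
# hypothetical; mirror whatever incident framework your practice already runs.
incident = {
    "date_logged": "2025-03-14",
    "tool": "ChestXR-Triage",  # hypothetical product, per the register above
    "description": "AI assigned normal priority; hilar mass found on radiologist review",
    "investigation": "Finding falls outside the tool's documented intended use",
    "action": "Workflow updated: triage priority not relied on for hilar review",
    "status": "closed",
}
```

What matters is that the entry exists, links back to the tool register, and records an outcome.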
AHPRA obligations
The Australian Health Practitioner Regulation Agency holds individual practitioners accountable for their clinical decisions, regardless of whether those decisions were influenced by AI. AHPRA's position is clear: the use of AI does not diminish or transfer a practitioner's professional responsibility.
For radiologists, this means:
- You cannot delegate diagnostic responsibility to an AI tool. If the AI misses a finding and you sign the report, the liability is yours
- You are expected to understand the limitations of the AI tools you use in practice
- If you become aware that an AI tool is underperforming, you have a professional obligation to act — whether that means adjusting your reliance on it, reporting the issue, or ceasing to use it
AHPRA has not yet published specific guidance on AI in clinical practice, but its existing frameworks on professional conduct and mandatory reporting apply directly. A practitioner who blindly follows AI output without applying clinical judgment is not meeting the standard expected by AHPRA.
What insurers expect
Medical indemnity insurers — MDA National, Avant, MIGA — are increasingly aware of AI as a risk factor. While explicit AI governance questions have not yet appeared on most renewal forms, underwriters are asking about AI governance during reviews, particularly for practices that use multiple AI tools or have disclosed AI-related incidents.
Insurers assess AI governance risk across five dimensions:
- Tool registration: Do you know which AI tools are in use, and do they hold the required regulatory approvals?
- Policy documentation: Do you have a written AI governance policy?
- Human oversight: Is there documented evidence that practitioners retain clinical authority over AI-assisted decisions?
- Performance monitoring: Are you tracking whether the AI tools are working correctly?
- Incident management: When issues arise, are they logged, investigated, and resolved?
The absence of governance documentation does not just increase your medico-legal risk — it can directly affect your insurability. An insurer who discovers after an incident that a practice had no AI governance framework may question whether the claim falls within the policy's terms.
Real-world scenarios
Consider three scenarios that illustrate how governance documentation changes outcomes:
Scenario A: AI misses a fracture, governance in place. The AI tool is registered, its limitations are documented (validated for long bone fractures, not complex pelvic fractures), performance monitoring shows a 94% concordance rate for its intended use, and the radiologist's workflow requires independent review of all studies. The miss is a clinical error, but the practice can demonstrate it governed the AI responsibly. The legal exposure is contained.
Scenario B: AI misses a fracture, no governance. Same clinical outcome, but the practice has no tool register, no documentation of the AI's intended use, no performance data, and no evidence that radiologists independently review AI-flagged studies. The practice cannot demonstrate it understood what the tool did or whether it was working correctly. The legal exposure is substantially greater.
Scenario C: AI generates a false positive, patient harmed by unnecessary procedure. The AI flags a lesion that leads to an unnecessary biopsy with complications. If the practice can show it monitored the AI's false positive rate and the rate was within acceptable clinical parameters, the outcome is defensible. If the practice never tracked false positive rates, it cannot demonstrate it was governing the tool at all.
What to do now
The medico-legal framework for AI in radiology is still forming. Case law is sparse, regulatory guidance is evolving, and insurers are still calibrating their risk models. But the direction is consistent: practices that can demonstrate structured AI governance will be treated differently from those that cannot.
Documentation is the foundation. If you can produce a tool register, risk assessments, performance data, human oversight evidence, and incident logs, you have a defensible governance framework. If you cannot, you are relying on the hope that nothing goes wrong — and in clinical practice, that is not a strategy.
The time to build your governance framework is before the adverse event, not after. Every week without documentation is a week of unrecoverable risk.
Assess your governance documentation gaps
Take the free AI Governance Readiness Assessment and see where your practice stands.
Take the assessment