How medical indemnity is evolving for clinical AI
Medical indemnity insurance in Australia operates on a principle that most clinicians understand intuitively: you are covered for the standard of care that a reasonable practitioner in your position would have provided. The introduction of clinical AI tools is testing the boundaries of what that standard means.
Insurers have not yet formalised AI governance into their pricing models. But the conversations happening during policy renewals, claims investigations, and risk assessments signal a clear direction. Practices that can demonstrate governance over their AI tools represent a measurably different risk profile from those that cannot.
The current state of play
Australia's three major medical indemnity providers — Avant, MDA National, and MIGA — collectively cover the vast majority of practising clinicians. Their approach to clinical AI governance is evolving at different speeds, but the trajectory is consistent.
Renewal conversations: Underwriters at all three providers are beginning to ask about AI tool usage during policy renewals. These questions are not yet formalised into renewal forms, but they are appearing in supplementary questionnaires and broker discussions. Practices that have disclosed AI tool usage, particularly in high-acuity specialties like radiology, are receiving more pointed questions about governance arrangements.
Claims investigation: When an adverse event involves an AI-assisted clinical decision, the claims investigation now routinely includes questions about the governance framework that was in place at the time. What tool was used? Was it registered? Was there a governance policy? Was the clinician trained on its limitations? Was the tool being monitored? These questions are not hypothetical — they are being asked in active claims.
Risk assessment frameworks: Insurers are developing internal frameworks for assessing AI-related risk at the practice level. While these are not yet published, the factors they consider are predictable: number and type of AI tools in use, governance documentation, oversight arrangements, monitoring practices, and incident history.
What insurers are looking for
The evidence that matters to an insurer is specific and practical. It falls into three tiers of governance maturity.
Tier 1: Basic governance (minimum expectation)
- A register of all AI tools in clinical use
- A current governance policy that addresses clinical AI
- Named governance oversight responsibility
- Basic incident reporting process
Practices at this tier can demonstrate they are aware of their AI tools and have established minimum governance arrangements. This is where most insurers expect practices to be within the next 12 to 24 months.
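The Tier 1 register is, at its simplest, a structured list of tools with a named owner and a review date. A minimal sketch of what such a register might look like in code (all names here, including `AIToolEntry` and the field set, are illustrative assumptions, not a prescribed schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolEntry:
    """One row in a practice's AI tool register (illustrative fields only)."""
    name: str
    vendor: str
    clinical_use: str       # e.g. "chest X-ray triage"
    registered: bool        # regulatory registration status of the tool
    governance_owner: str   # named person responsible for oversight
    last_reviewed: date

register: list[AIToolEntry] = []

def add_tool(entry: AIToolEntry) -> None:
    register.append(entry)

def tools_needing_review(as_of: date, max_age_days: int = 365) -> list[AIToolEntry]:
    """Return entries whose last review is older than the allowed interval."""
    return [t for t in register if (as_of - t.last_reviewed).days > max_age_days]
```

Even a spreadsheet with these columns meets the Tier 1 expectation; the point is that the register exists, is current, and names who is responsible.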
Tier 2: Active governance (emerging standard)
Everything in Tier 1, plus:
- Risk assessments for each deployed tool
- Documented clinical workflows showing human oversight
- Regular performance monitoring with local metrics
- Staff competency records for AI tool usage
- Evidence that governance findings inform practice decisions
Practices at this tier can demonstrate that governance is operational, not just documented. This is where insurers expect practices to move as the regulatory environment matures.
Tier 3: Evidentiary governance (best practice)
Everything in Tier 2, plus:
- Sealed, timestamped evidence records for every AI-assisted clinical decision
- Hash-chained Decision Packs linking governance evidence to specific cases
- Exportable audit packs for regulatory and legal proceedings
- Continuous monitoring with drift detection
- Formal CAIOS domain scoring across all five governance dimensions
Practices at this tier can produce specific, tamper-evident evidence that a particular clinical decision was governed according to a documented framework at the time it was made. This is the standard that matters most in medico-legal proceedings.
The coverage question
The most important implication of evolving indemnity expectations is not premium pricing — it is coverage reliability.
Medical indemnity policies typically require practitioners to have exercised reasonable care. If an AI-assisted decision leads to patient harm, the insurer will investigate whether the practice's governance arrangements met the standard of care expected at the time. If the practice had no governance framework, or had one that was outdated and not followed, the insurer's obligation to indemnify may be contested.
This is not a hypothetical risk. As AI governance standards like RANZCR Ch.9 and the CAIOS framework become established reference points, the "reasonable practitioner" standard will increasingly include an expectation of documented AI governance. A practice that deployed AI tools with no governance framework will find it harder to argue it met the standard of care, even if the specific clinical decision was sound.
Premium implications
While governance is not yet a named pricing factor, the economics are straightforward. Insurers price risk based on probability and severity of claims. Practices with documented governance frameworks represent lower risk on both dimensions:
- Lower probability: Governance reduces the likelihood of AI-related adverse events by ensuring oversight, monitoring, and incident management
- Lower severity: When adverse events occur, governance documentation reduces legal exposure by demonstrating the practice met its duty of care
As the data matures, insurers will formalise this into pricing. Early adopters of governance infrastructure will be positioned for preferential terms. Late adopters will face either higher premiums or more restrictive coverage conditions.
What to do now
The window between "insurers are asking about governance" and "governance is a coverage requirement" is closing. Practices that build governance infrastructure now gain three advantages:
Evidence depth. Governance evidence is more credible when it shows a history of continuous operation rather than recent adoption. Starting now means you have 12+ months of governance records by the time formal requirements arrive.
Operational maturity. Governance processes improve with practice. The workflows, monitoring cadences, and incident management processes you establish now will be refined and reliable by the time they are tested.
Negotiating position. When insurers formalise AI governance into their pricing and coverage models, practices that can demonstrate established governance will be in a stronger negotiating position than those scrambling to comply.
The most expensive time to build governance infrastructure is after an adverse event, during a claims investigation, when an insurer is questioning whether your practice met the standard of care. The most cost-effective time is now, when governance can be built systematically and evidence can accumulate before it is needed.
Assess your governance readiness
Take the free AI Governance Readiness Assessment and see where your practice stands.
Take the assessment