5 AI governance gaps your insurer is looking for
Medical indemnity insurers in Australia are quietly adjusting their risk models to account for AI use in clinical practice. While most have not yet added explicit AI governance questions to their renewal forms, underwriters at MDA National, Avant, and MIGA are beginning to ask about it during reviews — particularly for practices that have disclosed AI tool usage or have experienced AI-related incidents.
This is not a future concern. It is happening now, in underwriting conversations that most practice directors never hear about because they are handled by practice managers or insurance brokers. The five gaps that consistently raise concerns follow a predictable pattern, and every one of them is addressable with the right governance framework.
Gap 1: No AI tool register
The first question an insurer asks, directly or indirectly, is: do you know what AI tools are in use at your practice?
This sounds basic, but most practices cannot give a complete answer. A tool register is not a list of products you purchased. It is a documented catalogue that records each tool's name, vendor, version, TGA classification status, intended clinical purpose, deployment date, and last review date.
Why insurers care: An unregistered tool is an unassessed risk. If a practice cannot enumerate its AI tools, the insurer infers that the practice has not evaluated the risks those tools introduce. It is the AI governance equivalent of not knowing which medications are in your pharmacy.
What good looks like: A maintained register, reviewed at least quarterly, that accounts for every AI tool touching clinical workflows. This includes tools embedded in PACS systems, standalone detection algorithms, and any research tools being used in parallel with clinical workflows.
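For illustration only, a register entry can be captured in a simple structured record. The sketch below uses hypothetical field names, values, and product names; a spreadsheet with the same columns works just as well:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One row in a practice's AI tool register (illustrative fields only)."""
    name: str               # product name as deployed
    vendor: str
    version: str
    tga_status: str         # TGA classification / registration status
    clinical_purpose: str   # the intended use the tool is deployed for
    deployed_on: date
    last_reviewed: date

# Example entry (all values hypothetical)
record = AIToolRecord(
    name="ChestXR-Triage",
    vendor="ExampleVendor",
    version="2.3.1",
    tga_status="Included in ARTG (Class IIa software as a medical device)",
    clinical_purpose="Prioritisation of suspected pneumothorax on chest X-ray",
    deployed_on=date(2024, 3, 1),
    last_reviewed=date(2025, 6, 30),
)
```

The format matters far less than the discipline: every field current, every tool present.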
Gap 2: Missing or outdated governance policies
The second gap is the absence of a current AI governance policy specific to clinical practice. Insurers distinguish between three levels of policy maturity:
- No policy at all — the highest risk signal. The practice has adopted AI tools without any documented governance framework
- Generic IT policy — a policy exists but is borrowed from general IT governance and contains no provisions specific to clinical AI. It addresses data security and system uptime but says nothing about clinical oversight, performance monitoring, or incident management
- Clinical AI governance policy — a purpose-built policy that addresses how the practice governs AI tools in clinical workflows, including oversight responsibilities, monitoring requirements, and incident response
Why insurers care: A policy is evidence of intent. It shows the practice has thought about AI governance and established standards for itself. An outdated policy — one that references tools no longer in use or responsibilities assigned to people who have left — signals governance neglect.
What good looks like: A policy reviewed within the last 12 months, signed by the practice director or governance lead, with specific provisions for tool registration, risk assessment, performance monitoring, human oversight, and incident management.
Gap 3: No evidence of human oversight
Insurers are particularly focused on whether radiologists retain meaningful clinical authority over AI-assisted findings. The concern is not that AI tools exist, but that practices may be over-relying on them without adequate human checks.
This gap manifests in several ways:
- No documented workflow showing how AI outputs are reviewed before reaching clinical reports
- No policy on what happens when a radiologist disagrees with an AI finding
- No evidence that radiologists are aware of the limitations and intended use boundaries of the AI tools they work with
- Workflows where AI triage determines reading priority without documented radiologist oversight of the triage decisions
Why insurers care: If an AI-assisted decision leads to patient harm, the insurer needs to know whether the practice had systems in place to ensure human oversight. If the answer is "the radiologist reviews everything anyway," the follow-up is: can you prove it?
What good looks like: Documented clinical workflows that show how AI outputs are integrated into the reporting process, with clear evidence that radiologists review and can override AI findings.
Gap 4: No performance monitoring
This is the gap that causes the most discomfort during underwriting conversations because it is the hardest to retrofit. Performance monitoring means tracking whether your AI tools are actually working as expected in your clinical environment — not just relying on the vendor's published validation data.
Relevant metrics include:
- Concordance rates between AI outputs and radiologist determinations
- False positive and false negative rates in your patient population
- Detection sensitivity across different clinical scenarios
- Any drift in performance over time or after software updates
Why insurers care: An AI tool validated on a dataset of 100,000 chest X-rays from European hospitals may perform differently on your patient population. If you have not measured this, you are operating on assumption rather than evidence. Insurers view unmeasured risk as unmanaged risk.
What good looks like: Regular performance reviews (at minimum quarterly) with documented metrics. Even basic tracking — such as a monthly count of AI flags versus radiologist agreement — provides evidence of active oversight.
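To make that concrete, here is a minimal sketch of the kind of tracking described above, assuming a simple monthly log of paired AI and radiologist outcomes per study (all field names and figures are hypothetical):

```python
# Minimal sketch: monthly concordance and error counts from a log of
# (ai_flagged, radiologist_positive) pairs. All values hypothetical.

studies = [
    {"ai_flagged": True,  "radiologist_positive": True},   # true positive
    {"ai_flagged": True,  "radiologist_positive": False},  # false positive
    {"ai_flagged": False, "radiologist_positive": True},   # false negative
    {"ai_flagged": False, "radiologist_positive": False},  # true negative
]

agree = sum(s["ai_flagged"] == s["radiologist_positive"] for s in studies)
false_pos = sum(s["ai_flagged"] and not s["radiologist_positive"] for s in studies)
false_neg = sum(not s["ai_flagged"] and s["radiologist_positive"] for s in studies)

# Share of studies where AI and radiologist agree, plus raw error counts
print(f"Concordance: {agree / len(studies):.0%}")
print(f"False positives: {false_pos}, false negatives: {false_neg}")
```

Even a calculation this simple, run monthly and filed with the governance minutes, is the kind of evidence underwriters respond to.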
Gap 5: No incident documentation
The fifth gap is often the most damaging when it surfaces after an adverse event. Incident documentation means having a formal process for logging, investigating, and resolving cases where an AI tool contributed to an incorrect or delayed finding.
The absence of incident logs does not mean incidents have not occurred. It means the practice has no evidence of having recognised, investigated, or learned from them. For an insurer, silence in the record is not reassuring — it is a red flag.
Why insurers care: Incident documentation demonstrates learning. A practice that logged an AI error, investigated the root cause, and adjusted its processes (even if the adjustment was minimal) shows governance maturity. A practice with no incident history over two years of AI use either has unusually good luck or, more likely, is not looking.
What good looks like: A formal incident logging process with defined triggers (when should an AI-related incident be logged?), investigation steps, resolution documentation, and follow-up actions. Incidents should be reviewed as part of regular governance meetings.
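As one illustrative shape for a log entry (hypothetical field names and values, not a prescribed format):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIncident:
    """One entry in an AI incident log (illustrative fields only)."""
    logged_on: date
    tool: str                # which registered tool was involved
    trigger: str             # which defined trigger caused the entry
    description: str         # what happened, in clinical terms
    investigation: str       # root-cause findings
    resolution: str          # what changed, or why no change was needed
    follow_up: list[str] = field(default_factory=list)

# Example entry (all values hypothetical)
incident = AIIncident(
    logged_on=date(2025, 5, 12),
    tool="ChestXR-Triage",
    trigger="AI missed a finding later confirmed by the reporting radiologist",
    description="Subtle apical pneumothorax not flagged; found on radiologist review",
    investigation="Case fell outside the tool's stated sensitivity for small pneumothoraces",
    resolution="No workflow change; limitation added to radiologist briefing notes",
    follow_up=["Table at next quarterly governance meeting"],
)
```

The point is not the tooling: it is that triggers, investigation, resolution, and follow-up each have a documented home.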
The premium implication
Insurers have not yet formalised AI governance into their pricing models, but the direction is clear. Practices with documented governance frameworks represent lower risk, and lower risk eventually translates to more favourable terms. Practices without governance represent unknown risk, which insurers price conservatively.
The more immediate concern is not premium pricing but insurability. If an AI-related claim arises and the practice has no governance documentation, the insurer may scrutinise whether the practice met the duty of care implied by its policy terms. Governance documentation is not just about reducing premiums — it is about ensuring your coverage holds when you need it.
The five gaps are addressable. Each maps to a specific governance activity that, once established, requires minimal ongoing effort to maintain. The investment is small relative to the risk of being found without governance when it matters most.
Find out where your governance stands
Take the free AI Governance Readiness Assessment and see where your practice stands.
Take the assessment