Moirai turns a practice's AI governance posture into reviewable evidence: tool scope, ownership, regulatory position, incidents, corrective actions, and public verification.
Failure mode
The weak version is a practice answering a questionnaire from memory. The expensive version is an insurer discovering after a claim that ownership, review dates, incidents, and vendor evidence were scattered.
A score, evidence gaps, RANZCR mapping, tool-level risk, and a 30/90-day action plan.
Owner, use case, regulatory status, current risk, incidents, evidence state, and next review.
A hash-based proof link confirms report and decision-pack integrity without opening the private workspace.
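A minimal sketch of how that kind of proof link can work, assuming the public verification page publishes a SHA-256 digest of each report (the function names and digest format here are illustrative, not Moirai's actual API):

```python
import hashlib


def fingerprint(report_bytes: bytes) -> str:
    """Hex-encoded SHA-256 digest of the report file."""
    return hashlib.sha256(report_bytes).hexdigest()


def verify_report(report_bytes: bytes, published_digest: str) -> bool:
    """Re-hash the copy you hold and compare it to the published digest."""
    return fingerprint(report_bytes) == published_digest


# A reviewer re-hashes the report they were sent and checks it against
# the digest on the verification page; any edit to the file changes the hash.
report = b"assessment report contents"
digest = fingerprint(report)
print(verify_report(report, digest))      # True
print(verify_report(b"tampered", digest))  # False
```

Only the digest is public, so the reviewer never needs access to the workspace or the underlying clinical data.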
The commercial wedge is sharper when insurers can ask for the evidence file as a condition of renewal, premium review, or network risk monitoring. Moirai gives the practice a way to prove governance without asking the insurer to log into private clinical systems.
AI tool inventory, intended use, review cadence, known incidents, and unresolved evidence gaps are explicit rather than buried in email or committee notes.
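One way to picture a register entry with those fields made explicit, sketched as a data structure (the schema and field names are an assumption for illustration, not Moirai's actual data model):

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RegisterEntry:
    # Fields mirror what the register surfaces per tool; the schema is illustrative.
    tool: str
    owner: str
    use_case: str
    regulatory_status: str            # e.g. "TGA-registered", "under evaluation"
    risk: str                         # e.g. "low", "medium", "high"
    next_review: date
    incidents: list = field(default_factory=list)
    evidence_gaps: list = field(default_factory=list)

    def is_overdue(self, today: date) -> bool:
        return today > self.next_review

    def needs_attention(self, today: date) -> bool:
        # A high-risk tool with open evidence gaps or an overdue review
        # is what feeds the 30/90-day action plan.
        return self.risk == "high" and (
            bool(self.evidence_gaps) or self.is_overdue(today)
        )


entry = RegisterEntry(
    tool="chest CT triage model",
    owner="Dr. Example",            # hypothetical owner
    use_case="worklist prioritisation",
    regulatory_status="under evaluation",
    risk="high",
    next_review=date(2025, 1, 1),
    evidence_gaps=["no vendor validation summary on file"],
)
print(entry.needs_attention(date(2025, 6, 1)))  # True
```

Keeping the register in a structured form like this, rather than in email or committee notes, is what makes it exportable as evidence.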
A reviewer can test report fingerprints and decision-pack chain metadata while Moirai keeps PHI out of public verification surfaces.
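The chain-metadata check can be sketched as a simple hash chain, where each decision-pack entry records the hash of the entry before it; this is an illustrative scheme, not Moirai's actual chain format:

```python
import hashlib
import json


def entry_hash(entry: dict) -> str:
    # Canonical JSON (sorted keys) keeps the hash stable across key order.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


def verify_chain(entries: list) -> bool:
    """Check that each entry's prev_hash matches the hash of the entry before it."""
    prev = None
    for entry in entries:
        if entry.get("prev_hash") != prev:
            return False
        prev = entry_hash(entry)
    return True


# Build a two-entry chain, then show that editing history breaks it.
e1 = {"decision": "approve tool for evaluation", "prev_hash": None}
e2 = {"decision": "log incident and corrective action", "prev_hash": entry_hash(e1)}
print(verify_chain([e1, e2]))  # True

e1["decision"] = "retroactively edited"  # any change to e1 changes its hash
print(verify_chain([e1, e2]))  # False
```

Because verification only needs the hashes and the entries' non-PHI metadata, a reviewer can confirm the chain is intact without seeing the private workspace.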
The evidence packet creates a natural renewal motion: prove the register stayed current, prove gaps were closed, prove incidents were handled.
Confirm AI tools under evaluation or clinical use.
Review current owner, review date, and evidence status for each tool.
Check whether high-risk tools have incidents, corrective actions, and board visibility.
Use the sample pack to inspect the report, register export, and verifier before a paid assessment.