AHPRA's AI Guidelines Explained: What Every Australian Practitioner Must Know in 2026
Artificial intelligence is moving quickly through Australian healthcare. GPs are using AI scribes to transcribe consultations. Specialists are running diagnostic imaging through AI-assisted reading tools. Allied health practices are deploying AI chatbots to handle patient enquiries out of hours. The technology is compelling — but the regulatory picture has lagged behind adoption, leaving many practitioners uncertain about where the lines sit.
In 2025 and into 2026, AHPRA and the National Boards have provided clearer guidance on how existing professional obligations apply when AI is involved in care. The headline message is straightforward, and it applies regardless of how sophisticated the AI tool is: you remain personally accountable for every clinical decision made in your practice, whether or not an AI system contributed to it.
This guide explains what that means in practice, covering informed consent, record-keeping, advertising, AI scribes, patient communication tools, liability when AI gives wrong advice, and the automated decision-making obligations introduced by the 2024 Privacy Act reforms, which take effect in December 2026.
The Central Principle: You Cannot Delegate Clinical Responsibility to an AI
AHPRA's position is grounded in the existing professional standards that already govern every registered health practitioner in Australia. The codes of conduct for medical, nursing, pharmacy, dental, and allied health practitioners all require practitioners to take responsibility for decisions made in their name and within their scope of practice.
AI does not hold a registration. It cannot be sanctioned, suspended, or held professionally accountable. When an AI system is involved in a clinical pathway — whether suggesting a diagnosis, triaging a patient, generating a clinical note, or advising on medication — the registered practitioner in whose care the patient sits is responsible for the outcome.
This has three immediate implications for practice:
- You must be able to explain and justify any clinical decision, even if an AI tool contributed to your reasoning.
- You must apply appropriate clinical scrutiny to AI-generated outputs before acting on them — not simply accept them.
- You cannot disclaim liability by saying "the AI recommended it." That defence does not exist under Australian law or professional standards.
Informed Consent for AI-Assisted Care
The Medical Board's informed consent guidelines, adopted across most National Boards, require practitioners to provide patients with information that a reasonable person in their position would want to know before making a decision about their care. The question of whether AI involvement constitutes material information is now largely settled: it does, in most cases.
Specifically, patients should be informed when:
- An AI tool will analyse their images, pathology, or clinical data and contribute a recommendation to your decision-making.
- An AI scribe or transcription tool will be recording and processing their consultation.
- Their personal health information will be processed by a third-party AI platform (which also triggers Privacy Act obligations — see below).
- An AI system may respond to their clinical queries outside of your direct supervision.
Consent does not need to be elaborate for lower-risk applications. A brief verbal explanation and the opportunity to opt out — documented in the clinical record — is generally sufficient for AI transcription. Higher-risk applications, such as AI contributing to a diagnostic interpretation, warrant written consent and a more detailed explanation of how the technology works and its known limitations.
The key test: would this patient reasonably want to know this information before deciding whether to proceed? If yes, tell them.
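For lower-risk tools, the documentation itself can be lightweight but should still be structured enough to find later. As a purely illustrative sketch (the field names and structure here are assumptions, not an AHPRA-mandated format), a consent note for an AI scribe might capture who was informed, about which tool, and what they decided:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIConsentRecord:
    """Illustrative structure for documenting AI-tool consent in the clinical
    record. Field names are hypothetical, not a mandated format."""
    patient_id: str
    tool_name: str       # the AI product the patient was told about
    purpose: str         # what the tool does with the patient's data
    consent_given: bool  # True if the patient agreed, False if they opted out
    consent_type: str    # "verbal" for low-risk tools, "written" for diagnostic AI
    recorded_by: str     # practitioner who obtained and documented consent
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: verbal consent for an AI scribe, noted at the start of a consultation.
note = AIConsentRecord(
    patient_id="12345",
    tool_name="ExampleScribe",  # hypothetical product name
    purpose="Ambient transcription of this consultation into a draft note",
    consent_given=True,
    consent_type="verbal",
    recorded_by="Dr A. Practitioner",
)
```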
AI Clinical Scribes: Where the Line Sits
AI transcription and scribe tools — including products like Heidi Health, Lyrebird Health, and Nabla — are among the most widely adopted AI tools in Australian general practice and allied health. Used appropriately, they sit comfortably within existing AHPRA guidelines. Used carelessly, they cross into territory regulated by the TGA.
When AI transcription is straightforward
Ambient transcription that converts spoken consultation audio into a draft clinical note is, in essence, an advanced dictation tool. The practitioner reviews the output, edits as required, and approves the final record. Provided patients are informed and consent is documented, this falls within normal practice.
When AI scribes become a TGA concern
The TGA regulates software as a medical device (SaMD) under the Therapeutic Goods (Medical Devices) Regulations 2002. An AI tool crosses from administrative transcription into regulated SaMD territory when it begins to:
- Generate clinical assessments, differential diagnoses, or treatment recommendations based on the transcribed content.
- Flag abnormal values or clinical risk scores from the patient's history.
- Suggest referrals, investigations, or medication changes.
Several scribe products have begun incorporating these features. Practitioners should check whether any AI tool they use has TGA clearance if it generates outputs that could directly influence clinical decisions — not merely record them.
A practical rule: if the AI is listening and writing, that's transcription. If the AI is listening, writing, and then telling you what to do next, that's decision support — and it needs to be evaluated differently.
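One way to operationalise that rule is a short screening checklist applied to a tool's advertised features. The sketch below is an illustrative triage aid under stated assumptions, not a legal test: the feature categories mirror the bullet points above, and a True result means "ask the vendor about TGA status", not "this is a regulated device".

```python
# Features that suggest a scribe has crossed from transcription into
# clinical decision support (mirroring the SaMD triggers listed above).
DECISION_SUPPORT_FEATURES = {
    "generates_diagnoses",   # clinical assessments or differential diagnoses
    "flags_clinical_risk",   # abnormal values or risk scores from the history
    "suggests_treatment",    # referrals, investigations, or medication changes
}

def warrants_tga_check(tool_features: set[str]) -> bool:
    """Return True if any feature suggests regulated decision support.

    A True result is a prompt for TGA due diligence with the vendor,
    not a determination that the tool is a medical device.
    """
    return bool(tool_features & DECISION_SUPPORT_FEATURES)

# A pure transcription tool raises no flag; a scribe that suggests
# referrals is flagged for a conversation with the vendor.
print(warrants_tga_check({"ambient_transcription", "note_templates"}))      # False
print(warrants_tga_check({"ambient_transcription", "suggests_treatment"}))  # True
```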
AI in Patient Communication: A Risk Stratification Framework
Practices are increasingly using AI-powered tools to communicate with patients outside of consultations. AHPRA has not published a specific framework for this, but applying the existing duty of care and supervision requirements produces a clear risk gradient:
| Use Case | Risk Level | Practitioner Obligation |
|---|---|---|
| Appointment reminders (time, date, location) | Low | Standard consent for AI processing; no clinical supervision required |
| Post-appointment care instructions (pre-approved templates) | Low–Medium | Templates must be practitioner-approved; AI personalisation must not alter clinical content |
| Symptom checking or triage responses | Medium–High | Must be built on validated clinical logic; practitioner review of high-acuity flags; clear escalation pathway |
| Answering clinical questions (medication, dosage, side effects) | High | Requires practitioner supervision; AI outputs should be reviewed before delivery or responses should direct patient to consult |
| Mental health support or crisis conversations | Very High | Not appropriate for unsupervised AI; must always escalate to human practitioner or crisis line |
The general principle: the more the communication could influence a clinical decision or affect patient safety, the more it requires practitioner oversight — and the more clearly patients need to understand they are interacting with an AI, not a clinician.
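To make that gradient concrete, here is a minimal sketch of how a practice's messaging system could encode the table above. The use-case labels, tiers, and routing rules are assumptions drawn from the table, not AHPRA requirements; the design point is that escalation to a human is decided before the AI responds, with unknown cases failing safe.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4

# Use cases and tiers mirror the risk table above (illustrative labels only).
USE_CASE_RISK = {
    "appointment_reminder": Risk.LOW,
    "approved_care_instructions": Risk.MEDIUM,
    "symptom_triage": Risk.HIGH,
    "clinical_question": Risk.HIGH,
    "mental_health_support": Risk.VERY_HIGH,
}

def route(use_case: str) -> str:
    """Decide whether the AI may respond directly, needs review, or must escalate."""
    risk = USE_CASE_RISK.get(use_case, Risk.VERY_HIGH)  # unknown cases fail safe
    if risk is Risk.LOW:
        return "ai_responds_directly"
    if risk is Risk.MEDIUM:
        return "ai_responds_from_practitioner_approved_template"
    if risk is Risk.HIGH:
        return "hold_for_practitioner_review"
    return "escalate_to_human_or_crisis_line"

print(route("appointment_reminder"))   # ai_responds_directly
print(route("mental_health_support"))  # escalate_to_human_or_crisis_line
```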
When AI Gets It Wrong: Liability, Insurance, and Notification
AI systems produce errors. Sometimes those errors cause patient harm. Understanding the liability chain before an incident occurs is essential for every practice using AI tools.
Practitioner liability
As established above, practitioners cannot use AI error as a liability shield. The standard of care expected of you is to apply reasonable professional judgment, which includes scrutinising AI outputs. A practitioner who accepts an AI recommendation without appropriate clinical review, and whose acceptance results in patient harm, is exposed to a finding of professional misconduct or negligence.
Insurance implications
Medical indemnity insurers in Australia are updating their policies in response to AI adoption. Several insurers now ask whether AI tools are used in clinical practice as part of their renewal questionnaires. Failing to disclose AI tool use when material to the risk profile could void coverage. Contact your insurer (MIGA, Avant, MDA National, MIPS) to confirm your policy covers AI-assisted practice and understand any documentation requirements they impose.
AHPRA notification requirements
Mandatory notifications to AHPRA apply when a practitioner reasonably believes another practitioner has placed the public at risk. Where an AI system in a practice produces an error that causes serious patient harm, the notification obligations depend on whether the harm resulted from a failure in the practitioner's own duty of care, not from the AI itself. AHPRA cannot receive notifications about AI systems; it receives notifications about practitioners.
If a patient is harmed and you believe your supervision of the AI tool was adequate, document that reasoning thoroughly. If you believe a colleague's supervision was inadequate, the standard mandatory notification test applies.
Privacy Act 2024: The Automated Decision Transparency Deadline
The Privacy and Other Legislation Amendment Act 2024 introduced significant changes to the Privacy Act 1988. For healthcare AI, the most important change is the new right for individuals to request meaningful information about automated decisions that significantly affect them — and the obligation on organisations to be transparent about when those decisions are made.
The relevant provisions come into effect in stages, with the automated decision-making transparency requirements applying from December 2026. Australian Privacy Principle (APP) 1 will require organisations to disclose in their privacy policy whether they use automated decision-making that significantly affects individuals, what kinds of decisions those are, and how the individual can request information about or seek review of those decisions.
For healthcare practices, "automated decisions that significantly affect individuals" will include:
- AI triage systems that determine urgency or appointment priority.
- AI tools that screen patients for recall, risk stratification, or preventive care programs.
- Any AI system that generates a recommendation that directly influences a clinical care pathway.
From December 2026, patients will also have the right to request human review of any automated decision that significantly affects them, so practices need a process in place to fulfil those requests.
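Practices can start building that process now by logging automated decisions as they happen. The register below is a hypothetical internal structure, not a prescribed format; what it demonstrates are the two capabilities the December 2026 obligations point toward: recording each significant automated decision, and attaching a human review outcome when a patient requests one.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecision:
    """One entry in a hypothetical register of significant automated decisions."""
    patient_id: str
    decision_type: str   # e.g. "triage_priority" or "recall_screening"
    outcome: str         # what the system decided
    system_name: str     # which AI tool made the decision
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    human_review_requested: bool = False
    human_review_outcome: str | None = None

register: list[AutomatedDecision] = []

def record_decision(decision: AutomatedDecision) -> None:
    """Log an automated decision at the time it is made."""
    register.append(decision)

def request_human_review(patient_id: str, reviewer_note: str) -> None:
    """Attach a practitioner's review outcome to a patient's automated decisions."""
    for decision in register:
        if decision.patient_id == patient_id:
            decision.human_review_requested = True
            decision.human_review_outcome = reviewer_note

record_decision(AutomatedDecision("12345", "triage_priority", "routine", "ExampleTriageAI"))
request_human_review("12345", "Reviewed by Dr A. Practitioner; priority upgraded to urgent.")
```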
My Health Records Act implications
The My Health Records Act 2012 governs access to and use of information in the national My Health Record system. Healthcare providers working within a registered healthcare provider organisation may access a patient's My Health Record for the purpose of providing healthcare to that patient. Using an AI system that pulls information from a patient's My Health Record (for example, to generate a clinical summary) is permissible provided:
- The access is for the purpose of providing healthcare to that patient.
- The AI vendor handles the data as a contracted service provider to your registered organisation under the Act, or the data is processed within your registered organisation's own systems.
- Patient data from My Health Records is not used to train third-party AI models without explicit consent and appropriate authorisation.
Uploading AI-generated clinical notes to a patient's My Health Record is permissible where the practitioner has reviewed and approved the note. Uploading unreviewed AI-generated content is not.
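That review-before-upload rule is straightforward to enforce in software. A minimal sketch, assuming a hypothetical note object with an approval flag (nothing here reflects the actual My Health Record API):

```python
class UnreviewedNoteError(Exception):
    """Raised when an AI-generated note has not been practitioner-approved."""

def upload_to_my_health_record(note_text: str, ai_generated: bool,
                               approved_by: str | None) -> None:
    """Gate uploads: AI-generated notes need a practitioner's approval first.

    The upload itself is stubbed out; this sketch only shows the gate.
    """
    if ai_generated and not approved_by:
        raise UnreviewedNoteError(
            "AI-generated note must be reviewed and approved by a practitioner "
            "before upload to My Health Record."
        )
    print(f"Uploading note approved by {approved_by or 'author'}...")  # stub

# Permitted: the practitioner has reviewed and approved the AI-generated note.
upload_to_my_health_record("Consult note...", ai_generated=True,
                           approved_by="Dr A. Practitioner")
```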
Advertising AI-Powered Services: What AHPRA Permits and Prohibits
The National Law prohibits health service advertising that is false, misleading, or deceptive. AHPRA's advertising guidelines apply directly to claims about AI capabilities. Common problem areas include:
- Prohibited: "Our AI provides accurate diagnoses" — accuracy claims about diagnostic AI require substantiated evidence and appropriate caveats.
- Prohibited: "AI-powered care that's better than a standard consultation" — comparative claims require evidence and must not create unrealistic expectations.
- Prohibited: Testimonials that imply the AI produced a specific clinical outcome for a named or identifiable patient.
- Permissible: "We use AI-assisted transcription to improve documentation accuracy" — factual description of an administrative AI tool.
- Permissible: "Our practice uses AI appointment reminders to reduce wait times" — factual claim about a measurable operational improvement.
The safest approach: describe what the AI tool does, not what it delivers in terms of clinical outcomes. If you make a specific efficacy claim, have the evidence to support it.
Practical Checklist: 5 Steps to AHPRA-Compliant AI Use in Your Practice
- Audit your AI tools. List every AI tool in use across your practice, including appointment reminders, clinical scribes, diagnostic aids, patient communication tools, and any AI embedded in your practice management software. For each, identify what data it processes, where that data is stored, and whether TGA registration applies (a sketch of such a register follows this list).
- Update your privacy policy. Your privacy policy must disclose all third-party systems that process patient data. Add or update your AI tools disclosures now. From December 2026, the policy must also describe any automated decision-making that significantly affects patients and how they can request human review.
- Establish consent processes. Create a documented consent process for each AI tool that processes identifiable patient data or contributes to clinical decisions. For AI scribes, a statement read at the start of the consultation and noted in the record is the minimum. For diagnostic AI, written consent is preferable.
- Review AI outputs before acting. This is not optional. Build a workflow where AI-generated clinical notes, recommendations, or communications are reviewed by a registered practitioner before they become part of the patient's record or are delivered to the patient. Document that review process.
- Notify your insurer. Contact your medical indemnity insurer and confirm that AI tool use is covered under your current policy. Ask whether any additional endorsements or documentation are required. Do this before an incident, not after.
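For step 1, a structured register makes the audit repeatable at each renewal or when a new tool is introduced. The sketch below is one possible shape for that register; the fields and example entry are assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class AIToolEntry:
    """One row in a practice's AI tool audit register (illustrative fields)."""
    name: str
    function: str          # what the tool does in the practice
    data_processed: str    # what patient data it touches
    storage_location: str  # where that data is held
    tga_status: str        # e.g. "not applicable", "checking with vendor"
    consent_process: str   # how patient consent is obtained and documented

audit_register = [
    AIToolEntry(
        name="ExampleScribe",  # hypothetical product
        function="Ambient transcription of consultations",
        data_processed="Consultation audio and draft clinical notes",
        storage_location="Vendor cloud (confirm hosting location with vendor)",
        tga_status="not applicable per vendor (transcription only)",
        consent_process="Verbal, noted in the record at the start of consultation",
    ),
]

for entry in audit_register:
    print(f"{entry.name}: {entry.function} | TGA: {entry.tga_status}")
```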
Frequently Asked Questions
Do I need patient consent to use an AI scribe?
Yes, in practice you do — even if the strict legal requirement is debated. Most Australian medical defence organisations and the RACGP recommend practitioners inform patients before using ambient AI recording in a consultation and offer them the choice to opt out. The practical risk of not doing so includes patient complaints and potential breach of the Australian Privacy Principles if the third-party AI vendor is storing or processing the audio.
Can I advertise AI-powered services in my practice?
Yes, with care. You can accurately describe AI tools you use in administrative and operational contexts. You cannot make unsubstantiated clinical efficacy claims about AI. Phrases like "AI-powered diagnostics for better outcomes" are high-risk under the National Law advertising guidelines unless supported by evidence. Stick to factual, functional descriptions of what the technology does.
What if my AI system gives incorrect information to a patient?
Your liability depends on what supervision was in place. If the AI was operating in a triage or patient communication role without adequate practitioner oversight, and a patient acted on incorrect AI advice that caused harm, the practice and responsible practitioner face exposure. Document your supervision processes, ensure patients know they are interacting with AI and not a clinician, provide clear escalation pathways to a human, and always direct patients to seek urgent care through emergency channels when clinical risk is present. Inform your medical indemnity insurer as soon as possible after any such incident.
Does my practice management software count as AI?
Not all software with "AI" in its marketing is a regulated medical device or triggers the consent obligations above. The key question is whether it makes or contributes to clinical decisions, or whether it processes identifiable patient data via third-party machine learning models. Many scheduling and billing tools use rule-based automation that is not AI in the regulatory sense. If your PMS vendor claims AI features, ask them directly whether any of those features meet the TGA SaMD definition or involve sending patient data to external AI infrastructure.