Detailed guidance for responsible AI governance
v1.0, October 2025 | industry.gov.au/NAIC

Introduction

The Australian Government is committed to supporting industry to adopt AI responsibly to secure significant benefits for our economy and community. Best-practice industry guidance plays an important role in building the confidence and capability of Australian workers and organisations to adopt and use AI in ways that make our lives better.

The Guidance for AI Adoption forms the core guidance for the responsible adoption of AI across Australia's economy. The guidance details 6 key responsible AI practices. These practices align with Australia's AI Ethics Principles, as well as international standards and governance approaches.

The guidance advises both non-technical and technical audiences:

• Foundations: a high-level, accessible guide to establish the foundations of the 6 responsible AI adoption practices, for organisations such as small and medium-sized enterprises and not-for-profits.

• Implementation practices: comprehensive guidance for governance professionals and technical experts to implement the 6 responsible AI adoption practices. This resource aligns with international standards and incorporates all VAISS practices, making them more streamlined and accessible. It also extends best practices to AI developers.

How to use this guidance

This guidance applies to both developers and deployers of AI systems. Where practices are particularly relevant to either developers or deployers, this is marked with a corresponding (DEP) or (DEV). Find all the definitions for this guidance in the definitions table.

Why implement this guidance?

Build trust with customers and stakeholders when using AI
Customers and the community want to know that organisations are using AI ethically and responsibly. Implementing good governance practices can help to build trust with stakeholders.
Secure the intended benefits of AI while mitigating the risks
The implementation practices support organisations to focus on the purpose of AI adoption, align activities to strategic goals and integrate responsible AI practices with existing governance mechanisms.

Build the confidence of decision makers and leaders to adopt AI at greater scale
By establishing good governance foundations and practices, organisations adopting AI can more confidently experiment and take risks with new AI-powered initiatives.

Follow a roadmap that can help to navigate a complex governance landscape
The implementation practices align to international standards and regulation, supporting organisations to adopt leading practices globally in responsible AI.

Our approach

Human-centred

We adopt a human-centred approach to AI development and deployment. This is in line with Australia's AI Ethics Principles and Australia's commitment to international declarations such as the Bletchley Declaration. A human-centred approach helps make sure technologies are fit for purpose while serving humans, respecting individual rights and protecting marginalised groups.

In the context of safe and responsible AI system development and/or deployment, a human-centred approach means:

• Protecting people. The implementation practices are designed to help leaders and business owners identify, prevent, minimise and remedy a wide range of AI-related risks of harm to their organisation and stakeholders, including consumers, employees and the Australian community. In this guidance, the approach towards protecting the safety of people is grounded in respecting human rights. A human-centred approach to AI upholds Australia's responsibility to human rights protections. These protections are enshrined in a range of federal and state and territory instruments, the Australian Constitution and the common law.
• Upholding diversity, inclusion and fairness. The implementation practices are designed to help organisations ensure AI systems serve all people in Australia, regardless of racial background, gender, age, disability status or other attributes.

• Prioritising people through human-centred design. Human-centred design is an approach to technology design, development and/or deployment that recognises and balances human goals, relationships and social contexts with the capabilities and limitations of technical systems (Gasson 2023). This guidance offers practical ways to prioritise the needs of humans in the development and/or deployment of AI systems.

• Developing and deploying trustworthy AI systems to support social licence. To unlock the greatest possible value from AI, an organisation developing and deploying it must have social licence for its use. This social licence is based on stakeholders believing in the trustworthiness of the AI system. It is only by earning and maintaining the trust of stakeholders that an organisation can be confident it possesses the social licence needed to develop and/or deploy AI systems.

Bias

This guidance defines bias as the 'systematic difference in the treatment of certain objects, people or groups in comparison to others'. It can be the basis for unfairness, defined as 'unjustified differenti