In October 2025, Australia’s National Artificial Intelligence Centre (NAIC) released its Guidance for AI Adoption, providing practical, risk-based direction to help organisations adopt and manage AI responsibly.
The Guidance is the latest significant step in Australia’s evolving approach to embedding responsible AI. It offers tools and templates that translate high-level principles into day-to-day governance and assurance practices, helping organisations integrate AI accountability and transparency into existing risk and compliance systems. Australia’s approach continues to align with leading global standards, including ISO/IEC 42001:2023 AI Management Systems, the U.S. NIST AI Risk Management Framework, and the OECD AI Principles, ensuring that innovation is supported by robust governance.
This article explains how Australia’s AI policies, frameworks, and standards connect, and clarifies what organisations must now apply to ensure responsible and lawful AI use across both public and private sectors.
1. The National AI Governance Architecture
The Guidance for AI Adoption marks the next stage in the evolution of the Australian Government’s AI framework. Earlier initiatives laid the groundwork: the Australian AI Ethics Principles set the moral compass for AI development and use, and the Voluntary AI Safety Standard (VAISS) introduced a baseline for responsible practice. Together, they promoted fairness, transparency, and accountability but offered limited operational detail.
The new Guidance for AI Adoption is the first update to VAISS. It refines the original ten guardrails set out in VAISS into six essential practices for accountable and transparent AI. The Guidance responds to feedback received by NAIC, including calls for more accessible and actionable guidance for both technical and non-technical audiences. The Guidance comprises two parts:
- Foundations, aimed at organisations beginning to adopt AI or using AI in low-risk ways, and at professionals who are new to AI and AI governance; and
- Implementation Practices, for governance professionals and technical experts, and for entities with more advanced AI maturity or higher-risk use cases.
The Office of the Australian Information Commissioner released two AI guidelines in 2024 to help organisations comply with privacy obligations, emphasising the importance of privacy by design and conducting privacy impact assessments: Guidance on privacy and developing and training generative AI models; and Guidance on privacy and the use of commercially available AI products.
For government agencies, additional policies and standards apply:
- National Framework for the Assurance of AI in Government (Department of Finance, June 2024) establishes a nationally consistent approach to the assurance of AI use in government, with a structured assurance process for testing and validating AI systems before and during deployment. It emphasises risk assessment, explainability, documentation, and continuous monitoring.
- Policy for the Responsible Use of AI in Government (Digital Transformation Agency, September 2024) sets mandatory expectations for all Commonwealth agencies using or procuring AI systems. It requires agencies to demonstrate adherence to eight responsible-AI principles, appoint accountability leads, and maintain AI system inventories.
Responsible AI governance also reflects the Australian Public Service (APS) Value of Stewardship, which requires agencies to ensure the long-term capability, transparency and accountability of government systems and data. This includes maintaining complete and accurate recordkeeping of key actions and decisions (Requirement E, APS Values).
2. NAIC’s Guidance: From Policy to Practice
The Guidance for AI Adoption provides detailed, risk-based steps for organisations to embed responsible AI governance.
Both parts identify six core practices essential for responsible AI management:
- Assign accountability for AI.
- Understand impacts and plan accordingly.
- Measure and manage risks across the AI lifecycle.
- Share information and maintain transparency.
- Test and monitor.
- Maintain human control.
Importantly, the Guidance for AI Adoption provides tools that organisations can use, including:
- an AI screening tool for risk classification;
- an AI policy template aligned with government expectations;
- an AI register template for cataloguing systems and oversight status; and
- supporting glossaries and examples to aid consistent interpretation.
By offering these standardised tools, the Guidance transforms high-level policy and assurance obligations into day-to-day management practices. Organisations can use the templates to document compliance and to demonstrate traceability and accountability.
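To make the register and screening concepts concrete, the sketch below models a register entry as a simple data structure. This is a minimal illustration only: the field names, risk tiers and example values are assumptions for demonstration, not the actual schema of the NAIC templates.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers; the Guidance's screening tool defines its own classification."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIRegisterEntry:
    """A hypothetical entry in an organisational AI register.

    Fields are assumptions for illustration, loosely based on the details
    the Guidance asks organisations to record for each AI system.
    """
    system_name: str
    accountable_owner: str          # named individual responsible for the system
    purpose: str                    # intended use and business context
    datasets: list = field(default_factory=list)          # datasets and their provenance
    identified_risks: list = field(default_factory=list)  # risks found during screening
    risk_tier: RiskTier = RiskTier.LOW
    risk_treatment_plan: str = ""   # how identified risks are mitigated
    human_oversight: str = ""       # who reviews or can override outputs


# Example: register a customer-service chatbot and record its screening outcome.
entry = AIRegisterEntry(
    system_name="Customer enquiry chatbot",
    accountable_owner="Chief Data Officer",
    purpose="Draft responses to routine customer enquiries for human review",
    datasets=["FAQ corpus (internally authored)", "De-identified enquiry logs"],
    identified_risks=["Inaccurate answers", "Inadvertent disclosure of personal information"],
    risk_tier=RiskTier.MEDIUM,
    risk_treatment_plan="Human review of all outbound responses; quarterly accuracy audit",
    human_oversight="Customer service team lead approves all responses",
)
print(entry.system_name, entry.risk_tier.value)
```

Even in this simplified form, a structured register of this kind gives an organisation a single, queryable source of truth for which systems exist, who owns them, and how their risks are being treated.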
The Guidance emphasises two key obligations, risk management and privacy compliance, together with cross-cutting requirements for documentation, accountability and cybersecurity.
2.1 Risk Management
Organisations must:
- Conduct planning and risk assessment before any AI procurement or development.
- Complete pre-implementation assurance focusing on explainability, fairness, privacy and bias.
- Undertake post-deployment monitoring to detect drift and new risks.
These activities should be documented and reported through appropriate governance mechanisms, including risk committees, to demonstrate compliance and diligence.
As the Guidance points out, governance frameworks for data, privacy, and cybersecurity should already be in place; they should be reviewed and updated to address the use of AI systems.
2.2 Compliance with the Privacy Act and Australian Privacy Principles
Organisations must ensure that their use of AI complies with privacy legislation governing consent, collection, storage, use, disclosure and retention of personal information. In particular, they must:
- Manage personal information lawfully, securely and transparently.
- Ensure AI systems that collect or infer personal data are consistent with the original collection purpose or have consent or legal authority.
- Conduct privacy impact assessments for AI systems using personal data and ensure transparency to individuals affected by AI-assisted decisions.
Detailed legislative and regulatory guidance governing privacy is outlined in Section 6.4 below.
2.3 Data governance and documentation are key
Data governance of AI systems is critical, since the quality of an AI model’s output is driven by its data. This also includes the management of data usage rights for AI, including intellectual property, Indigenous Data Sovereignty, privacy, confidentiality and contractual rights.
As the Guidance for AI Adoption points out, documentation is key: every activity in the six essential practices must be documented. This aligns with the requirements of the APS Code of Conduct in Practice. Under section 4.1.2, recordkeeping is an essential part of meeting accountability obligations. Under section 4.1.3, APS employees must demonstrate that their actions and decisions have been made with appropriate consideration, care and diligence, and that they have used Commonwealth resources properly.
The Guidance for AI Adoption requires organisations to maintain an AI register cataloguing every AI model and system across the organisation in sufficient detail, including, for example, datasets and their provenance, identified risks, potential impacts and the risk treatment plan.
In practice, organisations should maintain artefacts such as system design documentation, model validation reports, data provenance records, change management logs, and records of human oversight. These provide the evidentiary trail for explainability, audit and review.
Human oversight remains paramount: decision-makers are accountable for AI-assisted outcomes and must be able to explain decisions.
2.4 Cybersecurity
Cybersecurity is foundational to responsible AI governance. As AI systems become embedded in core business and government operations, they expand the organisation’s attack surface and introduce new vulnerabilities across data pipelines, models and interfaces.
The Guidance for AI Adoption and the Australian Signals Directorate’s Essential Eight controls together set expectations that every organisation, public or private, must identify, assess, and mitigate cyber risks throughout the AI lifecycle.
Each entity’s existing information security and risk frameworks should therefore be reviewed and updated to address AI-specific risks, such as adversarial attacks on models, data poisoning, and unauthorised model reuse.
Boards and executives remain accountable for cyber resilience. For regulated entities, this is reinforced by the Security of Critical Infrastructure Act 2018 (Cth) and by APRA’s CPS 234 Information Security and CPS 230 Operational Risk Management prudential standards, which require ongoing oversight of technology-related risks.
In short, cybersecurity is fundamental to the integrity of data governance, privacy and AI assurance. Detailed legislative and regulatory obligations governing cyber resilience are outlined in Section 6.1 below.
3. Integrating AI Governance with Information and Privacy Stewardship
AI governance builds on information governance principles. Integrating these with privacy requirements ensures that:
- datasets are accurate, lawfully sourced and documented;
- outputs and models are properly classified and retained;
- decision logs are maintained for audit and review; and
- evidence is available for internal or external scrutiny.
The National Archives of Australia (NAA) confirms that information and outputs generated by AI technologies are Commonwealth records under the Archives Act 1983 (Cth) whenever they form evidence of decisions, actions or communications. Agencies must therefore capture and preserve AI records – including training datasets, model documentation, system outputs, and metadata – in accordance with information management standards.
The NAA’s guidance Information management for records created using Artificial Intelligence (AI) technologies sets expectations for the capture and preservation of AI-related records. It highlights that records must be complete and reliable, metadata should document algorithms, data sources and model versions, and human oversight must verify AI outputs to ensure authenticity and integrity. Recordkeeping obligations extend across the entire AI lifecycle from system development and training through to decommissioning.
Information lifecycle management and AI lifecycle management are interdependent: both require continuous documentation, assurance and review. Applying NAA guidance helps agencies design metadata and retention rules that capture algorithmic context, training data provenance, and model change history. Such practices ensure that AI-assisted decisions remain verifiable long after systems evolve or are retired.
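As a sketch of what such metadata might capture, the record below pairs an AI-assisted decision with its algorithmic context. The field names and example values are illustrative assumptions only, loosely following the NAA's emphasis on documenting algorithms, data sources, model versions and human oversight; they are not a prescribed NAA schema.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class AIDecisionRecord:
    """Hypothetical metadata for a record created with AI assistance.

    Frozen (immutable) so a captured record cannot be silently altered,
    supporting authenticity and integrity of the evidentiary trail.
    """
    record_id: str
    decision_date: date
    system_name: str
    model_version: str              # exact model version in use at decision time
    data_sources: tuple             # provenance of the inputs to the decision
    output_summary: str             # what the AI produced or recommended
    human_reviewer: str             # who verified the AI output
    retention_class: str            # retention/disposal authority applied


# Example: capture the metadata for one AI-assisted triage decision.
record = AIDecisionRecord(
    record_id="DEC-2025-0147",
    decision_date=date(2025, 11, 3),
    system_name="Grant triage assistant",
    model_version="triage-model v2.3.1",
    data_sources=("Application form", "Eligibility dataset (2025 release)"),
    output_summary="Application flagged for manual eligibility review",
    human_reviewer="Assessment officer, Grants Branch",
    retention_class="Retain pending applicable records authority",
)
print(record.record_id, record.model_version)
```

Capturing the model version and data provenance alongside each decision is what allows the decision to remain explainable and auditable after the underlying system is upgraded or decommissioned.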
These requirements align directly with Requirement E of the APS Value of Stewardship, which includes “Ensuring complete, accurate, and appropriately accessible recordkeeping of key actions and decisions” and underpins sections 4.1.2 (recordkeeping) and 4.1.3 (accountability and proper use of Commonwealth resources) in the APS Code of Conduct in Practice. They also support privacy obligations under the Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs), particularly APP 6, which limits use or disclosure of personal information to the purpose for which it was collected, unless an exception or consent applies. Together, the APS Values and Code, the Privacy Act and the AI framework establish a unified system of accountability for how AI systems are procured, designed, implemented, recorded and monitored across government.
For corporate entities, similar expectations arise as part of responsible corporate stewardship, under directors’ duties of care and diligence, and under ISO 42001 and assurance frameworks such as the Guidance for AI Adoption, which require traceable documentation for model validation, audit, and compliance.
4. AI Governance Structures
Effective governance also requires AI oversight structures that deliver accountability at both the system and organisational levels. Optimal AI governance enables agencies to monitor risk, performance and compliance across individual AI systems while maintaining enterprise-wide oversight to ensure consistent assurance and responsible use.
The NAIC Guidance requires agencies to assign, document, and clearly communicate who is accountable for AI, and integrate AI governance within enterprise risk and performance frameworks. It also requires each organisation to establish a fit-for-purpose AI risk management framework that reflects the nature and scale of its AI use.
Accordingly, the key features of effective AI governance structures include:
- A named accountable official with authority to oversee AI use and policy compliance.
- A designated owner for each AI system.
- A risk management framework to identify and manage the risks of using AI.
In the corporate context, AI oversight is typically embedded within board risk or audit committees, supported by an accountable AI lead or equivalent senior executive. The UTS Human Technology Institute’s 2025 snapshot on Designing AI Governance Structures explains that there are two key decisions: whether to establish a dedicated AI governance committee, and whether governance should be centralised, decentralised, or hybrid. The snapshot highlights the value of dedicated bodies, such as AI governance committees, which offer focus, expertise and accountability. Existing committees, such as Information Governance, Digital/Data Governance, ICT Steering or Risk, may reduce duplication and costs, but, as the snapshot points out, they need to be adapted to manage AI-specific risks. The snapshot concludes that many organisations start with dedicated AI governance structures and plan to integrate them into existing governance frameworks over time.
The form of AI governance structure and whether it is centralised will depend on each organisation’s priorities, resources and AI maturity. Whichever model is selected, key features should include:
- Clear terms of reference covering lifecycle risk management, privacy and security controls, bias and fairness testing, model change control, and recordkeeping under Principle 4 and the Archives Act.
- Defined reporting lines to the accountable authority and Risk (or equivalent) committee, with periodic assurance reporting integrated into PGPA governance.
Integrating AI with existing information governance can be highly effective because the disciplines share core principles of data quality, privacy, security, and recordkeeping and require similar cross-functional collaboration.
5. International Alignment
Australia’s framework aligns with international initiatives that seek to harmonise AI governance through global standards, including:
- OECD AI Principles (2019): Australia, as a founding signatory, has committed to ensuring that AI is innovative, trustworthy, and respects human rights and democratic values.
- The Bletchley Declaration (2023): Signed by Australia, 27 other countries and the European Union at the UK Government’s 2023 AI Safety Summit, it emphasises international collaboration on AI safety, transparency, and governance.
- NIST AI Risk Management Framework (2023): Sets out four functions (govern, map, measure, manage) for identifying and mitigating AI risks. The Federal Government’s frameworks adapt this language for the Australian public-sector context.
- Seoul Declaration for Safe, Innovative and Inclusive AI (2024): The Australian Government endorsed three key outcomes building on the Bletchley Declaration, confirming a shared understanding of AI’s opportunities and risks and strengthening global collaboration on AI safety, governance and responsible innovation.
- ISO/IEC 42001:2023 – AI Management Systems Standard (and related AI risk standards): Provides a globally recognised framework for establishing and maintaining governance and risk management processes for AI systems.
This alignment ensures that Australian agencies are building systems consistent with global expectations, which is essential for interoperability, data sharing, and participation in international digital trade.
6. The Regulatory Environment
Australia’s legal and regulatory landscape already covers most AI-related risks. Existing legislation and standards collectively require organisations to ensure that AI systems are safe, transparent, fair and secure. The main areas of legal compliance are:
- Security and Resilience
- Fairness and Accuracy
- Transparency and Disclosure
- Privacy and Data Governance
- Accountability and Supply-Chain Governance
Understanding how these laws intersect is essential for boards, executives, and public-sector leaders to demonstrate responsible and lawful AI use.
6.1 Security and Resilience
AI systems must be secure, resilient, and properly governed. Under the Corporations Act 2001 (Cth), directors have duties to exercise care and diligence that include identifying and managing risks to the organisation, including technology, data and AI-related risks.
The Security of Critical Infrastructure Act 2018 (Cth) and sector-specific laws in finance, energy, and telecommunications impose explicit risk management and cybersecurity obligations. For financial institutions, the Australian Prudential Regulation Authority (APRA) embeds AI considerations within two key prudential standards:
- CPS 230 Operational Risk Management (effective 2025) requires entities to identify and control emerging technology risks, including AI.
- CPS 234 Information Security mandates that boards maintain oversight of all technology-related risks and ensure information assets remain protected against evolving cyber threats.
Privacy laws also require entities to take “reasonable steps” to protect personal information from misuse, interference, loss, or unauthorised access (APP 11). Failure to implement adequate AI risk controls could also give rise to negligence if foreseeable harm occurs.
For both public and private sectors, adherence to the Australian Signals Directorate’s Essential Eight remains the baseline for safeguarding data and AI environments.
6.2 Fairness and Accuracy
Ensuring fairness and accuracy in AI outputs is critical to avoiding harm and maintaining trust. AI systems that produce biased or erroneous results can expose organisations to a range of legal and regulatory risks.
Under the Privacy Act 1988 (Cth), entities must take reasonable steps to ensure the quality and accuracy of personal information used or generated by AI systems. Anti-discrimination laws, together with the Fair Work Act 2009 (Cth), apply if AI systems exclude or disproportionately affect individuals based on protected attributes such as race, gender, disability, or age.
Negligence and product-liability principles may apply where unsafe AI design, biased training data or inadequate testing causes harm. Under the Australian Consumer Law (ACL), organisations must ensure that AI systems and related services are of acceptable quality and fit for purpose.
6.3 Transparency and Disclosure
Transparency underpins responsible AI use. The ACL prohibits misleading or deceptive conduct and false or misleading representations, which can extend to how organisations describe or deploy AI. Obligations include:
- Accurately representing AI capabilities and limitations;
- Disclosing when AI generates content or influences decisions; and
- Avoiding deceptive uses of synthetic media or deepfakes.
The ACL also regulates unfair contract terms and statutory guarantees that require services, including AI-enabled systems, to be delivered with due care and skill.
Forthcoming privacy law reforms will strengthen transparency requirements for automated decision-making, requiring organisations to inform individuals when AI is used to make significant decisions affecting them and to provide meaningful information about how those decisions are made. These changes will further align Australia with international norms under the OECD AI Principles and the EU AI Act.
For government agencies, the Digital Service Standard and the Policy for the Responsible Use of AI in Government also require agencies to disclose when and how AI systems are deployed, reinforcing public trust and accountability.
6.4 Privacy and Data Governance
The Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs) regulate how organisations collect, use, and disclose personal information. The Office of the Australian Information Commissioner (OAIC) has issued specific guidance on AI, including:
- Guidance on privacy and developing and training generative AI models (2024, updated 2025); and
- Guidance on privacy and the use of commercially available AI products (2024, updated 2025).
These guidelines emphasise privacy by design and the need to conduct Privacy Impact Assessments (PIAs) before implementing AI systems that process personal data. Key obligations include:
- Collecting and using data lawfully, fairly and transparently;
- Limiting use or disclosure to the original purpose unless consent or exception applies (APP 6);
- Informing individuals when AI influences decisions that affect them; and
- Securing personal data appropriately and keeping personal data for no longer than necessary (APP 11).
Reforms to the Privacy Act that took effect in 2025 introduced a statutory tort for serious invasions of privacy and expanded the OAIC’s powers to investigate and enforce AI-related breaches.
For government agencies, these obligations intersect with the APS Code of Conduct Principle 4 – Managing Information and the APS Value of Stewardship, which require complete, accurate, and accessible recordkeeping. In the corporate context, directors’ duties of care and diligence similarly demand traceable documentation to demonstrate lawful data use.
6.5 Accountability and Supply-Chain Governance
AI assurance extends beyond internal controls to the wider supply chain. Organisations remain responsible for the conduct of vendors, partners and data providers whose systems or datasets feed into their AI models. Entities must manage legal risks across the AI supply chain through due diligence, contractual controls, and documentation.
Key obligations include:
- Intellectual Property and Confidentiality – ensuring appropriate licences, usage rights, and protections for data, models, and outputs, consistent with copyright, trade secrets, and contract law.
- Privacy and Transparency – maintaining accurate privacy policies and collection notices that identify where data originates, how it is used, and to whom it is disclosed.
- Competition and Fair Trading – complying with prohibitions on anti-competitive or misleading practices when using AI in commerce, including pricing, advertising, or algorithmic decision-making.
- Product and Service Guarantees – ensuring AI tools and services provided to consumers or businesses meet statutory guarantees of quality, fitness for purpose, and due care and skill.
The Australian Competition and Consumer Commission (ACCC), OAIC, and Australian Communications and Media Authority (ACMA) are actively monitoring AI practices across sectors to ensure fairness, privacy, and transparency. Together, these regulators demonstrate that AI governance is already being enforced through existing statutory duties rather than a dedicated AI Act.
For corporate entities, the Australian Securities and Investments Commission (ASIC) has reinforced that effective oversight of AI is integral to directors’ duties under the Corporations Act. ASIC’s Report 798 Beware the gap: Governance arrangements in the face of AI innovation emphasises that boards must ensure appropriate governance, documentation, and assurance controls for all AI use cases.
6.6 Summary
Australia’s regulations already provide robust safeguards addressing most AI-related risks across both public and private sectors. Boards, executives, and public-sector leaders must understand how these regulations intersect, and must integrate AI governance into enterprise risk, compliance, and assurance processes to meet ongoing legal and ethical responsibilities.
7. Challenges and Future Directions
While the foundations for responsible AI are now firmly in place, organisations are still at different stages of maturity. Some are already integrating AI governance into enterprise risk and audit systems, while others are only beginning to map where AI is being used. This uneven progress presents a collective challenge to lift capability and embed effective AI governance.
Pilot projects have shown how vital it is to embed bias testing, data documentation, and human oversight from the very start of system design. Many organisations also recognise the need to connect AI risk registers with broader enterprise risk management systems, ensuring that AI risks are monitored with the same rigour as financial, operational, or cybersecurity risks. These lessons reinforce a central truth: AI risk is not purely a technical issue; it is a governance issue that spans leadership, culture, and accountability.
The next phase will centre on demonstrating assurance maturity and integrating AI performance and risk reporting into agency dashboards, strengthening audit readiness, and building confidence through transparent assurance processes. The NAIC and the Department of Finance are leading efforts to build capability and provide practical tools to help organisations assess and improve their assurance practices.
Ultimately, progress in the responsible and safe use of AI will be enhanced by collaboration both within organisations and externally. Sharing lessons, benchmarks, and assurance outcomes will help create a mature, consistent, and accountable AI ecosystem that demonstrates Australia’s leadership in building public trust in the age of artificial intelligence.
8. Conclusion
Australia has a coherent and standards-aligned framework for governing AI across both the private and public sectors. The evolution of the AI framework and the NAIC Guidance for AI Adoption continue to advance the principles of accountability, transparency, and continuous improvement into achievable practices for every organisation.
The challenge ahead lies in effectively implementing and embedding AI governance within enterprise risk and assurance systems, testing controls, and ensuring that AI-assisted decisions remain explainable, well-documented, and defensible. Strengthening the integration between information governance, privacy, cybersecurity, and AI oversight will be critical to ensuring that AI use is both responsible and accountable.
For leaders and decision-makers, this will require ongoing vigilance, cross-disciplinary collaboration, and a sustained focus on governance capability to keep pace with the speed and complexity of AI. By building consistent and effective assurance and transparency practices, organisations can reinforce confidence in the safe, responsible and innovative use of AI.
Author: Dr Susan Bennett PhD, LLM (Hons), MBA, FGIA, FIP
Contact: susan.bennett@sibenco.com
Sources and Further Reading
- Guidance for AI Adoption – National AI Centre, Department of Industry, Science and Resources, October 2025.
- National framework for the assurance of artificial intelligence in government – Department of Finance, June 2024.
- Voluntary AI Safety Standard – Department of Industry, Science and Resources, September 2024.
- Policy for the Responsible Use of AI in Government – Digital Transformation Agency, September 2024.
- Australia’s AI Ethics Principles – CSIRO Data61, Department of Industry, Science and Resources, 2019, updated 2024.
- APS Value of Stewardship – Requirement E – Australian Public Service Commission, 2024.
- APS Values and Code of Conduct in practice – Australian Public Service Commission 2018 and updated 2021.
- Information management for records created using Artificial Intelligence (AI) technologies – National Archives of Australia 2019.
- Digital Service Standard – Digital Transformation Agency, 2023.
- Australian Privacy Principles – Office of the Australian Information Commissioner.
- Guidance on privacy and developing and training generative AI models – Office of the Australian Information Commissioner 2024 and updated 2025.
- Guidance on privacy and the use of commercially available AI products – Office of the Australian Information Commissioner 2024 and updated 2025.
- ISO/IEC 42001:2023 – AI Management Systems Standard – International Organization for Standardization.
- AI Risk Management Framework – National Institute of Standards and Technology, U.S. Department of Commerce, 2023.
- Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile – National Institute of Standards and Technology, U.S. Department of Commerce, 2024.
- Essential Eight – Australian Signals Directorate, Australian Cyber Security Centre.
- OECD AI principles – Organisation for Economic Co-operation and Development, 2019.
- The Bletchley Declaration – November 2023.
- The Seoul Declaration – May 2024.
- REP 798 Beware the gap: Governance arrangements in the face of AI innovation – Australian Securities and Investments Commission, October 2024.
- CPS 230 Operational Risk Management Prudential Standard – Australian Prudential Regulation Authority.
- CPS 234 Information Security Prudential Standard – Australian Prudential Regulation Authority.
- Designing AI Governance Structures – Human Technology Institute, UTS, October 2025.