The rapid advancement of AI technologies presents both opportunities and challenges across a wide range of industries. As AI becomes more integrated into business operations, regulatory frameworks are evolving as regulators seek to ensure AI’s safe, responsible and transparent use.
Whether they develop or buy AI technologies, companies must understand how to navigate the associated legal and regulatory risks and build governance and contracting structures that promote safe, responsible and transparent AI use. Regulators have recognised an increasing need for Australia’s regulatory approach to AI to strike a balance between fostering innovation and ensuring a level playing field on compliance and the protection of legal and individual rights. Against this shifting regulatory backdrop, there are steps companies can take now to build robust AI governance frameworks and procurement practices – steps to manage AI-related risks, optimise AI-related opportunities, and prepare for the next stages of AI regulation.
The need for AI regulation
AI technology is advancing rapidly and has outpaced existing regulatory frameworks, exposing gaps in current legislation. Public trust in AI in Australia also remains relatively low, and practice lags perception – the Responsible AI Index 2024, a study commissioned by the Australian National AI Centre, found that only 29% of businesses correctly implement safe AI practices, despite 78% believing they do.
Australia does not currently have any AI-specific regulation, and the Australian government has acknowledged that self-regulation is insufficient. However, implementing AI-specific regulation in Australia is challenging because a number of existing technology-neutral laws already apply to AI, including privacy and copyright laws, competition and consumer protection laws, laws relating to directors’ duties, online safety and anti-discrimination laws, and criminal and sector-specific laws.
The government is considering options including adapting current frameworks, developing new ones, or introducing a cross-economy AI Act similar to the EU’s approach. A dedicated Act could standardise guardrails but risks regulatory duplication. Another concern is the compliance burden of multiple AI regulatory schemes, particularly alongside extraterritorial laws such as the EU’s GDPR. The government recognises that any new laws will need to be interoperable with those of other jurisdictions.
In September 2024, the Australian government released the Voluntary AI Safety Standard (Voluntary Standard) and a proposals paper for introducing mandatory guardrails for AI in high-risk settings (Mandatory Guardrails). On 26 November 2024, the Select Committee on Adopting AI released its final report, which recommended introducing new, whole-of-economy, dedicated legislation to regulate high-risk uses of AI, in line with the third regulatory option set out in the Mandatory Guardrails proposals paper.
Voluntary Standard
The Voluntary Standard offers practical guidance for developing and deploying AI. While not legally binding, it is a useful reference for organisations that want to align their AI practices with best practice and ethical principles. It is intended to help Australian organisations use AI in a safe and responsible manner and to support human-centred AI deployment.
The Voluntary Standard includes 10 guardrails that apply throughout the AI lifecycle and supply chain. These are designed to help organisations identify AI risks and provide practical guidance and requirements for mitigating and managing them. The Standard focuses on risk management processes, data governance, transparency, accountability and stakeholder engagement.
Mandatory Guardrails
The Mandatory Guardrails focus on transparency, accountability and safety. They follow extensive review and consultation, culminating in the recent proposals paper, and comprise 10 mandatory guardrails.
The proposed Mandatory Guardrails aim to ensure the safe and responsible use of high-risk AI systems, aligning with global approaches such as the EU’s AI Act. Key themes include:
- Transparency. Organisations must be open about product development, inform end-users about AI decisions, and share data on adverse incidents and risks.
- Accountability. Establishing risk management processes, publishing accountability measures, and enabling human oversight.
- Safety. Testing and monitoring AI systems, and conducting conformity assessments, performed by developers or third parties, to certify compliance.
How to prepare for AI regulation
While Australia is at a relatively early stage of AI-specific regulation, it is important for businesses to be across the proposed regulation. Whatever form it takes, it is likely to have wide application, extending not only to developers but also to organisations that procure and use third-party AI systems. In particular, organisations looking to develop or deploy AI technologies should consider building an AI governance program tailored to their role in the AI supply chain.
Some key takeaways for businesses to prepare for compliance include:
Develop an AI governance program
Develop and implement an AI governance program and strategy. This should take into account the proposed Mandatory Guardrails (where an AI system is high-risk) and the Voluntary Standard, as well as compliance with existing technology-neutral laws (such as privacy and consumer protection laws), to help prepare for potential upcoming changes in Australian AI regulation.
Procurement practices
When procuring AI systems and tools from third-party suppliers, establish robust standard contractual terms that reflect the business’s specific requirements. These include terms addressing privacy and data protection, the use of data to train AI systems, intellectual property ownership and third-party infringement, and adherence to applicable AI regulations and existing laws. Businesses should also require suppliers to provide detailed documentation, including audit trails, testing results, and information about the training data used in their AI systems.
Privacy by design and security by design
Implement a robust privacy management plan and data governance framework. This approach ensures that privacy and cyber security measures are embedded into AI systems and tools developed or deployed by the organisation from the outset, rather than being retrofitted as afterthoughts. By doing so, businesses are better placed to mitigate risks associated with data breaches, algorithmic bias, and misuse of AI technologies, fostering trust among consumers and regulators. Moreover, integrating compliance measures early can reduce future legal and operational costs, particularly as jurisdictions worldwide, including Australia, the EU, and the US, introduce stricter AI governance frameworks.
Education and company involvement
Ensure there is a ‘minimum viable understanding’ of AI technologies and their associated risks among corporate leaders, front-line teams and operational teams (such as the IT, procurement, data and privacy, legal, compliance, HR and ESG teams). Cross-functional training and collaboration are key to ensuring that all teams are equipped to evaluate the ethical, legal and operational implications of AI development and deployment. A unified approach not only supports regulatory compliance but also enhances the organisation’s capacity to leverage AI responsibly and sustainably.
Key takeaways
We are likely to see some form of AI-specific regulation in Australia sooner rather than later, and good governance will be essential for compliance. Organisations that build and implement robust AI risk management and governance programs now will be best positioned to navigate the evolving landscape.
For more information, please contact Sarah Gilkes, Sophie Bradshaw and Janice Yew.