With the rapid development and adoption of AI-driven technologies, lawmakers are sharpening their focus on how best to ensure the safe and responsible development of AI.
In this article, we explore recent developments in AI regulation in Australia against the backdrop of ongoing international activity. We also discuss the key steps that corporate leaders and organisations can take to identify and address risks associated with the development, adoption and use of AI.
Key takeaways:
- The landscape of AI regulation in Australia and globally is rapidly changing. While Australia does not yet have AI-specific laws, existing technology-neutral laws apply to the development, supply and use of AI systems.
- Australian organisations that have an international connection, including in the EU, could also be caught by the extraterritorial reach of applicable foreign laws.
- Corporate leaders and organisations should take steps now in relation to responsible AI use and investment to address AI-related risks in their organisations. In the short to medium term, this will help with compliance with existing laws and building and maintaining trust with customers, suppliers and other stakeholders. In the long term, it helps to lay the groundwork for compliance with any future AI regulation in Australia.
Recent developments in Australia
While there are various laws and regulations that apply to AI technologies in Australia, there is currently no federal AI-specific regulatory framework that addresses its particular risks and challenges.
But that may not be the case for long: in January 2024, the Department of Industry, Science and Resources released its interim response to the consultation on “Safe and Responsible AI in Australia”, indicating that the government intends to introduce regulation for certain uses of AI. The response follows a discussion paper released in June 2023, which received 510 written submissions from interested stakeholders on the approach that should be taken to AI regulation in Australia.
The response indicates that the government will take a balanced approach: ensuring that the development and deployment of AI systems in legitimate (but high-risk) settings is safe and reliable, while allowing the use of AI in low-risk settings to continue largely unimpeded.
The government’s immediate focus will be on determining whether mandatory safeguards are the appropriate mechanism to achieve this aim. If so, the next step will be to determine how to implement these safeguards (through existing laws or new regulations).
It remains unclear what scope of “high-risk” applications of AI will be regulated. The government’s discussion paper sought feedback on a definition focused on impacts of AI that are “systemic, irreversible or perpetual” (for example, AI-enabled robots performing medical surgery or self-driving cars making real-time decisions, as compared with a lower-risk application such as screening parcels). The response did not confirm whether the government would proceed with this definition, but noted similar approaches to regulating high-risk applications in the EU.
The government indicated that it will, in the short term:
- further consult on options for introducing new regulatory guardrails with a focus on testing, transparency and accountability;
- take steps to help businesses operationalise safe and responsible AI through:
- developing a voluntary industry AI Safety Standard that draws together existing responsible AI principles, guidelines and frameworks into a best-practice, up-to-date and risk-based AI safety framework;
- commencing work with industry to consider the merits of voluntary labelling and watermarking of AI-generated material in high-risk settings; and
- establishing a temporary expert advisory group to support the government’s development of AI guardrails;
- further consider opportunities to strengthen existing laws to address risks and harms from AI (including, for example, in connection with privacy law reforms and the government’s work on its Cyber Security Strategy);
- take forward commitments it recently made in the Bletchley Declaration (outlined below);
- continue to work internationally to shape global AI governance, including by engaging with international partners to understand domestic responses to risks posed by AI and considering ways to bolster the engagement of Australian experts in key international forums that develop technical standards for AI; and
- further consider opportunities to ensure that Australia can maximise the benefit of AI technology (including the need for an AI Investment Plan).
The interim response sits alongside other recent developments in AI regulation in Australia. These include:
- New South Wales AI Assessment Framework – In July 2024, the NSW government released its updated AI Assessment Framework (formerly known as the AI Assurance Framework), refreshed with a mandatory self-assessment tool designed to guide responsible and safe use of AI, including generative AI solutions, in NSW government agency projects. The framework applies to any NSW government agency project with a budget exceeding $5 million, or any project considered to pose an “elevated risk”. While its use is not compulsory outside NSW government agencies, it provides a useful tool for other organisations to analyse AI system risks, implement mitigation controls and establish accountabilities.
- Framework for Generative Artificial Intelligence in Schools – On 5 December 2023, the Federal Department of Education approved the Australian Framework for Generative Artificial Intelligence in Schools. The framework was developed in consultation with industry stakeholders (including parent and school representatives, teachers, unions, students and academics) and aims to guide responsible and ethical use of AI in ways that benefit and support all people connected with school education.
- Bletchley Declaration – On 3 November 2023, the Australian Government (together with the EU and 27 other countries, including the US, UK and China) signed the Bletchley Declaration at the international AI Safety Summit hosted by the United Kingdom. The declaration commits signatories to collaborate internationally to identify AI safety risks of shared concern and to build risk-based frameworks across countries to ensure AI safety and transparency.
- Ongoing review of related laws – This includes, for example, the review of the Australian Privacy Act. While draft legislation for the proposed reforms has not yet been released, the reforms are likely to bring increased transparency and accountability obligations for organisations that use AI technologies to handle personal information, as well as enhanced enforcement powers for Australia’s privacy regulator. These changes are expected to be implemented as part of the government’s broader review of the regulation of AI and automated decision-making.
What’s happening overseas?
AI regulation in Australia has been evolving alongside the rapidly growing momentum of regulation in international jurisdictions.
Key recent developments include:
- European Union
The EU’s Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive AI law. It entered into force on 1 August 2024 and takes effect in stages over a two-year transition period, with longer timelines for certain high-risk AI systems and for general-purpose AI models already placed on the market.
The EU AI Act takes a risk-based approach to regulating the entire AI lifecycle and establishes obligations for various operators in the AI value chain. It categorises AI systems into four risk categories: unacceptable risk, high risk, limited risk and minimal risk. Most of its obligations are imposed on developers (defined in the EU AI Act as ‘providers’) of high-risk AI systems (for example, medical devices and critical infrastructure management tools).
General-purpose AI models (models with a wide range of applications, such as large language models like GPT-4) are dealt with separately under the EU AI Act. Additional obligations apply to general-purpose AI models that pose ‘systemic risk’.
Like the EU GDPR, the EU AI Act applies extraterritorially (including, in some cases, to Australian organisations) and imposes significant sanctions for non-compliance, including fines of up to €35 million or 7% of global turnover, depending on the infringement and the size of the company.
- United States
The US does not have any overarching AI regulation; AI technology in the US is currently governed by a patchwork of federal and state laws and guidance.
At a federal level, President Biden issued an Executive Order on 30 October 2023 to define an approach to AI adoption and usage for the US government and federal agencies, with the goal of ensuring the “safe, secure, and trustworthy development and use of artificial intelligence”.
Key features of the sweeping order include:
- requiring developers of foundation AI models to share safety test results and other critical information with the US government;
- the development of standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy; and
- the development of guidance for content authentication and watermarking to clearly label AI-generated content.
The US Congress has also hosted hearings and working groups in both the Senate and House to address AI-related issues, including in relation to intellectual property and national security.
At a state level, a number of legislatures have passed AI-related legislation, including laws to improve transparency (for example, so users know when they are interacting with an AI model and what datasets were used to train it), to address sector-specific issues (such as the use of AI models in employment decisions) and to address general AI issues (such as requiring impact assessments).
- China
China has been an early mover in AI regulation and has introduced a suite of regulations for specific AI applications since 2021. However, China has not yet implemented overarching AI regulation.
Regulations introduced include laws addressing:
- the use of algorithm recommendation technologies and deep synthesis technologies (a subset of generative AI) for internet information services;
- the development and use of generative AI technologies in China; and
- ethical review of research and development of AI technologies.
China continues to release and publish further standards for public consultation (including recent standards released for data security and content regulation of generative AI).
- New Zealand
New Zealand has no AI-specific regulation on the immediate horizon. Instead, existing “technology neutral” regulations are likely to be updated and amended to address potential AI harms and to enable innovation. When developing any AI regulation, New Zealand will likely leverage the groundwork laid by larger countries.
Where to next?
It is important to remember that while Australia does not yet have AI-specific laws, there are existing technology-neutral laws that apply to organisations developing, supplying or using AI systems, as well as the directors, senior executives and key personnel of such organisations. These include laws relating to privacy, consumer protection, copyright, work health and safety, anti-discrimination and directors’ duties, as well as sector-specific laws such as those regulating financial services and entities responsible for critical infrastructure.
Australian organisations that have an international connection, including in the EU, could also be caught by the extraterritorial reach of applicable foreign laws.
Given the rapidly evolving regulatory and policy positions in Australia and overseas, and increased focus in this area by governments, regulators and the public, key steps that corporate leaders and organisations in Australia can take include:
- conduct an audit to identify the AI systems and products the organisation uses (or intends to use) and the regulatory frameworks that apply to them;
- conduct a data mapping exercise to understand what data the organisation collects, how it flows through the organisation, and with whom it is shared (given the reliance most AI systems have on data inputs throughout the AI lifecycle);
- adopt a cross-functional approach to building AI skills within the organisation to ensure there is a ‘minimum viable understanding’ of AI systems and associated risks among corporate leaders, front-line teams and operational teams (including IT, procurement, data and privacy, legal, compliance, HR, and ESG teams);
- consider AI-related risks, and the impact of those risks on valuation, in mergers and acquisitions involving companies that leverage AI technology in their products and services or that use AI solutions internally;
- consider implementing a responsible AI framework, which covers the organisation’s approach to AI governance, and policies and procedures for:
- the development and deployment of AI systems by the organisation;
- use of AI impact assessments;
- approving AI systems and use cases, including:
- vendor due diligence that should be conducted before procuring goods and services that may incorporate or use AI systems;
- contracting in relation to the procurement of goods and services that incorporate AI systems (including in relation to transparency, accountability, use of the AI system for automated decision making, and use of the organisation’s data for AI training); and
- staff training and education; and
- consider adopting the voluntary AI Safety Standard once developed by the Australian government.
Taking these steps will likely simplify compliance with existing laws and any future AI regulation in Australia, and help organisations build and maintain trust with their customers, suppliers and other stakeholders. For more information, please contact Sophie Bradshaw, Sarah Gilkes, Janice Yew, Verity Stone and Adam Rose.