The World’s First AI Rulebook: The EU AI Act and Its Impact on Australia and New Zealand

The EU AI Act came into force on 1 August 2024. As the world’s first comprehensive AI regulation, it takes a risk-based approach to regulating the entire AI lifecycle and establishes obligations for various operators in the AI value chain. This article provides a high-level summary of the EU AI Act and its potential implications for New Zealand and Australian businesses.

Is it relevant to me in Australia or New Zealand? 

The short answer is yes.

Similar to the EU GDPR, the EU AI Act has extraterritorial scope and covers any New Zealand or Australian organisation that:

  • puts an AI system into service in the EU under its own trade mark or name;
  • makes the output of an AI system available for use in the EU; or
  • incorporates an AI system in a product it manufactures and puts on the EU market under its own trade mark or name, where that product is subject to third-party conformity assessments under existing EU regulations (for example, toys, vehicles and medical devices).

“Output” can include content, predictions, recommendations, or decisions generated by an AI system.

Additionally, the New Zealand and Australian governments are currently considering whether AI-specific regulation is required and, if so, what form it should take. In doing so, they may look to the EU AI Act and how it is implemented for guidance.

For more information about the state of AI regulation in Australia, the USA, the UK and China, see Navigating AI Regulation in Australia and Beyond: What Corporate Leaders and Organisations Need to Know.

So, what is an AI system, and what does the EU AI Act mean when it talks about risk?

The EU AI Act classifies AI systems according to risk to determine which systems require regulation and which are prohibited entirely.

The EU AI Act sets out the following definitions:

  • ‘AI system’ means ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.
  • ‘risk’ means ‘the combination of the probability of an occurrence of harm and the severity of that harm’.

Risk is then divided into four categories to determine which AI systems are captured by the Act:

  • unacceptable risk: AI systems that pose a clear threat to people’s safety, livelihoods and rights; these are prohibited outright (for example, social scoring systems and manipulative AI);
  • high-risk: AI systems with the potential to harm health, safety, fundamental rights, the environment, democracy or the rule of law; these are subject to strict obligations, including quality assurance, before they can be put on the EU market (for example, medical devices and critical infrastructure management tools);
  • limited risk: AI systems designed to interact with natural persons or to generate synthetic content; these are subject to transparency requirements so that individuals are informed when they are interacting with an AI system or AI-generated content (for example, chatbots and deepfakes); and
  • minimal risk: AI systems that are already widely deployed and are not regulated under the EU AI Act (for example, spam filters and inventory management systems).

A fifth category covers general-purpose AI (GPAI) models and systems, which are trained on large amounts of data and have a wide range of possible uses. This includes large language models such as GPT-4o and Gemini 1.5, which power OpenAI’s ChatGPT and Google’s Gemini.

Effectively, the EU AI Act regulates the development, supply and use of AI systems in the EU market according to the risk they pose.
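
To make the tiered structure concrete, the sketch below expresses the four tiers as a simple decision cascade. It is purely illustrative: the flags, function name and examples are our own simplifications, not terms from the Act, and real classification turns on the Act’s detailed definitions and annexes rather than three booleans.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict pre-market obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "not regulated under the Act"

def triage(prohibited_practice: bool,
           high_risk_use_case: bool,
           interacts_or_generates_content: bool) -> RiskTier:
    """Rough first-pass triage mirroring the Act's four tiers.
    The flags are our own simplifications; real classification
    needs legal review against the Act's definitions and annexes."""
    if prohibited_practice:              # e.g. social scoring, manipulative AI
        return RiskTier.UNACCEPTABLE
    if high_risk_use_case:               # e.g. medical devices, critical infrastructure
        return RiskTier.HIGH
    if interacts_or_generates_content:   # e.g. chatbots, deepfakes
        return RiskTier.LIMITED
    return RiskTier.MINIMAL              # e.g. spam filters

print(triage(False, False, True))        # RiskTier.LIMITED (a customer chatbot)
```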

However, the Act also specifically excludes:

  1. AI systems used exclusively for military, defence or national security purposes.
  2. AI systems specifically designed for scientific research and development.
  3. Research, testing and development of AI systems or models before they are placed on the market or put into service.
  4. Individuals using AI systems for purely personal, non-professional activities.
  5. AI systems released under free and open-source licences, unless they are placed on the market or put into service as high-risk AI systems, as prohibited AI systems, or as certain GPAI models.

Who will be impacted by the EU AI Act?

The Act covers a range of actors in the AI value chain, with most obligations falling on Providers and Deployers of high-risk AI systems. Importers and Distributors will mainly be subject to regulatory compliance verification and documentation obligations.

For reference:

  • Provider is a person that develops, or commissions the development of, an AI system and places it on the EU market or puts it into service in the EU under its own name or trade mark, whether for payment or for free.
  • Deployer is a person in the EU who uses an AI system for a professional activity.
  • Distributor is a person in the supply chain (other than an Importer or the Provider) that supplies an AI system to the EU market.
  • Importer is a person located or established in the EU that supplies an AI system bearing the trade mark of a non-EU established entity to the EU market.

The distinction between Providers and Deployers is crucial, as most of the obligations are imposed on Providers. Companies should also continually reassess their classifications: the above roles are not fixed, and a company can change roles at any time depending on how it uses or modifies an AI system. For example, a Deployer could become a Provider if it customises an AI system that has already been placed on the EU market in a way that changes the intended purpose of the AI system.

What are the requirements?

As most of the obligations are imposed on Providers and (to a lesser extent) Deployers of high-risk AI systems, we have summarised their key obligations below.

Provider obligations: Providers have strict obligations before, during and after the launch of a high-risk AI system. Broadly speaking, Providers must:

  • establish risk and quality management systems across the AI lifecycle;
  • implement data governance;
  • create and maintain technical documentation for each high-risk AI system;
  • provide instructions for use to Deployers;
  • design human oversight into AI systems to enable people to understand their capabilities and limitations;
  • design and build accuracy, robustness and cybersecurity into AI systems, and inform users that this is the case;
  • pass a conformity assessment to verify the AI system complies with the EU requirements; and
  • develop and deploy AI systems transparently.

Deployer obligations: Deployers have obligations relating to their use of a high-risk AI system. Broadly speaking, Deployers must:

  • implement technical and organisational measures so that high-risk AI systems are used in accordance with the Provider’s instructions for use;
  • ensure input data is relevant and minimises bias;
  • monitor high-risk AI systems in accordance with the Provider’s instructions;
  • maintain generated logs for at least 6 months;
  • conduct Data Protection Impact Assessments where AI uses personal information;
  • ensure transparency where AI systems are making decisions affecting people; and
  • conduct fundamental rights impact assessments to consider the impact on rights such as equality and privacy.

How will the Act be governed?

The EU AI Act provides for the establishment of a European AI Office, which will support the implementation of the EU AI Act and monitor compliance by Providers of GPAI models.

What sanctions apply?

The EU AI Act sets out sanctions for non-compliance, including fines ranging from €7.5 million or 1.5% of global annual turnover up to €35 million or 7% of global annual turnover, depending on the infringement and the size of the company (broadly, the higher of the two figures applies, while lower caps apply to SMEs).
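
As a simple worked example of how a turnover-based cap plays out, consider the sketch below. It is illustrative only: the function name and figures are ours, the Act’s actual penalty rules contain further nuances, and nothing here is legal advice.

```python
def fine_cap_eur(fixed_cap_eur: float, turnover_pct: float,
                 global_turnover_eur: float, sme: bool = False) -> float:
    """Illustrative cap calculation: broadly, the higher of the fixed
    amount and the turnover percentage applies, while for SMEs the
    lower of the two applies. Indicative only, not legal advice."""
    pct_cap = turnover_pct * global_turnover_eur
    return min(fixed_cap_eur, pct_cap) if sme else max(fixed_cap_eur, pct_cap)

# Top-tier infringement, company with EUR 2 billion global turnover:
print(fine_cap_eur(35_000_000, 0.07, 2_000_000_000))         # 140000000.0
# Same infringement, a small company with EUR 10 million turnover:
print(fine_cap_eur(35_000_000, 0.07, 10_000_000, sme=True))  # 700000.0
```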

When will the EU AI Act come into force?

The EU AI Act entered into force on 1 August 2024, and its obligations take effect in stages over a 24-month transition period.

The sections of the EU AI Act that will be applicable sooner include:

  • the prohibition of AI systems posing unacceptable risks, which will apply 6 months after entry into force;
  • the codes of practice, which will apply 9 months after entry into force; and
  • the rules for GPAI systems that need to comply with transparency requirements, which will apply 12 months after entry into force.
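
For planning purposes, these milestones can be mapped onto the calendar from the entry-into-force date. A minimal sketch follows; the dates it produces are approximate, since the Act fixes the exact application dates, and the milestone labels are our own shorthand.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)

# Months after entry into force at which each set of rules starts to apply.
milestones = {
    "prohibitions on unacceptable-risk AI": 6,
    "codes of practice": 9,
    "GPAI transparency rules": 12,
    "most remaining provisions": 24,
}

for label, months in milestones.items():
    # Approximate planning dates only; the Act fixes the exact dates.
    print(f"{label}: ~{ENTRY_INTO_FORCE + relativedelta(months=months)}")
```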

Next steps

Businesses that leverage AI in the products and services they offer, or use AI solutions internally, and have an EU connection should establish appropriate governance frameworks to identify, mitigate and monitor AI-related risks, and ensure that their AI systems comply with the EU AI Act.

To prepare for the EU AI Act, some key steps that you should take include:

  1. identifying the AI systems your business uses or supplies;
  2. considering, for each AI system:
    • whether it is excluded from the scope of the EU AI Act (research, military etc.);
    • which risk category applies;
  3. identifying what role the business plays in relation to each AI system (e.g. Provider, Deployer etc.);
  4. with the above in mind, considering what obligations apply to your business; and
  5. identifying what resources, strategic changes and processes are required to ensure compliance by the various deadlines.
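
One way to operationalise these steps is to keep a simple register of AI systems and run each entry through the same triage. The sketch below is purely hypothetical: the record fields, category strings and obligation summaries are our own shorthand rather than terms from the Act, and its output is a starting point for legal review, not a determination.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    excluded: bool   # step 2: out of scope (military, research etc.)?
    risk_tier: str   # step 2: "unacceptable" / "high" / "limited" / "minimal"
    role: str        # step 3: "Provider" / "Deployer" / "Importer" / "Distributor"

def obligations(rec: AISystemRecord) -> str:
    """Step 4: a first-pass read of which obligation set applies.
    Indicative only; actual scoping needs legal advice."""
    if rec.excluded:
        return "outside the scope of the EU AI Act"
    if rec.risk_tier == "unacceptable":
        return "prohibited - cannot be supplied or used in the EU"
    if rec.risk_tier == "high":
        return f"full {rec.role} obligations for high-risk AI systems"
    if rec.risk_tier == "limited":
        return "transparency obligations"
    return "no specific obligations under the Act"

print(obligations(AISystemRecord("support chatbot", False, "limited", "Deployer")))
```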

To support entities that leverage AI, the Future of Life Institute has developed an interactive EU AI Act Compliance Checker, which is available here: https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/. While independent legal advice is recommended, the compliance checker is a useful tool for helping businesses assess whether an AI system is caught by the EU AI Act.

Get in touch

As the EU AI Act begins to reshape the global landscape of AI regulation, it’s crucial for businesses in Australia and New Zealand to seek specific expert advice on how the Act applies to their situation and take proactive steps to ensure compliance. Businesses need to start assessing their AI systems now and implement the necessary governance frameworks to mitigate risks and secure their market position.

Reach out to Sarah Gilkes and Janice Yew for specialist guidance on the complexities of the Act and to help your business thrive in this new regulatory environment.

***
This article was written in collaboration with our allied business, Source. Highly regarded for their commercially-minded approach, Source adds a full suite of in-house style professional services to our depth and breadth of experience at Hamilton Locke. Together with Source and Helios we provide the most practical and effective professional services on the market.
