The integration of Artificial Intelligence (AI) models and systems across all industries is occurring at a rapid pace. Australian organisations are expected to spend more than $3.6 billion on AI systems by 2025. The efficiencies and profitability created by AI mean that vendors that build their AI models and systems well will soon become attractive acquisition targets.
Buyers, however, should take care to ensure that they are aware of the unique risks that need to be tested as part of the due diligence process when acquiring an AI company or AI system assets.
These risks include:
- proper identification of the AI asset;
- establishing ownership of the AI asset and understanding how this can be transferred under the sale agreement (whether a share or business asset sale);
- testing data governance, cyber security and resilience, and compliance with privacy laws;
- testing compliance with existing technology-neutral laws which apply to the development, supply and use of AI systems, together with various AI guardrails, standards and ethical principles for safe and responsible AI; and
- ensuring appropriate warranties for the AI asset (and its related data) are included in the sale agreement with specific warranties appropriate to the type of AI asset and related data sets.
In this article, we explore the five key risks involved when acquiring an AI company.
1) Identification of the asset
The first step in any due diligence should be to identify and understand the AI assets involved in the transaction. Unlike traditional software companies where the core intangible asset is generally copyright in the source code and algorithms, in AI companies the core value usually lies in the underlying data used to train the AI models, the trained models themselves, and any proprietary algorithms or methodologies developed by the target.
2) Establishing ownership
Once the AI assets are identified, the next step is to establish ownership of the assets to ensure these can be properly transferred under the sale agreement. In the context of a share sale, this means ensuring ownership of the AI assets by the target. In the context of an asset sale, this means ensuring the owner of the AI asset is properly identified and joined as a seller to the asset sale agreement.
Due diligence should verify the ownership of source code and algorithms used in the AI model itself and any other components of the AI system. Where the AI asset is generative, due diligence should also be conducted with respect to any copyright works generated by the AI asset. Establishing copyright is made more difficult in Australia, as current copyright law only protects the work of human authors. In this case, diligence should test whether the level of human intervention overlaying the output is sufficient to attract copyright protection.
Data is likely to be key to an AI asset’s value, as AI systems that incorporate machine learning, or general purpose AI systems, are only as good as the data that trains them. However, the concept of data ownership is complicated. Under Australian law, there is no recognised proprietary right that attaches to data in and of itself (unlike the sui generis “database right” in the EU). However, in some cases, ownership rights can accrue in relation to the way data is recorded or aggregated, or the form in which the data is presented.
In this way, there is a bundle of data-related rights that apply to datasets, which may include copyright, confidential information and know-how. These rights are generally addressed through contractual arrangements, which provide for “ownership” and control of the relevant data and the records in which the data is held. Due diligence will need to be conducted on the agreements underpinning these arrangements to ensure that the target has these rights, and the right to transfer such rights to the buyer. In the context of an asset sale, this means novating the relevant contracts to the buyer. In the context of a share sale, this means ensuring the contract is with the target and does not contain any prohibitions on change of control.
3) Data, privacy and cyber security
AI systems generally rely on vast amounts of data to train and operate effectively throughout their entire lifecycle. The more data a company collects and retains, particularly personal information and confidential information, the greater the potential liability if there is a cyber security incident or data breach.
Accordingly, due diligence needs to focus on the target’s data governance and information handling practices. This is particularly the case where personal information forms part of the AI data set that is the subject of the sale agreement.
In certain cases, technical due diligence may need to be conducted to test the strength of the target’s information security systems and controls, as well as any past data breaches. The buyer should also consider how the target’s cyber insurance policy may respond post-transaction, given the risk that a cyber security incident or other data breach may not be discovered or known at the time of completion. Where the data set includes personal information (or the training data used in connection with the AI asset included personal information), legal due diligence needs to test the target’s data collection, storage and usage practices against applicable privacy and data protection laws.
4) Regulatory Compliance Analysis
Currently, Australia has no AI-specific laws.
However, there are many existing laws that regulate AI technology development and use in a technology-neutral way, such as privacy and copyright laws, competition and consumer protection laws, and laws relating to directors’ duties, online safety, anti-discrimination, criminal conduct and sector-specific matters. The list is long.
The Australian Government’s view, following a long period of review and consultation, is that there are gaps and uncertainties in the general law when it comes to some of the known challenges and risks of AI. As such, the Government proposes to introduce mandatory AI guardrails for high-risk AI systems. If implemented, the mandatory AI guardrails would sit alongside the Voluntary AI Safety Standard, released by the Government in September 2024. This standard contains guardrails very similar to the proposed mandatory AI guardrails, but applies whether or not the AI system is high risk.
While Australia’s approach to the regulation of high-risk AI systems settles, it is important for buyers of AI assets to conduct due diligence on compliance with the applicable general laws that apply to the development, supply and use of AI systems, together with the various voluntary frameworks and ethical principles for safe, responsible and transparent AI.
Australian organisations that have an international connection, including in the EU, may also be caught by the extra-territorial reach of applicable foreign laws.
Testing of bias and human rights issues in deployed AI systems and models will also be relevant for a target’s compliance with its ESG requirements.
5) Appropriate warranties
As with any transfer of ownership of rights in assets, the transaction documents will need to provide for the appropriate warranties with respect to any AI assets (and related data).
Additional specific warranties should also be considered where any assigned datasets include personal information, covering the means of collection, the ability to use the information in connection with the AI systems or models, and the right to transfer it to the buyer.
Looking forward
The development and integration of AI technology (whether on a build or buy basis) make this an exciting area to watch from an M&A perspective. However, buyers need to be aware of the risks involved and ensure they have advisors sophisticated in this area to maximise their protection in this rapidly evolving space.
If you wish to discuss any aspect of this article, or for guidance navigating these developments, reach out to Madeleine Kulakauskas, Sophie Bradshaw, Sarah Gilkes, Janice Yew or Matt Dean.