Used wisely, AI tools can enhance the corporate decision-making process. There are risks, however, in directors relying too heavily on, or blindly trusting, AI output – and they need to be able to justify its use in the context of their legal duties and responsibilities.
Need to know:
- AI use in workplaces is soaring, but directors must tread carefully or risk breaching their duties under the Corporations Act 2001 (Cth).
- Directors should maintain an enquiring mind in relation to AI tools. This means having a proper understanding of and strategy for how AI is used in the business and interrogating AI outputs in the context of decision-making and market statements.
Decision-making and AI in the context of directors’ duties
The Corporations Act 2001 (Cth) imposes a statutory duty on directors to discharge their functions with care and diligence. This duty is assessed objectively, having regard to what a reasonable person in the director’s position would have done in the circumstances. The content of the duty is determined by balancing the foreseeable risk of harm to the company flowing from the conduct in question against the potential benefits that could reasonably be expected to flow from that conduct.
Risks for directors relating to AI include:
- making public misstatements about a company’s AI capabilities for the purposes of gaining a competitive advantage or improving the company’s reputation in the market (known as ‘AI washing’);
- making decisions based on AI-generated information without undertaking appropriate due diligence on the model’s inputs or algorithms; and
- exposing the company to the risk of harm associated with utilising untested AI models.
Balancing innovation and obligation
The use of AI by directors can be a double-edged sword. On the one hand, it can enhance operational efficiency and support more informed, data-driven decision-making. On the other, its use carries inherent risks, including data privacy concerns, algorithmic bias, cybersecurity threats, and the opacity of many AI systems, whose outputs can be difficult to understand or verify.
Interrogating AI outputs
Directors should maintain an enquiring mind when assessing the effectiveness and reliability of AI applications. This includes critically evaluating the data and insights generated by AI systems and seeking to understand the underlying logic and decision-making processes.
This can be challenging as many AI models, particularly complex machine learning systems such as large language models using deep neural networks, operate as “black boxes”. These models derive outputs from statistical correlations in training data rather than transparent rule-based logic, making it difficult to trace how conclusions are reached or to assign accountability with precision.
To mitigate this risk, directors should actively engage with AI technologies, aiming to understand their capabilities and limitations. For example, AI models that incorporate explainability mechanisms, such as reasoning traces or model interpretability layers, can provide greater visibility into the logical steps or data correlations the AI model used to reach a particular conclusion.
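By way of illustration, the sketch below shows one common interpretability technique – permutation importance – which measures how much a model’s accuracy degrades when each input is scrambled, revealing which inputs the model leans on most. It is a minimal, generic example using scikit-learn with hypothetical feature names, not a depiction of any particular vendor’s tooling.

```python
# Minimal sketch: surfacing which inputs drive a model's output,
# using scikit-learn's permutation importance (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for business data; feature names are hypothetical.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
features = ["revenue_growth", "churn_rate", "debt_ratio", "headcount"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each input degrade
# the model's accuracy? Larger drops indicate heavier reliance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```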
A failure to engage with these risks, or a wilful disregard of them, may amount to a breach of the duty to act with care and diligence.
Exercising business judgement: discharging the duty when using AI
Directors are ultimately responsible for making decisions in the best interests of the company.
While AI can identify patterns in large datasets and provide valuable insights to support decision-making, it does not possess judgement, common sense, or a theory of mind. Its outputs are based on data-driven correlations rather than human reasoning or ethical understanding. As a result, AI-generated outputs may read very well, but may conflict with broader business values or stakeholder considerations, particularly in contexts involving ethical nuance, strategic trade-offs, or long-term vision.
To counter this, emerging generative AI models increasingly incorporate techniques such as chain-of-thought prompting or reasoning layers, which simulate intermediate logical steps before arriving at a conclusion. These features allow directors to engage more deeply with the model’s rationale, offering opportunities to challenge, test, and contextualise outputs within the company’s specific operational and strategic environment.
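As a rough illustration of chain-of-thought-style prompting, the sketch below asks a model to set out its data sources, assumptions and risks before giving a recommendation. It is a generic Python example; `call_model` is a hypothetical placeholder for whichever LLM API the company actually uses, and the question is invented.

```python
# Illustrative chain-of-thought-style prompt. `call_model` is a
# hypothetical stand-in for the company's actual LLM API client.
def call_model(prompt: str) -> str:
    # Placeholder: in practice this would call the vendor's API.
    return "[model response would appear here]"

question = "Should we expand distribution into Market X next quarter?"
prompt = (
    "You are assisting a board of directors.\n"
    f"Question: {question}\n"
    "Before giving a recommendation, set out your reasoning step by step:\n"
    "1. the data you are relying on;\n"
    "2. the key assumptions you are making;\n"
    "3. the main risks and counterarguments; and\n"
    "4. only then, your conclusion.\n"
    "Flag any step where the underlying data is uncertain."
)

# The intermediate steps give directors something concrete to
# challenge and test, rather than a bare conclusion.
print(call_model(prompt))
```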
Bias and decision-making
If used responsibly, AI can support directors in making better decisions by analysing large volumes of data, identifying patterns, and generating insights that would be difficult to uncover manually.
Traditionally, many boardroom decisions have relied heavily on instinct or limited data analysis. The integration of AI into corporate governance allows directors to base decisions on more comprehensive assessments of corporate performance and industry trends, potentially reducing bias and limiting the influence of internal politics or personal agendas.
However, AI-generated results should not be accepted at face value. Directors must possess sufficient knowledge and expertise in the relevant subject matter to critically assess AI outputs, identify inconsistencies, and recognise when technology and algorithmic reasoning may be flawed, incomplete, biased, or contextually inappropriate.
Earlier this year, Justice Michael Lee commented that
“[p]eople just can’t keep on saying that, ‘Oh well, it’s all too difficult for us to read the material that’s presented to a board.’ … and what you’re effectively saying is that they can’t be expected to do all the work that the company is expecting to do because they can’t be expected to read all the materials”.1
While he did not mention AI, Justice Lee warned fee-earning directors to “do the work” no matter how voluminous or complex the material. His Honour made clear that directors cannot abdicate responsibility (to human actors or otherwise) and must personally engage with the documents, data and decisions that come before them. On Justice Lee’s view, reviewing hundreds of pages may be tedious, but it is part of the role.
These comments reflect a broader concern that AI could tempt directors to take shortcuts or to become careless and over-reliant on it, leading companies astray. A substantial drawcard of AI is its ability to synthesise voluminous or complex material into a few sentences. But directors need to take care when relying on AI in decision-making, ensuring that decisions are based on accurate data that has been appropriately interrogated, particularly where critical decisions affecting shareholder value are at stake.
ASIC’s view
To date, there has been no formal guidance from ASIC on the use of AI by directors. However, in a keynote address, ASIC Chair Joe Longo observed that “current directors’ obligations under the Corporations Act aren’t specific duties – they’re principle based. They apply broadly, and as companies increasingly deploy AI, this is something directors must pay special attention to, in terms of their directors’ duties.”
That said, ASIC has urged stronger AI governance for Australian financial services and credit licensees, and can be expected to pursue enforcement action against companies and directors for “AI washing”, as well as for potential breaches of statutory duty associated with the use of AI.
Practical tips for directors
Directors should ensure they possess sufficient technological understanding and agility to adapt governance and operational protocols as needed.
At a practical level, to discharge their legal obligations and minimise legal and reputational risks, directors should, at a minimum:
- educate themselves about AI tools and how they are being used in the business;
- carefully assess the risks and benefits of adopting AI and ensure the business makes necessary adjustments to governance frameworks, particularly in relation to employees, customers, stakeholders, environmental impact, and cybersecurity threats. Ethical and responsible AI use should be embedded in company strategy, especially in sensitive areas like recruitment and data analysis, where AI systems may unintentionally reinforce bias; and
- ensure there is proper oversight of AI-related risks, including privacy and copyright concerns, vulnerability to cyberattack, and issues of bias, accuracy and data quality; and ensure that any AI systems the company uses have auditable logs, transparent architecture and, where appropriate, interfaces that show the model’s reasoning or allow users to query how a conclusion was reached (a minimal logging sketch follows this list).
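As a simple illustration of the kind of auditable logging referred to above, the sketch below appends one tamper-evident record per AI interaction, hashing the prompt and output so later alteration is detectable. The field names and file format are assumptions for illustration only, not a prescribed standard.

```python
# Minimal sketch of an auditable AI-interaction log (illustrative;
# field names and format are assumptions, not a prescribed standard).
import datetime
import hashlib
import json

def log_ai_interaction(path, model_id, prompt, output, reviewer):
    """Append one record per AI query, hashing the prompt and output
    so any later tampering with the record is detectable."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "reviewed_by": reviewer,  # the human who interrogated the output
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: record a board-pack summarisation query.
log_ai_interaction("ai_audit.jsonl", "vendor-model-v1",
                   "Summarise the Q3 board pack",
                   "summary text (placeholder)",
                   reviewer="company.secretary@example.com")
```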
Directors should also stay informed of both domestic and international AI governance trends. Globally, regulatory frameworks such as the EU Artificial Intelligence Act and the proposed U.S. Algorithmic Accountability Act are introducing risk-tiered compliance obligations. These include mandatory impact assessments, transparency requirements, and the right to explanation. While Australia has not yet introduced specific AI legislation for directors, legal exposure may still arise under existing obligations (e.g., misleading conduct, breach of privacy, or negligence in supervision).
AI’s place in good corporate governance
While AI is not a substitute for human directors, it can be a useful tool to augment and enhance corporate governance. AI may help boards make better decisions, identify threats and opportunities, and manage risks with greater precision. It can also reduce human bias and mitigate the influence of personal agendas, helping to ensure strategic decisions are made in the best interests of the company.
As AI becomes more embedded in corporate culture, directors must act diligently to balance innovation with accountability. It is the director’s role to understand the technology, interrogate its outputs and ensure any decisions align with the company’s values and legal obligations.
For more information, please contact Peter Williams, Benny Sham, or Christina Hooper.
1. https://www.afr.com/companies/games-and-wagering/judge-puts-directors-on-notice-if-you-take-the-fees-do-the-work-20250527-p5m2hy