ASIC’s report on AI use offers compelling insights for the financial services sector. We put the report under the microscope and consider its practical application for businesses.
ASIC released a first-of-its-kind report on Artificial Intelligence (AI) at the end of last year, examining the way Australian financial services and credit licensees are adopting and implementing AI solutions in their businesses, and the significant governance gap that has resulted.
The regulator analysed 624 AI use cases across 23 licensees in the banking, credit, insurance and financial advice sectors, reviewing their risk management and governance arrangements for AI as well as their proposed future AI use cases in financial services.
ASIC has found that some licensees are adopting AI more rapidly than their risk and governance arrangements are being updated, which increases the risk of consumer harm. The specific findings include:
- 61% of licensees in the review planned to increase AI use in the next 12 months.
- 92% of generative AI use cases reported were less than a year old, or still to be deployed.
- 57% of all use cases were less than two years old or in development.
- Generative AI made up 22% of all use cases in development.
- Only 12 licensees had policies in place for AI that referenced fairness or related concepts such as inclusivity and accessibility.
- Only 10 licensees had policies that referenced disclosure of AI use to affected consumers.
ASIC’s findings highlight a concern that the pace of AI adoption in financial services may outstrip responsible risk management, potentially exposing consumers to unforeseen risks. They underscore the need for improved governance and transparency measures. ASIC did find that licensees were generally cautious about how AI is currently used in client service delivery, with a preference in the industry to limit customer interactions with AI.
How is AI currently being used in financial services?
Looking more broadly at the use of AI in financial services, the use cases at present can be split into two distinct categories:
- low-risk, accepted AI use cases; and
- emerging, complex AI use cases.
Here are some common examples of these use cases:
| Accepted AI uses | Emerging AI uses |
| --- | --- |
| Predicting credit default risk or monitoring existing credit holders to inform collection strategies. | Predicting the probability of recovery for defaults and/or arrears, and prioritising customers based on this. |
| Optimising marketing communications. | Generative AI drafting marketing copy. |
| Chatbots to answer simple questions, or cashflow forecasting. | Generative AI use by customer-facing staff. |
| Transaction monitoring for fraud detection, and biometric information for identity verification. | Identifying customers who are more susceptible to scams, possible fake accounts, or account takeover. |
| Internal process efficiency, for example business analytics, quality assurance, and documentation indexing or triaging. | Identification of financial hardship or vulnerability indicators, anomaly detection to identify non-compliance and internal errors, or automated data cleaning, verification and integrity checks. |
| Actuarial models for risk, cost and demand modelling, and support of the claims process, e.g. claim triaging. | Machine learning to enhance efficiency in the underwriting process, and use of generative AI and natural language processing to extract key information. |
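To make the accepted column more concrete, the sketch below shows, in deliberately simplified form, how a licensee might train a model to predict credit default risk, one of the established use cases listed above. It is purely illustrative: the data is synthetic, and the features, coefficients and scikit-learn pipeline are our own assumptions rather than anything drawn from ASIC’s report or any licensee.

```python
# Illustrative only: a minimal credit-default risk model of the kind listed
# under "accepted AI uses". All data and feature choices are synthetic
# assumptions, not taken from ASIC's report or any licensee.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical applicant features: income, existing debt, months in arrears.
income = rng.normal(70_000, 20_000, n)
debt = rng.normal(20_000, 10_000, n)
arrears = rng.poisson(0.5, n)

# Synthetic default label loosely tied to debt-to-income and arrears history.
risk = 0.00003 * (debt - 0.2 * income) + 0.8 * arrears - 2.0
default = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

X = np.column_stack([income, debt, arrears])
X_train, X_test, y_train, y_test = train_test_split(X, default, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
print("Hold-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Even for a well-understood model like this, ASIC’s findings suggest the harder questions sit around it: who owns the model, how its fairness is tested, and whether affected consumers know it is being used.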
What is the risk to consumers?
ASIC’s primary concern is that risk and governance arrangements and controls are not properly applied in practice, even where the AI models or techniques themselves have been tested.
Errors, injustice and bias – ASIC found gaps in how licensees applied their risk management frameworks, including how they considered risks to consumers such as potential errors, unfair treatment and bias in automated decisions. The concern is that this could lead to financial harm or discrimination, with limited avenues of remedy for customers.
Lack of regulatory compliance and consumer distrust – without strong governance, AI can amplify errors and bias through poor oversight or a lack of accountability, especially where licensees lack frameworks to manage AI risks effectively. This raises concerns around transparency, regulatory compliance and overall consumer trust.
Miscategorisation – where AI models in marketing tools were used to identify customers outside the target market for a financial product, a lack of oversight of those models created a risk of breaching the design and distribution obligations.
Sensitivity and privacy – in one insurance claims example, sensitive personal information was provided to third-party AI models as part of the claims assessment without the use of AI being disclosed. That lack of disclosure risks eroding consumer trust.
What are the regulatory/compliance/governance gaps?
ASIC mapped licensees’ AI governance maturity against the extent of their AI use, producing four quadrants. Licensees fell into those with:
- significant AI use but low AI governance
- both low AI use and low AI governance
- low AI use and high AI governance
- significant AI use and AI governance maturity.
Those with both significant AI use and mature AI governance were considered safer than those with extensive AI use but weaker AI governance.
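As a rough illustration of this quadrant view, the snippet below sorts a licensee into one of the four groups based on simple scores for AI use and governance maturity. The 0–10 scale and the cut-off are hypothetical assumptions of ours; ASIC’s report does not prescribe any numeric method.

```python
# Hypothetical illustration of ASIC's four-quadrant view: AI use versus
# AI governance maturity. The 0-10 scores and the threshold of 5 are our
# own assumptions, not figures from the report.
def quadrant(ai_use: float, governance: float, threshold: float = 5.0) -> str:
    high_use = ai_use >= threshold
    high_gov = governance >= threshold
    if high_use and not high_gov:
        return "significant AI use but low AI governance"   # highest-risk group
    if not high_use and not high_gov:
        return "low AI use and low AI governance"
    if not high_use and high_gov:
        return "low AI use and high AI governance"
    return "significant AI use and AI governance maturity"  # safest group

print(quadrant(ai_use=8, governance=3))  # -> significant AI use but low AI governance
```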
ASIC’s analysis found that approximately half of licensees had specifically updated their risk management policies to address AI risks, while other licensees relied on existing policies and procedures.
In some licensees, not all AI risks were documented or considered, and in others, policies relating to AI risks were not operationalised consistently. This inconsistency in the application of AI policies and procedures has led ASIC to question the readiness of the industry to implement AI more holistically.
What does this mean for current licensees?
ASIC found that AI maturity was most evident in those licensees that took a strategic and centralised approach to AI governance.
To integrate AI into their businesses effectively, licensees should:
- develop a clear AI strategy
- address AI in their risk appetite statement
- demonstrate ownership and accountability for AI usage at the organisational level, including at Board level
- develop AI-specific policies and procedures that take a risk-based approach, including considerations around AI bias and ethics
- consider what human resources, skills and capabilities they need to invest in to develop their AI maturity.
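One practical way to operationalise several of these points is to maintain a structured register of AI use cases that records ownership, risk rating, bias review and consumer disclosure for each use. The sketch below is one possible shape for such a record; the field names and example values are our own illustrative assumptions, not a format required by ASIC.

```python
# Illustrative sketch of an AI use-case register entry. Field names and
# example values are assumptions; ASIC does not prescribe this format.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    business_owner: str            # named owner, supporting accountability
    board_reported: bool           # whether the use case is visible at Board level
    risk_rating: str               # e.g. "low", "medium", "high" under a risk-based approach
    consumer_facing: bool
    bias_review_completed: bool    # documented fairness / bias assessment
    disclosure_to_consumers: bool  # whether AI use is disclosed to affected consumers
    controls: list[str] = field(default_factory=list)

register = [
    AIUseCase(
        name="Chatbot for simple customer queries",
        business_owner="Head of Customer Operations",
        board_reported=True,
        risk_rating="medium",
        consumer_facing=True,
        bias_review_completed=True,
        disclosure_to_consumers=True,
        controls=["human escalation path", "quarterly output sampling"],
    ),
]
```

A register along these lines gives the Board a single artefact to review when assessing whether governance is keeping pace with AI use.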
What’s over the horizon in the regulatory landscape?
ASIC continues to monitor the evolving AI regulatory landscape. There is a clear push from government and regulators to define high-risk AI more precisely and to introduce guardrails governing the design, development and deployment of high-risk AI.
ASIC will continue to focus on technology-enabled financial misconduct and the poor use of AI, with particular attention to risk management and governance arrangements. It will also contribute to Australia’s development of AI-specific regulation, engage with international regulators, and take enforcement action where a licensee’s use of AI breaches its obligations, specifically the obligations to maintain adequate IT infrastructure and to act efficiently, honestly and fairly.
For more information, please contact Erik Setio or Nicholas Pavouris.