Financial Stability Review – September 2024

4.1 Focus Topic: Financial Stability Implications of Artificial Intelligence

Artificial intelligence (AI) is already having a substantial impact on how the financial system operates, including in its core functions. Many of these impacts are positive – AI can reduce costs and improve operational efficiency. Alongside these benefits, however, is the potential for AI to amplify existing risks and introduce new ones. Recognising these potential risks, the Council of Financial Regulators (CFR) agencies are engaging with industry within the existing supervisory framework to understand and monitor the adoption of AI in the financial system.[1]

This Focus Topic considers the role of AI in the financial system and some of its implications for financial stability.

Key definitions

Artificial intelligence (AI) refers to the ability of a computer system to perform tasks that would typically require human attributes – such as learning, reasoning and making decisions. The field of AI encompasses a range of techniques suited to different tasks. Examples include machine learning, which enables computer programs to learn from large datasets; and natural language processing, which enables computer programs to understand and process human language (e.g. speech recognition).
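
As a minimal illustration of 'learning from data', the sketch below fits a straight line to example data points using ordinary least squares – the simplest form of machine learning. The data, relating hypothetical loan sizes to loss rates, are illustrative only.

```python
def fit_line(xs, ys):
    """Learn a linear relationship y ≈ a*x + b from example data.

    Ordinary least squares – the simplest case of a program
    'learning' parameters from data rather than being hand-coded.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    b = mean_y - a * mean_x
    return a, b


# Hypothetical data: loan size (x, $'000) against observed loss rate (y, %).
xs = [10, 20, 30, 40, 50]
ys = [0.5, 0.9, 1.4, 1.8, 2.3]
print(fit_line(xs, ys))  # slope and intercept learned from the examples
```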

Generative AI (GenAI) is an emerging subfield within AI. GenAI has the ability to create new content such as text, images, voice, video and code in response to a prompt entered by a user.

Supply and demand factors have driven the adoption of AI.

On the supply side, advancements in AI capabilities and access have played a crucial role in its adoption. Continuous improvements in AI tools and computational power have made AI more accessible and effective for financial institutions. Additionally, the increased availability of large data sources and improved IT infrastructure, such as cloud computing, have reduced the barriers to adopting AI, making it easier for financial institutions to integrate AI into their operations.

On the demand side, the adoption of AI offers opportunities to enhance profitability through revenue generation, cost reduction and increased productivity. Competitive pressures to innovate and stay ahead in an increasingly digital landscape have encouraged financial institutions to explore use cases for AI. Customers expect personalised services, faster transactions and greater protection from scams and cyber-attacks – all of which can be supported by AI. Additionally, AI tools can assist in regulatory compliance, such as meeting anti-money laundering (AML) and know-your-customer (KYC) requirements, and contribute to risk management frameworks, by identifying patterns and predicting potential risks, among other things.

The use of AI in the financial system has brought economic benefits.

Financial institutions have been using AI in both back- and front-office operations to increase efficiency and productivity. AI has helped to automate processes, improve decision-making and enhance risk management practices in some areas. Applications include:

  • assessing borrower creditworthiness and automating loan approvals
  • executing trades based on market data, historical patterns and real-time signals
  • monitoring transactions to identify unusual patterns, such as large withdrawals, that may indicate fraud (a simple illustration follows this list).
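
To make the transaction-monitoring application concrete, the sketch below flags withdrawals that deviate sharply from a customer's history using a simple z-score rule. It is a minimal sketch with hypothetical data and thresholds, not a production fraud system, which would combine many more signals and learned models.

```python
import statistics


def flag_unusual_withdrawal(history, new_amount, z_threshold=3.0):
    """Flag a withdrawal that deviates sharply from a customer's history.

    A minimal z-score rule: the withdrawal is flagged if it sits more
    than `z_threshold` standard deviations above the historical mean.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_amount != mean  # any change from a constant history is unusual
    return (new_amount - mean) / stdev > z_threshold


# Hypothetical history of withdrawal amounts (dollars) for one customer.
history = [120, 80, 150, 95, 110, 130, 100]
print(flag_unusual_withdrawal(history, 5000))  # True – far outside the usual range
print(flag_unusual_withdrawal(history, 140))   # False – consistent with history
```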

Recent advancements in GenAI represent a step-change in potential use cases, although end-to-end automation without human intervention is still in the testing and experimentation stage.[2] Australian financial institutions have begun using more advanced AI tools to enhance productivity in areas such as customer service, marketing, fraud detection and regulatory compliance.[3] Initial examples include using GenAI to:

  • review lengthy documents against specific criteria, such as policy requirements (illustrated in the sketch after this list)
  • provide real-time assistance to employees to support customers more efficiently
  • help developers write better code faster.
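
As an illustration of the document-review use case, the sketch below shows how a reviewer might prompt a GenAI model to check a document against policy criteria. The `generate` callable is a hypothetical stand-in for whatever in-house or vendor text-generation endpoint an institution uses; no particular API is assumed.

```python
def review_document(document: str, criteria: list[str], generate) -> str:
    """Ask a GenAI model whether a document meets each policy criterion.

    `generate` is a placeholder: a function that takes a prompt string
    and returns the model's text response.
    """
    criteria_text = "\n".join(f"- {c}" for c in criteria)
    prompt = (
        "You are reviewing a document against policy requirements.\n"
        f"Requirements:\n{criteria_text}\n\n"
        f"Document:\n{document}\n\n"
        "For each requirement, state whether it is met, quote the relevant "
        "passage, and flag anything ambiguous for human review."
    )
    return generate(prompt)
```

The final instruction deliberately routes ambiguous cases to a human, consistent with the observation above that end-to-end automation without human intervention remains at the testing and experimentation stage.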

Widespread use of AI brings both benefits and risks for financial stability.

Some applications of AI can enhance financial stability. Carefully designed and tested algorithms can improve financial firms’ operational efficiency, risk management and regulatory compliance. These applications could extend to better controls on performance issues in systems and models; improved risk assessment, management and pricing; and new tools for effective regulatory compliance (RegTech) and supervision (SupTech).[4]

But AI could also contribute to financial system vulnerabilities and change how stress transmits through the system. Assessing the impact of AI at the system level requires an understanding of the compounding and dynamic effect of changes in firms’ behaviour, which is far from straightforward. More generally, how AI-related risks could interact with other risks and vulnerabilities in the global economy and financial system, including geopolitical risk, is largely unknown as there is limited relevant experience to draw on.

Four types of risk are commonly identified in the context of AI; each is explained below.

Risk #1 – Operational risk from concentration of service providers

If financial institutions become overly reliant on a small number of AI and related third-party service providers, it could create vulnerabilities due to a single point of failure. Most financial institutions will have to rely on a few external AI providers due to a lack of in-house capabilities to develop or train AI models. Similarly, there are a limited number of cloud platforms that can provide the high computing power required by AI while meeting banks’ regulatory compliance requirements.

Risk #2 – Herd behaviour and market correlation

The wide availability of easy-to-access AI solutions has supported strong adoption. The increased use of AI for risk assessments, trading, lending and insurance pricing, coupled with limited diversification of providers, models and data sources, may lead to higher correlation within markets. This, in turn, could exacerbate herd behaviour and amplify the transmission of shocks through the financial system. Similarly, reduced diversity of behaviour and strategy within markets, resulting from the use of common AI platforms and models, might increase correlation across markets and the risk of contagion.
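
A stylised simulation can illustrate this mechanism. In the sketch below, each trader's buy/sell signal blends a market-wide model output with private judgement; the more weight traders place on the shared model, the more often the market becomes one-sided. All parameters are illustrative assumptions.

```python
import random


def crowded_day_share(n_traders=100, n_days=250, shared_weight=0.9, seed=1):
    """Fraction of days on which more than 80% of traders trade the same way.

    Each trader's signal mixes a shared model output with private
    information; `shared_weight` near 1 means everyone leans on the
    same model, so decisions become highly correlated.
    """
    rng = random.Random(seed)
    crowded = 0
    for _ in range(n_days):
        common = rng.gauss(0, 1)  # output of the shared model that day
        buys = 0
        for _ in range(n_traders):
            private = rng.gauss(0, 1)  # the trader's own information
            signal = shared_weight * common + (1 - shared_weight) * private
            buys += signal > 0
        if buys > 0.8 * n_traders or buys < 0.2 * n_traders:
            crowded += 1
    return crowded / n_days


print(crowded_day_share(shared_weight=0.9))  # most days are one-sided
print(crowded_day_share(shared_weight=0.1))  # trades rarely crowd
```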

Risk #3 – Increased cyber threats

Advances in AI have already increased the number and sophistication of cybersecurity threats and cyber-attacks that could significantly disrupt the financial system. The emergence of GenAI has led to an increase in credible misinformation and scam content produced by malicious actors – such as false news and deepfake images, videos or audio material. This material has become increasingly difficult to identify and can cause financial losses and service disruptions, and erode trust in the targeted institution. At scale, this could amplify volatility and increase funding and liquidity vulnerabilities, affecting the entire financial system.

Risk #4 – Risks around models, data and governance

AI models – especially large language models (LLMs) and GenAI – are complex and opaque, making it difficult to assess their reliability. Concerns range from a simple mistake or inaccurate risk assessment propagating across many financial market participants, to a commonly shared ‘AI hallucination’ that creates false impressions of market conditions with widespread influence. Ultimately, this could compromise end-user interpretation and decision-making.

Data quality is also a complex issue that depends on factors such as quantity, representativeness and transparency of sources. High-quality data is essential for training the models and ensuring their reliability.
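
As a concrete illustration of these factors, the sketch below runs some basic pre-training checks on a dataset: quantity (row count), completeness (missing values) and a crude representativeness test (no group vanishingly small). The thresholds are illustrative assumptions, not regulatory standards.

```python
def basic_data_quality_checks(records, group_key, min_rows=10_000,
                              max_missing_share=0.05, min_group_share=0.01):
    """Return a list of data-quality issues found in `records`.

    `records` is a list of dicts (one per row); `group_key` names a
    field whose groups should all be reasonably represented.
    """
    issues = []
    if len(records) < min_rows:
        issues.append(f"too few rows: {len(records)} < {min_rows}")

    if records:
        missing = sum(1 for r in records if any(v is None for v in r.values()))
        if missing / len(records) > max_missing_share:
            issues.append(f"missing-value share {missing / len(records):.1%} too high")

        counts = {}
        for r in records:
            counts[r.get(group_key)] = counts.get(r.get(group_key), 0) + 1
        for group, count in counts.items():
            if count / len(records) < min_group_share:
                issues.append(f"group {group!r} underrepresented: {count / len(records):.2%}")
    return issues


# Hypothetical usage: rows from a loan book, grouped by region.
sample = [{"amount": 100, "region": "A"}, {"amount": None, "region": "B"}]
print(basic_data_quality_checks(sample, "region"))
```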

Developing the proper controls for governance and accountability is not straightforward. Yet, effective governance is essential to ensure that the benefits of AI are not outweighed by unexpected, potentially systemic consequences in the future.

There are laws and regulations around the use of AI in Australia.

The use of AI is subject to a range of existing laws and regulations. The Australian Government’s interim response to the consultation on Safe and Responsible AI in Australia noted:

[B]usinesses and individuals who develop and use AI are already subject to various Australian laws. These include laws such as those relating to privacy, online safety, corporations, intellectual property and anti-discrimination, which apply to all sectors of the economy.[5]

Internationally, jurisdictions have taken different positions, and some remain undecided, on whether the risks associated with AI technology can be addressed through extensions of existing regulatory frameworks, or whether new approaches are necessary.

Over the period ahead, Australian financial sector regulators will continue to rely on the existing regulatory frameworks. These were designed to be high-level, principles-based and technology-neutral, such as the Australian Prudential Regulation Authority’s (APRA) Prudential Standard ‘CPS 230 Operational Risk Management’. Should concerns arise that cannot be addressed by the current regulatory framework, targeted initiatives may need to be considered. The regulators continue to engage with industry as part of their supervisory processes, and APRA recently outlined its position to entities that wish to start using advanced AI models.

The Australian Government is coordinating a national approach to developing guardrails on the use of AI. Following the launch of the consultation on Safe and Responsible AI in Australia, the Government announced in January 2024 that it was considering introducing mandatory guardrails to promote the safe design, development and deployment of AI systems across the economy.[6] CFR agencies are engaged in a range of initiatives related to this work program, such as the Safe and Responsible AI work led by the Department of Industry, Science and Resources.

Endnotes

CFR (2024), ‘Quarterly Statement by the Council of Financial Regulators – June 2024’, Media Release No 2024-02, 11 June. [1]

OECD (2023), ‘Generative Artificial Intelligence in Finance’, December. [2]

McCarthy Hockey T (2024), ‘Taking Flight: Navigating the New Challenges Posed by Generative Artificial Intelligence’, Speech to the AFIA Risk Summit, 22 May. [3]

RegTech is the use of new technology in regulatory monitoring, reporting and compliance. SupTech is the use of technology by supervisors to deliver innovative and efficient supervisory solutions that will support a more effective, flexible and responsive supervisory system. [4]

Department of Industry, Science and Resources (2023), ‘Safe and Responsible AI in Australia’, June. [5]

Ministers for the Department of Industry, Science and Resources (2024), ‘Action to Help Ensure AI is Safe and Responsible’, Media Release, 17 January. [6]