

How AI can undermine financial stability 

Published 8 March 2024 in Finance • 6 min read

As AI is increasingly used in the financial system, it exacerbates existing sources of instability and creates new ones. What we should be worried about, say Jon Danielsson and Andreas Uthemann, is the interaction between the technology and the sources of instability.

The rapidly growing use of AI promises much-improved efficiency in the delivery of financial services – but it will also create new threats to financial stability that arise when AI vulnerabilities interact with frailties in the financial system.

AI is now widely applied to such tasks as risk management, asset allocation, credit decisions, fraud detection, and regulatory compliance. The financial authorities already employ AI for low-level analysis and forecasting. They are expected to expand its use to designing, monitoring, and enforcing financial regulations, identifying and mitigating financial instabilities, and resolving failing institutions and crises.

Four channels of potential instability

Building on previous work in this field, we identified four channels through which the use of AI could pose risks to society.

1. Malicious use of AI by human operators

Malice is a particular concern in the financial system because it is full of highly resourced, profit-maximizing economic operators, many of whom are unconcerned about the social consequences of their activities. These users can bypass controls and change how the system works, either by manipulating AI engines directly or by using them to find loopholes that evade oversight. They may even deliberately create market stress, which can be highly profitable for those forewarned.

We expect the most common malicious use of AI will be by employees of financial institutions who take care to stay on the right side of the law. AI will also likely facilitate illegal activity by rogue traders and criminals, as well as by terrorists and nation-states aiming to create social disorder.

2. Misinformed use of and overreliance on AI

The second area of concern arises when AI users are misinformed about its abilities yet strongly dependent on it. This is most likely to happen when data-driven algorithms, such as those used by AI, are asked to extrapolate to areas where data is scarce and objectives are unclear, which is very common in the financial system.

AI engines are designed to provide advice even when they have very low confidence about the accuracy of their answer. They can make up facts or present arguments that sound plausible but would be considered flawed or incorrect by an expert – both instances of the broader phenomenon of “AI hallucination.” The risk here is that the AI engines will present confident recommendations about outcomes they know little about.


3. AI misalignment and evasion of control

The third channel of instability emerges from the difficulties in aligning the objectives of AI with those of its human operators. AI is much better at handling complexity than humans, and while it can be instructed to behave like a human, there is no guarantee it will do so, and it is almost impossible to pre-specify all the objectives AI has to meet. This is a problem, since AI is very good at manipulating markets and is not concerned with the ethical and legal consequences of its actions unless explicitly instructed.

Research has shown how AI can spontaneously choose to violate the law in its pursuit of profit. In one experiment that used GPT-4 as a stock-trading agent, the engine was told that insider trading was unacceptable; yet when it was given an illegal stock tip, it proceeded to trade on it and then lied to its human overseers: an example, perhaps, of AI mirroring the illegal behavior of humans.

The superior performance of AI can destabilize the system even when it is only doing what it is supposed to do. More generally, AI will find it easy to evade oversight because it is very difficult to patrol an almost infinitely complex financial system. AI can keep the system stable while simultaneously aiding the forces of instability. In its mastery of complexity, it is always one step ahead of humans: the more we use AI, the harder the computational problem facing the authorities becomes.

4. Risk monoculture and oligopolies

The final risk area emerges from the business model of those companies designing and running AI engines. AI analytics businesses depend on three scarce resources: computers with the requisite GPUs, human capital, and data. An enterprise that controls the biggest share of each will likely occupy a dominant position in the financial AI analytics business. As a result, the AI industry is being pushed towards an oligopolistic market structure dominated by a few large vendors. The end result is uniformity across the sector and an absence of the competitive edge that might detect systemic fragilities earlier.

Because the oligopolistic nature of the AI analytics business increases systemic financial risk, it is cause for concern that, in the recent wave of data-vendor mergers, neither the competition authorities nor the financial authorities appear to have fully appreciated the potential for increased systemic risk arising from oligopolistic AI technology.

Ensuring benefit outweighs risk

The expanding use of AI in the private and public sectors promises enormous efficiency gains and cost savings. While concerns about how AI could destabilize the financial system might make us cautious in adopting it, we suspect they will not slow adoption. Technology is often met with initial skepticism, but as it becomes apparent that it outperforms what came before, it is increasingly trusted.

Nonetheless, we should be careful not to over-focus on these risks. The impact of AI on the financial system will likely be overwhelmingly positive – as long as the authorities are alive to the threats and adapt regulations to meet them. The ultimate danger is that AI becomes irreplaceable, and a source of systemic risk, before the authorities have formulated an appropriate response.

This is an edited version of the VoxEU article How AI can undermine financial stability. Any opinions and conclusions expressed here are those of the authors and do not necessarily represent the views of the Bank of Canada.


Jon Danielsson

Director, Systemic Risk Centre at LSE

Jon Danielsson is director of the Systemic Risk Centre and Reader in Finance at the London School of Economics. He has also worked for the Bank of Japan and the International Monetary Fund. Since receiving his PhD in the economics of financial markets from Duke University in 1991, his work has focused on how economic policy can lead to prosperity or disaster. He is an authority on both the technical aspects of risk forecasting and the optimal policies that governments and regulators should pursue in this area.

Andreas Uthemann

Principal Researcher at the Bank of Canada

Andreas Uthemann is a principal researcher at the Bank of Canada. He is a research associate of the LSE’s Systemic Risk Centre and a research affiliate of the Centre for Finance at UCL. His research is in financial economics with a focus on market structure and design, financial intermediation, and financial regulation. He obtained a PhD in Economics from University College London.

