Sponsored by FIS

This article was paid for by a contributing third party.

AI in risk management: one giant leap forward or a risk too far?


The panel

  • Rajat Baijal, Former managing director and global head of enterprise risk management, Cantor Fitzgerald
  • Jeffrey Garnett, Chief risk officer, Antara Capital
  • Frank Manahan, Principal, US trust specialities, PwC
  • Moderator: Harry Stahl, Enterprise strategy leader, Capital Markets, FIS

For securities and investment firms, as for the rest of the world, artificial intelligence (AI) is a big deal, with real potential to improve efficiency, productivity and insight. And among its gifts is the power to manage risk more effectively.

But, as the technology advances at lightning speed, AI brings its own, not inconsiderable, risks. How, then, are today’s risk managers using AI tools to their best advantage – and what threats do they face along the way? In a recent Risk.net webinar, sponsored by financial technology provider FIS, an expert panel weighed up the opportunities and dangers of AI for risk management.
 

Transformative tech

AI has the potential to transform many areas of business, and risk management is no exception. With many new threats to tackle, in ever-increasing volumes, securities and investment risk managers are exploring how AI can help drive efficiencies, create opportunities and build competitive advantage.

Environmental, social and governance risk, for example, remains a priority. So, for Frank Manahan, principal, US trust specialities at PwC, it makes sense that AI is increasingly in demand for climate risk management.

“We use technology such as geographic information systems to plot assets globally. We then overlay that with a ‘climate layer’ to show where there is specific climate risk and how that risk is changing. This requires a huge amount of human capital and, more and more, [the application of] technology such as AI and generative AI.”
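As an illustration of the overlay approach Manahan describes, the hedged sketch below uses the open-source geopandas library to tag asset locations with the climate-hazard zones they fall inside. The file names, column names and hazard categories are hypothetical, and this is not a description of PwC's actual tooling.

```python
# Illustrative only: overlay asset locations with a hypothetical climate-hazard layer.
import geopandas as gpd

# Hypothetical inputs: a point layer of asset locations and a polygon layer
# of climate-hazard zones (for example, flood or wildfire exposure).
assets = gpd.read_file("assets.geojson")          # columns: asset_id, value, geometry
hazards = gpd.read_file("climate_zones.geojson")  # columns: hazard_type, severity, geometry

# Spatial join: tag each asset with the hazard zone (if any) it falls inside.
exposed = gpd.sjoin(assets, hazards, how="left", predicate="within")

# Aggregate exposed asset value by hazard severity to see where climate risk is concentrated.
print(exposed.groupby("severity")["value"].sum())
```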
 

Serious threats

AI’s superhuman efficiency comes with caveats, as Rajat Baijal, former managing director and global head of enterprise risk management at Cantor Fitzgerald, pointed out: “As much as I would like to see AI enabling risk managers to better manage the data and make sense of it, there’s obviously a threat, at least in the short term, as we get our heads around the technology.

“There are two main risks. One is the risk of technology ending up in the wrong hands, and ‘bad actors’ using AI to harm organisations or steal their data. The second is transition risk. As organisations prepare to take the leap to embrace AI, there is the risk that comes with implementation of a big change-management project.

“There is also the risk of AI amplifying cyber or information security risk. This is why boards need to take it seriously.”
 

Strong considerations

Harry Stahl, FIS

Despite the clear risks, securities and investment firms aren’t avoiding AI. In the webinar, an audience poll revealed that most respondents (69%) are exploring potential options for AI, including machine learning or generative AI, within their production systems.

What’s more, around one-quarter (24%) said either that AI is already part of multiple production systems, that they have one risk-management system using AI, or that they have a live prototype or proof of concept. Only a small number (7%) said they are not considering or using AI at all.

Reflecting on conversations with risk managers, Harry Stahl, enterprise strategy leader, Capital Markets, FIS, observed: “Clearly they see the benefits, but risk management professionals are also being thoughtful about how and when to adopt AI.”
 

Human intervention

To get the best from AI, firms need the right human skills, such as in data management. Jeffrey Garnett, chief risk officer at Antara Capital, commented that the quality of data is proportional to the amount of time spent on it. He also emphasised the importance of accuracy, for instance in describing the instruments a firm is trading, as well as of better-quality data generally. He concluded that many operational risks can be eliminated by building a good data model at the outset.
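By way of illustration only, and not a description of Antara Capital’s systems, the sketch below shows one way a “good data model at the outset” might look in practice: a simple instrument record that validates its fields at the point of entry, so bad data is rejected before it can feed downstream risk calculations. The field names and permitted values are hypothetical.

```python
# Illustrative only: a minimal instrument record that validates its fields on creation.
from dataclasses import dataclass

VALID_ASSET_CLASSES = {"equity", "credit", "rates", "fx", "commodity"}  # hypothetical taxonomy

@dataclass(frozen=True)
class Instrument:
    identifier: str   # e.g. an ISIN or internal identifier
    asset_class: str
    currency: str     # ISO 4217 three-letter code
    notional: float

    def __post_init__(self) -> None:
        # Reject bad records at the point of entry rather than downstream.
        if not self.identifier:
            raise ValueError("identifier must not be empty")
        if self.asset_class not in VALID_ASSET_CLASSES:
            raise ValueError(f"unknown asset class: {self.asset_class!r}")
        if len(self.currency) != 3 or not self.currency.isalpha():
            raise ValueError(f"currency must be a three-letter code: {self.currency!r}")
        if self.notional <= 0:
            raise ValueError("notional must be positive")

# Usage: Instrument("US0378331005", "equity", "USD", 1_000_000.0) succeeds;
# Instrument("", "crypto", "DOLLARS", -5.0) raises ValueError.
```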

Reflecting on the potential impact of AI on human capital, Stahl commented: “Some say that, when generative AI takes over, data science teams won’t be needed. But others say that, while there are things that generative AI can do, there are also things that it doesn’t do well.”

He added that the answer might be to keep data science teams, but hire more prompt engineers, or cross-train existing staff to write the prompts for generative AI.

A second poll asked the audience whether they had made provision in near-term budgets for any AI-enabled third-party products, or internal products or projects. The largest group (40%) reported ongoing funding for an internal product or project. Around one-third (32%) said there was ongoing funding for a third-party project. Fewer (16%) reported new funding for an internal product or project, and 12% recorded new funding for a third-party project.
 

Mixed feelings

Given the possible disadvantages, as well as advantages, of using generative AI, firms have differing views on whether to adopt it and allow people to use it.

PwC’s Manahan said: “Around 80% of risk management time goes on creating content by transforming data into information. The other 20% is spent analysing, understanding and interpreting the output.

“Generative AI is going to ‘flip’ these figures, although not necessarily to 20%/80%, respectively. It’s going to speed up how quickly we can create content from vast datasets. This will allow more time for analysing and challenging that output, which is the much more valuable part of the process. It’s a risk, but it’s also a huge opportunity.”

Antara Capital’s Garnett commented that, when he first started researching ChatGPT, he had been struck by how easy it was to use. He saw this as a feature to replicate in other applications of generative AI, so non-experts in his firm could easily make use of data on its systems.

The audience was also consulted on its views on generative AI. More than half (57%) reported that their firm would be conducting tightly controlled experiments this year. Around one-fifth (21%) said that, while the concept is exciting, it is also dangerous, and their firm will not take action until it views generative AI as completely safe.

A smaller number (14%) said they had embraced it, with free access for the firm and plans to resolve problems later. Stahl said this represented a vote of confidence in the risk/reward balance. A few (7%) dismissed generative AI completely, viewing it as a ‘flash in the pan’ they don’t intend to consider.
 

Ways forward

The fact remains that, across the securities and investment industry, risk managers are establishing concrete use cases for AI and are already reaping the benefits. But, critically, they are proceeding with caution around the potential impact on security, jobs and business as usual.

As Stahl concluded, AI is not a ‘black box’ you can switch on and leave to its own devices. As long as firms continue to find ways to address the risks and to mitigate them, the leap forward in risk management will be a move worth making.

Learn more

Watch the FIS webinar, Empowering risk management with AI
