
When AI models malfunction, address the problem not the math

Governance of artificial intelligence models should focus on actionable outcomes rather than interpretability, argues former chief regulator


When a bank’s value-at-risk model fails backtesting, risk managers know what to do. They trace the problem to specific components – a stale risk-factor time series, say, or an underperforming revaluation model – and make targeted fixes. This decomposability has been fundamental to traditional model risk management for decades.
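As a rough illustration of the kind of exception-count backtest that flags a failing VaR model in the first place, the sketch below counts breaches and applies Kupiec’s proportion-of-failures test. It is a minimal example only: the function name, inputs and the 99% confidence default are assumptions made for illustration, not a description of any bank’s actual framework.

import numpy as np
from scipy import stats

def kupiec_pof_test(pnl, var_forecasts, confidence=0.99):
    # Illustrative exception-count (Kupiec proportion-of-failures) VaR backtest.
    # pnl: daily P&L; var_forecasts: same-day VaR estimates quoted as positive loss amounts.
    pnl = np.asarray(pnl, dtype=float)
    var_forecasts = np.asarray(var_forecasts, dtype=float)
    breaches = pnl < -var_forecasts            # a breach: the loss exceeds the VaR estimate
    n, x = len(pnl), int(breaches.sum())
    p = 1.0 - confidence                       # expected breach rate, e.g. 1% for 99% VaR
    if x == 0:
        lr = -2.0 * n * np.log(1.0 - p)
    elif x == n:
        lr = -2.0 * n * np.log(p)
    else:
        phat = x / n                           # observed breach rate
        lr = -2.0 * ((n - x) * np.log((1.0 - p) / (1.0 - phat)) + x * np.log(p / phat))
    p_value = 1.0 - stats.chi2.cdf(lr, df=1)   # small p-value: breach count inconsistent with the VaR level
    return x, lr, p_value

A low p-value tells the risk manager the breach count is inconsistent with the stated confidence level; the follow-up is then to drill into the specific risk-factor series or revaluation models feeding the number – precisely the decomposable workflow described above.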

But large language models (LLMs) and generative AI systems don’t work that way.
