Humans struggle to keep pace with machine learning

Banks and regulators grapple with ‘XAI’ challenge

The financial crisis revealed a few harsh truths to the quantitative finance profession – not least the folly of believing that modelling assumptions will hold true in all market circumstances.

A decade on, the models banks use to power everything from loan decisions to economic forecasting are subject to far more rigorous regulatory scrutiny. For the past several years, prudential authorities have applied strict criteria to the parameters that underpin models, the purposes they are put to and the governance around them, forcing banks to build validation frameworks to meet tougher compliance expectations.

But rapid investment in one area of quant research – machine learning – has left banks with a growing number of models that don’t fit easily into this mould. Models that rely on ML techniques to determine their outputs, particularly more complex deep learning approaches, often defy easy explanation – posing a problem for banks when regulators ask them to show their workings.

This problem is cropping up with increasing frequency in banking, but it is not new: practitioners grappling with it in other industries – doctors using image recognition technology to spot tumours, or military leaders developing drones that can spot targets hidden behind layers of defences – have given the field its own, slightly awkward acronym: XAI, or explainable artificial intelligence.

All acknowledge, to varying degrees, that a failure to justify their models’ conclusions will erode trust in their approaches, and ultimately curb the widespread use of such methods in future – even where they offer potentially huge advantages.

It’s the same in banking. Unsurprisingly, regulators are already probing how banks use ML models, even where banks intend to deploy them only as challengers for validation purposes – let alone in active use.

Models with client-facing applications will not be alone in attracting scrutiny: it would reflect badly on a bank if its money-laundering reporting officer couldn’t explain, in response to a regulator’s spot-check, how the firm’s new ML-based anti-money laundering software had verified the source of client funds.

Banks are clear, however, that the direction of travel on ML research and deployment is only going to run one way – and unequivocal that regulators will have to keep up.

One senior risk manager at a large bank argues watchdogs must invest time and resources in training model validators to understand the rudiments of more complex ML techniques – much as banks themselves once laid on seminars to educate regulators on the quantitative techniques underlying the internal capital models introduced under the early Basel accords, which gave banks greater freedom to model their own capital requirements.

Should regulators fail to keep pace with ML model development, he implies, they will be standing in the way of progress.

“Regulators will have to go on the same journey they did 20 years ago on quant models in the primary risk space. They will have to learn, they will have to invest and they will have to deal with it.”
