The AI explainability barrier is lowering
Improved and accessible tools can quickly make sense of complex models
In April, the Bank of England’s Financial Policy Committee – which has set itself a watching brief on the use of AI in finance – speculated that machine learning models might one day collude and manipulate markets even “without the human manager’s intention or awareness”, something quants have spoken to Risk.net about too.
In the same paper, the BoE suggested rogue AI bots might even provoke market chaos, having learned that turmoil creates chances to profit.
The authors of that piece might have imaginations running wilder than the industry average. But theirs are not isolated concerns. More or less everyone in financial markets would like a better idea of what goes on inside the most complex machine learning models.
Those who are pressing on with AI find that hard-to-fathom models present practical as well as theoretical challenges: they take longer to approve for live trading, and firms that are slow to understand their models may be forgoing a competitive edge. Risk.net wrote in 2018 about the problems as they were afflicting banks at the time, describing them as the "explainability barrier".
All of which explains why explainable AI – or XAI – has become a hot topic. This is the field of developing ‘post-hoc’ tools to help managers understand what their models are up to. The positive news is that XAI tools are nowadays easily at hand and increasingly familiar.
In the past, explainability might have been the job of a dedicated developer or engineer. Today, open-source code libraries can handle the task almost off-the-shelf, according to Daniele Bianchi, an associate professor of finance at Queen Mary University of London. Bianchi was speaking to Risk.net at the Quant Strats Europe conference in London yesterday.
Explainability techniques are imperfect, of course. But used collectively they can shorten the time taken to put models into production, he says. “The tools are so accessible. There are no excuses any more.”
So-called Shapley values, for example, offer a way to score the contribution of different features in a model towards generating its output. They’re ideal for determining why a model made a specific prediction at a specific time. The type of model doesn’t matter. The features could be CPI numbers or news sentiment, or factors such as stock price momentum.
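To make the idea concrete, here is a minimal sketch, not from the article, of an exact Shapley calculation in pure Python. The toy "model" and its features are invented for illustration; the point is that each feature's score is its average marginal contribution across all subsets of the other features, measured against a baseline input.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model: a predicted return built from three features
# (momentum, CPI surprise, news sentiment). Any black-box function works.
def model(momentum, cpi, sentiment):
    return 0.5 * momentum + 0.3 * cpi + 0.4 * momentum * sentiment

def shapley_values(f, x, baseline):
    """Exact Shapley values via the subset-weighted formula: each
    feature's average marginal contribution over all feature subsets."""
    n = len(x)
    phi = [0.0] * n

    def value(subset):
        # Features in `subset` take their actual values; the rest
        # are held at the baseline (e.g. an average day).
        args = [x[j] if j in subset else baseline[j] for j in range(n)]
        return f(*args)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for s in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (value(set(s) | {i}) - value(set(s)))
    return phi

x = (1.0, 0.2, 0.5)       # today's feature values
base = (0.0, 0.0, 0.0)    # reference point
phi = shapley_values(model, x, base)
# Efficiency property: the contributions sum to f(x) - f(baseline)
assert abs(sum(phi) - (model(*x) - model(*base))) < 1e-9
```

Note how the interaction term's contribution is split between momentum and sentiment, while each linear term is attributed entirely to its own feature.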
Surrogate models
Another neat trick is to create a surrogate AI model that learns to mimic the output of a more complex model given the same inputs. The surrogate – which uses a simple easy-to-read approach such as decision tree learning, for example – provides the manager with a stepping stone to understanding what the more complex model is doing.
Surrogate models enable an understanding of what a model does more generally. Bianchi has used them in his own research, training a neural network on the effects of transaction costs in markets, and then using a surrogate to interpret the model.
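As a simple illustration of the principle, the sketch below probes an invented black-box function and fits an interpretable surrogate to its outputs. For brevity a one-variable linear surrogate stands in for the decision tree the article mentions; the mechanism is the same: query the opaque model, then read the simple model's structure instead.

```python
# Hypothetical black box: stand-in for a trained neural network
# whose internals we cannot read directly.
def black_box(x):
    return 0.8 * x + 0.1 * x * x

# Probe the black box on a grid of inputs, then fit a readable
# surrogate y ~ a + b*x by ordinary least squares (closed form).
xs = [i / 10 for i in range(-10, 11)]
ys = [black_box(x) for x in xs]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x
# The surrogate's coefficients are directly interpretable: over the
# probed range the black box behaves roughly like a + b*x.
print(f"surrogate: y = {a:.3f} + {b:.3f} * x")
```

The surrogate is only faithful where it was trained, so in practice its fit to the complex model is checked before its explanation is trusted.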
Of course, tools such as these are individually imperfect – Bianchi says they are "lenses that are more interpretable, to interpret things that are less interpretable". The calculation of Shapley values can require a lot of computing power. It works essentially by gaming out how a model would perform if denied different combinations of its features. Those combinations can number in the thousands, which means millions of model evaluations.
But ways to quicken the process – such as calculating a subset of combinations and homing in on those that matter – are now common. Even for complex machine learning models such as generative adversarial networks, the task takes hours rather than days. In most cases the analysis takes minutes.
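One common shortcut of this kind is Monte Carlo sampling: instead of enumerating every feature subset, sample random orderings of the features and average each one's marginal contribution. A hedged sketch, again using an invented toy model:

```python
import random

# Hypothetical black-box model over three features
def model(args):
    m, c, s = args
    return 0.5 * m + 0.3 * c + 0.4 * m * s

def sampled_shapley(f, x, baseline, n_samples=2000, seed=0):
    """Approximate Shapley values by sampling random feature orderings
    rather than enumerating all 2^n subsets."""
    rng = random.Random(seed)
    n = len(x)
    totals = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]   # switch feature i on, in sampled order
            now = f(current)
            totals[i] += now - prev
            prev = now
    return [t / n_samples for t in totals]

phi = sampled_shapley(model, (1.0, 0.2, 0.5), (0.0, 0.0, 0.0))
```

The estimates converge on the exact values as the sample count grows, and the cost scales with the number of samples rather than with the number of feature subsets – which is why the analysis can now finish in minutes.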
The apparent lowering of the explainability barrier is good news for the many hedge funds using – or hoping to use – machine learning technology. In a survey of more than a hundred such firms by broker IG, almost a third predicted AI would have a “game changing impact” on their business in the next three years.
In the past, such fans of machine learning faced a choice between easy-to-interpret models that couldn’t fully explain the markets, and models that explained markets better, about which the creators could say little.
Bianchi, speaking to delegates at the conference, said that was no longer the case. “This conventional dilemma we face, whether to go for simplicity and explainability or complexity and effectiveness – this is a false dilemma,” he said.
Editing by Kris Devasabai
Copyright Infopro Digital Limited. All rights reserved.