The AI explainability barrier is lowering

Improved and accessible tools can quickly make sense of complex models

In April, the Bank of England’s Financial Policy Committee – which has set itself a watching brief on the use of AI in finance – speculated that machine learning models might one day collude and manipulate markets even “without the human manager’s intention or awareness”, a concern quants have also raised with Risk.net.

In the same paper, the BoE suggested rogue AI bots might even provoke market chaos deliberately, having learned that such episodes offer chances to make a profit.

The authors of that piece might have imaginations running wilder than the industry average. But theirs are not isolated concerns. More or less everyone in financial markets would like a better idea of what goes on inside the most complex machine learning models.

Those pressing on with AI find that hard-to-fathom models present practical as well as theoretical challenges: they take longer to approve for live trading, and firms that are too slow to understand their models may be missing out on a competitive edge. Risk.net wrote in 2018 about these problems as they were then afflicting banks, describing the obstacle as the “explainability barrier”.

All of which explains why explainable AI – or XAI – has become a hot topic. This is the field of developing ‘post-hoc’ tools to help managers understand what their models are up to. The positive news is that XAI tools are nowadays easily at hand and increasingly familiar.

In the past, explainability might have been the job of a dedicated developer or engineer. Today, open-source code libraries can handle the task almost off-the-shelf, according to Daniele Bianchi, an associate professor of finance at Queen Mary University of London. Bianchi was speaking to Risk.net at the Quant Strats Europe conference in London yesterday.

Explainability techniques are imperfect, of course. But used collectively they can shorten the time taken to put models into production, he says. “The tools are so accessible. There are no excuses any more.”

So-called Shapley values, for example, offer a way to score the contribution of different features in a model towards generating its output. They’re ideal for determining why a model made a specific prediction at a specific time. The type of model doesn’t matter. The features could be CPI numbers or news sentiment, or factors such as stock price momentum.
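As a rough illustration of the idea – not a recipe drawn from any firm or researcher quoted here – the open-source Python package `shap` can produce these scores for an arbitrary prediction function. The model and feature names below are invented for the example.

```python
# A rough sketch only: the `shap` package is real, but the model and the
# feature names (CPI surprise, news sentiment, momentum) are invented here
# purely to illustrate the idea.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy dataset with features of the kind the article mentions
X = pd.DataFrame({
    "cpi_surprise": rng.normal(size=500),
    "news_sentiment": rng.normal(size=500),
    "price_momentum": rng.normal(size=500),
})
y = 0.5 * X["price_momentum"] - 0.3 * X["news_sentiment"] + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor().fit(X, y)

# KernelExplainer is model-agnostic: it needs only a prediction function
# and a background sample to stand in for "missing" features
explainer = shap.KernelExplainer(model.predict, X.sample(50, random_state=0))

# Score each feature's contribution to one specific prediction
shap_values = explainer.shap_values(X.iloc[[0]], nsamples=200)
for name, value in zip(X.columns, np.ravel(shap_values)):
    print(f"{name}: {value:+.4f}")
```

Because the explainer only ever calls the model’s prediction function, the same few lines work whether the underlying model is a gradient booster, a neural network or something more exotic.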

Surrogate models

Another neat trick is to create a surrogate AI model that learns to mimic the output of a more complex model given the same inputs. The surrogate – which uses a simple, easy-to-read approach such as a decision tree – gives the manager a stepping stone to understanding what the more complex model is doing.

Surrogate models help explain what a model does more generally, rather than why it made one particular prediction. Bianchi has used them in his own research, training a neural network on the effects of transaction costs in markets, then using a surrogate to interpret the model.
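A minimal sketch of the trick – using invented toy data and models, not anything from Bianchi’s research – trains a shallow scikit-learn decision tree to reproduce a small neural network’s output; the tree’s readable splits then serve as the explanation.

```python
# A minimal sketch of the surrogate idea. The feature names, data and models
# are assumptions for illustration only.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "spread": rng.normal(size=1000),
    "volatility": rng.normal(size=1000),
    "turnover": rng.normal(size=1000),
})
y = np.tanh(X["spread"]) * X["volatility"] + rng.normal(scale=0.1, size=1000)

# The "complex" model: a small neural network
complex_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
complex_model.fit(X, y)

# The surrogate: a shallow decision tree trained to mimic the network's output
surrogate = DecisionTreeRegressor(max_depth=3)
surrogate.fit(X, complex_model.predict(X))

# Fidelity: how closely the readable tree tracks the opaque model
print("fidelity (R^2):", surrogate.score(X, complex_model.predict(X)))
print(export_text(surrogate, feature_names=list(X.columns)))
```

The fidelity score is worth checking before trusting the explanation: a tree that tracks the complex model poorly says little about it.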

Of course, tools such as these are individually imperfect – Bianchi says they are “lenses that are more interpretable, to interpret things that are less interpretable”. Calculating Shapley values can also require a lot of computing power: the method essentially games out how a model would perform if denied different combinations of its features, and with features potentially numbering in the thousands, that means millions of calculations.

But ways to quicken the process – such as calculating a subset of combinations and homing in on those that matter – are now common. Even for complex machine learning models such as generative adversarial networks, the task takes hours rather than days. In most cases the analysis takes minutes.
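One common shortcut is Monte Carlo sampling: rather than evaluating every combination, the explainer averages marginal contributions over a random subset of feature orderings. The sketch below is a bare-bones illustration with a made-up linear model, not production code.

```python
# A bare-bones sketch of the sampling shortcut: estimate Shapley values from a
# random subset of feature orderings instead of every combination.
# `model_fn`, the weights and the data are illustrative assumptions.
import numpy as np

def sampled_shapley(model_fn, x, background, n_permutations=200, seed=0):
    """Monte Carlo Shapley estimate for a single prediction.

    model_fn   : callable taking an (n, d) array, returning (n,) predictions
    x          : the (d,) instance being explained
    background : (m, d) array standing in for 'missing' features
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    contributions = np.zeros(d)

    for _ in range(n_permutations):
        order = rng.permutation(d)
        # Start from a random background row and swap in features one at a time
        current = background[rng.integers(len(background))].copy()
        prev_pred = model_fn(current[None, :])[0]
        for feature in order:
            current[feature] = x[feature]
            new_pred = model_fn(current[None, :])[0]
            contributions[feature] += new_pred - prev_pred
            prev_pred = new_pred

    return contributions / n_permutations

# Toy usage with a simple linear "model"
weights = np.array([0.5, -0.3, 0.1])
model_fn = lambda X: X @ weights
background = np.random.default_rng(1).normal(size=(100, 3))
x = np.array([1.0, 2.0, -1.0])
print(sampled_shapley(model_fn, x, background))
```

Averaging over a few hundred orderings typically lands close to the exact values at a tiny fraction of the cost, which is part of why the analysis can now run in minutes or hours rather than days.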

The apparent lowering of the explainability barrier is good news for the many hedge funds using – or hoping to use – machine learning technology. In a survey of more than a hundred such firms by broker IG, almost a third predicted AI would have a “game changing impact” on their business in the next three years.

In the past, such fans of machine learning faced a choice between easy-to-interpret models that couldn’t fully explain the markets, and models that explained markets better, about which the creators could say little.

Bianchi, speaking to delegates at the conference, said that was no longer the case. “This conventional dilemma we face, whether to go for simplicity and explainability or complexity and effectiveness – this is a false dilemma,” he said.

Editing by Kris Devasabai
