BlackRock shelves unexplainable AI liquidity models

Risk USA: Neural nets beat other models in tests, but results could not be explained

Quants cannot explain the results of neural networks

Quants at the world’s largest asset manager have decided to shelve promising AI liquidity risk models because they have not been able to explain the models’ output to senior management.

“The senior people want to see something they can understand,” Stefano Pasquali, head of liquidity research at BlackRock, said at Risk USA on November 9, commenting on why the fund manager chose not to deploy two AI models – one aimed at forecasting market volume, the other at redemption risk. They were set aside despite indications they would have “dramatically” outperformed traditional approaches, he said.

In his address, Pasquali updated the industry on BlackRock’s efforts to construct a more accurate model for liquidity risk, a problem well suited to machine learning given the very scant data on some securities and markets and the low signal-to-noise ratio in others.

Liquidity risk has become a hot topic of late. US mutual funds will soon have to report the liquidity profiles of their portfolios on a monthly basis to the Securities and Exchange Commission. The new rules will be phased into effect beginning December 1, 2018.

The BlackRock models in question were built using neural networks, a type of AI designed to identify complex, non-linear patterns and market correlations that are not necessarily obvious. But the drawback to neural networks, as opposed to other machine learning techniques such as random forests and decision trees, is that, as yet, there is no way to explain their results.
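
To make that trade-off concrete, the sketch below fits both model types to synthetic data with a non-linear relationship. Everything in it – features, data, hyperparameters – is invented for illustration and bears no relation to BlackRock’s actual models; the point is only that a random forest reports feature importances out of the box, while a neural network exposes nothing but weight matrices.

```python
# Illustrative only: none of the features, data or hyperparameters here
# reflect BlackRock's models, which are not public.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                                # hypothetical market features
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=2000)   # non-linear target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print("forest R^2:", forest.score(X_test, y_test))
print("net R^2:   ", net.score(X_test, y_test))

# The forest gives a crude but direct account of what drives its output;
# the network offers only its raw weight matrices.
print("forest feature importances:", forest.feature_importances_)
```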

“For the volume forecasts, we tested neural networks and, no surprise, the model was over-performing the random forest a little bit and potentially working better,” Pasquali said. “It may have dramatically over-performed the random forest, but we did not allow this into production.”

A second neural network model was intended to forecast fund redemptions. Prior to this, BlackRock had taken what Pasquali called a “responsible” approach, first testing non-machine learning techniques, then logistic regression, a machine learning technique that produces more comprehensible results. Neither approach worked. “The model was poor in all these scenarios,” he said.
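
As a rough illustration of why logistic regression counts as comprehensible, the hypothetical sketch below fits one to invented fund features: each coefficient can be read directly as the direction and strength of a feature’s effect on the odds of a redemption. The feature names and data are made up for the example.

```python
# Hypothetical sketch: logistic regression on invented fund features
# as a stand-in for an interpretable redemption-risk baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["trailing_return", "outflow_streak", "investor_concentration"]
X = rng.normal(size=(1000, 3))
# Synthetic label: redemptions more likely after poor returns,
# persistent outflows and a concentrated investor base
logits = -1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = (logits + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient has a direct reading - its sign and size say how the
# feature moves the odds of a redemption - which is what makes this kind
# of model easy to defend in front of a risk committee.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>25}: {coef:+.2f}")
```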

In contrast, the second neural network model showed promising results for a few types of funds. But much remains to be done, Pasquali said: first, making sure the approach works on a wide variety of funds and not just a few characteristic funds, and second, tackling interpretability.

The firm will continue to develop liquidity risk models based on explainable AI techniques.

In the quant world, opinion is divided on whether allowing a machine to take the reins – even when the choices it makes aren’t understood – violates a firm’s fiduciary duty to its clients. Pasquali called for the industry to address the problem of interpretability in machine learning before putting investor money into play.

“The terminology of AI makes us think we are in the age of the machine, but particularly in finance, the human being is very important because we have a fiduciary role to all investors,” he said.

Besides investors and senior management, risk managers are also uneasy about putting on positions blind.

Risk managers, he said, will always ask why – and insist on an answer. “If I blindly go in the machine learning direction just because it’s a sexy topic, my answer will be ‘I don’t know’, and this guy will never invest money,” Pasquali said.

The good news, he said, is the explosion of initiatives to solve the problem of interpretability in finance and beyond, known as explainable AI, or XAI.
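
One common model-agnostic XAI technique – offered here as a generic sketch, not as anything BlackRock or any particular initiative uses – is permutation importance, which scores a feature by how much a black-box model’s accuracy degrades when that feature’s values are shuffled.

```python
# Generic XAI sketch on synthetic data: permutation importance applied
# to a neural network treated as a black box.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(1500, 4))
y = X[:, 0] ** 2 + X[:, 1] + 0.1 * rng.normal(size=1500)  # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test score:
# features 0 and 1 should matter, features 2 and 3 should not.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```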

“I’m expecting, in the next few years, to be in a good position for this,” he added.

Until a solution is found, BlackRock will continue to keep unexplainable models out of action.

“If the model says that selling $10 million of bonds is going to have a transaction cost of 10bp, I’m not willing to give up any interpretability [for performance],” he added.

BlackRock asked for it to be made clear that research into deep learning techniques, such as neural networks, and the interpretability of those models is ongoing and has not been abandoned. 

November 13, 2018: This article was updated with additional information from BlackRock in the final paragraph.
