Quants call for better grasp of how AI models ‘think’

Tools from image recognition can help with interpretability


Quants must find new ways to understand how models using elements of artificial intelligence (AI) – specifically deep learning – actually work, as regulators press for greater transparency, say industry experts.

AI and deep learning, as currently used, seem to be “an excuse to trade off” interpretability for accuracy, said Ken Perry, a “quantamental” risk consultant and former chief risk officer at Och-Ziff Capital Management. He made his remarks at Risk.net’s Quant Summit USA in New York on
