Quants call for better grasp of how AI models ‘think’

Tools from image recognition can help with interpretability


Quants must find new ways to understand how models using elements of artificial intelligence (AI) – specifically deep learning – actually work, as regulators press for greater transparency, say industry experts.

AI and deep learning, as currently used, seem to be “an excuse to trade off” interpretability for accuracy, said Ken Perry, a “quantamental” risk consultant and former chief risk officer at Och-Ziff Capital Management. He made his remarks at Risk.net’s Quant Summit USA in New York.
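To illustrate the idea in the subheading, the sketch below, which is not from the article, shows one way a tool borrowed from image recognition – gradient-based saliency, the technique used to highlight which pixels drive an image classifier – might be pointed at a toy return-forecasting network. The network, its weights and the feature names are all assumptions made purely for the example.

```python
# A minimal sketch (not from the article): input-gradient "saliency" applied to
# a hypothetical return-forecasting network, to see which input features the
# prediction is most sensitive to. All names and values here are illustrative.
import jax
import jax.numpy as jnp

def forecast(params, x):
    """Tiny two-layer network mapping market features to a predicted return."""
    h = jnp.tanh(x @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"]).squeeze()

# Randomly initialised weights stand in for a trained model.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = {
    "W1": jax.random.normal(k1, (4, 8)) * 0.5,
    "b1": jnp.zeros(8),
    "W2": jax.random.normal(k2, (8, 1)) * 0.5,
    "b2": jnp.zeros(1),
}

# Hypothetical input features for a single instrument.
feature_names = ["momentum", "value", "carry", "volatility"]
x = jnp.array([0.8, -0.3, 0.1, 1.2])

# Saliency = gradient of the forecast with respect to each input feature;
# a large magnitude means the forecast is locally sensitive to that feature.
saliency = jax.grad(forecast, argnums=1)(params, x)

for name, s in zip(feature_names, saliency):
    print(f"{name:>10s}: {float(s):+.4f}")
```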
