Prising open the black box of AI

Shapley values, Lime and other tools can help decipher machine learning’s output. It’s a start…

Picture this: one day, an AI algorithm used to confect investments unimagined by any fundamental analyst goes wildly awry, crash-landing a major asset manager in a sea of red ink. Imagine then that Congress summons the firm to testify on the computer-hatched debacle. Legislators thunder at the executive in charge – how in hell did this happen? And ashen-faced, his rictus of composure now curdled, the executive bitterly thinks to himself, “Wouldn’t we all like to know …”
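The Shapley values named in the standfirst come from cooperative game theory: a feature's attribution is its average marginal contribution to the model's output across all orderings in which features could be added. A minimal sketch of that exact computation, using a hypothetical payoff table for three made-up trading signals (all names and numbers are illustrative, not from the article):

```python
from itertools import permutations

def shapley_values(value_fn, features):
    """Exact Shapley values: average each feature's marginal
    contribution over every ordering of the features."""
    phi = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        included = set()
        for f in order:
            before = value_fn(included)   # payoff without this feature
            included.add(f)
            phi[f] += value_fn(included) - before  # marginal contribution
    return {f: total / len(perms) for f, total in phi.items()}

# Hypothetical "model": portfolio payoff as a function of which
# signals are switched on. Values are invented for illustration.
payoff = {
    frozenset(): 0.0,
    frozenset({"momentum"}): 2.0,
    frozenset({"value"}): 1.0,
    frozenset({"carry"}): 0.0,
    frozenset({"momentum", "value"}): 4.0,   # the two signals interact
    frozenset({"momentum", "carry"}): 2.0,
    frozenset({"value", "carry"}): 1.0,
    frozenset({"momentum", "value", "carry"}): 4.0,
}

phi = shapley_values(lambda s: payoff[frozenset(s)],
                     ["momentum", "value", "carry"])
```

The attributions sum to the full model's payoff (the "efficiency" property), which is what makes Shapley values attractive for explaining a black-box output in full; exact computation is exponential in the number of features, which is why libraries approximate it in practice.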

Some version of this
