Prising open the black box of AI

Shapley values, Lime and other tools can help decipher machine learning’s output. It’s a start…

Picture this: one day, an AI algorithm used to confect investments unimagined by any fundamental analyst goes wildly awry, crash-landing a major asset manager in a sea of red ink. Imagine then that Congress summons the firm to testify on the computer-hatched debacle. Legislators thunder at the executive in charge – how in hell did this happen? And ashen-faced, his rictus of composure now curdled, the executive bitterly thinks to himself, “Wouldn’t we all like to know …”

Some version of this
