
Quants call for better grasp of how AI models ‘think’

Tools from image recognition can help with interpretability


Quants must find new ways to understand how models using elements of artificial intelligence (AI) – specifically deep learning – actually work, as regulators press for greater transparency, say industry experts.

AI and deep learning, as currently used, seem to be “an excuse to trade off” interpretability for accuracy, said Ken Perry, a “quantamental” risk consultant and former chief risk officer
