Wells touts new explainability technique for AI credit models

Novel interpretability method could spur greater use of ReLU neural networks for credit scoring


A team of researchers at Wells Fargo has begun deploying a novel explainability technique for deep learning models – something the bank hopes will allow it to use more complex approaches to power credit decisioning.

Banks have long sought to tap the potential of neural networks – a family of deep learning approaches that seeks to replicate human thought patterns – for complex problem-solving in credit risk. Yet the technique remains underused, since models that rest on
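The article does not detail Wells Fargo's method, but one well-known property that makes ReLU networks amenable to exact explanation is that they are piecewise linear: around any given applicant's inputs, the network behaves exactly like an ordinary linear scorecard. The sketch below is purely illustrative (the toy network, weights, and feature values are invented, not the bank's model) and shows how the exact local linear coefficients of a small ReLU net can be extracted from its activation pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU "scoring" network with made-up weights
# (illustrative only -- not Wells Fargo's actual model).
W1 = rng.normal(size=(4, 3))
b1 = rng.normal(size=4)
w2 = rng.normal(size=4)
b2 = 0.1

def score(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return w2 @ h + b2

def local_linear_model(x):
    """Exact local explanation: within the activation region containing x,
    the ReLU network equals a plain linear model coef @ x + intercept."""
    pre = W1 @ x + b1
    active = (pre > 0).astype(float)      # which hidden units fire at x
    coef = (w2 * active) @ W1             # effective per-feature weights
    intercept = (w2 * active) @ b1 + b2
    return coef, intercept

x = np.array([0.5, -1.2, 0.3])            # hypothetical applicant features
coef, intercept = local_linear_model(x)

# The local linear model reproduces the network's output exactly at x,
# so each coefficient reads like a familiar scorecard weight.
assert np.isclose(coef @ x + intercept, score(x))
```

Because the recovered coefficients reproduce the network's output exactly (not approximately, as with sampling-based explainers), they can be read the same way as weights in a traditional linear credit scorecard, which is one reason piecewise-linear interpretations are attractive for regulated credit models.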
