Interpretability of neural networks: a credit card default model example

Recently developed techniques aimed at addressing interpretability issues in neural networks are tested and applied to a retail banking case


Ksenia Ponomareva and Simone Caenazzo show that the interpretability hurdles around applying neural networks to credit risk estimation can be overcome, using a portfolio of credit cards as a case study

Historically, the widespread use of advanced deep learning models in sensitive fields such as medicine and finance has been hindered by a fundamental lack of interpretability in their outcomes, in contrast with simpler techniques such as linear or logistic regression.

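To make the contrast concrete, one common class of interpretability techniques the article alludes to is gradient-based feature attribution: the sensitivity of a network's default probability to each input feature can be read off from the gradient, much as coefficients are read off a logistic regression. The sketch below is purely illustrative and is not the authors' model: the tiny network, its random weights, and the feature names are all hypothetical, and the gradient is derived by hand for this specific architecture.

```python
import numpy as np

# Illustrative sketch only: gradient-x-input attribution for a tiny
# neural network scoring credit-card default risk. Weights, features
# and inputs are hypothetical, not taken from the article.

rng = np.random.default_rng(0)
feature_names = ["utilisation", "payment_delay", "balance", "income"]

# Tiny two-layer network: p(default) = sigmoid(w2 . tanh(W1 x + b1) + b2)
W1 = rng.normal(size=(3, 4))
b1 = np.zeros(3)
w2 = rng.normal(size=3)
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Default probability for a standardised feature vector x."""
    h = np.tanh(W1 @ x + b1)
    return sigmoid(w2 @ h + b2)

def input_gradient(x):
    """Manual backprop: dp/dx for the two-layer network above."""
    h = np.tanh(W1 @ x + b1)
    p = sigmoid(w2 @ h + b2)
    # Chain rule: dp/dx = W1^T diag(1 - h^2) w2 * p(1 - p)
    return p * (1 - p) * (W1.T @ (w2 * (1 - h ** 2)))

x = np.array([0.8, 2.0, 0.5, -1.0])     # illustrative borrower features
attributions = input_gradient(x) * x    # gradient-x-input attribution
for name, a in zip(feature_names, attributions):
    print(f"{name}: {a:+.4f}")
```

In a logistic regression the analogous attribution would simply be coefficient times feature value; the point of techniques like this is to recover a comparable per-feature decomposition from a non-linear model.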