Journal of Risk Model Validation

Risk.net

What can we learn from what a machine has learned? Interpreting credit risk machine learning models

Nehalkumar Bharodia and Wei Chen

  • The lack of transparency and interpretability in machine learning models prevents regulators, model validators and business users from engaging widely with these models.
  • New interpretation methodologies provide ways to reveal what ML models have learned.
  • Applying ML models and interpretation methodologies to LendingClub data helps us assess the soundness of LendingClub's credit risk management.

Because they can analyze unstructured and alternative data, machine learning algorithms are gaining popularity in financial risk management. Alongside technological advances in learning power and the digitalization of society, new financial technologies are also driving innovation in the business of lending. However, machine learning models are often viewed as lacking transparency and interpretability, which hinders model validation and prevents business users from adopting these models in practice. In this paper, we study a few popular machine learning models using LendingClub loan data and judge them on performance and interpretability. Our study independently shows that LendingClub's risk assessment is sound. The findings and techniques used in this paper can be extended to other models.
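One widely used model-agnostic interpretation methodology of the kind the abstract refers to is permutation feature importance: shuffle one feature at a time and measure how much model performance degrades. The sketch below illustrates the idea on synthetic loan-like data (the feature names `fico`, `dti` and `amount`, the coefficients, and the simple logistic model are all illustrative assumptions, not the paper's actual data or models):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical loan features (illustrative only, not LendingClub fields).
fico = rng.normal(690, 40, n)      # credit score
dti = rng.normal(18, 6, n)         # debt-to-income ratio (%)
amount = rng.normal(15000, 5000, n)  # loan amount ($)

# Synthetic default indicator driven mainly by fico and dti;
# amount is deliberately irrelevant.
logit = -0.03 * (fico - 690) + 0.08 * (dti - 18)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([fico, dti, amount])
X = (X - X.mean(0)) / X.std(0)  # standardize features

# Fit a logistic regression by plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (pred - y)) / n
    b -= 0.5 * (pred - y).mean()

def accuracy(Xm):
    """Classification accuracy of the fitted model on feature matrix Xm."""
    return (((1 / (1 + np.exp(-(Xm @ w + b)))) > 0.5) == y).mean()

# Permutation importance: accuracy drop when one feature is shuffled,
# breaking its link to the target while keeping its marginal distribution.
base = accuracy(X)
importance = {}
for j, name in enumerate(["fico", "dti", "amount"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[name] = base - accuracy(Xp)

print(importance)  # fico should dominate; amount should be near zero
```

Because the procedure only needs predictions, the same diagnostic applies unchanged to the less transparent models (e.g. tree ensembles or neural networks) that motivate the interpretability concern in the first place.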
