Journal of Risk Model Validation

The first issue of this, the eighth volume of The Journal of Risk Model Validation, contains four papers. The first, "A statistical repertoire for quantitative loss given default validation: overview, illustration, pitfalls and extensions" by Matthias Fischer and Marius Pfeuffer, deals with loss given default (LGD) validation. The authors argue that this topic is relatively neglected, and they set about putting this right by providing a detailed summary and analysis of LGD validation instruments.

The second paper in the issue, "Credit scoring optimization using the area under the curve" by Anne Kraus and Helmut Küchenhoff, presents results for consumer credit scoring, focusing in particular on the receiver operating characteristic (ROC) curve and the area beneath it, known by the acronym AUC (for "area under the curve"). This concept has a long history as an approach to measuring a model's predictive performance. The authors compare procedures based on the AUC against alternative methods based on the logistic model. They demonstrate, via simulation, the superiority of the AUC approach in cases where the logistic model assumption fails, and they also provide an analysis based on German retail data.
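
Readers unfamiliar with the statistic may find a concrete illustration helpful. The sketch below is not the authors' procedure, merely a minimal Python rendering of the standard rank-based (Mann-Whitney) computation of the AUC for a hypothetical scorecard:

```python
import numpy as np

def auc(scores, defaults):
    """AUC of a scorecard via the Mann-Whitney U statistic.

    scores:   model output, where a higher score means a riskier borrower
    defaults: 1 if the borrower defaulted, 0 otherwise
    """
    scores = np.asarray(scores, dtype=float)
    defaults = np.asarray(defaults, dtype=int)
    pos = scores[defaults == 1]  # defaulters' scores
    neg = scores[defaults == 0]  # non-defaulters' scores
    # AUC = probability that a randomly drawn defaulter outscores a
    # randomly drawn non-defaulter, counting ties as one half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: five scored loans, two of which defaulted.
print(auc([0.9, 0.8, 0.55, 0.3, 0.1], [1, 0, 1, 0, 0]))  # 0.833...
```

An AUC of 0.5 corresponds to a scorecard with no discriminatory power, while 1.0 corresponds to a perfect ranking of defaulters above non-defaulters.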

The issue's third paper, "A proposed framework for backtesting loss given default models" by Gert Loterman, Michiel Debruyne, Karlien Vanden Branden, Tony Van Gestel and Christophe Mues, sets a record for The Journal of Risk Model Validation with its five authors: excellent news, of course, for swelling the ranks of the risk model validation community. Their abstract, from which I shall quote, provides a succinct summary of the links between the Basel Accords, model validation and backtesting:

"The Basel Accords require financial institutions to regularly validate their loss given default (LGD) models. This is crucial so banks are not misestimating the minimum required capital to protect them against the risks they are facing through their lending activities. The validation of an LGD model typically includes backtesting, which involves the process of evaluating to what degree the internal model estimates still correspond with the realized observations."

This paper, in common with the first paper in the issue, makes the point that there is a dearth of research on LGD validation, but the focus here is on backtesting. The authors provide a backtesting framework that uses statistical hypothesis tests to support the validation of LGD models. They apply their tests to an LGD model fitted to real-life data and evaluate the tests through a statistical power analysis.
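
To give a flavor of what such a backtest might look like in practice (a generic illustration, not one of the authors' specific tests), one could compare predicted and realized LGDs facility by facility and test whether the mean difference is zero:

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations: model-predicted LGD versus the LGD
# realized once the workout on each defaulted facility was complete.
predicted = np.array([0.42, 0.35, 0.60, 0.28, 0.51, 0.44, 0.39, 0.55])
realized  = np.array([0.47, 0.31, 0.66, 0.35, 0.49, 0.52, 0.41, 0.61])

# H0: the model is still calibrated, ie, mean(predicted - realized) = 0.
t_stat, p_value = stats.ttest_rel(predicted, realized)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A small p-value would flag a systematic bias in the estimates; a power analysis of the kind the authors perform then asks how likely such a test is to detect a miscalibration of a given size at a given sample size.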

Our issue's final paper is quite different from the first three, which we could almost describe as classical, dealing as they do with well-established concepts using well-established methods. These remarks should not be misconstrued as criticism, though; in fact, the hardest challenge is often to find something new to say about ideas that are well known. Our fourth paper, "Modeling portfolio risk by risk discriminatory trees and random forests" by Bill Huajian Yang, applies risk discriminatory trees and random forests to estimate the exposure at default for a commercial portfolio. Tree methods are a generic class of procedures that partition space in various clever ways to accelerate search procedures or make them more accurate. One feels that such ideas should find application in large retail loan books with large numbers of loan characteristics, which is why we are delighted to publish Yang's paper: we feel that these methods should be of interest to many of our readers.
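
For readers who want a feel for the mechanics, the following is a minimal sketch, using scikit-learn's generic random forest regressor on synthetic data rather than the author's own implementation, of how a forest of trees might be fitted to exposure-at-default observations:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for loan characteristics (say, utilization, limit
# and a numerically encoded rating grade) and an exposure-at-default target.
X = rng.random((500, 3))
ead = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(500)

# Each tree recursively partitions the feature space into rectangular
# regions; the forest averages the per-leaf estimates of many such trees
# grown on randomized subsamples of the data and features.
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X, ead)
print(forest.predict(X[:3]))  # estimated EAD for the first three loans
```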

Steve Satchell
Trinity College, University of Cambridge
