Journal of Risk Model Validation

Steve Satchell
Trinity College, University of Cambridge

One of the few benefits of self-isolation is that it gives one the time to write editorials in a rather leisurely manner. In this edition of The Journal of Risk Model Validation – Volume 15, Issue 1 – we again offer four papers, which are listed and discussed below. I mention the volume and issue number because this publication marks the journal’s 15th anniversary; it has been very satisfying to oversee such a long-lasting project.

In our first paper, “Bifractal receiver operating characteristic curves: a formula for generating receiver operating characteristic curves in credit-scoring contexts”, Błażej Kochański formulates a mathematical model for generating receiver operating characteristic (ROC) curves without underlying data. With credit portfolios, the availability of historical data often ranges from poor to nonexistent, so theoretical models can be used to provide bounds on and/or insights into the actual but unknown ROC curve. A familiar example is when we base calculations only on accepted loans. All credit-scoring practitioners know that the Gini coefficient usually drops if it is calculated only on cases above the cutoff; however, this is not a mathematical necessity, as it is theoretically possible to construct an ROC curve that keeps the same Gini coefficient no matter how large a share of the lowest-score cases is excluded from the calculation (a “right-hand” fractal ROC curve). A “left-hand” fractal ROC curve would be, by analogy, a curve that keeps its Gini coefficient constant below any cutoff point. The model proposed here is a linear combination of left- and right-hand fractal ROC curves. A bifractal ROC curve is drawn with just two parameters: one responsible for the shape of the curve, and one responsible for the area under the curve (and hence the Gini coefficient). As the paper shows, most real-life credit-scoring ROC curves lie between the two fractal curves.
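For readers less familiar with the metric, the link between the ROC curve and the Gini coefficient that this discussion relies on is the standard identity Gini = 2·AUC − 1, where AUC is the area under the ROC curve. The following minimal Python sketch (not taken from the paper; the square-root curve is purely illustrative) computes a Gini coefficient from ROC points with the trapezoid rule:

```python
def gini_from_roc(fpr, tpr):
    """Gini coefficient from ROC points, using Gini = 2 * AUC - 1.

    fpr, tpr: equal-length sequences of false/true positive rates,
    sorted by fpr, starting at (0, 0) and ending at (1, 1).
    """
    # Area under the ROC curve by the trapezoid rule.
    auc = sum(
        (fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2.0
        for i in range(len(fpr) - 1)
    )
    return 2.0 * auc - 1.0

# Illustrative concave ROC curve tpr = sqrt(fpr); its exact Gini is 1/3.
fpr = [i / 1000 for i in range(1001)]
tpr = [f ** 0.5 for f in fpr]
print(round(gini_from_roc(fpr, tpr), 3))  # close to 0.333
```

Truncating such a curve at a cutoff and recomputing on the remaining cases would, for an ordinary scorecard, usually lower the Gini; the fractal curves of the paper are precisely the curves for which it does not.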

The issue’s second paper, “Research on listed companies’ credit ratings, considering classification performance and interpretability”, is authored by Zhe Li, Guotai Chi, Ying Zhou and Wenxuan Liu. This paper focuses on validation metrics as tools in model construction. Its authors postulate that “[a]ny credit evaluation system must be able not only to identify defaults, but also to be interpretable and provide reasons for defaults”. This is a view that I, as an economist, strongly support. Their approach is to use what one might think of as regression analysis to select the initial features of a credit evaluation system, and then to employ a validity index for a second selection to ensure that the chosen system has the optimal ability to discriminate when determining default status. Additional features, which I will not detail here, allow the authors to build a model that they claim offers good classification performance: a statement for which they provide some empirical support. Further, the model maintains interpretability, can forecast at least five years ahead and can thus potentially offer risk warnings to banks. I think many readers will find a model with this array of features extremely interesting.

“A verification model to capture option risk and hedging based on a modified underlying beta”, the third paper in this issue, is by Chuan-He Shen and Yang Liu. The authors address the question of option hedging measured in terms of the beta of the underlying asset. Two kinds of underlying beta estimation models are introduced in this paper: the first is adjusted by kurtosis and skewness, and the second involves curvature and high-order-moment error terms (which readers will need to delve into the paper to grasp). Shen and Liu then establish a verification model for capturing option risk and hedging by employing the modified underlying beta. They apply these methods to the financial markets of the Chinese mainland, Hong Kong and the United States. They find that the hedging effect of the underlying beta model is improved by curvature and high-order-moment error terms, and, they argue, it is superior to the underlying beta model adjusted by kurtosis and skewness.

Our final paper, by Wojciech Starosta, is “Beyond the contract: client behavior from origination to default as the new set of the loss given default risk drivers”. The author takes an approach that models loss given default (LGD) not only in terms of a contract’s characteristics, which is the standard approach, but also in terms of the behavior and characteristics of the contract owner. This approach is very reminiscent of quantitative market research models, where decisions can be explained not only by the characteristics of the choices but also by the characteristics of the chooser. Starosta claims that this approach reduces the values of selected error measures and consequently improves forecasting ability. The effect is more visible in a parametric method (regression) than in a nonparametric method (regression tree). These findings suggest incorporating client-oriented information into LGD models.

Finally, I am very pleased to announce that I am now being guided by our new Journals Administrator, Rochelle Adadevoh, who will be in touch with Journal of Risk Model Validation contributors in the coming months.
