Journal of Risk Model Validation

This spring issue of The Journal of Risk Model Validation is fairly representative of the journal: it includes two papers that directly address model validation plus one paper on value-at-risk (VaR) and one on loss given default (LGD). It is good to return to core activities after experimenting with some more esoteric offerings. Spring as a season reflects the optimism we see in the economy. Never in my many years as a financial observer have I seen a downturn in the underlying markets regarded so benignly.

Our first paper, “Underperforming performance measures? A review of measures for loss given default models” by Katarzyna Bijak and Lyn C. Thomas, provides a critical review of existing LGD measures and suggests some new ones. The four new measures it proposes are the mean area under the receiver operating characteristic curve (MAUROC), the mean accuracy ratio (MAR), the mean enhanced Lin–Lin error (MELLE) and a generalized lift. I will not attempt to provide detailed definitions in this editor’s letter but will instead leave readers to consult the paper, where some empirics are also provided.

The issue’s second paper, by Dilip B. Madan and King Wang, is titled “Validation of profit and loss attribution models for equity derivatives”. It is a great honor to have a contribution to the journal from a scholar of Dilip’s stature, and we hope he will write for us again in the future. The paper addresses the use of regression techniques in building profit and loss attribution models for equity derivatives. The advantage of such techniques is that they decompose effects into separate contributions, which allows for more insightful empirical validation.

The third paper in the issue is “A central limit theorem formulation for empirical bootstrap value-at-risk” by Peter Mitic and Nicholas Bloxham. It is a valuable paper because it argues that the central limit theorem approach to assessing minimal operational risk capital does at least as well as picking particular distributions. There is much to recommend in this approach, as it reduces model dependence on specific assumptions and can be done in conjunction with existing methods.

The issue’s final paper is by Pedro Gurrola-Perez and is titled “The validation of filtered historical value-at-risk models”. I found this contribution especially interesting, as it addresses the rescaling of data to reflect current volatility and the issues that this creates with backtesting. I have always harbored concerns about backtesting, largely because of the near impossibility of reconstructing the information set at past points in time. This paper addresses some of my concerns. This is an important area for future research, and it has strong practical implications.

Steve Satchell
Trinity College, University of Cambridge
