This spring issue of The Journal of Risk Model Validation is fairly representative of the journal: it includes two papers that directly address model validation, plus one paper on value-at-risk (VaR) and one on loss given default (LGD). It is good to return to core activities after experimenting with some more esoteric offerings. Spring as a season reflects the optimism we see in the economy. Never in my many years as a financial observer have I seen a downturn in the underlying markets regarded so benignly.
Our first paper, “Underperforming performance measures? A review of measures for loss given default models” by Katarzyna Bijak and Lyn C. Thomas, provides a critical review of existing LGD measures and suggests some new ones. The four new measures it proposes are the mean area under the receiver operating characteristic curve (MAUROC), the mean accuracy ratio (MAR), the mean enhanced Lin–Lin error (MELLE) and a generalized lift. I will not attempt to provide detailed definitions in this editor’s letter but will instead leave readers to consult the paper, where some empirics are also provided.
The issue’s second paper, by Dilip B. Madan and King Wang, is titled “Validation of profit and loss attribution models for equity derivatives”. It is a great honor to have a contribution to the journal from a scholar of Dilip’s stature, and we hope he will write for us again in the future. The paper addresses the use of regression techniques in profit and loss attribution models for equity derivatives. The advantage of such techniques is the possibility of decomposing effects into separate contributions, which allows for more insightful empirical validation.
The third paper in the issue is “A central limit theorem formulation for empirical bootstrap value-at-risk” by Peter Mitic and Nicholas Bloxham. It is a valuable paper because it argues that the central limit theorem approach to assessing minimal operational risk capital does at least as well as picking particular distributions. There is much to recommend in this approach, as it reduces model dependence on specific assumptions and can be done in conjunction with existing methods.
The issue’s final paper is by Pedro Gurrola-Perez and is titled “The validation of filtered historical value-at-risk models”. I found this contribution especially interesting, as it addresses the rescaling of data to reflect current volatility and the issues that this creates for backtesting. I have always harbored concerns about backtesting, largely because of the near impossibility of reconstructing the information set at past points in time. This paper addresses some of my concerns. This is an important area for future research, with strong practical implications.
Trinity College, University of Cambridge
This paper reviews the ways of measuring the performance of LGD models that have been previously used in the literature and also suggests some new measures.
The aim of this paper is to validate profit and loss attribution generated by daily movements of option prices as seen through their Black–Scholes (Black and Scholes 1973) and Merton (1973) implied volatilities.
In this paper, the importance of the empirical bootstrap (EB) in assessing minimal operational risk capital is discussed, and an alternative way of estimating minimal operational risk capital using a central limit theorem (CLT) formulation is presented.
In this paper, the authors examine the problem of validating and calibrating FHS VaR models, focusing in particular on the Hull and White (1998) approach with EWMA volatility estimates, given its extended use in the industry.