The first paper in this issue of The Journal of Risk Model Validation, "Biased benchmarks" by Lawrence R. Forest Jr., Gaurav Chawla and Scott D. Aguais, is concerned with the widespread use by regulators and credit analysts of long-run average default rates from S&P and Moody's default studies, and of expected default frequencies from the Moody's KMV public firm model, as benchmarks for evaluating the accuracy of an institution's probability of default models. The authors contend that recent evidence indicates that these benchmarks have, over the last eleven years, exaggerated default risk for nonfinancial corporate entities. Over the cyclically neutral period from the start of 2003 through 2013, the average one-year realized default rate of almost every S&P and Moody's alphanumeric grade for such entities is well below the average default rates experienced up to 2003. Expressed in terms of grades, both S&P and Moody's appear to have graded nonfinancial corporate entities more harshly over the past eleven years than they had previously: by about one alphanumeric notch in the speculative-grade range and by about two notches in the investment-grade range. The sources of this time inconsistency bias are not identified, but the evidence presented raises the concern that lending institutions applying these benchmarks may be unduly restricting corporate lending. The paper should therefore be of great interest to a wide range of lenders and their quantitative advisors.
The issue's second paper, "Backtesting Solvency II value-at-risk models using a rolling horizon" by Miriam Loois, is what seems to me a very valuable study of backtesting. If I may quote from the abstract: "Solvency II value-at-risk (VaR) models focus on a one-year horizon and a confidence interval of 0.5%. To accurately backtest such models, a multiple of 200 years of historic data is necessary. Due to a lack of data, backtests are often performed using a rolling horizon. We investigate the effect of using this rolling horizon. We show that this leads to a significant increase in the probability of finding an extreme event." The author illustrates the problem by analyzing a review of the equity stress parameter for Dutch pension funds. The review report states that the number of historic violations is too high and, therefore, that the stress parameter is too low. The author shows that the number of historic violations can be explained by the use of a rolling window, and she proposes a step-by-step approach to backtesting correctly in this situation. She finds that the probability of finding an extreme event can (falsely) increase by a factor of seven. This should be of great value to the vast number of us who routinely carry out backtests using rolling windows as part of our research. It is also an opportunity to remind contributors that the journal is very interested in publishing research on backtesting methodology.
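The inflation effect described above can be illustrated with a small Monte Carlo sketch. This is my own toy illustration, not the paper's method, and every parameter in it (i.i.d. normal monthly returns, 15% annual volatility, a 20-year sample) is an assumption chosen for the example: the probability of observing at least one one-year return below the true 0.5% quantile rises markedly when one-year windows are rolled monthly rather than kept non-overlapping.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
n_years, n_sims = 20, 20_000           # hypothetical sample length and number of paths
sigma_annual = 0.15                     # assumed annual volatility
sigma_m = sigma_annual / np.sqrt(12)    # i.i.d. normal monthly returns

# True 0.5% quantile of the one-year return: the Solvency II VaR boundary.
boundary = NormalDist(0.0, sigma_annual).inv_cdf(0.005)

monthly = rng.normal(0.0, sigma_m, size=(n_sims, n_years * 12))

# Non-overlapping one-year returns: n_years independent observations per path.
block = monthly.reshape(n_sims, n_years, 12).sum(axis=2)
p_block = np.mean((block < boundary).any(axis=1))

# Rolling one-year returns: one heavily overlapping 12-month window per month.
csum = np.cumsum(monthly, axis=1)
rolling = csum[:, 11:] - np.hstack([np.zeros((n_sims, 1)), csum[:, :-12]])
p_rolling = np.mean((rolling < boundary).any(axis=1))

print(f"P(>=1 violation), non-overlapping windows: {p_block:.3f}")
print(f"P(>=1 violation), rolling windows:         {p_rolling:.3f}")
```

The non-overlapping probability sits near its theoretical value of 1 - 0.995^20 (about 0.095), while the rolling-window probability is considerably larger, even though the underlying model is correct: exactly the kind of spurious violation count the author warns about.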
The third paper in the issue, "Stress testing and modeling of rating migration under the Vasicek model framework: empirical approaches and technical implementation" by Bill Huajian Yang and Zunwei Du, is concerned with stress testing the Vasicek model by extending its correlation structure to nondefault ratings. Two models are proposed, and the authors explain how each can be estimated. Analytical formulas for conditional transition probabilities are derived. Modeling downgrade risk rather than default risk addresses the problem of low default counts for high-quality ratings.
Our last paper, "Commodity value-at-risk modeling: comparing RiskMetrics, historic simulation and quantile regression" by Marie Steen, Sjur Westgaard and Ole Gjølberg, investigates risk modeling for commodities. The authors note that return distributions differ widely across commodities, in terms of both tail fatness and skewness, and these are features they take into account when modeling risk. They outline the return characteristics of nineteen commodity futures over the period 1992-2013 and then evaluate the performance of two standard risk modeling approaches, ie, RiskMetrics and historical simulation, against a quantile regression approach. Their findings strongly support quantile regression. Quantile regression has long been credited with capturing nonlinear dependence, although it is my understanding that a linear quantile relationship is by no means generic for a broad class of conditional distribution functions.
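To make the quantile regression approach to VaR concrete, here is a minimal sketch on simulated data. It is my own toy construction, not the authors' specification: the simulated "commodity" returns, the lagged-absolute-return predictor and the linear form of the quantile are all assumptions for illustration. The 5% conditional quantile is fitted by minimizing the pinball (check) loss.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 2000
# Hypothetical commodity returns with volatility clustering:
# log-volatility follows an AR(1); returns are conditionally normal.
log_vol = np.zeros(n)
for t in range(1, n):
    log_vol[t] = 0.97 * log_vol[t - 1] + 0.15 * rng.normal()
vol = 0.02 * np.exp(log_vol)
ret = vol * rng.normal(size=n)

# Predictor: lagged absolute return (a crude volatility proxy).
x = np.abs(ret[:-1])
y = ret[1:]
q = 0.05  # the 5% VaR quantile

def pinball(params):
    """Check loss for the linear quantile model a + b*x."""
    a, b = params
    u = y - (a + b * x)
    return np.mean(np.maximum(q * u, (q - 1) * u))

res = minimize(pinball, x0=[np.quantile(y, q), 0.0],
               method="Nelder-Mead", options={"maxiter": 2000})
a, b = res.x
var_qr = a + b * x                  # conditional 5% quantile forecast
hit_rate = np.mean(y < var_qr)
print(f"in-sample hit rate: {hit_rate:.3f} (target {q})")
```

By construction the fitted line leaves roughly a fraction q of returns below it, so the in-sample hit rate lands near 5%; the slope lets the VaR forecast widen when the volatility proxy is high, which is the flexibility the quantile approach offers over a fixed RiskMetrics-style formula.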
This issue covers a splendid range of topics: credit mismeasurement, backtesting methodology, stress testing and, finally, commodity risk model validation.
Trinity College, University of Cambridge