As in a year with thirteen full moons, output sometimes exceeds expectations; in that spirit, I am happy to edit an issue of The Journal of Risk Model Validation that contains not the customary four papers, but five.
The issue's first paper, "Consensus information and consensus rating: a simulation study on rating aggregation" by Christoph Lehmann and Daniel Tillich, addresses the important issue of rating aggregation. It is fair to say that aggregation is generally understudied in economics and finance, partly due to the difficulty of getting very far theoretically. This paper contributes to the field by offering simulation evidence on the efficacy of different approaches to aggregation.
Our second paper, by J. M. Pérez Sánchez and E. Gómez-Déniz, is titled "On modeling zero-inflated insurance data". The authors provide a detailed and very lucid account of how to estimate zero-inflated power series distributions using Bayesian methods, and how to validate one's estimates. The application is of general interest in risk modeling as it directly addresses situations where, typically, we have a majority of instruments that behave normally (zeros), while a minority default.
The third paper in the issue, "A prudent loss given default estimation for mortgages" by Bogie Ozdemir, looks at the effect of economic cycles on biases in loss given default and loan-to-value, which are used in the risk management of housing mortgages. The emphasis here is more on stress testing than on direct model validation.
The issue's fourth paper, "Risk reduction in a time series momentum trading strategy" by KiHoon Hong, KiBong Park and Yong Woong Lee, considers the validation of some risk reduction techniques for trading strategies applicable to financial markets. These strategies are momentum-based and usually applied to equity and foreign exchange markets. It is of interest to The Journal of Risk Model Validation to publish applications to markets other than credit markets.
Our fifth and final paper, "A quick tool to forecast value-at-risk using implied and realized volatilities" by Francesco Cesarone and Stefano Colucci, is a further application that may be seen as blending the themes of the preceding papers, as it examines value-at-risk calculations using financial information such as implied volatility and realized volatility. Such a procedure requires a level of data availability that may make it applicable only to exchange-traded liquid instruments with derivatives written on them. Nevertheless, it is a valuable contribution, as it shows how additional information can be used to improve our models.
This issue, then, contains not only our usual credit-driven research but also wider applications to other asset classes. We hope to be able to repeat this offering in the future.
Trinity College, University of Cambridge
The authors of this paper use power series distributions to develop a novel and flexible zero-inflated Bayesian methodology.
This paper explores the aggregation of different single ratings into a 'consensus rating' in order to obtain a more precise estimate of a debtor's default probability. It builds upon the methodology published by Grün et al (2013) and Lehmann and Tillich (2016).
The authors propose a naive model to forecast ex ante value-at-risk (VaR), using a shrinkage estimator between realized volatility, estimated from past return time series, and implied volatility, quoted in the market.
The author of this paper proposes a prudent methodology to correct for potential biases in loss given default (LGD) estimates due to historical price appreciation, appraisal biases, and wear and tear or potential damage to the house.
In this paper, the authors investigate the four most commonly used risk measures – return volatility, beta, value-at-risk and stressed value-at-risk – of a time series momentum (TSM) trading strategy.