University of Cambridge
We live in an environment in which concerns about a double-dip recession are widespread, particularly in Europe, and especially in the UK and the Mediterranean countries. Whether our bankruptcy models are capable of detecting changes in default risk levels therefore seems highly relevant at the moment. The Journal of Risk Model Validation is happy to publish a paper by Alessandra Amendola, Marialuisa Restaino and Luca Sestini that considers variable selection in default risk models, with an application to model validation techniques.
They develop statistical models for predicting the bankruptcy of Italian limited liability firms using annual balance-sheet information. Several issues – such as the structure of the data, the sampling procedure and the selection of financial predictors – are investigated. The proposed models are validated by means of accuracy measures computed over different time horizons. This notion of a validation term structure seems a useful idea. The second paper, by Oliver Blümke, takes up the related theme of time-evolving model validation. To quote from his abstract:
Validating the discriminatory power of a rating system is not trivial: the underlying default probabilities that determine the discriminatory power change over time due to changes in the macroeconomic environment and the credit portfolio.
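For readers who want the setup behind this statement: the default probabilities in question are typically driven by the Basel II one-factor (asymptotic single risk factor) model. A brief sketch of that standard formulation follows; the notation here is ours and not necessarily the paper's. Each obligor's normalized asset return is

\[
X_i \;=\; \sqrt{\rho}\, Z \;+\; \sqrt{1-\rho}\,\varepsilon_i,
\qquad Z,\,\varepsilon_i \sim N(0,1) \ \text{independent},
\]

where \(Z\) is the common systematic factor and \(\rho\) the asset correlation. Obligor \(i\) defaults when \(X_i < \Phi^{-1}(\mathrm{PD}_i)\), so the default probability conditional on the state of the economy is

\[
p_i(Z) \;=\; \Phi\!\left(\frac{\Phi^{-1}(\mathrm{PD}_i) - \sqrt{\rho}\, Z}{\sqrt{1-\rho}}\right).
\]

The correlation \(\rho\) governs how strongly realized default rates move with the macroeconomic factor, which is precisely the channel through which the factor's importance feeds into the rating system's measured discriminatory power.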
This analysis is based on the Basel II one-factor model and establishes links between the importance of the systematic factor and the model’s discriminatory power, which is fundamental to model validation. Such analysis is helpful for interpreting empirical validation results. Our third paper, by Fen-Ying Chen, investigates the performance of competing risk models in the context of structured product risk. In contrast with previous research, the author finds that the overall superiority of generalized autoregressive conditional heteroskedasticity (GARCH)-type models breaks down when conditioning information, in particular oil price levels, is brought into play. Anyone who has worked with structured products will know how difficult product heterogeneity makes it to advance the analysis, so new results in this area are always of value. The final paper, by Peter Grundke, looks at reverse stress testing. This has recently become a hot topic in a number of widely circulated regulatory documents, so this contribution is very timely. To quote from the abstract:
While, for regular stress tests, scenarios are chosen based on historical experience or expert knowledge and their influence on the bank’s survivability is tested, reverse stress tests aim to find exactly those scenarios that cause the bank to cross the frontier between survival and default. Afterward, the most likely of these scenarios has to be found.
The author argues that bottom-up approaches such as the specific integrated risk management technique are ideal candidates for carrying out quantitative reverse stress tests because “they model interactions between different risk types already on the level of the individual financial instruments and risk factors”. He illustrates his approach with a version of the CreditMetrics model that incorporates correlated interest rate and rating-specific credit-spread risk. My feeling is that this paper will be very helpful to the many practitioners currently concerned with how to implement reverse stress testing. Although some discussion in the UK has focused on the need to involve senior executives in discussions regarding exactly which straws “break the camel’s back”, it is always valuable to have a parallel process giving us more objective insights into the weaknesses of business models.
As always, we are on the lookout for good, relevant articles for our journal and welcome enquiries regarding the applicability of an author’s research to model validation issues.