Editor: Steve Satchell
Published: 13 Jan 2012
Papers in this issue
“Model validation: theory, practice and perspectives” by Patrick Hénaff, Claude Martini
“The effect of variant sample sizes and default rates on validation metrics for probability of default models” by David Li, Ruchi Bhariok, Radu Neagu
“On the time scaling of value-at-risk with trading” by Jimmy Skoglund, Donald Erdman, Wei Chen
“The fallacy of an overly simplified asymptotic single-risk-factor model” by Kete Long
Trinity College, University of Cambridge
The current issue contains four papers. The first, by Patrick Hénaff and Claude Martini, is titled “Model validation: theory, practice and perspectives”. It addresses the July 2009 directive from the Basel Committee requiring institutions to quantify model risk.
It details the state of play and raises a number of problems with this quantification. I find the paper highly relevant, as model risk has been an issue of concern in economics for some time. The notion that we, as statisticians, know what the true model is might sound appealing but, in practice, it is very problematic to implement. In fact, to implement this idea in any meaningful way virtually requires one to become a Bayesian.
The second paper, by Jimmy Skoglund, Donald Erdman and Wei Chen, is called “On the time scaling of value-at-risk with trading”, and it discusses issues of time scaling. As young finance students learn, you can obtain an annual volatility by taking a daily volatility and multiplying by roughly sixteen, the square root of the number of trading days in a year. In practice, however, when markets are doing anything interesting, this massively underestimates or overestimates the realized volatility, depending on whether autocorrelation is positive or negative. The authors take this idea and move it into the domain of value-at-risk. Interestingly, they show that the style and strategy of the portfolio determine the nature of the scaling.

The third paper, by Kete Long, is titled “The fallacy of an overly simplified asymptotic single-risk-factor model”. It looks at problems associated with asymptotic single-risk-factor models in terms of their impact on economic capital measures. I have always worried about the single-factor assumption, and I found this paper extremely informative and helpful.
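The point about the square-root-of-time rule made above for the second paper can be illustrated in a few lines. The sketch below is not taken from the paper: it simulates a toy AR(1) daily return series (the parameters, seed and function names are all illustrative choices) and compares the rule's scaled volatility with the realized volatility over a ten-day horizon.

```python
import math
import random

def stdev(xs):
    """Sample standard deviation."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def compare_scaling(phi, n_days=50_000, horizon=10, sigma=0.01, seed=0):
    """Return (sqrt-of-time scaled vol, realized horizon vol) for AR(1) returns.

    r_t = phi * r_{t-1} + eps_t, with eps_t ~ N(0, sigma^2).  All parameter
    values here are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    returns, prev = [], 0.0
    for _ in range(n_days):
        prev = phi * prev + rng.gauss(0.0, sigma)
        returns.append(prev)
    # Square-root-of-time rule: scale the daily vol by sqrt(horizon).
    scaled = stdev(returns) * math.sqrt(horizon)
    # Realized horizon vol: stdev of non-overlapping horizon-day return sums.
    chunks = [sum(returns[i:i + horizon])
              for i in range(0, n_days - horizon + 1, horizon)]
    return scaled, stdev(chunks)

for phi in (0.0, 0.4, -0.4):
    scaled, realized = compare_scaling(phi)
    print(f"phi={phi:+.1f}: sqrt-rule vol={scaled:.4f}, realized vol={realized:.4f}")
```

With zero autocorrelation the two numbers agree; with positive autocorrelation the rule understates the realized volatility, and with negative autocorrelation it overstates it, which is the pattern described above.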
The final paper, by David Li, Ruchi Bhariok and Radu Neagu, is titled “The effect of variant sample sizes and default rates on validation metrics for probability of default models”. It looks at validation metrics for scoring models and shows that none of the usual suspects gives an accurate evaluation when the sample size and the default rate are varied. While this is perhaps unsurprising, the authors also provide some procedures to help decision makers decide whether or not a model is valid. This is an innovative paper that should lead to considerably more research in this area. It is clear that there is no definitive answer at this stage.
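The sensitivity described above is easy to reproduce with a toy simulation. The sketch below is not the authors' procedure: it draws scores for defaulters and non-defaulters from a fixed model (so the "true" discrimination never changes) and shows how wildly the empirical AUC, one of the usual suspects, swings once the sample is small or defaults are rare. All names and parameter choices are illustrative.

```python
import bisect
import random

def auc(bad, good):
    """AUC = P(defaulter score > non-defaulter score), Mann-Whitney style,
    counting ties as one half."""
    gs = sorted(good)
    wins = 0.0
    for b in bad:
        lo = bisect.bisect_left(gs, b)   # good scores strictly below b
        hi = bisect.bisect_right(gs, b)  # ...plus ties
        wins += lo + 0.5 * (hi - lo)
    return wins / (len(bad) * len(gs))

def sampled_auc(n, default_rate, rng):
    """Draw one portfolio of size n and return its empirical AUC.
    Defaulters' scores are shifted up by 1, so true discrimination is fixed."""
    n_bad = max(1, int(n * default_rate))
    bad = [rng.gauss(1.0, 1.0) for _ in range(n_bad)]        # defaulters
    good = [rng.gauss(0.0, 1.0) for _ in range(n - n_bad)]   # non-defaulters
    return auc(bad, good)

rng = random.Random(42)
for n, rate in [(200, 0.05), (200, 0.01), (5_000, 0.05)]:
    aucs = [sampled_auc(n, rate, rng) for _ in range(200)]
    print(f"n={n:5d}, default rate={rate:.2f}: "
          f"AUC ranges {min(aucs):.2f}-{max(aucs):.2f}")
```

The same scorecard looks anywhere from poor to excellent when only a handful of defaults are observed, while the large, higher-default-rate portfolio pins the metric down, which is the validation difficulty the paper addresses.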
I have purposefully avoided trying to contextualize our papers in terms of current market conditions as I normally do. The current state of financial markets is so driven by political factors that economic analysis seems to be of second-order importance. I shall leave economic linkages and research for later issues.