Editor: Steve Satchell, University of Cambridge
Published: 20 Jun 2009
Papers in this issue
by Radu Neagu, Sean Keenan, Kete Chalermkraivuth
by Ralf Pauly, Jens Fricke
by Alfred Hamerle, Kilian Plank
by Jimmy Skoglund, Wei Chen
After the previous issue (Volume 3 Number 1) concentrated on the credit crunch and the big issues associated with the survival and very essence of the financial system, it is a relief to return to more mundane matters, reflecting, perhaps, the calmer waters we now find ourselves paddling in. The material in this issue is very much core information, discussing statistical issues to do with tests and methodologies used in risk model validation.
The first paper (actually a note), by Hamerle and Plank, suggests an improvement to the Berkowitz test when discrete, rather than continuous, distributions are assumed.
The second paper, by Neagu et al, looks at internal credit rating systems. These are widely used in the UK, at least in mortgage markets, yet have long been neglected by academic study, not least because they rely on expert-judgement models, which are difficult to conceptualize mathematically unless one is a Bayesian. At the heart of such a process is a mapping between scores and default probabilities, and it is always interesting to see how this relationship is operationalized.
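The score-to-default-probability mapping mentioned above is often operationalized as a monotone calibration curve. A minimal sketch of the idea, using an illustrative logistic curve with invented parameters (neither the functional form nor the numbers are taken from the paper):

```python
import math

def score_to_pd(score, a=-6.0, b=0.08):
    """Map an internal rating score to a probability of default
    via a logistic curve. The intercept a and slope b are
    illustrative calibration parameters, not from the paper."""
    return 1.0 / (1.0 + math.exp(-(a + b * score)))

# The mapping is monotone: here, higher scores imply higher PD
# (real systems may orient the scale the other way round).
for s in (20, 50, 80):
    print(s, round(score_to_pd(s), 4))
```

In practice the parameters would be fitted so that pooled default rates per score band match observed frequencies; the expert-judgement component enters through overrides and band definitions.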
The next paper, by Pauly and Fricke, looks at underestimation of market risk due to poor forecasting. The authors argue that:
. . . resulting ex ante exceedance rates are related to the ex post failure rates which result from the number of exceptions in the backtesting procedure of the Basel II regulation. An empirically grounded analysis reveals a systematic underestimation risk which clearly differs between common forecasting models. With regard to the underestimation risk our paper unfolds the necessity for an adjustment of the Basel II guidelines.
We note that these conclusions have important repercussions for risk model validation and for the regulatory guidelines surrounding it.
Our final paper is by Skoglund and Chen. They look at risk decomposition in a more general setting than is conventionally done. Those of us raised on linear factor models will be surprised by how much can still be done in non-linear models, although factor homogeneity now becomes a key assumption. The authors show what is analytically possible and then demonstrate an empirical alternative likely to be useful in practice.
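The role of the homogeneity assumption can be seen in the classical Euler allocation, where a degree-one homogeneous risk measure decomposes exactly into factor contributions. A minimal sketch with portfolio volatility as the risk measure and invented numbers (this illustrates the general principle, not the authors' specific method):

```python
import math

# Illustrative 2-asset example: weights and a covariance matrix
# (numbers invented for the sketch, not from the paper).
w = [0.6, 0.4]
cov = [[0.04, 0.01],
       [0.01, 0.09]]

def port_vol(w):
    """Portfolio volatility sqrt(w' C w); homogeneous of degree 1 in w."""
    return math.sqrt(sum(w[i] * cov[i][j] * w[j]
                         for i in range(2) for j in range(2)))

sigma = port_vol(w)

# Euler contribution of position i:
#   w_i * d(sigma)/d(w_i) = w_i * (C w)_i / sigma
contrib = [w[i] * sum(cov[i][j] * w[j] for j in range(2)) / sigma
           for i in range(2)]

# By Euler's theorem for homogeneous functions,
# the contributions add back up to the total risk.
assert abs(sum(contrib) - sigma) < 1e-12
```

For non-linear portfolios the derivative is rarely available in closed form, which is where an empirical alternative such as the one the authors demonstrate becomes useful.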
I conclude with my normal health report on the well-being of the journal. In the middle of Volume 3, we find ourselves blessed with good copy, readers and referees.