Journal of Risk Model Validation

Steve Satchell

University of Cambridge

Related to risk model validation, the following question has been of interest to me: Suppose I were to start with a blank piece of paper and build a new generation of risk models. If I were to do so, how would I design them so that they could be easily validated? It is quite unclear, to me at least, what the answer is. However, in the specific context of equity portfolio risk, some features stand out. One would want to be much more precise about the actual horizon of the risk model's forecast. It is a common feature of virtually all models currently in the market that they are wonderfully vague about whether the forecast is for one month, one year or any time that matches the answer. A similar feature appears in analysts' forecasts: they seem to be for a year, but if the prediction is realized before the year ends it is counted as a success, a most curious system.

Turning now to the contents of this Fall 2010 issue, I shall start with the paper by Frederik Herzberg. This theoretical contribution defines an implied value-at-risk (VaR) level based on VaR and expected shortfall estimators. The author derives a number of interesting formulae, and his approach looks most fruitful for understanding the mapping between densities and VaR.

The second paper, by Marco Morone and Anna Cornaglia, presents an econometric model to quantify downturn loss given default on residential mortgages. I find this paper especially interesting because of its attempt to map macroeconomic variables as determinants of default rates. The procedure could be used in practice, and the validation approaches associated with it are discussed.

The third paper, by Arcady Novosyolov and Daniel Satchkov, focuses on stress-testing, which I regard as a legitimate part of model validation. Although the paper is technical, the authors digress to discuss some philosophical issues about stress-testing, which I think all readers will enjoy. The paper looks at two stress-testing models and validates them in extreme periods. The authors use both a time-weighted model and an event-weighted model. They find evidence, perhaps not surprisingly, that the event-weighted model is better at capturing risk in extreme events.

The final paper provides a simple but improved approach to statistically testing VaR; it essentially gives a confidence interval for non-normal data when the data are independently and identically distributed. Although this approach requires large samples, it chimes with recommendations that VaR should not be based on short-horizon estimation.
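The paper's exact test is not reproduced here, but the general idea of a distribution-free confidence interval for VaR from i.i.d. data can be sketched using order statistics: the number of observations below the true p-quantile follows a binomial distribution, which pins down which order statistics bracket the quantile at a chosen confidence level. The function name and interface below are illustrative, not the authors' own.

```python
import math

def var_confidence_interval(losses, p=0.99, conf=0.95):
    """Distribution-free (order-statistic) confidence interval for the
    p-quantile of i.i.d. losses. Makes no normality assumption, but
    needs a large sample for the interval to be tight at high p."""
    x = sorted(losses)
    n = len(x)
    alpha = (1.0 - conf) / 2.0  # probability allotted to each tail

    # P(Bin(n, p) <= k) for k = 0..n: the count of observations at or
    # below the true p-quantile is binomially distributed.
    cdf, c = [], 0.0
    for k in range(n + 1):
        c += math.comb(n, k) * p**k * (1.0 - p)**(n - k)
        cdf.append(c)

    # Lower order statistic: largest l with P(X_(l) > q_p) = cdf[l-1] <= alpha.
    l = max((k + 1 for k in range(n) if cdf[k] <= alpha), default=1)
    # Upper order statistic: smallest u with P(X_(u) < q_p) <= alpha,
    # i.e. cdf[u-1] >= 1 - alpha.
    u = min((k + 1 for k in range(n) if cdf[k] >= 1.0 - alpha), default=n)

    return x[l - 1], x[u - 1]  # covers the true p-quantile with prob >= conf
```

Because the interval is built from order statistics alone, it remains valid for heavy-tailed loss distributions; the price, as the paper notes, is that large samples are needed before the bracketing indices at p = 0.99 give a usefully narrow interval.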

Finally, going forward, I would like to say that I am always interested in thoughts about what constitutes relevant subject matter for this journal. Very little is written directly on model validation, but a great deal is written on issues that touch on it.
