University of Cambridge
The current issue contains four papers covering the topics of model validation and stress testing. It may be that the distinction between these two concepts exists more in the titles of the papers than in the substance. Interestingly, a common theme running throughout the four papers, particularly the last three, is a focus on notions of uncertainty interpreted in various ways. This uncertainty can be thought of in terms of changes in the distribution, changes in the model or problems with the data. Such is the strength of this theme that this could be interpreted as a special issue, although it was not planned that way.
Our first paper, "Stress testing a retail loan portfolio: an error correction model approach" by Steeve Assouan, uses techniques such as cointegration and error correction modeling to set up a framework for stress testing a loan portfolio. These techniques are widely used by economists in situations where individual processes look like random walks but move together. Intuitively, it is a bit like two drunks leaving a pub but falling over in the same place. In the appendix to the paper, Assouan gives a lucid description of these methodologies for those not steeped in their mysteries.
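For readers who would like to see the "two drunks" intuition in numbers, the following is a minimal sketch (not drawn from Assouan's paper) of two cointegrated series: each wanders like a random walk on its own, but because they share a common stochastic trend, their difference stays stationary.

```python
import random

random.seed(42)

# Two "drunks" sharing a common stochastic trend (the pub they both left):
# each series is nonstationary on its own, but their spread is stationary.
n = 5000
trend = 0.0
x, y = [], []
for _ in range(n):
    trend += random.gauss(0, 1)           # common random-walk component
    x.append(trend + random.gauss(0, 1))  # series 1: trend plus own noise
    y.append(trend + random.gauss(0, 1))  # series 2: trend plus own noise

# The spread x - y cancels the trend, leaving only the two noise terms,
# so its sample variance stays bounded while that of the levels grows with n.
spread = [a - b for a, b in zip(x, y)]

def variance(s):
    m = sum(s) / len(s)
    return sum((v - m) ** 2 for v in s) / len(s)

print(variance(x), variance(spread))
```

In an error correction model, it is precisely this stationary spread that enters as the "error correction" term pulling the two series back together.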
The second paper, "On bounds for model calibration uncertainty" by Mikhail V. Deryabin, looks at upper and lower bounds on coherent model risk measures. The approach is based on uncertainty rather than risk. The idea here is that one needs to take into account, especially in the case of derivatives, what the impact of getting the pricing model wrong will be on the traded value of the derivative. It seems clear that, in order to get a sensible answer out of this problem, one needs to impose a fair amount of structure, which the author does. I found the paper technically challenging but extremely interesting.
The third paper, "Quantifying model risk within a CreditRisk+ framework" by Matthias Fischer and Alexander Mertel, focuses on the CreditRisk+ modeling framework of Credit Suisse Financial Products. Essentially, this paper weakens some of the model's distributional assumptions, replacing them with a more "flexible" family of distributions, to borrow the paper's terminology. The authors demonstrate the attractive properties of this approach and provide some empirical examples.
The final paper, "The effect of imperfect data on default prediction validation tests" by Heather Russell, Douglas Dwyer and Qing Kang Tang, looks at the way in which data problems influence the testing procedures that are used in model validation. The focus is on measures such as probability of default, and simulation methods are used. This paper is most welcome, as problems with financial data are often much more serious than data providers or data users are prepared to acknowledge. The presence of stale data in most asset classes has profound effects on portfolio construction and risk management.
I hope you find these papers interesting. The topic of uncertainty, and attempts to assess it, is a perennial issue in model validation.
It has been brought to our attention that the paper "A realistic approach for estimating and modelling loss given default" by Rakesh Malkani, which was published in The Journal of Risk Model Validation in 2012 (Volume 6, Number 2, pp. 103-116), consists almost exclusively of work written by Mr Christopher Karr, without proper attribution.
We recognise that the paper should never have been published under Rakesh Malkani's name, and we apologise to Mr Karr unreservedly. We rely on our authors to provide accurate attribution information, and it is sadly the case that errors of this kind will occasionally occur. We would like to assure our readers that we regard correct attribution as being of the utmost importance.
Steve Satchell, Editor-in-Chief
Nick Carver, Publisher