Journal of Risk Model Validation
Editor-in-chief: Steve Satchell
Volume 4, Number 2 (June 2010)
University of Cambridge
It is often assumed that risk model validation is solely about credit risk; that is, the risk that a loan defaults or degrades in some way (by, for example, having its rating changed). That this is not true could hardly be gleaned from this issue of The Journal of Risk Model Validation, which presents four excellent papers on credit risk and none on other risk categories; I shall delay describing them in detail in order to discuss at least one of the other key risk models that also needs validation, and to explain why much less research occurs in these areas.
I shall focus the discussion on market risk, which we shall define as stochastic variation in the value of a portfolio. We usually measure this by the volatility of portfolio returns, typically the annualized standard deviation of absolute or benchmark-relative returns; the latter is known as tracking error. Validation then boils down to assessing the match between ex ante and ex post portfolio volatility.
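For concreteness, annualized tracking error is conventionally written as the standard deviation of active returns scaled to an annual horizon (a standard textbook definition, not taken from any paper in this issue):

\[
\mathrm{TE} = \sqrt{A}\,\sqrt{\frac{1}{T-1}\sum_{t=1}^{T}\bigl(a_t - \bar{a}\bigr)^2},
\qquad a_t = r_t^{p} - r_t^{b},
\]

where \(r_t^{p}\) and \(r_t^{b}\) are the portfolio and benchmark returns in period \(t\), \(\bar{a}\) is the mean active return over the \(T\) periods, and \(A\) is the number of return periods per year (for example, \(A = 12\) for monthly data).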
Unfortunately, the latter can only be estimated, rather crudely as it turns out, due to, inter alia, changes in portfolio weights through time. Whilst the same broad class of difficulty seems present in credit risk validation as well, this hardly seems to act as an impediment to the production of research, as we can see from this issue.
I have raised this point with some friends in the equity space who claim that they are more cautious and more rigorous; an extreme version of this, also communicated to me, is that, had the equity community run credit products over the last decade, the financial crisis would never have occurred. This is a somewhat extreme position. However, it leaves us with the interesting result that most equity risk models are barely validated at all. Given the atrocious performance of stock-based risk models over the last three years, this must be a pertinent area of research. Or perhaps we know the answer: linear factor-based risk models cannot deal with the extreme events now commonplace in the returns data. Vendors of risk models, inextricably committed to linear factor models, are not in a position to fix things, and instead hope that the problems will eventually cure themselves and that Gaussianity will return. This mindset is not all that different from that of the manager who keeps various derivative-enhanced credit products on his balance sheet at some pre-crisis valuation, waiting for the market to return.
We turn now to a discussion of the papers in this issue. The first paper, by Kakeru Miura, Satoshi Yamashita and Shinto Eguchi is titled “Area under the curve maximization method in credit scoring”. This interesting paper takes an optimization approach to credit-scoring model construction. The authors show that the industry standard logit model can be improved upon in terms of area under the curve (AUC) if we set up the model construction problem as an AUC maximization problem. The relevance of this to validation is not immediate, but it is clear that such a procedure could be used as a yardstick to validate a given model.
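To illustrate the yardstick idea, the AUC of any given scoring model can be computed directly from its scores and realized defaults via the Mann-Whitney statistic; a model whose construction targets AUC can then be compared against, say, a logit benchmark on the same data. This is a minimal sketch of the AUC computation only (the function name and inputs are illustrative, not taken from the paper):

```python
import numpy as np

def auc(scores, defaults):
    """Area under the ROC curve via the Mann-Whitney statistic.

    scores: higher score = higher predicted default risk.
    defaults: 1 for obligors that defaulted, 0 for survivors.
    """
    scores = np.asarray(scores, dtype=float)
    defaults = np.asarray(defaults, dtype=int)
    pos = scores[defaults == 1]
    neg = scores[defaults == 0]
    # AUC = probability that a random defaulter outscores a random
    # survivor; concordant pairs count 1, tied pairs count 1/2.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# A perfectly ranking score yields AUC = 1; an inverted one yields 0.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```

Comparing this quantity for a candidate model and for an AUC-maximizing benchmark on the same portfolio is one concrete way to use the authors' procedure as a validation yardstick.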
The second paper, by Oliver Blümke, titled “Probability of default estimation and validation within the context of the credit cycle”, deals with improving the identification of macro factors in through-the-cycle estimation, with the purpose of explaining variability in default rates. The paper uses non-parametric methods to carry out its analysis.
The third paper, by Andrew Chernih, Luc Henrard and Steven Vanduffel, titled “Reconciling credit correlations”, looks at the widespread use of Merton’s model in the computation of asset correlations. As the authors observe, “It is common practice by industry practitioners to apply a financial approach, also known as Merton’s model of the firm, and we remark that the latter approach also underpins modern solvency standards such as Basel II and Solvency II”. They note that alternative measures of asset correlation lead to answers that differ from this approach and provide explanations as to why.
The final paper, titled “Stress testing of retail mortgages: a study based on non-stationary Markov chains and t-copula simulation”, by Chang Liu, Min Guo and Raja Nassar, takes real-life retail mortgage loan data from a Chinese national commercial bank and uses it to analyze the projected distribution over different mortgage states. The data was used to simulate the projected portfolio distribution over the states of a retail mortgage loan under different shock scenarios via a t-copula. Such a copula should capture fat tails and, under some representations, forms of non-linear dependence other than those measured by standard correlation. As the reader can see, credit risk issues are discussed in great depth in this issue; indeed, this could happily be a special issue on the subject.
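The tail-dependence point can be made concrete: a t-copula draw is a multivariate t vector mapped through the t distribution function, so that the margins are uniform but joint extremes cluster more than under a Gaussian copula. The sketch below shows this standard construction only, not the authors' full simulation (the function name and parameters are illustrative):

```python
import numpy as np
from scipy import stats

def t_copula_sample(corr, df, n, seed=0):
    """Draw n samples of uniforms with t-copula dependence.

    corr: correlation matrix; df: degrees of freedom (lower df means
    fatter joint tails). Returns an (n, d) array of values in (0, 1).
    """
    rng = np.random.default_rng(seed)
    d = corr.shape[0]
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, d)) @ L.T    # correlated normals
    w = rng.chisquare(df, size=(n, 1)) / df  # common chi-squared mixer
    t_draws = z / np.sqrt(w)                 # multivariate t vector
    return stats.t.cdf(t_draws, df=df)       # uniform marginals

corr = np.array([[1.0, 0.6], [0.6, 1.0]])
u = t_copula_sample(corr, df=4, n=10_000)
# The uniforms can then be mapped, state by state, into shocked
# transition rates for the Markov chain under each scenario.
```

Dividing the correlated normals by a single chi-squared mixing variable is what creates the joint tail clustering: when the mixer is small, all components blow up together.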
Papers in this issue
Reconciling credit correlations
Probability of default estimation and validation within the context of the credit cycle
Stress testing of retail mortgages: a study based on non-stationary Markov chains and t-copula simulation
Area under the curve maximization method in credit scoring