Journal of Risk Model Validation

This issue brings us four interesting papers, which I shall discuss below. As an aside, though, I was delighted to recently meet a loan book risk management team whose members are avid readers of The Journal of Risk Model Validation. While respecting their anonymity, I take on board their comment that a pressing issue for them is learning how to react to the ever-changing regulatory framework. We would be delighted to publish such material, but it is hard for the journal to provide timely information, not least because of the built-in time lags between the creation, production and dissemination of relevant material.

Our first paper, "AERB: developing AIRB PIT-TTC PD models using external ratings" by Gaurav Chawla, Lawrence R. Forest Jr and Scott D. Aguais, addresses the criticism, voiced since the financial crisis, of credit institutions' use of credit rating agency (CRA) ratings in their lending practices; this criticism is generally attributed to the divergence between those ratings and observed risks. Some regulators do not allow external ratings to be used as direct inputs into a credit institution's internal probability of default/ratings model, and hence into the capital planning process, as regulators generally prefer internal assessments. However, regulators do allow CRAs' long-run average default rates to be used to benchmark the output of an institution's internal probability of default model. The authors propose a class of "agency replication" models that make use of obligor information and CRA long-run default rate information. They show how this class of models can be applied to portfolios such as large corporates, banks and insurance companies, and they present a range of applications of their methodology.
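To fix ideas, the agency-replication approach can be caricatured as follows: score an obligor from its financial ratios, bucket the score into an agency-style grade, and assign each grade the CRA long-run average default rate as its through-the-cycle (TTC) probability of default. This is a hypothetical sketch only; the weights, cut-offs and default rates below are invented for illustration and are not the authors' calibration.

```python
# Illustrative long-run average one-year default rates by grade
# (made-up numbers, standing in for published CRA statistics).
LONG_RUN_DEFAULT_RATES = {
    "AAA": 0.0001, "AA": 0.0003, "A": 0.0008,
    "BBB": 0.0020, "BB": 0.0090, "B": 0.0450, "CCC": 0.2000,
}

# Score cut-offs mapping a credit score in [0, 1] to a replicated grade.
GRADE_CUTOFFS = [(0.85, "AAA"), (0.75, "AA"), (0.65, "A"),
                 (0.50, "BBB"), (0.35, "BB"), (0.20, "B")]

def credit_score(leverage: float, coverage: float, size: float) -> float:
    """Toy linear score from standardized obligor ratios (weights made up)."""
    raw = 0.5 - 0.4 * leverage + 0.3 * coverage + 0.2 * size
    return max(0.0, min(1.0, raw))

def replicated_grade(score: float) -> str:
    """Bucket the score into an agency-style rating grade."""
    for cutoff, grade in GRADE_CUTOFFS:
        if score >= cutoff:
            return grade
    return "CCC"

def ttc_pd(leverage: float, coverage: float, size: float) -> float:
    """TTC PD = long-run CRA default rate of the replicated grade."""
    grade = replicated_grade(credit_score(leverage, coverage, size))
    return LONG_RUN_DEFAULT_RATES[grade]
```

A strongly capitalized obligor (low leverage, high coverage) lands in a high grade and inherits a low long-run PD, while a weak obligor inherits a high one; the authors' actual models are, of course, estimated statistically rather than hand-specified.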

The issue's second paper, "A mean-reverting scenario design model to create lifetime forecasts and volatility assessments for retail loans" by Joseph L. Breeden and Sisi Liang, discusses a model for forecasting loan performance over the life of a given loan. The approach is to use macroeconomic scenarios for the near term and then assume reversion to the long-run average in later years. The authors create a loan-level forecasting model with an age-vintage-time structure for retail loans (in this case, a small auto loan portfolio). The age-vintage-time model is similar in structure to an age-period-cohort model, but it is estimated at the loan level for greater robustness on small portfolios. The authors employ a concept called the environmental function of time, which is correlated with macroeconomic factors and allows for greater model stability. This approach is in line with the explicit goals of the new Financial Accounting Standards Board loan loss accounting guidelines. In addition, the authors' model provides a simple, internally consistent mechanism for transitioning between point-in-time and through-the-cycle economic capital estimates.
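The scenario-design idea of an explicit near-term path followed by mean reversion can be sketched in a few lines. The sketch below, which is an assumption of mine rather than the authors' specification, extends a near-term macroeconomic scenario by letting the driver decay geometrically toward its long-run average:

```python
def mean_reverting_path(scenario, long_run_mean, horizon, phi=0.7):
    """Extend a near-term macro scenario to `horizon` periods.

    scenario:      explicit near-term values (e.g. unemployment rate, %).
    long_run_mean: level the driver reverts to after the scenario ends.
    phi:           persistence per period (0 < phi < 1); illustrative value.
    """
    path = list(scenario)
    x = path[-1]
    while len(path) < horizon:
        # AR(1)-style geometric decay toward the long-run mean.
        x = long_run_mean + phi * (x - long_run_mean)
        path.append(x)
    return path

# A stress scenario pushes unemployment to 8% over three quarters,
# after which the path drifts back toward a long-run mean of 5%.
path = mean_reverting_path([6.0, 7.5, 8.0], long_run_mean=5.0, horizon=10)
```

Here the first three entries are the supplied scenario and the remaining seven converge toward 5%, which is the qualitative shape the paper's lifetime forecasts rely on.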

The third paper in the issue, "Downside risk measure performance in the presence of breaks in volatility" by Johannes Rohde, extends the model comparison approach introduced by Lopez in 1998 into a loss-function-based framework for comparing how sensitive quantile downside risk measures are to breaks in volatility or in the return distribution. The author finds that expected shortfall appears to be the superior measure because of its ability to identify breaks in volatility. An empirical study using data from six stock market indexes further demonstrates the applicability of the procedure and confirms the findings of the simulation study.
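The two quantile measures being compared are easy to state empirically. The sketch below is not Rohde's actual testing procedure; it simply computes empirical value-at-risk (VaR) and expected shortfall (ES) from simulated returns before and after a break in volatility, with illustrative sample sizes and a tripling of volatility as the break:

```python
import numpy as np

def var_es(returns: np.ndarray, alpha: float = 0.99):
    """Empirical VaR and ES at level alpha for the loss L = -r."""
    losses = np.sort(-returns)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()  # average loss beyond VaR
    return var, es

rng = np.random.default_rng(0)
calm = rng.normal(0.0, 0.01, 10_000)      # pre-break: 1% daily volatility
stressed = rng.normal(0.0, 0.03, 10_000)  # post-break: volatility triples

var_calm, es_calm = var_es(calm)
var_str, es_str = var_es(stressed)
```

Both measures scale up after the break, and ES always sits beyond VaR since it averages the tail losses exceeding the quantile; the paper's contribution is a loss-function-based way of scoring how well each measure registers such breaks.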

Our final paper, "Liquidity stress testing: a model for a portfolio of credit lines" by Marco Geidosch, discusses how cash outflows due to credit lines can be modeled in a liquidity stress test. The model is based on bootstrapping from a portfolio-level time series of daily credit line drawdowns. The author argues, first, that the model does not rely on distributional assumptions or complex parameter estimation, ie, that model risk is low, and, second, that it is intuitive and straightforward to implement; he provides simulation results to support these claims. Returning to our earlier aside, we would welcome regular commentary from practitioners on issues of model risk and risk measurement more generally.
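The bootstrap mechanism is simple enough to sketch. In the illustration below (hypothetical data and horizon, not the paper's), observed daily drawdowns are resampled with replacement to build a distribution of cumulative outflows over a stress horizon, from which a high percentile serves as the stressed outflow figure:

```python
import random

def bootstrap_outflows(daily_drawdowns, horizon_days, n_sims, seed=42):
    """Simulated cumulative drawdowns over `horizon_days`, one per path.

    Each path sums `horizon_days` draws (with replacement) from the
    observed portfolio-level daily drawdown history.
    """
    rng = random.Random(seed)
    return [sum(rng.choice(daily_drawdowns) for _ in range(horizon_days))
            for _ in range(n_sims)]

# Hypothetical observed daily drawdowns on the portfolio (EUR millions).
history = [0.0, 0.0, 1.2, 0.4, 3.0, 0.0, 0.8, 5.5, 0.1, 0.0]

sims = sorted(bootstrap_outflows(history, horizon_days=30, n_sims=5000))
stress_outflow = sims[int(0.99 * len(sims))]  # 99th-percentile 30-day outflow
```

No distributional form is assumed and the only inputs are the drawdown history, horizon and confidence level, which is the source of the low-model-risk claim.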


Steve Satchell
Trinity College, University of Cambridge
