Journal of Risk Model Validation

The four papers included in this issue of The Journal of Risk Model Validation cover a wider spread of topics than usual, so I will desist from trying to find a common theme and proceed directly to describing the individual papers.

Our first paper, "Conditioned likelihood estimation of nonnormal distributions: risk estimation of credit portfolios in stressed markets" by Kingsley Oteng-Amoako, uses the Box-Cox transformation to address issues of nonnormality. The resulting estimators are applied in the parameterization of an estimated tail loss measure that is used to assess the risk exposure of traded collateralized debt obligation (CDO) tranches. The author's findings highlight that the transformed estimated tail loss provides a more consistent assessment of portfolio risk, particularly during periods of significant market stress. As such, we have a procedure that allows one to take on board varying degrees of nonnormality depending on market conditions.
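For readers who want a feel for the mechanics, the sketch below illustrates the general idea of normalizing skewed losses with a Box-Cox transform before estimating a tail loss. It is a minimal Python illustration, not the author's conditioned-likelihood estimator: the shift constant, the simulation size and the synthetic data are all assumptions of the example.

```python
# Minimal sketch: Box-Cox normalization before tail loss estimation.
# Illustrative only; this is not the paper's conditioned-likelihood
# estimator.
import numpy as np
from scipy import special, stats

def boxcox_expected_tail_loss(losses, alpha=0.99, n_sim=100_000, seed=0):
    """Expected tail loss at level alpha after a Box-Cox normalization."""
    # Box-Cox requires strictly positive data, so shift if necessary.
    shift = max(0.0, 1e-6 - losses.min())
    transformed, lam = stats.boxcox(losses + shift)  # MLE for lambda
    mu, sigma = transformed.mean(), transformed.std(ddof=1)
    # Simulate from the fitted normal in transformed space, map back to
    # the original scale, and average the losses beyond the VaR level.
    rng = np.random.default_rng(seed)
    sims = special.inv_boxcox(rng.normal(mu, sigma, n_sim), lam) - shift
    var = np.quantile(sims, alpha)
    return sims[sims >= var].mean()

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=0.8, size=5_000)  # skewed losses
print(boxcox_expected_tail_loss(sample))
```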

The second paper in the issue, "Modeling systematic risk and point-in-time probability of default under the Vasicek asymptotic single-risk-factor model framework" by Bill Huajian Yang, deals with issues of systematic risk. Under the Vasicek asymptotic single-risk-factor model framework, entity default risk for a risk-homogeneous portfolio can be divided into two parts: systematic and entity-specific. The author makes the following claim.

While entity-specific risk can be modeled by a probit or logistic model using a relatively short period of portfolio historical data, modeling of systematic risk is more challenging. In practice, most default risk models do not fully or dynamically capture systematic risk.

The author proposes an approach to modeling systematic and entity-specific risks in parts and then aggregating the parts analytically. Systematic risk is quantified and modeled by a multifactor Vasicek model with a latent residual: a factor accounting for default contagion and feedback effects. The asymptotic maximum likelihood approach to parameter estimation for this model is equivalent to least-squares linear regression. Conditional entity probabilities of default for scenario tests and the through-the-cycle entity probability of default both have analytical solutions. For validation, the author models the point-in-time entity probability of default for a commercial portfolio and stresses the portfolio default risk by shocking the systematic risk factors. Rating migration and portfolio loss are assessed.
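The multifactor model with its latent contagion residual is the author's contribution and is not reproduced here. The sketch below shows only the textbook single-factor building block: the Vasicek point-in-time probability of default conditional on a systematic factor draw. The parameter values are illustrative.

```python
# One-factor Vasicek conditional PD: given a through-the-cycle PD p,
# asset correlation rho and a systematic factor draw z, the point-in-
# time PD is Phi((Phi^{-1}(p) - sqrt(rho) * z) / sqrt(1 - rho)).
import numpy as np
from scipy.stats import norm

def conditional_pd(p_ttc, rho, z):
    """Point-in-time PD conditional on the systematic factor z."""
    return norm.cdf((norm.ppf(p_ttc) - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))

# Stress test in the spirit of the paper: shock the systematic factor
# to a severely adverse quantile and recompute the PD.
p_ttc, rho = 0.02, 0.15                      # illustrative parameters
z_stress = norm.ppf(0.001)                   # 1-in-1,000 adverse draw
print(conditional_pd(p_ttc, rho, z_stress))  # well above the 2% TTC PD
```

Shocking z in this way is the simplest version of the stress exercise the author performs with his richer factor structure.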

The issue's third paper, "Sensitivity analysis of risk measurement for catastrophe losses caused by natural disasters" by Myung Suk Kim, is slightly different from our usual papers as it addresses issues of budgeting against catastrophic loss. To my mind, this has interesting things to say to risk managers, as many active trading strategies have an implicit budgeting problem lurking behind them. The author examines the sensitivity of risk measurement to losses caused by natural catastrophes. Annual losses from natural disasters for sixteen cities and provinces in South Korea during the period 1979-2011 are used for the case study. Various distributions are suggested to model the loss distribution and are evaluated using the Kolmogorov-Smirnov test. Test results indicate that normal mixture and lognormal distributions are suitable for modeling losses in several districts. Using these distributions and annual value-at-risk, losses for one-in-ten-year, one-in-twenty-year and one-in-a-hundred-year events are estimated for each district. The corresponding event-to-budget ratios are reported in order to examine their impact on government budgets. The empirical results show that risk measurements for potential catastrophe losses are very sensitive to the assumptions made about the nature of the loss distribution. This study may help improve governments' risk-based decision-making strategies in the event of natural disasters but, as I mentioned earlier, it is also likely to be of wider interest, as the approach could be used to look at asset allocation in hedge funds, for example.
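The fit-test-quantile workflow the paper follows is easy to sketch. The code below uses synthetic data in place of the Korean district losses, fits a lognormal, checks it with a Kolmogorov-Smirnov test and reads off 1-in-N-year losses as quantiles; everything here is an illustration rather than the paper's calculation.

```python
# Sketch: fit a candidate loss distribution, test the fit, then read
# off return-period losses as quantiles. Synthetic data throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
annual_losses = rng.lognormal(mean=10.0, sigma=1.2, size=33)  # 33 "years"

# Fit a lognormal and test goodness of fit. (Caveat: using parameters
# estimated from the same data biases the KS p-value; the sketch
# ignores this refinement.)
shape, loc, scale = stats.lognorm.fit(annual_losses, floc=0.0)
ks_stat, p_value = stats.kstest(annual_losses, "lognorm",
                                args=(shape, loc, scale))
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")

# A 1-in-N-year loss is the (1 - 1/N) quantile of the fitted
# distribution of annual losses.
for n in (10, 20, 100):
    var_n = stats.lognorm.ppf(1.0 - 1.0 / n, shape, loc, scale)
    print(f"1-in-{n}-year loss: {var_n:,.0f}")
```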

Our final paper, "Validation of term structure forecasts with factor models" by Alexander B. Matthies, addresses issues associated with the use of dynamic factor models. In particular, the author examines the predictive content of dynamic factor models in term structure modeling and carries out exercises in evaluation and validation. Under a purely statistical, data-driven approach, different sets of variables and different estimation and forecasting methods are compared. Central assumptions of standard term structure factor models are thereby tested, and the author finds that the inclusion of macroeconomic variables is useful for improving forecasts. Furthermore, the combination of a static representation for factor estimation with autoregressive factor forecasts produces superior forecasts. These results confirm the statistical assumptions about term structure models made in prior research.
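The "static estimation plus autoregressive forecast" combination the author favors can be sketched in a few lines. The data below are synthetic and the specification (three principal components, an AR(1) without intercept) is an assumption of the example, not the paper's exact setup.

```python
# Sketch: static (principal components) factor estimation of a yield
# curve, followed by AR(1) forecasts of the factors. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
T, n_mat, k = 200, 8, 3                      # months, maturities, factors
yields = np.cumsum(rng.normal(0, 0.1, (T, n_mat)), axis=0) + 3.0

# Static factor estimation: principal components of the demeaned curve.
mean = yields.mean(axis=0)
X = yields - mean
_, _, vt = np.linalg.svd(X, full_matrices=False)
loadings = vt[:k].T                          # (n_mat, k) loadings
factors = X @ loadings                       # (T, k) factor series

# Fit an AR(1) to each factor by least squares and forecast one step.
forecast = np.empty(k)
for j in range(k):
    f = factors[:, j]
    phi = (f[:-1] @ f[1:]) / (f[:-1] @ f[:-1])  # AR(1) coefficient
    forecast[j] = phi * f[-1]

# Map the factor forecasts back to a forecast of the whole curve.
print(np.round(mean + loadings @ forecast, 3))
```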

So we have a mix of papers: one on distributional issues and, loosely speaking, three on systematic/factor issues. It is a philosophical question whether exogenous catastrophes can be represented as model factors, but from a risk management perspective we should try to do so, while acknowledging the difficulty of capturing them within the conventional linear structure.

Steve Satchell
Trinity College, University of Cambridge
