Journal of Risk Model Validation

Welcome to the fourth issue of the thirteenth volume of The Journal of Risk Model Validation.

This issue contains four papers, all of which maintain the high standard of scholarship we have come to expect from our authors. The issue’s first paper, “A study on window-size selection for threshold and bootstrap value-at-risk models” by Anri Smith and Chun-Kai Huang, investigates the effects of window-size selection on various models for value-at-risk (VaR) forecasting using high-performance computing. Automated procedures using change-point analysis for optimal window-size selection are then proposed. In particular, stationary bootstrapping and the peaks-over-threshold method are utilized for rolling daily VaR estimation and are contrasted with the classical conditional Gaussian model. The authors provide empirical evidence that change-point procedures can, on average, result in better risk predictions than a predetermined, fixed window size. This result is interesting as it falls within a class of results whereby changing the “clock” of a process can enhance our understanding of that process.
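To make the window-size dependence concrete, the sketch below (illustrative only, not the authors’ code) estimates rolling one-day VaR with the peaks-over-threshold method; the `window`, `p` and `threshold_quantile` values are hypothetical choices, and the forecasts change materially with `window`.

```python
# Illustrative sketch (not the authors' code): rolling one-day VaR via the
# peaks-over-threshold (POT) method. The window size is a free choice here;
# the paper studies how to select it automatically.
import numpy as np
from scipy.stats import genpareto

def rolling_pot_var(returns, window=500, p=0.99, threshold_quantile=0.95):
    """Rolling p-level VaR forecasts; losses are the negatives of returns."""
    losses = -np.asarray(returns)
    forecasts = []
    for t in range(window, len(losses)):
        sample = losses[t - window:t]
        u = np.quantile(sample, threshold_quantile)       # threshold
        exceedances = sample[sample > u] - u
        xi, _, beta = genpareto.fit(exceedances, floc=0)  # GPD fit to the tail
        n, n_u = len(sample), len(exceedances)
        # Standard POT quantile estimator (assumes xi != 0)
        forecasts.append(u + (beta / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1))
    return np.array(forecasts)
```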

Our second paper, “Quantification of the estimation risk inherent in loss distribution approach models” by a record five authors (K. Panman, L. van Biljon, L. J. Haasbroek, W. D. Schutte and T. Verster), addresses the topic of model risk. Financial institutions rely heavily on risk models for operational and strategic purposes. These models are simplified representations of complex dynamics and are therefore susceptible to model risk. The authors propose a generic approach to quantifying the estimation risk of risk models using the error of a maximum likelihood estimator. They refer to their method as “simulate errors from assumed truth”, or SEAT. To demonstrate SEAT, operational risk examples using the loss distribution approach are examined. Model risk exposure is shown to be reduced when loss data samples are larger, the severity distribution exhibits light tails, and the model is used for less-extreme quantiles of the aggregate loss distribution. Further, the authors show that the aggregate model risk across a portfolio of risk models is much smaller than that of the individual risk models. Their analysis also extends to capital requirements.
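As a rough illustration of the idea (not the authors’ SEAT implementation), one can simulate loss samples from an assumed “true” severity distribution, refit it by maximum likelihood each time, and measure how widely a high severity quantile is scattered around its true value; all parameter values below are hypothetical.

```python
# Illustration only (not the authors' SEAT code): quantify estimation risk by
# simulating samples from an assumed "true" lognormal severity distribution,
# refitting by maximum likelihood, and measuring the spread of a high quantile.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)
true_sigma, true_scale = 1.5, np.exp(8.0)      # hypothetical "assumed truth"
n_losses, n_sims, q = 250, 1000, 0.999

estimates = []
for _ in range(n_sims):
    sample = lognorm.rvs(true_sigma, scale=true_scale,
                         size=n_losses, random_state=rng)
    sigma_hat, _, scale_hat = lognorm.fit(sample, floc=0)   # MLE refit
    estimates.append(lognorm.ppf(q, sigma_hat, scale=scale_hat))

true_q = lognorm.ppf(q, true_sigma, scale=true_scale)
rel_error = (np.array(estimates) - true_q) / true_q
print(f"{q:.1%} severity quantile: mean relative error {rel_error.mean():+.2%}, "
      f"std {rel_error.std():.2%}")
```

Increasing the sample size, thinning the tail of the assumed severity distribution or targeting a less extreme quantile all shrink this spread, mirroring the qualitative findings described above.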

“Value-at-risk in the European energy market: a comparison of parametric, historical simulation and quantile regression value-at-risk” by Sjur Westgaard, Gisle Hoel Arhus, Marina Frydenberg and Stein Frydenberg, the third paper in this issue, looks at a set of VaR models and their ability to appropriately describe and capture price-change risk in the European energy market. The authors make in-sample, one-day-ahead VaR forecasts using one simple parametric model, one historical simulation model and one quantile regression (QR) model. They apply their models to nine different energy futures: Brent crude oil, API2 coal, UK natural gas and three German and Nordic power futures, over the period 2007–17. The models are tested for both long and short positions. Their research suggests that the QR model is easy to implement and offers accurate VaR forecasts in the European energy market. QR is not always well understood: if the data are normal, there is little to be gained from it; the gain comes from nonnormality.
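For readers unfamiliar with QR-based VaR, the following minimal sketch (not the authors’ specification) fits a conditional quantile of returns on a lagged volatility proxy with statsmodels and reads the fitted quantile off as the in-sample VaR forecast; the covariate and parameter choices are hypothetical.

```python
# Illustrative quantile-regression VaR (not the authors' exact specification):
# regress returns on a lagged rolling-volatility proxy and use the fitted
# conditional quantile as the (long-position) VaR forecast.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def qr_var(returns, p=0.01, vol_window=20):
    r = pd.Series(returns)
    vol = r.rolling(vol_window).std().shift(1)      # lagged volatility proxy
    data = pd.DataFrame({"ret": r, "vol": vol}).dropna()
    X = sm.add_constant(data["vol"])
    fit = sm.QuantReg(data["ret"], X).fit(q=p)      # p-th conditional quantile
    return fit.predict(X)                           # in-sample VaR (as a return)

# Simulated heavy-tailed returns, purely for illustration
rng = np.random.default_rng(1)
var_1pct = qr_var(rng.standard_t(df=4, size=1500) * 0.01, p=0.01)
```

Because QR models each quantile of the conditional distribution directly, it offers little over a Gaussian model when returns are normal; the payoff comes precisely when they are not.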

Our final paper, “Validation of index and benchmark assignment: adequacy of capturing tail risk” by Lukasz Prorokowski, provides practical recommendations for the validation of risk models under the Targeted Review of Internal Models (TRIM). The European Central Bank has recently launched the first phase of the TRIM, which investigates the ability of banks’ internal risk models to adequately capture tail risk in underlying portfolio items. Against this background, the paper reviews statistical tools for validating the assumption that a given risk proxy (in this case, an equity index/fund benchmark) adequately represents tail risk in the returns of an individual asset. Specifically, it compares the return distributions of the proxy and the underlying assets using the Kappa coefficient, the paired t-test and the two-sample Kolmogorov–Smirnov test. In doing so, it shows the shortcomings of using certain statistical tools to assess the ability to capture tail risk. The motivation of this paper is to advise on specific aspects of validation practice that have grown in importance under the TRIM framework but remain under-researched, with no academic studies explicitly advising on the validation of the risk proxy assignment.
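Of the tools reviewed, the two-sample Kolmogorov–Smirnov test is the most straightforward to apply; the sketch below, on simulated data and purely for illustration, compares an asset’s returns with those of its assigned proxy.

```python
# Sketch of one reviewed validation tool: the two-sample Kolmogorov-Smirnov
# test comparing an asset's returns with its assigned proxy's returns
# (simulated data, purely for illustration).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
asset_returns = rng.standard_t(df=3, size=1000) * 0.02   # heavier-tailed asset
proxy_returns = rng.normal(scale=0.02, size=1000)        # thinner-tailed proxy

stat, p_value = ks_2samp(asset_returns, proxy_returns)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.4f}")
# A small p-value indicates the proxy's return distribution differs from the
# asset's, i.e. the proxy may not adequately capture the asset's tail risk.
```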

Finally, I would like to take this opportunity to thank Ciara Smith, who has been working with me on The Journal of Risk Model Validation for a number of years and is moving on. She has been a delight to work with and I wish her well for the future.

Steve Satchell
Trinity College, University of Cambridge
