Journal of Risk Model Validation

LETTER FROM THE EDITOR-IN-CHIEF

Steve Satchell

Trinity College, University of Cambridge

This issue of The Journal of Risk Model Validation brings with it, as always, four interesting research papers that I hope our readers will enjoy. You will find that this editor’s letter continues along familiar lines: I have made a mental note not to talk about epidemiology or any of its manifestations, and I plan to stick to this. I would like to say, however, that I hope you are all well, and to record my pleasure in providing editorial services for the journal.

In our first paper, “Old-fashioned parametric models are still the best: a comparison of value-at-risk approaches in several volatility states”, Mateusz Buczyński and Marcin Chlebus make the intuitively reasonable point that one’s choice of value-at-risk (VaR) model is not independent of market conditions; rather, it depends, inter alia, upon the state of volatility. The authors present backtesting results for the 1% and 2.5% VaR of six indexes from emerging and developed countries, using several of the best-known VaR models. These include generalized autoregressive conditional heteroscedasticity (GARCH), extreme value theory (EVT), conditional autoregressive VaR (CAViaR) and filtered historical simulation (FHS), with multiple sets of parameters. The backtesting procedure is based on the excess ratio, the Kupiec and Christoffersen tests for multiple thresholds, and cost functions. The main contribution of this paper is that it compares these models in four different scenarios, with different states of volatility in the training and testing samples. The results indicate that the best of the models, ie, the one least affected by volatility changes, is a GARCH(1,1) with a standardized Student t distribution. The techniques the authors describe as nonparametric (eg, CAViaR with a GARCH setup, or FHS with a skewed normal distribution) perform very well in low-volatility testing periods but fare worse in turbulent ones. I have always had reservations about CAViaR: as I understand it, its creation amounted to writing down an equation, with very little analysis of the conditions for the existence of steady states and so on. Using a statistical process without understanding its deeper properties invites trouble.
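
For readers who would like a concrete sense of the winning specification, the following is a minimal sketch, not the authors’ code, of how a one-day 1% VaR might be obtained from a GARCH(1,1) model with standardized Student t innovations. It assumes the Python arch package and a series of daily percentage returns named returns.

```python
# Minimal GARCH(1,1)-Student t VaR sketch (illustrative; not the authors' code).
# Assumes `returns` is a pandas Series of daily percentage returns.
import numpy as np
from arch import arch_model
from scipy.stats import t

am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")

# One-step-ahead forecasts of the conditional mean and variance.
fc = res.forecast(horizon=1)
mu = fc.mean.iloc[-1, 0]
sigma = np.sqrt(fc.variance.iloc[-1, 0])

# 1% quantile of the fitted Student t, rescaled to unit variance.
nu = res.params["nu"]
q = t.ppf(0.01, df=nu) * np.sqrt((nu - 2) / nu)

var_1pct = -(mu + sigma * q)  # reported as a positive loss figure
```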

“An empirical evaluation of large dynamic covariance models in portfolio value-at-risk estimation”, the second paper in the issue, is by Keith K. F. Law, Wai Keung Li and Philip L. H. Yu. The estimation of portfolio VaR requires a good estimate of the covariance matrix. It is also well known that a sample covariance matrix computed over a historical rolling window is noisy and is a poor estimator of a high-dimensional population covariance matrix. In order to estimate conditional portfolio VaR, the authors develop a framework based on the dynamic conditional covariance model, within which various de-noising tools are used. The authors provide an excellent summary of the techniques involved, which include shrinkage methods, random matrix theory methods and regularization methods. Readers should consult the paper for definitions, but the regularization techniques appear to fare worst in the empirical comparison.
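
By way of illustration only, and assuming the scikit-learn library rather than anything in the paper, one of the simplest de-noising devices surveyed, linear (Ledoit–Wolf) shrinkage of the sample covariance matrix, can be sketched as follows; the authors’ dynamic conditional covariance framework is, of course, considerably richer than this static example.

```python
# Hedged illustration: Ledoit-Wolf shrinkage plus a Gaussian portfolio VaR.
# Assumes `R` is a T x N matrix of asset returns and `w` is an N-vector of weights.
import numpy as np
from scipy.stats import norm
from sklearn.covariance import LedoitWolf

lw = LedoitWolf().fit(R)      # shrinks the noisy sample covariance matrix
sigma = lw.covariance_        # N x N shrunk covariance estimate

port_var = w @ sigma @ w                      # portfolio return variance
var_99 = -norm.ppf(0.01) * np.sqrt(port_var)  # 99% one-period portfolio VaR
```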

André Santos and João Guerra are the authors of “Risk-neutral densities: advanced methods of estimating nonnormal options underlying asset prices and returns”, our third paper. While this paper is an unusual one for The Journal of Risk Model Validation to publish, I think it is of considerable interest. The background story here is the theory of derivatives pricing, whereby assets are priced through replication by other assets with known prices. This active replication induces something called the risk-neutral density, and this can be inferred by inspecting option prices. The paper provides a state-of-the-art analysis of advanced techniques that could be used in this context. While its application to most forms of credit is probably limited, it is potentially applicable to situations where credit products can be reasonably well hedged by liquid traded assets.
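
The classical background result here, on which more advanced estimators build, is the Breeden–Litzenberger relation: the risk-neutral density is the discounted second derivative of the call price with respect to the strike. A rough finite-difference sketch of that relation is given below; it is illustrative only, assumes an evenly spaced grid of strikes with observed call prices, and is not the authors’ estimator.

```python
# Breeden-Litzenberger sketch (illustrative; not the paper's method).
# Assumes: `K` an evenly spaced array of strikes, `C` the matching call prices,
# `r` the risk-free rate and `T` the time to maturity.
import numpy as np

dK = K[1] - K[0]
second_deriv = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dK**2
rnd = np.exp(r * T) * second_deriv   # risk-neutral density on the interior strikes K[1:-1]
```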

The issue’s fourth and final paper, “An alternative statistical framework for credit default prediction” by Mohammad Shamsu Uddin, Guotai Chi, Tabassum Habib and Ying Zhou, introduces a gradient-boosting model for credit default risk prediction that is robust to high-dimensional data and can produce a strong classifier by combining the predictions of many weak classifiers. The authors recommend this method for practical applications. The study compares the gradient-boosting model with four other well-known classifiers: namely, a classification and regression tree, logistic regression, multivariate adaptive regression splines and a random forest. Six real-world credit data sets are used for model validation. The paper concludes that this procedure seems to work well as described, ie, in a situation where there is a large number of weak classifiers. While this is valuable research, it is not clear that such a situation describes all markets. For example, default risk in the UK mortgage market is strongly dominated by two factors: the loan-to-value ratio and commercially available credit scores.
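
To make the comparison concrete, a hedged sketch of this kind of horse race, a gradient-boosting classifier against a logistic regression benchmark scored by cross-validated AUC, is given below. It uses synthetic data and scikit-learn defaults rather than the authors’ six credit data sets or their exact specifications.

```python
# Hedged sketch of a gradient boosting vs logistic regression comparison.
# Synthetic, imbalanced "default" data stands in for real credit data sets.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=50, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "gradient boosting": GradientBoostingClassifier(n_estimators=300,
                                                    learning_rate=0.05,
                                                    max_depth=3),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```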

To conclude, one positive byproduct of self-isolation is that one gets to read a lot more. I hope our readers use their time to absorb the papers presented here.
