Journal of Risk Model Validation

Steve Satchell
Trinity College, University of Cambridge

Faced with unprecedented political risk, which does not normally form part of the risk model architecture, we are witnessing macroeconomic events that may well inform future approaches to validation. History is running ahead of research, although some papers in The Journal of Risk Model Validation have presented frameworks that can address this problem in general.

One such paper, and the first in this issue, is “Modeling credit risk in the presence of central bank and government intervention” by Bernd Engelmann. Engelmann notes that new challenges in credit modeling have resulted from the central bank and government interventions that followed the outbreak of Covid-19. Relationships between credit risk and macroeconomic factors that had remained stable for decades have broken down. One example is the unemployment rate, which has been widely used to predict default rates in retail loan segments. Because of the government interventions in place since mid-2020, this relationship no longer holds, and default rates are substantially lower than credit models predict. Using data published by the US Federal Reserve in 2021 Q1, Engelmann suggests a framework that quantifies the effect of interventions and shows how to include intervention scenarios in credit models, both to improve the accuracy of their short-term predictions and to allow analysts to evaluate long-term scenarios and related outcomes. This topic is of personal interest to me: I have worked in credit risk research on a mortgage book and was faced with zero defaults, a real challenge for measuring default risk.
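
For readers who want a concrete picture of what an intervention term can look like in a macro-driven default-rate model, the sketch below fits a toy regression of default rates on unemployment with an explicit intervention indicator and then compares predictions under scenarios with and without continued support. It is purely illustrative: the data are simulated and the specification is my own assumption, not Engelmann's framework.

```python
# Hedged illustration, not Engelmann's framework: a toy default-rate regression
# on unemployment with an explicit intervention term, showing how an
# intervention scenario can be switched on or off when producing predictions.
# All data below are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 80                                                # quarterly observations
unemp = 4 + 3 * rng.random(n)                         # unemployment rate, %
intervention = (np.arange(n) >= 70).astype(float)     # support switched on late in sample
# Default rate rises with unemployment but is suppressed while support is active.
dr = 0.5 + 0.4 * unemp - 1.5 * intervention + 0.2 * rng.standard_normal(n)

X = sm.add_constant(np.column_stack([unemp, intervention]))
fit = sm.OLS(dr, X).fit()

# Scenario analysis: the same 8% unemployment with and without continued intervention.
scenarios = sm.add_constant(np.array([[8.0, 1.0], [8.0, 0.0]]), has_constant="add")
print(fit.predict(scenarios))   # predicted default rates under each scenario
```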

The second paper in the issue, “Predicting financial distress of Chinese listed companies using a novel hybrid model framework with an imbalanced-data perspective”, is by Tong Zhang and Zhichong Zhao. In financial distress prediction, imbalanced data on firms are ubiquitous. The authors propose a hybrid model framework to address this problem for Chinese listed companies. The framework combines a logistic regression and a backpropagation neural network with a safe-level synthetic minority over-sampling technique (SMOTE). Zhang and Zhao validate the model on a dataset of Chinese listed companies and compare it with seven baseline models. Their results suggest that the proposed model offers superior performance.
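
A minimal sketch of this kind of pipeline appears below, assuming standard SMOTE from imbalanced-learn in place of the safe-level variant and a simple average of predicted probabilities as the hybrid step; neither choice is taken from the paper, and the data are synthetic.

```python
# Hedged sketch: imbalanced financial-distress classification.
# The paper's safe-level SMOTE variant is approximated here by the standard
# SMOTE from imbalanced-learn, and the LR/BPNN "hybrid" by a simple average
# of predicted probabilities -- both are illustrative assumptions, not the
# authors' exact framework.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a financial-ratio dataset with roughly 5% distressed firms.
X, y = make_classification(n_samples=4000, n_features=12, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training data so the test set keeps its true imbalance.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

lr = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X_bal, y_bal)

# Illustrative hybrid: average the two models' distress probabilities.
p_hybrid = 0.5 * (lr.predict_proba(X_te)[:, 1] + nn.predict_proba(X_te)[:, 1])
print("Hybrid AUC:", round(roc_auc_score(y_te, p_hybrid), 3))
```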

“Estimating value-at-risk using quantile regression and implied volatilities”, by Petter E. de Lange, Morten Risstad and Sjur Westgaard, is the issue’s third paper. De Lange et al propose a semi-parametric, parsimonious value-at-risk (VaR) forecasting model based on quantile regression (QR) and readily available market prices of option contracts from the over-the-counter foreign exchange (FX) interbank market. Explanatory variables include implied moments (IMs), which have a plausible economic interpretation. The forward-looking nature of the model, induced by using implied moments as risk factors, ensures that new information is rapidly reflected in VaR estimates. The proposed model outperforms traditional benchmark models when evaluated in-sample and out-of-sample on EUR/USD data, and it is relatively easy to estimate, which facilitates practical application. The QR IM model is subjected to extensive risk model validation by means of backtesting, using both coverage tests and loss functions. The paper is therefore relevant to both risk modeling and risk model validation in the context of FX risk. A word of caution is merited, however: quantile regression is often seen as a panacea for the ills of linear regression, when in fact its application raises deeper issues of robustness. In this instance, the backtesting and validation have been carefully applied and the results look very good.
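
The sketch below shows the basic mechanics of a quantile-regression VaR of this type, using statsmodels and a simulated implied-volatility series as a stand-in for the OTC FX implied moments used in the paper; it is an assumption-laden illustration, not the authors' specification.

```python
# Hedged sketch of a quantile-regression VaR along the lines described above:
# regress returns on an option-implied risk factor and read the fitted
# conditional quantile as the VaR forecast. The implied-volatility series is
# simulated purely as a placeholder; the paper's implied moments are not reproduced.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1500
iv = 0.08 + 0.04 * np.abs(np.sin(np.arange(n) / 60)) + 0.01 * rng.standard_normal(n)
ret = (iv / np.sqrt(252)) * rng.standard_normal(n)   # returns scale with implied vol

# One-day-ahead design: today's implied vol explains tomorrow's return quantile.
X = sm.add_constant(iv[:-1])
y = ret[1:]

res = sm.QuantReg(y, X).fit(q=0.01)        # 1% quantile, i.e. 99% VaR
var_forecast = res.predict(X)              # conditional 1% quantile series

hits = (y < var_forecast).mean()           # empirical coverage, should be near 1%
print(res.params, "hit rate:", round(hits, 4))
```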

The final paper in this issue is “The importance of window size: a study on the required window size for optimal-quality market risk models” by Mateusz Buczyński and Marcin Chlebus. When it comes to market risk models, should we use all the data we possess, or should we instead find a sufficient subsample? Buczyński and Chlebus study fixed moving windows of different lengths (with sizes varying from 300 to 2000 observations) across 250 combinations of data and VaR evaluation method. Three VaR models (historical simulation, a generalized autoregressive conditional heteroscedasticity (GARCH) model and a conditional autoregressive value-at-risk (CAViaR) model) are applied to three indexes (the Warsaw Stock Exchange 20, the Standard & Poor’s 500 and the Financial Times Stock Exchange 100) over the period 2015–19. The results indicate that increasing the training sample beyond around 900–1000 observations does not improve model quality, while window lengths below this cutoff give unsatisfactory results and reduce predictive power. Change point detection methods yield more accurate models: applying such algorithms when each model is recalculated improves results on average.
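
The window-size question can be illustrated with a simple historical-simulation exercise: the sketch below recomputes 99% VaR on rolling windows of several lengths and backtests each with the exception rate and Kupiec's unconditional coverage test. The return series is simulated rather than taken from the indexes studied, and historical simulation stands in for the paper's full set of models.

```python
# Hedged sketch of the window-size question: historical-simulation VaR is
# recomputed on rolling windows of different lengths and compared via the
# exception (hit) rate and Kupiec's unconditional coverage likelihood ratio.
# The return series is simulated; the paper uses WIG20, S&P 500 and FTSE 100 data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=4000) * 0.01   # heavy-tailed daily returns
alpha = 0.01                                       # 99% VaR

def kupiec_pvalue(exceptions, n, alpha):
    """p-value of Kupiec's unconditional coverage test."""
    pi_hat = exceptions / n
    if pi_hat in (0.0, 1.0):
        return np.nan
    lr = -2 * (exceptions * np.log(alpha / pi_hat)
               + (n - exceptions) * np.log((1 - alpha) / (1 - pi_hat)))
    return 1 - stats.chi2.cdf(lr, df=1)

for window in (300, 500, 1000, 2000):
    # VaR is the empirical 1% quantile of the trailing window.
    var = np.array([np.quantile(returns[t - window:t], alpha)
                    for t in range(window, len(returns))])
    realized = returns[window:]
    exceptions = int((realized < var).sum())
    n = len(realized)
    print(window, round(exceptions / n, 4), round(kupiec_pvalue(exceptions, n, alpha), 3))
```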
