Journal of Risk Model Validation

Steve Satchell
Trinity College, University of Cambridge

The Journal of Risk Model Validation has always had a keen interest in backtesting, as it is clearly a key tool in model validation. Surprisingly, there is not a great deal of research on backtesting methodology, so we are delighted to publish two papers that make direct contributions to this under-researched area. The other theme addressed in this issue is counterparty risk.

Our first paper, by Mante Zelvyte and Matthias Arnsdorf, which covers both themes, is titled “Bayesian backtesting for counterparty risk models”. It introduces a new framework for counterparty risk model backtesting based on Bayesian methods, providing an approach for analyzing model performance that is both conceptually sound and straightforward to implement. Zelvyte and Arnsdorf show that their methodology has important advantages over a typical, classical backtesting approach. In particular, they find that the Bayesian approach outperforms the classical one in identifying whether a model is correctly specified (which is the principal aim of any backtesting framework). The power of the Bayesian methodology lies in its ability to test individual parameters and thus to identify not only which aspects of a model are misspecified but also the degree of misspecification. As always in a Bayesian world, gains over classical approaches occur when the prior contains incremental information, which requires the modeler to be aware of their model’s weaknesses. Such awareness is, sadly, not always present.
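To give a flavor of the idea, a Bayesian backtest of a single parameter, here the exceedance probability of a nominal 99% risk measure, might look like the following minimal sketch. This is an invented illustration, not the Zelvyte–Arnsdorf framework: the Beta(1, 99) prior and the breach counts are assumptions.

```python
# Hypothetical sketch of a Bayesian backtest of one parameter: the
# exceedance probability p of a nominal 99% risk measure. This is an
# invented illustration, not the authors' framework; the prior
# Beta(1, 99) (prior mean 1%) and the breach counts are assumptions.

def posterior_exceedance(breaches, days, a_prior=1.0, b_prior=99.0):
    """Conjugate update: Beta prior + binomial likelihood -> Beta posterior."""
    a_post = a_prior + breaches
    b_post = b_prior + (days - breaches)
    posterior_mean = a_post / (a_post + b_post)
    return a_post, b_post, posterior_mean

# Suppose 8 breaches in 250 trading days against a nominal 1% rate.
a_post, b_post, post_mean = posterior_exceedance(breaches=8, days=250)
# The posterior mean (about 2.6%) quantifies the degree of
# misspecification rather than delivering only a reject/accept verdict.
```

A classical binomial test of the same data yields only a p-value for the hypothesis that p equals 1%; the posterior locates p itself, which is the sense in which individual parameters can be examined.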

The issue’s second paper, “A modified hybrid feature-selection method based on a filter and wrapper approach for credit risk forecasting” by Guotai Chi and Mohamed Abdelaziz Mandour, focuses on feature selection. The paper introduces a modified hybrid feature-selection technique that aims to improve classification performance using fewer features; such an approach may be thought of as being inspired by Occam’s razor. Chi and Mandour call their approach the chi-squared with recursive feature elimination (χ2-RFE) method. This combines χ2 as a filter approach with recursive feature elimination as a wrapper approach. The algorithm developed for the χ2-RFE method is tested against six other algorithms and found to be superior in terms of average performance, with an acceptable computing time. The superiority of the χ2-RFE method is demonstrated by its application to a data set of Chinese listed companies with a sample size of 47 172 and 535 characteristics, and the efficacy of the algorithm is further confirmed by an experiment on a German data set with a sample size of 1000 and 24 characteristics. An application to imbalanced data issues is also included.
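In outline, a filter-then-wrapper pipeline of this kind can be sketched as follows. This is a minimal, invented illustration with a hand-rolled χ2 score and synthetic data; the authors’ actual scorer, thresholds and data sets are not reproduced here.

```python
# Illustrative sketch of a filter-then-wrapper pipeline in the spirit of
# chi2-RFE: a chi-squared filter prunes the feature set, then recursive
# feature elimination (the wrapper) drops the weakest survivors one at a
# time. The scorer, data and thresholds are assumptions for illustration.
import numpy as np

def chi2_scores(X, y):
    """Chi-squared statistic of each non-negative feature against binary y."""
    observed = np.array([X[y == c].sum(axis=0) for c in (0, 1)])
    class_prob = np.array([np.mean(y == c) for c in (0, 1)])
    expected = np.outer(class_prob, X.sum(axis=0))
    return ((observed - expected) ** 2 / expected).sum(axis=0)

def rfe(X, y, keep, importance):
    """Wrapper step: recursively eliminate the least important feature."""
    idx = list(range(X.shape[1]))
    while len(idx) > keep:
        scores = importance(X[:, idx], y)
        idx.pop(int(np.argmin(scores)))
    return idx

# Synthetic data: 10 features, of which only feature 0 is informative.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.random((200, 10))
X[:, 0] += y

# Filter: keep the top half of features by chi-squared score...
top = np.argsort(chi2_scores(X, y))[-5:]
# ...then wrap: recursively eliminate down to 2 features.
selected = [top[i] for i in rfe(X[:, top], y, keep=2, importance=chi2_scores)]
```

The filter step is cheap and discards clearly irrelevant features, so the more expensive wrapper loop runs on a much smaller set, which is the source of the acceptable computing time on large data sets.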

In the modern world of research collaboration, it is increasingly rare to see single-authored papers. A welcome exception is “What can we expect from a good margin model? Observations from whole-distribution tests of risk-based initial margin models” by David Murphy, the third paper in this issue. Murphy presents an approach to testing initial margin models based on their predictions of the whole future distribution of returns of the relevant portfolio. He claims (and I would agree) that his testing methodology is much more powerful than the usual “backtesting” approach based on returns in excess of margin estimates. Murphy also provides a methodology for calibrating margin models via the examination of how test results vary as the model parameters change. He tests some popular classes of initial margin models for various calibrations, and his findings give some insight into what it may be reasonable to expect from an initial margin model. In particular, he finds that, for the examples studied, margin models meet regulators’ expectations that they are accurate around the 99th and 99.5th percentiles of returns, but they do not accurately model the far tails. Moreover, different models, all of which meet regulatory expectations, are shown to provide substantially different margin estimates in the far tails. The policy implications of these findings are discussed.
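A whole-distribution test can be illustrated, under the assumption that it is based on the probability integral transform (Murphy’s specific test statistics are not reproduced here), by passing realized returns through the model’s forecast CDF and measuring the distance of the result from uniformity. The forecast distribution and the fat-tailed returns below are invented.

```python
# Illustrative whole-distribution test (an assumed PIT-based variant, not
# necessarily the specific tests in the paper). If the margin model's
# forecast distribution is right, the probability integral transform (PIT)
# of realized returns is uniform on [0, 1]; a breach-count backtest sees
# only one tail quantile of the same information.
import math
import random

random.seed(1)

def model_cdf(x):
    """The model's forecast CDF: a standard normal (an assumption)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# "True" returns are fat-tailed: an invented normal mixture.
returns = [random.gauss(0.0, 1.0) * random.choice([1, 1, 1, 3])
           for _ in range(1000)]
pits = sorted(model_cdf(r) for r in returns)

# Kolmogorov-Smirnov distance of the PITs from the uniform distribution:
# this statistic uses every observation, not just tail exceedances.
n = len(pits)
ks = max(max(u - i / n, (i + 1) / n - u) for i, u in enumerate(pits))

# A 99th-percentile breach-count backtest sees only these exceedances.
breaches = sum(u > 0.99 for u in pits)
```

Comparing the two quantities makes the power argument concrete: the breach count discards everything that happens below the margin level, whereas the distributional statistic is sensitive to misspecification anywhere in the forecast distribution, including the far tails.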

The final paper in the issue, on the theme of counterparty risk, is “The validation of different systemic risk measurement models” by Hu Wang and Shuyang Jiang. To appreciate their contribution, it is helpful to know a little about the DebtRank algorithm used to compute systemic risk in banking networks. On each iteration, this recursive algorithm propagates the impact of distress at an initial node (a bank) across the whole network (eg, the banking system). The DebtRank of a node is a number representing the fraction of the total economic value in the network that is potentially affected by the distress or the default of that node. Wang and Jiang improve on the DebtRank model by incorporating a capital buffer. They use data from China’s banking industry to compare the systemic risk measured by the original DebtRank model and by the differential DebtRank model (developed by Marco Bardoscia and coworkers and explained by Wang and Jiang in this paper) with that measured by their improved model. They find that their improved model measures systemic risk more accurately than both the original and differential DebtRank models.
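To make the recursion concrete, here is a hedged sketch of the basic DebtRank iteration on an invented three-bank network. The weights, value shares and simplifications below are assumptions for illustration; neither the differential variant nor Wang and Jiang’s capital-buffer extension is shown.

```python
# Simplified sketch of the basic DebtRank recursion. The three-bank
# network, its impact weights W and value shares v are invented.

def debtrank(W, v, shocked):
    """W[i][j]: impact on bank i of distress at bank j;
    v[i]: bank i's share of total economic value;
    shocked: indices of the initially distressed banks."""
    n = len(v)
    h = [0.0] * n                  # relative distress of each bank, in [0, 1]
    state = ["U"] * n              # Undistressed / Distressed / Inactive
    for i in shocked:
        h[i], state[i] = 1.0, "D"
    while "D" in state:
        distressed = [j for j in range(n) if state[j] == "D"]
        h = [min(1.0, h[i] + sum(W[i][j] * h[j] for j in distressed))
             for i in range(n)]
        for j in distressed:
            state[j] = "I"         # each node propagates its distress once
        for i in range(n):
            if state[i] == "U" and h[i] > 0.0:
                state[i] = "D"
    # Fraction of total economic value affected, net of the initial shock.
    return sum(v[i] * h[i] for i in range(n)) - sum(v[i] for i in shocked)

# Toy network: bank 0's distress hits bank 1, which in turn hits bank 2.
W = [[0.0, 0.0, 0.0],
     [0.4, 0.0, 0.0],
     [0.0, 0.5, 0.0]]
v = [0.5, 0.3, 0.2]
affected = debtrank(W, v, shocked=[0])   # value affected beyond bank 0: ~0.16
```

Because each node transmits distress exactly once before becoming inactive, the recursion terminates and avoids double counting around cycles, which is what distinguishes DebtRank from a naive contagion loop.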
