Journal of Risk Model Validation


This issue of The Journal of Risk Model Validation has a mildly actuarial flavor, as two of our four papers deal with collective risk models. This is a term used in the insurance literature when the variable of interest is total claims, which is a random sum of individual claims. Clearly, such a structure fits neatly into a framework for aggregate counterparty default loss, for example, as well as other ideas familiar to our readers.
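
For readers who want the structure in concrete terms, here is a minimal Monte Carlo sketch of a collective risk model, assuming a Poisson claim count and lognormal severities (illustrative choices on my part, not taken from either paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_aggregate_loss(n_sims=100_000, claim_rate=5.0,
                            sev_mean=0.0, sev_sigma=1.0):
    """Simulate total claims S = X_1 + ... + X_N, where the claim
    count N is Poisson and the severities X_i are lognormal."""
    counts = rng.poisson(claim_rate, size=n_sims)
    return np.array([rng.lognormal(sev_mean, sev_sigma, size=n).sum()
                     for n in counts])

S = simulate_aggregate_loss()
print(f"mean total claims: {S.mean():.2f}")
print(f"99% quantile:      {np.quantile(S, 0.99):.2f}")
```

Replacing "claims" with "defaults" and "severities" with "losses given default" gives the counterparty default reading mentioned above.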

The issue’s first paper, “Forward ordinal probability models for point-in-time probability of default term structure: methodologies and implementations for IFRS 9 expected credit loss estimation and CCAR stress testing” by Bill Huajian Yang, addresses issues that arise in models involving ranked data and what the author refers to as sensitivities. Using rank-specific sensitivities as basic building blocks allows one to model rating changes, such as upgrades and downgrades, directly. An estimation approach is outlined, certain calculations that arise under International Financial Reporting Standard 9 (IFRS 9) are discussed, and an example illustrating the superiority of this approach over the Merton model is included.
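
To fix ideas, the sketch below shows a generic cumulative-probit (ordered) model in which a single covariate shifts probability mass across rating grades. This is my own illustration of ordinal modeling with a shared sensitivity parameter, not Yang's specific forward ordinal specification:

```python
import numpy as np
from scipy.stats import norm

# Ordered cut-points alpha_1 < ... < alpha_{K-1} for K = 5 grades
# (grade 1 worst, grade 5 best) and one macro covariate x with
# sensitivity beta. All values here are illustrative assumptions.
alphas = np.array([-2.0, -1.0, 0.5, 2.0])
beta = 0.8

def grade_probabilities(x):
    """P(grade = k | x) from the cumulative probits
    P(grade <= k | x) = Phi(alpha_k - beta * x)."""
    cum = np.concatenate(([0.0], norm.cdf(alphas - beta * x), [1.0]))
    return np.diff(cum)

print(grade_probabilities(x=0.0))   # baseline scenario
print(grade_probabilities(x=-1.5))  # stress shifts mass toward worse grades
```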

“Bayesian analysis in an aggregate loss model: validation of the structure functions” by A. Hernández-Bastida, J. M. Pérez-Sánchez and M. P. Fernández-Sánchez is the second paper in this issue. It deals with the empirical evaluation of a collective risk model using a Bayesian structure. Much Bayesian analysis is driven by the search for mathematical beauty, so the prior distribution is often chosen so that the posterior distribution will be elegant, rather than because it is a suitable vehicle to express investor/agent beliefs. Further, elicitation (the act of gleaning information on the prior structure from the relevant agents) is often completely ignored. If Bayesian analysis is to have any added value, elicitation needs to be carried out carefully. I am pleased to say that this paper focuses on both of these aspects of Bayesian analysis.
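
The gamma-Poisson pair is the textbook example of a prior chosen for elegance: the posterior stays in the same family and updates in closed form. A minimal sketch, with the prior's shape and rate backed out from hypothetical elicited beliefs about the claim rate:

```python
from scipy.stats import gamma

# Elicitation step: suppose an expert puts the mean claim rate at 4
# with standard deviation 2; for a Gamma(shape a, rate b) prior this
# pins down a and b via mean = a/b and variance = a/b**2.
prior_mean, prior_sd = 4.0, 2.0
a = (prior_mean / prior_sd) ** 2
b = prior_mean / prior_sd ** 2

counts = [3, 5, 4, 6, 2, 4, 5, 3, 4, 7]  # illustrative claim counts

# Conjugacy: the posterior is again Gamma, updated in closed form.
a_post = a + sum(counts)
b_post = b + len(counts)
print(f"posterior mean rate: {a_post / b_post:.3f}")
print("95% credible interval:",
      gamma.ppf([0.025, 0.975], a_post, scale=1 / b_post))
```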

Our third paper, by Tao Pang, Wei Chen and Le Li, is “On the correlation and parametric approaches to calculation of credit value adjustment”. Credit value adjustment is defined by the authors as “an adjustment added to the fair value of an over-the-counter trade due to the risk of counterparty defaults”. We might think of it as a risk premium. One of the key ideas in this area of research is directional-way risk (DWR). The authors investigate DWR using a parametric approach, where the critical parameter is, in some sense, the correlation between the size of the exposure to the counterparty and the probability of counterparty default.
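
A toy one-period Monte Carlo makes the role of that correlation visible. The Gaussian-copula link below is my own illustrative device, not the authors' parametric model:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def cva_one_period(n_sims=200_000, pd=0.02, recovery=0.4,
                   exp_sigma=0.25, rho=0.5):
    """One-period CVA = (1 - R) * E[exposure * default indicator],
    with a Gaussian-copula correlation rho linking the exposure
    driver and the default driver."""
    z_exp = rng.standard_normal(n_sims)
    z_def = rho * z_exp + np.sqrt(1 - rho**2) * rng.standard_normal(n_sims)
    exposure = np.maximum(np.exp(exp_sigma * z_exp) - 1.0, 0.0)
    default = z_def > norm.ppf(1.0 - pd)  # default in the upper tail
    return (1 - recovery) * np.mean(exposure * default)

for rho in (-0.5, 0.0, 0.5):
    print(f"rho = {rho:+.1f}: CVA ~ {cva_one_period(rho=rho):.5f}")
```

Positive rho makes defaults coincide with large exposures and pushes the CVA up: the wrong-way direction of DWR.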

Nick Georgiopoulos’s “The use of the triangular approximation for some complicated risk measurement calculations” is the fourth and final paper in this issue of The Journal of Risk Model Validation. In it, Georgiopoulos applies the triangular approximation to the normal density to derive closed-form and semi-closed-form solutions for risk measurement using actuarial models. These cover not only insurance risk, such as gamma- and Pareto-distributed losses, but also financial risk. In addition, Georgiopoulos approximates the collective risk model under lognormally distributed severities and estimates its value-at-risk. The accuracy of the approximations is checked via Monte Carlo simulation.
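
To give a flavor of why the triangular density is attractive, its quantiles are available in closed form. A minimal sketch, calibrated by matching the normal's variance (one possible calibration, not necessarily the paper's):

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 0.0, 1.0
c = sigma * np.sqrt(6.0)  # half-width of a symmetric triangular
                          # density with the same variance (c**2 / 6)

def triangular_quantile(q, m=mu, half_width=c):
    """Closed-form upper-tail quantile (q >= 0.5) of a symmetric
    triangular density on [m - half_width, m + half_width]."""
    return m + half_width * (1.0 - np.sqrt(2.0 * (1.0 - q)))

for q in (0.95, 0.99):
    print(f"q = {q}: triangular {triangular_quantile(q):.3f} "
          f"vs normal {norm.ppf(q, mu, sigma):.3f}")
```

The fit is close at the 95% level and drifts in the deeper tail, which is exactly the kind of behavior a Monte Carlo check is there to quantify.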

Steve Satchell
Trinity College, University of Cambridge
