Welcome to the second issue of Volume 12 of The Journal of Operational Risk.
It has been more than a year since the Basel Committee’s infamous paper introducing the standardized measurement approach (SMA) was published and, at the time of writing, we still have no final ruling on this controversial approach. This is quite unusual, as the consultation period usually takes around six to eight months. At this stage I believe it is safe to say that the SMA is facing some kind of roadblock, and the Basel Committee may be struggling with a lack of consensus on approving it. From my conversations with regulators, it seems the focus is now more on finding ways to standardize the advanced measurement approach (AMA) across the industry. Currently, banks have too much leeway when developing their models, so the committee may want to focus on limiting banks’ options. While we wait for a signal from the committee, we continue to ask the industry to submit papers on this subject. We have had an overwhelming response from authors, and readers of the last several issues will have seen many papers on the topic. This trend is set to continue, with even more papers on the SMA and AMA earmarked for upcoming issues.
The Journal of Operational Risk, as the leading publication in this area, aims to be at the forefront of these discussions. We therefore welcome papers that shed light on this subject.
In this issue we have three research papers and one forum paper. The issue’s first paper tackles a problem that is currently of high importance in operational risk: the development of better goodness-of-fit techniques to help in the selection of statistical distributions. This is followed by a paper on the approximation of a total aggregate loss quantile, which is a popular topic in The Journal of Operational Risk. Our third paper is a very interesting one on the robustness of operational risk estimates: an issue that demands significant attention from practitioners. Finally, we present a model developed by Lloyds Bank quants to estimate retail banking mis-selling risk.
In our first paper, “On a family of weighted Cramér–von Mises goodness-of-fit tests in operational risk modeling”, Kirill Mayorov, James Hristoskov and Narayanaswamy Balakrishnan tackle the issue of severity distribution selection. They analyze the limiting properties of a family of weighted Cramér–von Mises (WCvM) goodness-of-fit test statistics with weight function ψ(t) = 1/(1 − t)^β, which are suitable for the more accurate selection of severity distributions. More specifically, the authors apply classical theory to determine whether limiting distributions exist for these WCvM test statistics under a simple null hypothesis. They show that limiting distributions do not exist for β ≥ 2; for β = 2, they provide a normalization that leads to a nondegenerate limiting distribution. Where limiting distributions exist for β < 2, or are obtained through normalization, the authors show that for β between 1.5 and 2 the test’s practical utility may be limited by the very slow convergence of the finite-sample distribution to its asymptotic regime. Their results suggest that the test provides greater utility when β ≤ 1.5, and that for β > 1.5 its utility is questionable, as only Monte Carlo schemes are practical, even for very large samples.
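For readers who want to experiment, the statistic has a simple computational form: under a simple null the probability-integral transforms of the data are Uniform(0,1), and the weighted integral of the squared empirical-process deviation can be evaluated piecewise in closed form. The sketch below is not the authors’ code; the exponential null, β = 1.5 and all parameter values are illustrative assumptions. It computes the WCvM statistic with weight ψ(t) = 1/(1 − t)^β and a Monte Carlo p-value, in line with the observation that Monte Carlo schemes are what remain practical for larger β.

```python
import math
import random

def wcvm_stat(sample, null_cdf, beta=1.5):
    """Weighted Cramer-von Mises statistic with weight 1/(1 - t)^beta.

    Under the simple null, u = F(x) is Uniform(0,1), so the statistic is
    n * integral of (F_n(t) - t)^2 / (1 - t)^beta dt, with F_n the
    empirical CDF of the u's. F_n is a step function, so the integral is
    computed piecewise in closed form (assumes beta not in {1, 2, 3}).
    """
    n = len(sample)
    u = sorted(null_cdf(x) for x in sample)
    knots = [0.0] + u + [1.0]

    def antideriv(a, s):
        # antiderivative in s of (a + s)^2 * s^(-beta)
        if s == 0.0:
            return 0.0  # reached only on the last piece, where a == 0
        if a == 0.0:
            return s ** (3 - beta) / (3 - beta)
        return (a * a * s ** (1 - beta) / (1 - beta)
                + 2 * a * s ** (2 - beta) / (2 - beta)
                + s ** (3 - beta) / (3 - beta))

    total = 0.0
    for i in range(n + 1):              # F_n = i/n on [knots[i], knots[i+1])
        a = i / n - 1.0                 # F_n(t) - t = a + s after s = 1 - t
        s_hi, s_lo = 1.0 - knots[i], 1.0 - knots[i + 1]
        total += antideriv(a, s_hi) - antideriv(a, s_lo)
    return n * total

def mc_p_value(stat, n, beta=1.5, reps=2000, seed=7):
    """Monte Carlo p-value under the simple null: simulate uniform samples
    (for which the null CDF is the identity) and count larger statistics."""
    rng = random.Random(seed)
    null_stats = (wcvm_stat([rng.random() for _ in range(n)],
                            lambda t: t, beta) for _ in range(reps))
    return sum(s >= stat for s in null_stats) / reps
```

The closed-form piecewise integration sidesteps the singular weight at t = 1: on the last piece the integrand reduces to (1 − t)^(2−β), which is integrable in finite samples for β < 3.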
In “Various approximations of the total aggregate loss quantile function with application to operational risk”, the issue’s second paper, Ross Griffiths and Walid Mnif work with the compound Poisson distribution and its applications to operational risk. Put simply, the compound Poisson distribution is that of a sum of independent and identically distributed random variables, where the number of terms is itself a Poisson-distributed random variable. In general, this distribution is not tractable; however, many practical applications require estimating its quantile function at a high percentile, eg, the 99.9th percentile. The authors assume that the support of the random variables is nonnegative, discrete and finite. They investigate the mechanics of the empirical aggregate loss bootstrap distribution and suggest different approximations of its quantile function. Further, they study the impact of empirical moments and large losses on the empirical bootstrap capital at the 99.9% confidence level.
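As a point of reference for the paper’s approximations, the empirical bootstrap distribution it studies can be simulated directly: frequencies are drawn from a Poisson distribution, and severities are resampled with replacement from the observed losses, whose support is automatically nonnegative, discrete and finite. A minimal sketch, purely illustrative: the loss values, intensity and simulation count below are assumptions, and NumPy is used for the sampling.

```python
import numpy as np

def bootstrap_capital(losses, lam, q=0.999, sims=100_000, seed=0):
    """Empirical aggregate loss bootstrap: frequency ~ Poisson(lam),
    severities resampled with replacement from the observed losses
    (so the severity support is nonnegative, discrete and finite).
    Returns the q-quantile of the simulated aggregate loss."""
    rng = np.random.default_rng(seed)
    losses = np.asarray(losses, dtype=float)
    counts = rng.poisson(lam, size=sims)
    totals = np.array([rng.choice(losses, size=n).sum() for n in counts])
    return np.quantile(totals, q)

# Illustrative use: four observed losses, five events per year on average.
observed = [1.0e3, 2.5e3, 8.0e3, 1.2e5]
capital = bootstrap_capital(observed, lam=5.0, sims=20_000)
```

The 99.9th percentile sits far out in the simulated tail, which is why analytical approximations of the bootstrap quantile function are attractive: they avoid pushing the simulation count high enough to stabilize that percentile.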
The third paper in this issue, “A note on the statistical robustness of risk measures” by Mikhail Zhelonkin and Valérie Chavez-Demoulin, deals with an issue that has emerged fairly recently but has already attracted considerable attention. The problem of robustness in risk measurement has been studied using various approaches, and several methods aimed at robustifying risk measures have been proposed; however, a general robustness theory has yet to be established. The authors focus on parametric estimators of risk measures and use Hampel’s infinitesimal approach to derive their robustness properties, obtaining the influence functions of general parametric estimators of value-at-risk and expected shortfall. For various distributions, classical estimators such as maximum likelihood have unbounded influence functions and are therefore not robust. Using the expression for the influence function, the authors propose a general strategy for constructing robust estimators and explore their properties. The methodology is demonstrated through several illustrative examples, as we always recommend in The Journal of Operational Risk. Finally, Zhelonkin and Chavez-Demoulin discuss an operational risk application and highlight the importance of the complementary information provided by nonrobust and robust estimates for regulatory capital calculation.
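The influence-function idea is easy to illustrate with its finite-sample analogue, the sensitivity curve: append one contaminating loss and measure the scaled change in the estimate. The sketch below is not the authors’ estimator; it assumes a simple exponential severity model, compares a maximum likelihood fit with a median-based fit, and all parameter choices are illustrative.

```python
import numpy as np

def var_ml_exponential(losses, alpha=0.999):
    """Parametric VaR under an exponential severity model, scale by MLE."""
    scale = np.mean(losses)                  # MLE of the exponential scale
    return -scale * np.log(1.0 - alpha)      # Exp(scale) quantile at alpha

def var_median_exponential(losses, alpha=0.999):
    """Same model, but scale fitted via the sample median, which has a
    bounded influence function: the median of Exp(scale) is scale * ln 2."""
    scale = np.median(losses) / np.log(2.0)
    return -scale * np.log(1.0 - alpha)

def sensitivity_curve(estimator, losses, contaminant):
    """Finite-sample analogue of the influence function: scaled change in
    the estimate when a single contaminating loss is appended."""
    n = len(losses)
    return (n + 1) * (estimator(np.append(losses, contaminant))
                      - estimator(losses))
```

With one enormous loss, the MLE-based VaR moves linearly in the contaminant, while the median-based fit stops responding once the contaminant exceeds the sample median: precisely the unbounded-versus-bounded contrast that the influence function formalizes.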
In our fourth paper, “A structural model for estimating losses associated with the mis-selling of retail banking products”, Huan Yan and Richard M. Wood propose a structural model for exactly that purpose. The approach makes use of frequency/severity techniques under the established loss distribution approach (LDA). Rather than calibrating the constituent distributions through the typical means of loss data or expert opinion, the paper develops a structural approach in which these distributions are determined by bespoke models built on underlying risk drivers and dynamics. For retail mis-selling, the frequency distribution is constructed using a Bayesian network, while the severity distribution is constructed using system dynamics, which has not been used to date in driver-based models for operational risk. By combining system dynamics with elements of queuing theory and multiobjective optimization, the paper advocates a versatile approach to modeling, ensuring the model is appropriately representative of the scenario in question. The constructed model is then applied to a specific and currently relevant scenario involving packaged bank accounts, and illustrative capital estimates are determined. The authors find that structural models could provide a more risk-sensitive alternative to loss data or expert opinion in scenario-level risk quantification. Furthermore, such models could be exploited for a variety of risk management uses, such as assessing control efficacy and operational and resource planning.
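To give a flavor of the driver-based idea, consider a minimal sketch: a toy one-driver “Bayesian network” sets the Poisson complaint intensity, and a plain exponential redress cost stands in for the paper’s system dynamics severity model. Every node name, probability and parameter here is a hypothetical illustration, not the authors’ calibration.

```python
import numpy as np

# Toy one-driver "Bayesian network": a controls-strength node feeds the
# Poisson intensity of mis-selling complaints. Node names, probabilities
# and parameters are hypothetical illustrations.
P_WEAK_CONTROLS = 0.3
LAMBDA = {True: 40.0, False: 8.0}   # complaints per year by control state

def scenario_capital(severity_mean=5_000.0, q=0.999, sims=100_000, seed=1):
    """LDA Monte Carlo: sample the driver, then frequency given the driver,
    then per-complaint redress costs (exponential severities stand in for
    the paper's system dynamics model)."""
    rng = np.random.default_rng(seed)
    weak = rng.random(sims) < P_WEAK_CONTROLS   # sample the driver node
    lam = np.where(weak, LAMBDA[True], LAMBDA[False])
    counts = rng.poisson(lam)                   # frequency | driver state
    totals = np.array([rng.exponential(severity_mean, n).sum()
                       for n in counts])
    return np.quantile(totals, q)
```

The appeal of the structural form is visible even in this toy: the capital estimate responds directly to the control-strength probability, so the effect of improving controls can be read off the model rather than inferred from sparse loss data.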