Journal of Operational Risk

Welcome to the fourth issue of Volume 11 of The Journal of Operational Risk. Although the Basel Committee has not, at the time of writing, issued its definitive paper ruling on the use of the standardized measurement approach (SMA), there is a great deal of discussion across the industry on this topic and on the controversial consultative paper issued earlier this year that proposed the SMA as a replacement for the advanced measurement approach (AMA) for the purpose of operational risk regulatory capital calculation. The intention of the consultative paper was to encourage discussion of the proposal within the industry, and we have seen significant opposition to the SMA from most associations and financial institutions. This was reflected in the comments received by the Basel Committee, which are publicly available on their website.

While we wait for their final verdict, I am happy to say that The Journal of Operational Risk has never received so many papers. If anything, the SMA proposal has made the industry more combative and eager to improve operational risk measurement. As the leading publication in the area, The Journal of Operational Risk would like to be at the forefront of this debate, and we welcome papers that shed light on it.

In this issue, readers are rewarded with five papers: four technical papers and one forum paper. Two of the technical papers deal with the issue of parameter estimation for statistical distributions, while the other two look at operational risk measurement for insurance companies under Solvency II and correlation issues in copulas.


The first paper, "The benefit of using random matrix theory to fit high-dimensional t-copulas" by Jiali Xu and Loïc Brin, addresses the issue that, in risk modeling, the dimension of the copula is often high; this makes likelihood maximization intractable, as the number of pairwise correlations to be estimated becomes too large. Previous studies have suggested procedures that use a correlation matrix - estimated from Kendall's rank correlation matrix and, if necessary, transformed to be positive definite - as an input to the likelihood function in order to deduce a value for the degrees of freedom. The authors show that this degrees-of-freedom estimator is biased due to the noise in the correlation matrix estimate. They address the problem using a technique based on random matrix theory to improve the correlation estimate. Using simulation studies, they show that the improved procedure yields an unbiased estimator of the degrees of freedom of t-copulas with smaller variance. Finally, they fit a t-copula to real operational risk data in order to illustrate the benefits of their procedure.
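The first stage of such a procedure can be sketched briefly. For elliptical copulas, Kendall's tau maps to a linear correlation via rho = sin(pi/2 * tau); the resulting matrix need not be positive definite, so a repair step is applied before it enters the likelihood. The sketch below uses a simple eigenvalue clip as that repair - a crude stand-in for the authors' random-matrix cleaning, shown only to fix ideas; function names and tolerances are illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_to_pearson(samples):
    """Estimate a copula correlation matrix from pairwise Kendall's tau.

    For elliptical copulas, rho_ij = sin(pi/2 * tau_ij).
    `samples` is an (n_obs, dim) array.
    """
    d = samples.shape[1]
    rho = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            tau, _ = kendalltau(samples[:, i], samples[:, j])
            rho[i, j] = rho[j, i] = np.sin(np.pi / 2 * tau)
    return rho

def clip_to_positive_definite(rho, eps=1e-8):
    """Clip negative eigenvalues so the estimate is positive definite,
    then renormalize the diagonal back to 1 (a basic repair; the paper's
    random-matrix approach filters noisy eigenvalues more carefully)."""
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, eps, None)
    fixed = vecs @ np.diag(vals) @ vecs.T
    scale = np.sqrt(np.diag(fixed))
    return fixed / np.outer(scale, scale)
```

The cleaned matrix is then held fixed while the degrees-of-freedom parameter is estimated by likelihood maximization; the paper's point is that the quality of this first-stage estimate drives the bias of that second-stage estimator.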

"Operational risk and the Solvency II capital aggregation formula: implications of the hidden correlation assumptions", the second paper in this issue, sees Artur Cifuentes and Ventura Charlin analyze the Solvency II standard formula (SF) for capital risk aggregation in relation to the treatment of operational risk (OR) capital. They show that the SF implicitly assumes that the correlation between OR and the other risks is very high: a situation at odds with both the empirical evidence and the views of most industry participants. They also show that this formula, which obscures the correlation assumptions, gives different insurance companies different diversification benefits in relation to OR. These benefits are based on the relative weights of the six basic capital components, not on any risk-related metric. Hence, contrary to what has been claimed, the SF does give diversification benefits (albeit minor ones) in relation to OR. Further, since the SF does not treat the correlation between OR and the other risks explicitly, it provides no incentive to gather data on this effect. Given these considerations, the authors recommend, for the time being, the adoption of the well-known linear aggregation formula with low-to-moderate correlation assumptions between OR and the other risks.
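The contrast the authors draw can be illustrated with a stylized calculation. The sketch below compares adding an OR charge on top of the diversified basic capital (as the SF does, which amounts to an implicit correlation of one with the aggregate) against folding OR into the square-root aggregation with an explicit moderate correlation. All capital figures and the 0.25 correlation are hypothetical, chosen only to show the direction of the effect; this is not the SF's actual calibration.

```python
import numpy as np

def aggregate(caps, corr):
    """Square-root (linear) aggregation: sqrt(c' R c)."""
    caps = np.asarray(caps, dtype=float)
    return float(np.sqrt(caps @ corr @ caps))

# Hypothetical capital charges for three basic risks and operational risk.
basic = [100.0, 80.0, 60.0]
corr_basic = np.array([[1.00, 0.25, 0.25],
                       [0.25, 1.00, 0.25],
                       [0.25, 0.25, 1.00]])
op = 30.0

# SF-style treatment: OR is added on top of the diversified basic capital,
# i.e. it receives no diversification benefit against the aggregate.
bscr = aggregate(basic, corr_basic)
scr_sf = bscr + op

# Explicit linear aggregation with a moderate OR correlation of 0.25.
corr_full = np.full((4, 4), 0.25)
np.fill_diagonal(corr_full, 1.0)
scr_linear = aggregate(basic + [op], corr_full)

# scr_linear < scr_sf: making the correlation explicit and moderate
# grants a diversification benefit that the add-on treatment denies.
```

Because the add-on term is the same regardless of how OR co-moves with the other risks, two insurers with identical OR exposure but different basic-risk mixes end up with different effective diversification, which is the distortion the paper highlights.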

In the issue's third paper, "Optimal B-robust posterior distributions for operational risk", Ivan Luciano Danesi, Fabio Piacenza, Erlis Ruli and Laura Ventura claim that a way of obtaining robust capital estimates is through optimal B-robust (OBR) methods. Previous research has shown that OBR methods might mitigate the bias in capital risk quantification when compared with classical maximum likelihood estimation. Motivated by requirements related to operational risk measurement, their work integrates prior information into a robust parameter estimation framework via OBR-estimating functions. Unfortunately, the evaluation of OBR-estimating functions for different parameter values is cumbersome, and this rules out the use of many pseudo-likelihood methods. To deal with this issue, the authors suggest resorting to approximate Bayesian computation (ABC) machinery, using the OBR-estimating function as the summary statistic. Unlike other methods, the proposed ABC-OBR algorithm requires the evaluation of the OBR-estimating function at a fixed parameter value but using different data samples, which is computationally trivial. The method is illustrated using a small simulation study and applications to two real operational risk data sets.
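The ABC machinery the authors build on is simple to state: draw parameters from the prior, simulate data under each draw, and keep only the draws whose summary statistic lands close to the observed one. The sketch below is the plain rejection version of that idea with a sample mean standing in for the summary statistic - the authors' method instead uses the OBR-estimating function there, which is what makes their algorithm robust. All names and tolerances are illustrative.

```python
import numpy as np

def abc_rejection(observed, simulate, summary, prior_sample,
                  n_draws=20000, tol=0.1):
    """Plain ABC rejection sampling: retain prior draws whose simulated
    summary is within `tol` of the observed summary. The accepted draws
    approximate the posterior."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        sim = simulate(theta, len(observed))
        if abs(summary(sim) - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: recover the location of a unit-variance normal.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=100)
posterior_draws = abc_rejection(
    data,
    simulate=lambda th, n: rng.normal(th, 1.0, size=n),
    summary=np.mean,
    prior_sample=lambda: rng.uniform(-5, 5),
)
```

Note that each iteration evaluates the summary at a freshly simulated data set for a fixed parameter draw - precisely the operation the paragraph describes as computationally trivial for OBR-estimating functions, in contrast to re-solving the estimating equation at many parameter values.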

Paul Larsen notes in our fourth paper, "Operational risk models and asymptotic normality of maximum likelihood estimation", that operational risk models commonly employ maximum likelihood estimation (MLE) to fit loss data to heavy-tailed distributions. Yet several desirable properties of MLE (eg, asymptotic normality) are generally valid only for large sample sizes, a situation that is rarely encountered in operational risk. In this paper, Larsen studies the situations in which asymptotic normality does hold for common severity distributions in operational risk models. He then applies these results to evaluate errors caused by the failure of asymptotic normality in constructing confidence intervals around MLE-fitted parameters.
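The kind of error Larsen studies is easy to exhibit by Monte Carlo. For a lognormal severity, the MLE of the log-mean is the sample mean of log-losses, and the asymptotic-normality-based (Wald) 95% interval is mu_hat ± 1.96 sigma_hat/sqrt(n); at small n its actual coverage falls short of 95%. The sketch below measures that coverage gap - a generic illustration of the phenomenon, not the paper's own experiment, with all parameter values hypothetical.

```python
import numpy as np

def wald_ci_coverage(n, true_mu=0.0, true_sigma=2.0, trials=2000, seed=0):
    """Monte Carlo coverage of the nominal 95% Wald interval for the
    lognormal location parameter mu, fitted by MLE on n losses.

    Under asymptotic normality mu_hat ~ N(mu, sigma^2 / n), so the
    interval mu_hat +/- 1.96 * sigma_hat / sqrt(n) should cover mu 95%
    of the time; for small n the plug-in sigma_hat makes it undercover."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        logs = rng.normal(true_mu, true_sigma, size=n)  # log-losses
        mu_hat = logs.mean()
        sigma_hat = logs.std()  # MLE, i.e. ddof=0
        half_width = 1.96 * sigma_hat / np.sqrt(n)
        hits += (mu_hat - half_width <= true_mu <= mu_hat + half_width)
    return hits / trials
```

Running this for, say, n = 10 versus n = 200 shows coverage well below the nominal level at the small sample size and close to 95% at the large one, which is the gap between nominal and actual confidence that matters when operational risk cells contain only a handful of losses.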


We have one forum paper in this issue, "The death of one thousand flowers or the advanced measurement approach reborn?" by Jimi M. Hinchliffe, which discusses the impact of the SMA. The author explores the reasons for the pending demise of the AMA, which was introduced through Basel II in 2004. He identifies a number of drivers of the Basel Committee on Banking Supervision's decision and argues that, although the drivers of the decision to withdraw the AMA include failings on the part of banks, there have also been significant regulatory failures that have undermined the AMA. Hinchliffe then analyzes the new SMA, which will replace the AMA (as well as the basic indicator approach (BIA) and the standardized approach (TSA)), and identifies potential benefits of its introduction for those firms currently using the BIA or TSA. In relation to firms using the AMA, the author contends that the SMA may yet unleash a new era of sophisticated analytics through Pillar 2, but that this will depend on the recalibration of the SMA and the proper use of Pillar 2 by supervisors. However, he concludes that if the calibration issue is not addressed, and if Pillar 2 is not applied correctly, the result will be a perfect storm scenario. This may present a significant systemic risk, as large global systemically important financial institutions are incentivized to take more risk on the one hand and to invest less in risk management on the other.

Marcelo Cruz
