Journal of Operational Risk

Marcelo Cruz


Welcome to the first issue of Volume 19 of The Journal of Operational Risk.

There has recently been a discussion in our sister publication regarding differences in operational risk taxonomies (“Tall order: why a unified operational risk taxonomy is still elusive”; URL: -order-why-a-unified-op-risk-taxonomy-is-still-elusive). The issue is that, even now, 20 years on from the official publication of Basel II, there are significant differences within the industry with regard to how operational risk is classified. From a regulatory point of view, all banks need to classify loss data and capital by the seven event risk types defined in Basel II. However, some very complex financial institutions have developed their own definitions of operational risk, and several have upward of 15 risk types. These financial institutions obviously have to map their internal taxonomies to the Basel II one for regulatory reporting, but this mapping creates a layer of possible classification errors. The Operational Riskdata eXchange Association (ORX) developed a new taxonomy to see if banks could adopt a more unified and consistent approach to classifying operational risk; according to a survey, 80% of banks use the ORX taxonomy. However, the Basel Committee on Banking Supervision has not refreshed its taxonomy since 2004, and operational risk challenges have changed significantly since then (eg, cyber risk was not taken seriously by anyone in 2004, but today it is the most important risk). I would love to see more discussion of taxonomy in these pages.

The editorial board are also interested in receiving more papers on the application of machine learning (ML) techniques – one of the current key interests in the industry. In addition to papers on ML and artificial intelligence (AI), we also welcome those on cyber and IT risks (not just on their quantification but also on better ways to manage them). We also aim to publish more papers on enterprise risk management (ERM) and everything this broad subject encompasses (eg, risk policies and procedures, implementing firm-wide controls, risk aggregation, revamping risk organization, internal audit). Analytical papers on operational risk measurement are always welcome but should ideally focus on stress testing and actually managing such risk. The Journal of Operational Risk, as the leading publication on operational risk measurement and management, is at the vanguard of discussions, and it welcomes papers that shed light on discussions relating to ERM and ML/AI as well as on the “Basel III Endgame”. These are certainly exciting times in OpRisk!


In the issue’s first paper, “Composite Tukey-type distributions with application to operational risk management”, Linda Möstel, Matthias Fischer and Marius Pfeuffer note that, like many other quantitative disciplines, operational risk modeling requires flexible distributions defined on nonnegative values that allow for heavy-tailed behavior. Because they can accommodate the different requirements of “extreme” observations in the tail and “ordinary” observations in the body of a distribution, so-called composite or spliced models have long attracted modelers’ attention. The focus of this paper is composite Tukey-type distributions: a class of distributions whose tails follow a generalized truncated Tukey-type distribution, which allows for greater flexibility than the commonly used generalized Pareto distribution. After reviewing the classical Tukey-type family, the authors discuss the leptokurtic properties that emerge from a general kurtosis transformation and study several estimation methods for the truncated Tukey-type distribution. They also empirically demonstrate the flexibility of their new composite model using an operational risk data set and business interruption losses.
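The splicing idea itself can be illustrated with a minimal sketch. Since Tukey-type densities have no simple closed form, this stand-in uses a lognormal body with a generalized Pareto tail; the threshold, body weight and all parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

def composite_pdf(x, u=10.0, w=0.9,
                  body=stats.lognorm(s=1.0), tail_xi=0.5, tail_beta=5.0):
    """Density of a spliced model: a lognormal body below threshold u,
    truncated and renormalized to carry probability mass w, and a
    generalized Pareto tail above u carrying the remaining mass 1 - w.
    (Illustrative stand-in for the paper's Tukey-type tail.)"""
    x = np.asarray(x, dtype=float)
    tail = stats.genpareto(c=tail_xi, scale=tail_beta)
    body_part = w * body.pdf(x) / body.cdf(u)   # body density, truncated at u
    tail_part = (1 - w) * tail.pdf(x - u)       # tail density, shifted to start at u
    return np.where(x <= u, body_part, tail_part)
```

By construction the two pieces integrate to w and 1 - w, so the composite density integrates to one; a Tukey-type or other tail can be swapped in by replacing the `tail` object.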

George Drogalas, Michail Pazarskis, Grigorios Lazos and Konstantinos Golidopoulos, the authors of “The important role of information technology and internal auditing in risk management: evidence from Greece”, the second paper in this issue, believe that an organization’s activities are greatly aided by information technology (IT), which also speeds up corporate operations. Today, internal auditing is frequently conducted using IT, and this trend will continue as businesses increasingly switch to paperless systems. Drogalas et al assess and document how information systems contribute to internal audit, so as to adequately address any risks that may arise in the modern business environment. They surveyed Greek businesses via a questionnaire and then ran regressions on the responses. Their findings show that when businesses and internal auditors use IT properly, the internal audit process becomes more effective, quicker and more affordable, resulting in more effective management of risks relating to business operations and, by extension, greater success for business entities in a competitive modern environment.

In “Semi-nonparametric estimation of operational risk capital with extreme loss events”, our third paper, Heng Z. Chen and Stephen R. Cosslett claim that the Basel II advanced measurement approach often leads to counterintuitive value-at-risk operational risk capital for regulatory requirements due to extreme loss events. To address this issue, their research suggests that the semi-nonparametric (SNP) model proposed by Chen and Randall in 1997 can be adopted to enrich the family of distributions and guard against parametric model misspecification. Further, they show that the SNP model has the same maximum domain of attraction as its parametric kernel; it follows that the SNP methodology is consistent with the extreme value theory peaks-over-threshold (POT) method while allowing shape and scale parameters that differ from those of its parametric kernel. Using both actual and simulated operational risk loss data sets, the authors show that the SNP model is a significant improvement on the parametric model, capturing heavy tails satisfactorily through an appropriate increase in the number of parameters. The SNP quantile estimates at 99.9% are not overly sensitive to the choice of body–tail threshold, in sharp contrast to those of the parametric model. The capital estimate becomes more intuitive and is of the same magnitude as the total operational risk loss. These results suggest that the SNP model may enable banks to continue to use the advanced measurement approach to calculate their operational risk capital and manage operational risks.
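For readers unfamiliar with the POT benchmark the authors compare against, the standard tail estimator can be sketched in a few lines. This is the generic textbook POT value-at-risk calculation, not the authors’ SNP method; the threshold choice and simulated data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def pot_var(losses, u, p=0.999):
    """Peaks-over-threshold VaR: fit a generalized Pareto distribution to
    the exceedances over threshold u, then invert the tail estimator
    F(x) ~= 1 - (Nu/n) * (1 + xi*(x - u)/beta)^(-1/xi)
    at confidence level p (eg, 99.9% for operational risk capital)."""
    losses = np.asarray(losses, dtype=float)
    exceedances = losses[losses > u] - u
    n, nu = losses.size, exceedances.size
    xi, _, beta = stats.genpareto.fit(exceedances, floc=0.0)  # location fixed at 0
    return u + (beta / xi) * ((n / nu * (1.0 - p)) ** (-xi) - 1.0)
```

The body–tail threshold `u` must be chosen by the modeler, which is precisely the sensitivity the paper reports the SNP approach mitigates.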

The issue’s fourth and final paper, “Estimating the probability of insurance recovery in operational risk” by Ruben D. Cohen, Jonathan Humphries and Jia Lu, finds that insurance can effectively mitigate significant operational risks. However, not all losses are insurable, and payments to cover losses are not generally reimbursed in full, for reasons that include the chosen limits of cover and the risks or exposures that may be excluded from the coverage. When incorporating insurance into a firm’s operational risk model, the risk mitigation calculation needs to appropriately reflect the insurance coverage afforded in a framework that is well reasoned and documented, demonstrating that the calculation is closely aligned to the institution’s operational risk profile, and that the institution’s methodology for recognizing insurance captures all the relevant elements through discounts or haircuts in the amount of insurance recognition. Haircuts (or discounts) can emanate from, for example, mismatches in coverage, uncertainty over payment, and policy terms and conditions, all of which are often difficult to estimate because of the ambiguities around policy coverage and terms and conditions when considered in the context of operational risk loss events. A major source of haircuts is policy wording: the clauses, definitions and exclusions that define the scope of insurance coverage. In insurance modeling, the effects of haircuts are generally lumped into a single parameter known as the probability of insurance recovery (PoIR). Given the apparent lack of any previous modeling effort in the public domain to estimate the PoIR, the authors start from the ground up: they first introduce an underlying methodology to address the building blocks of the PoIR, then they integrate those building blocks into the risk taxonomy of a firm or unit of measure, and, finally, they incorporate the outcome into insurance and capital models.
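As a rough illustration of where a PoIR-style parameter sits in a recovery calculation, consider the following sketch. It is a hypothetical toy model, not the authors’ methodology: the haircut names, and the simplifying assumption that the haircuts act as independent multiplicative discounts, are mine.

```python
def expected_recovery(gross_loss, deductible, limit, haircuts):
    """Expected insurance recovery on a single operational risk loss.
    The policy structure caps the covered amount between the deductible
    and the limit; a PoIR-style factor then discounts that amount.
    Here (an illustrative assumption) the PoIR is the product of
    independent per-source recovery probabilities, one per haircut."""
    poir = 1.0
    for h in haircuts.values():   # eg, coverage mismatch, payment uncertainty
        poir *= (1.0 - h)
    covered = min(max(gross_loss - deductible, 0.0), limit)
    return poir * covered
```

For example, a 100 loss against a policy with a 10 deductible and a 50 limit, discounted by a 10% coverage-mismatch haircut and a 20% payment-uncertainty haircut, recovers 0.9 x 0.8 x 50 = 36. In practice, as the paper emphasizes, estimating those haircut inputs from policy wording is the hard part.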
