Welcome to the first issue of Volume 13 of The Journal of Operational Risk.
In December 2017, the Basel Committee on Banking Supervision issued Basel III, which, I presume, most of the industry will have carefully read by now. Basel III implements the previously discussed standardized measurement approach (SMA), thus terminating the advanced measurement approach (AMA), which allowed banks to calculate their operational risk regulatory capital using internal models. Under Basel III, operational risk regulatory capital will be calculated using a simple formula. As has been discussed in this journal before, we feel that the new approach undermines operational risk management in financial institutions as it will mean operational risk capital is no longer risk sensitive. In the last eighteen months we have published a number of papers from some of the industry’s best minds. These authors have undertaken extensive research that clearly points to this result. There is still hope that the industry will continue to use internal models for Pillar 2 as well as for better allocation of capital among business units. We hope to continue receiving papers that focus on these aspects of risk management, as well as the more qualitative ones.
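For readers who have not yet worked through the December 2017 text, the new calculation can be sketched in a few lines. This is an illustrative reading of the standard, not an official specification: the bucket thresholds (in euros), the marginal coefficients and the internal loss multiplier (ILM) formula below should be verified against the published rules, and the loss component (a multiple of average annual losses) is taken as a given input here.

```python
from math import exp, log

def sma_capital(bi, loss_component):
    """Sketch of the standardized-approach capital: ORC = BIC * ILM.

    BIC applies marginal coefficients to the Business Indicator (BI, EUR);
    ILM = ln(e - 1 + (LC / BIC) ** 0.8). Coefficients and thresholds here
    are our reading of the 2017 text -- check against the official standard.
    """
    buckets = [(1e9, 0.12), (30e9, 0.15), (float("inf"), 0.18)]
    bic, lower = 0.0, 0.0
    for upper, coeff in buckets:
        bic += coeff * max(0.0, min(bi, upper) - lower)
        if bi <= upper:
            break
        lower = upper
    if bic == 0.0:
        return 0.0
    ilm = log(exp(1) - 1 + (loss_component / bic) ** 0.8)
    return bic * ilm
```

Note the point made above about risk sensitivity: the only loss-sensitive input is the single loss component, so two banks with very different risk profiles but similar BI and aggregate losses receive similar capital.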
The Journal of Operational Risk, as the leading publication in this area, aims to be at the forefront of these discussions. We welcome papers that shed light on them.
In this issue, we have three technical papers and one forum paper. All of the technical papers follow our editorial guidelines by providing a numerical application of the theory being presented. I hope our readers can see the value of this approach.
In the issue’s first paper, “Shapley allocation, diversification and services in operational risk”, Peter Mitic and Bertrand K. Hassani develop a method for allocating operational risk regulatory capital among business units using the Shapley method. It is assumed that if business units form coalitions, the value added to a coalition by a new entrant is a simple function of the value of that entrant. This function represents the diversification that can be achieved by combining operational risk losses. Two such functions are considered. The calculations account for a service that further reduces the capital payable by business units. The results that the authors derive are applied to recent loss data, which (as discussed above) is our editorial preference in The Journal of Operational Risk.
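As a toy illustration of the underlying idea (and not the authors' closed-form construction or their diversification functions), the classical Shapley allocation over explicitly enumerated coalitions can be sketched as follows; the coalition capital figures are invented for the example.

```python
from itertools import combinations
from math import factorial

def shapley_allocation(units, coalition_capital):
    """Allocate total capital among business units via Shapley values.

    `coalition_capital` maps a frozenset of units to the capital that
    coalition would hold on its own (hypothetical inputs here; in the
    paper these come from a diversification function).
    """
    n = len(units)
    shares = {}
    for u in units:
        others = [v for v in units if v != u]
        total = 0.0
        for r in range(len(others) + 1):
            for coal in combinations(others, r):
                s = frozenset(coal)
                # Standard Shapley weight for a coalition of size |s|
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (coalition_capital[s | {u}] - coalition_capital[s])
        shares[u] = total
    return shares

# Toy example: combining A and B saves 20 through diversification
cap = {
    frozenset(): 0.0,
    frozenset({"A"}): 100.0,
    frozenset({"B"}): 60.0,
    frozenset({"A", "B"}): 140.0,
}
print(shapley_allocation(["A", "B"], cap))  # -> {'A': 90.0, 'B': 50.0}
```

The allocations sum to the diversified total of 140, which is the efficiency property that makes the Shapley method attractive for capital allocation; the exponential cost of enumerating coalitions is exactly why a closed form for many business units matters.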
In “Tail dependence in small samples: from theory to practice”, the second paper in this issue, Sophie Lavaud suggests using tail indexes to detect the presence of tail dependence in a given data set, thereby allowing us to improve the process for selecting a copula. The author notes that tail dependence often goes hand in hand with data scarcity, and she tackles this specific issue through an application to operational losses in the banking industry. This is a very practical paper that gives a step-by-step demonstration of how to tackle the tail-dependence issue. I am sure that readers will appreciate that.
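By way of illustration only (the paper's own diagnostics are based on tail indexes), the naive empirical estimate of the upper tail dependence coefficient is a conditional exceedance probability; its instability when few observations sit in the joint tail is precisely the small-sample difficulty the paper addresses.

```python
import numpy as np

def empirical_upper_tail_dependence(x, y, q=0.95):
    """Estimate lambda_U = P(Y > F_Y^{-1}(q) | X > F_X^{-1}(q)) empirically."""
    x, y = np.asarray(x), np.asarray(y)
    tx, ty = np.quantile(x, q), np.quantile(y, q)
    exceed_x = x > tx
    if not exceed_x.any():
        return 0.0
    return float(np.mean(exceed_x & (y > ty)) / np.mean(exceed_x))

rng = np.random.default_rng(0)
x = rng.lognormal(size=5000)
print(empirical_upper_tail_dependence(x, x))  # comonotonic data: exactly 1.0
print(empirical_upper_tail_dependence(x, rng.lognormal(size=5000)))  # independent: near 0
```

With a realistic operational loss sample, only a handful of points exceed both 95% quantiles, so this estimator is very noisy, which is why a theoretically grounded selection procedure is needed.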
“Modeling catastrophic operational risk using a compound Neyman–Scott clustering model” by Zied Gara and Lotfi Belkacem is the third paper in this issue. In it, the authors propose a novel approach, as the title clearly suggests, to modeling catastrophic operational risk by using a compound Neyman–Scott clustering model. The distinctive feature of this compound model is that it relies on a Neyman–Scott process (the frequency component of the loss distribution approach) to model the occurrence behavior of catastrophic operational loss events. The motivation behind the model is that catastrophic operational risk may be a manifestation of a two-level risk generation mechanism: on the first level, natural and human-made catastrophes (referred to as “operational storms”) occur, and these trigger, on the second level, clusters of catastrophic operational loss events. A graphical analysis based on a historical series of 334 extreme operational loss events supports the clustering structure of the event occurrences. The calibration of the Neyman–Scott process reveals a satisfactory model fit and underlines the high vulnerability of financial organizations to potential operational storms.
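To fix ideas, a two-level "storm then cluster" occurrence mechanism of this kind can be simulated directly. The parameter values and the exponential delay kernel below are illustrative assumptions for exposition, not the authors' specification or calibration.

```python
import numpy as np

def simulate_neyman_scott(storm_rate, mean_cluster_size, delay_scale, horizon, seed=None):
    """Simulate loss-event times on [0, horizon] from a Neyman-Scott process.

    First level: 'operational storms' arrive as a homogeneous Poisson process
    with intensity `storm_rate`. Second level: each storm triggers a
    Poisson(mean_cluster_size) number of loss events at exponentially
    distributed delays after the storm.
    """
    rng = np.random.default_rng(seed)
    n_storms = rng.poisson(storm_rate * horizon)
    storm_times = rng.uniform(0.0, horizon, n_storms)
    events = []
    for t in storm_times:
        delays = rng.exponential(delay_scale, rng.poisson(mean_cluster_size))
        times = t + delays
        events.extend(times[times <= horizon])  # drop events beyond the horizon
    return np.sort(np.array(events))

events = simulate_neyman_scott(0.5, 4.0, 1.5, horizon=100.0, seed=42)
print(len(events))  # roughly 0.5 * 100 * 4 = 200 events, clustered around storms
```

Plotting the simulated event times shows the bunching that the paper's graphical analysis detects in the historical series: long quiet stretches punctuated by tight clusters, which a plain Poisson process cannot reproduce.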
Finally, we have one forum paper in this issue. In “Bridging networks, systems and controls frameworks for cybersecurity curriculums and standards development”, Yogesh Malhotra notes that applied cybersecurity practices in the United States are becoming increasingly focused on cyber risk management. It is therefore becoming increasingly necessary to align IT-cybersecurity professional association standards and related educational curriculums with emerging applications. The author discusses the fact that current standards and educational curriculums seem fragmented across network protocol and network analysis tool frameworks, systems and network infrastructure frameworks, and risk management and control policy frameworks. The paper develops an applied framework for aligning, integrating and streamlining standards and curriculums across these three levels so that they meet the needs of applied risk management practice. The cyber risk management framework is developed with a focus on voice over internet protocol (VoIP) networks, which have been gaining prominence across diverse industries, such as global banking and finance, over the past decade. Despite the central technological and economic role of these networks, sparse attention has been paid to critical vulnerabilities described as the “weakest links” in global banking and finance networks, as is evident from cybersecurity and penetration testing. The author demonstrates the contribution of the proposed cyber risk management framework in addressing such critical gaps in global banking and finance cybersecurity and information assurance practices, while also highlighting its extensibility to a number of other industries, such as health care.