Journal of Operational Risk
Editor-in-chief: Marcelo Cruz
Volume 17, Number 4 (December 2022)
Welcome to the fourth issue of Volume 17 of The Journal of Operational Risk.
This is another very interesting issue, with several good and unique discussions. For the first time we have two papers on the use of machine learning (ML) in operational risk. It is very interesting to see how the industry has adapted and started to use state-of-the-art techniques to help in classifying and managing risks. One of the papers in this issue develops an ML algorithm to classify losses according to the Basel Committee categories, while another uses ML algorithms to spot credit card fraud. These techniques can help financial institutions to save precious time and resources and allow risk managers to focus on resolving issues that might demand more human interaction.
These are certainly exciting times! Looking ahead, operational risk resilience is one of the industry’s current key interests, and we welcome more submissions on this subject. In addition to resilience, we would also welcome more papers on cyber risk and IT risk, not just on their quantification but also on better ways to manage these risks. We also hope to publish more papers on important subjects such as enterprise risk management and everything this broad subject encompasses (eg, establishing risk policies and procedures, implementing firm-wide controls, risk aggregation, revamping risk organization). We still welcome analytical papers on operational risk measurement, and particularly those that focus on stress testing and managing risks.
The Journal of Operational Risk, as the leading publication in this area, aims to be at the forefront of such discussions, and we welcome papers that shed light on these topics.
In the first paper in this issue, “Modeling very large losses. II”, Henryk Gzyl, a regular contributor to this journal, follows up his 2018 paper “Modeling very large losses” (The Journal of Operational Risk Volume 13, Issue 2, pages 83–91). The underlying idea in the earlier paper was that very large operational risk losses should be aggregated to the standard losses as a model extension. Standard losses are those that are described in the Bank for International Settlements (BIS) classification by risk type and business line, observed and duly recorded. A very large loss, when observed, might fit into one of the cells of the BIS classification, but we would not consider it to be caused by the same risk factors as the other losses in that cell. A question left unanswered in the 2018 paper was how to estimate the frequency of the very large losses. In the current paper, Gzyl proposes an approach to estimating large losses that is similar to that used by Fermi and Drake to estimate the likelihood of extraterrestrial life. It consists of supposing that the event of interest results from a concatenation of independent factors and estimating the probability of each factor. This paper makes interesting reading, treating large operational losses as aliens!
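The Fermi–Drake style of estimation described above can be sketched in a few lines: decompose the rare event into independent contributing factors and multiply their estimated probabilities. The factors and probability values below are hypothetical, chosen only to illustrate the mechanics, and are not taken from Gzyl's paper.

```python
# Fermi-style estimate of a very large loss event's probability:
# decompose the event into independent factors and multiply their
# estimated probabilities. All factor values below are hypothetical.
factors = {
    "control failure goes undetected": 0.05,
    "failure hits a critical process": 0.10,
    "exposure at the time is extreme": 0.02,
}

def event_probability(factor_probs):
    """Probability of the joint event, assuming the factors are independent."""
    p = 1.0
    for prob in factor_probs.values():
        p *= prob
    return p

p = event_probability(factors)
print(f"Estimated probability per period: {p:.2e}")  # 1.00e-04
```

The independence assumption is what makes the decomposition tractable; each factor's probability can be estimated separately, from data or expert judgment, even when the joint event has never been observed.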
In the issue’s second paper, “Imbalanced data issues in machine learning classifiers: a case study”, Mingxing Gong notes that there has been no specific discussion regarding the resampling ratio used to rebalance data in ML, or of how imbalanced data issues affect different kinds of ML classifiers, especially more advanced ones. Gong outlines the special characteristics of the classifiers, compares different methods of dealing with imbalanced data issues, and provides best practices in model development, evaluation and validation to avoid common pitfalls. Although the methods discussed in this paper can apply to general ML classifiers in applications with imbalanced data, by using a case study in fraud detection Gong aims to call practitioners’ attention to the imbalanced data problem in credit card fraud detection, where the class imbalance problem is often mistreated and lacks theoretical discussion.
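The resampling ratio that Gong discusses can be illustrated with a minimal sketch of random oversampling, one of the simplest rebalancing methods. The class counts and target ratio below are invented for illustration and are not figures from the paper.

```python
import random

# Minimal sketch of rebalancing by random oversampling. Labels: 0 =
# legitimate transaction, 1 = fraud; the counts are illustrative only.
random.seed(0)
labels = [0] * 950 + [1] * 50   # 5% fraud: a typical class imbalance

def oversample_minority(labels, target_ratio=0.5):
    """Duplicate minority-class (label 1) samples at random until they
    number target_ratio times the majority-class count."""
    majority = [y for y in labels if y == 0]
    minority = [y for y in labels if y == 1]
    target = int(len(majority) * target_ratio)
    extra = random.choices(minority, k=max(0, target - len(minority)))
    return majority + minority + extra

balanced = oversample_minority(labels)
print(sum(balanced), len(balanced))  # 475 fraud cases out of 1425
```

The choice of `target_ratio` is exactly the resampling-ratio question the paper raises: fully balancing to 1.0 is common practice but, as Gong argues, rarely justified theoretically, and different classifiers respond differently to the choice.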
In our third paper, “Machine learning for categorization of operational risk events using textual description”, Suren Pakhchanyan, Christian Fieberg, Daniel Metko and Thomas Kaspereit provide an overview of how ML can help in categorizing textual descriptions of operational loss events into Basel II event types. They apply PYTHON implementations of support vector machines and multinomial naive Bayes algorithms to precategorized ÖffSchOR data to demonstrate that operational loss events can be automatically assigned to one of seven Basel II event types at very little cost and with satisfactory accuracy. The authors’ case study on ÖffSchOR data, which includes the provision of a parsimonious PYTHON code, is also useful for practitioners, who can use this knowledge to improve their processes for categorizing operational risk events in terms of cost efficiency and/or reliability.
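A minimal sketch of this kind of categorization pipeline, using scikit-learn, is shown below for the multinomial naive Bayes variant (the paper also uses support vector machines). The toy loss descriptions, the two event-type labels and the feature choice (TF-IDF) are assumptions made here for illustration; the paper works with the precategorized ÖffSchOR data and all seven Basel II event types.

```python
# Sketch of categorizing loss-event descriptions into event types
# with TF-IDF features and multinomial naive Bayes. The training
# texts and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

descriptions = [
    "trader concealed unauthorized positions",
    "employee embezzled client funds",
    "phishing attack compromised customer accounts",
    "malware disabled the payment system",
]
event_types = ["internal fraud", "internal fraud",
               "external fraud", "external fraud"]

# Fit a text-classification pipeline: vectorize, then classify.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(descriptions, event_types)

print(model.predict(["unauthorized positions concealed by a trader"]))
```

In practice the appeal of this approach, as the authors argue, is operational: once trained on a precategorized corpus, the pipeline assigns an event type to each new loss description automatically, leaving only the uncertain cases for human review.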
In the fourth and final paper in the issue, “Systemic operational risk in the Australian banking system: the Royal Commission”, Patrick McConnell describes how the proceedings of the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry shocked the public, the government and many in the financial industry. Over a period of just over a year, the Commission held open sessions in which the directors and CEOs of the largest financial institutions in Australia were forensically questioned about cases of serious misconduct in their firms and related entities. During the inquiry, many cases of misconduct were unearthed across a range of banking services, most importantly: inappropriate consumer lending (in almost all consumer credit segments); inappropriate financial advice; and insurance and superannuation (pensions) mis-selling. The most egregious of this misconduct was found in many financial institutions, and particularly in the largest banks (the so-called Four Pillars), which together handle more than 75% of banking products in Australia and New Zealand. The Royal Commission concluded that the financial institutions identified in the proceedings were at fault and that remediation would have to be paid to any customers who had been harmed. McConnell carries out an analysis of individual case studies of misconduct, as identified by the Royal Commission, and to aid this analysis the paper dissects the various cases from an operational risk perspective using Turner’s framework for analyzing organizational disasters.
Papers in this issue
Modeling very large losses. II
This paper presents a means of estimating very large losses by supposing that the event results from a concatenation of independent factors and estimating the probability of each factor.
Imbalanced data issues in machine learning classifiers: a case study
The author outlines the characteristics of machine learning classifiers, compares methods for dealing with imbalanced data issues, and proposes best practices in model development, evaluation and validation.
Machine learning for categorization of operational risk events using textual description
The authors summarize ways that machine learning can help categorize textual descriptions of operational loss events into Basel II event types.
Systemic operational risk in the Australian banking system: the Royal Commission
The author investigates the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry and its most prominent cases, as well as detailing examples of operational risk events that the commission did not cover.