Journal of Operational Risk

Marcelo Cruz


Welcome to the third issue of Volume 18 of The Journal of Operational Risk.

I have just returned from attending an array of risk management conferences around the world. It is always a pleasure to talk to practitioners and academics, both to hear about the interesting work they are doing to push the development of operational risk to the next level and to understand the many challenges the financial industry faces. I also had the opportunity to receive direct feedback, adding to my impression that the industry really appreciates what we are doing with The Journal of Operational Risk. I am enthused that – despite the upcoming changes to regulation in light of the advanced measurement approach – most banks still follow the mantra that you cannot manage what you cannot measure, and that they therefore keep using data and models to manage and report operational risk.

The application of machine learning techniques is currently a hot topic in our industry, and we are interested in receiving more papers on the subject. In addition to papers on machine learning and artificial intelligence, we also welcome more submissions on cyber and IT risks – not just on their quantification but also on better ways to manage them. We also aim to publish more papers on important subjects such as enterprise risk management (ERM) and everything that this broad subject encompasses (eg, establishing risk policies and procedures; implementing firm-wide controls; risk aggregation; revamping risk organization and internal audit). We also still welcome analytical papers on operational risk measurement, particularly those that focus on stress testing and managing operational risks. As the leading publication in the area, The Journal of Operational Risk aims to stimulate discussions, and we welcome papers that shed more light on the above topics. These are certainly exciting times!


In the first paper in this issue, “How to choose the dependence types in operational risk measurement? A method considering strength, sensitivity and simplicity”, Xiaoqian Zhu, Yinghui Wang, Mingxi Liu and Jianping Li propose a method for choosing the most appropriate dependence type for banks’ models, noting that there are many types of dependencies within operational risk, such as loss frequency dependence, loss severity dependence and aggregate loss dependence. The authors model different types of dependencies under the loss distribution approach to verify whether they have distinct effects on operational risk measurement results. The paper puts forward three innovative criteria – strength, sensitivity and simplicity – to comprehensively evaluate the dependence types. The empirical analysis is based on the Chinese Operational Loss Database, which is the largest operational risk data set for China. Zhu et al’s results show that the different dependence types do indeed have distinct effects on the estimated operational risk values for banks. Further, loss frequency dependence and aggregate loss dependence can in practice be good choices for banks, because they are generally easier to model.

In the issue’s second paper, “Integrating text mining and analytic hierarchy process risk assessment with knowledge graphs for operational risk analysis”, Zuzhen Ji, Xuhai Duan, Dirk Pons, Yong Chen and Zhi Pei observe that learning from the past can be invaluable when it comes to enhancing risk resilience and developing risk prevention strategies. One common approach to investigating operational risk is to analyze safety records, which can potentially contain a huge amount of incident data. The authors believe, however, that traditional operational risk analysis methods have several limitations. In their view, a significant drawback is that safety records are often documented as unstructured or semi-structured data, and the database can turn out to be enormous, making it challenging to extract risk information efficiently. Further, the traditional risk assessment method per the ISO 31000 standard is qualitative and subjective, which can lead to inconsistent and inaccurate risk computation, especially when dealing with hazards that have multidimensional consequences. To address these issues, Ji et al have developed a new method, called the risk-based knowledge graph (RKG), which integrates text mining and analytic hierarchy process (AHP) risk assessment with knowledge graphs for operational risk analysis. This approach provides a systematic method with which industrial practitioners can examine operational risk by using AHP risk assessment and graphical semantic networks to illustrate cause-and-effect relationships between risk entities. The use of text mining improves the efficiency of risk information extraction, while the use of AHP risk assessment enhances the consistency and accuracy of risk computation. 
To evaluate the accuracy and efficacy of RKG, the authors conducted a case study of a computer numerical control manufacturer and found that, overall, the RKG method shows promise in addressing the limitations of traditional operational risk analysis methods; it provides a more efficient and accurate way to extract and analyze risk information, making it easier for industrial practitioners to evaluate and manage risks associated with complex hazards.

In our third paper, “A text analysis of operational risk loss descriptions”, Davide Di Vincenzo, Francesca Greselin, Fabio Piacenza and Ričardas Zitikis note that operational risk databases contain, among other things, event descriptions, and this presents the opportunity to extract information from such texts. The paper introduces a novel structured workflow for the application of text analysis techniques (one of the main natural language processing tasks) to operational risk event descriptions in order to identify managerial clusters (which are more granular than regulatory categories) underlying the risks. The authors complement and enrich the established framework of statistical methods based on quantitative data. Specifically, after delicate tasks such as data cleaning, text vectorization and semantic adjustment, they apply dimensionality reduction methods and several algorithmic clustering models, comparing their performance and weaknesses. The results of this comparison add to the knowledge of historical loss events and can enable the mitigation of future risks.

The primary objective of the research by Tarika Singh Sikarwar, Harshita Mathur, Vandana Lothi and Aarti Tomar in the issue’s fourth paper, “Operational risk and regulatory capital: do public and private banks differ?”, is to understand methods of quantifying operational risk and regulatory capital in financial institutions and to investigate any interrelationships between them. This research is based on a sample of public and private sector banks and demonstrates the capability of certain public sector banks to bear operational risk at a particular level of regulatory capital. It also shows that the ability of a bank to be successful under unfavorable conditions is related to how that bank manages its operational risk, its regulatory capital and its management processes.
