Journal of Operational Risk

Welcome to the first issue of Volume 14 of The Journal of Operational Risk.

Although the new Basel III framework will eliminate the advanced measurement approach (AMA), it is comforting to see that research continues, both in industry and in academia, into better ways to estimate and, more importantly, stabilize operational risk figures. However, the focus of these studies is slowly shifting from operational risk capital to operational risk loss estimation, with a particular focus on stress testing and the Federal Reserve System-mandated Comprehensive Capital Analysis and Review (CCAR), both of which demand the projection of such figures. We are also receiving papers presenting new research and ideas on more qualitative issues, such as estimating the operational risk of new products. We include one of these papers in this issue.

From now on, we expect to receive more papers on cyber and IT risks: not just on quantifying them, but on better ways to manage them. We would also like to publish more papers on important subjects like enterprise risk management (ERM) and everything this broad subject encompasses: establishing risk policies and procedures, implementing firm-wide controls, aggregating risk, revamping risk organization, etc. As I said before, we still expect to receive analytical papers on operational risk measurement, but they will now come with a focus on stress testing and on actually managing those risks. These are certainly exciting times!

The Journal of Operational Risk, as the leading publication in this area, aims to be at the forefront of these discussions, and we welcome papers that shed light on the issues involved.

PAPERS

In this issue, we present three research papers and one paper in the forum section.

RESEARCH PAPERS

In our first paper, “Maximum likelihood estimation error and operational value-at-risk stability”, Paul Larsen presents a general framework for analyzing maximum likelihood estimation (MLE) errors in operational value-at-risk models as a function of sample size for five severity distributions commonly used in operational risk capital models. More specifically, the author studies these estimation errors across three dimensions: the choice of severity distribution, the sample size and the heaviness of the underlying losses. He applies his results to model selection and explores the implications of his method for operational risk modeling. The challenge that small sample sizes present to operational risk capital models fitted via MLE is well recognized in academia, yet the existing literature offers general cautionary examples rather than a systematic approach to mitigating the problem. This paper makes an interesting contribution to the area.
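To give a flavor of the instability at stake, the following is a minimal Monte Carlo sketch, not the author’s framework: it assumes a lognormal severity with hypothetical parameters and uses the dispersion of the fitted 99.9% severity quantile as a rough stand-in for capital instability across sample sizes.

```python
# Minimal sketch of MLE estimation error versus sample size.
# Assumptions (illustrative only): lognormal severity with made-up
# parameters; the fitted 99.9% severity quantile proxies the capital figure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mu, true_sigma = 10.0, 2.0   # hypothetical "true" severity parameters
q = 0.999                         # tail quantile of interest

true_q = stats.lognorm.ppf(q, s=true_sigma, scale=np.exp(true_mu))
for n in (50, 100, 500, 1000):    # loss sample sizes to compare
    fitted = []
    for _ in range(2000):         # Monte Carlo replications
        losses = rng.lognormal(true_mu, true_sigma, size=n)
        # Lognormal MLE: sample mean and (biased) std of the log-losses
        mu_hat, sigma_hat = np.log(losses).mean(), np.log(losses).std()
        fitted.append(stats.lognorm.ppf(q, s=sigma_hat, scale=np.exp(mu_hat)))
    rel_sd = np.std(fitted) / true_q
    print(f"n={n:4d}  relative std of fitted {q:.1%} quantile: {rel_sd:.2f}")
```

Even this toy setup shows the dispersion of the fitted tail quantile shrinking only slowly as the sample grows, which is the stability question the paper treats systematically.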

In the issue’s second paper, “Operational risk measurement: a loss distribution approach with segmented dependence”, Xiaoqian Zhu, Yinghui Wang and Jianping Li note that, in the loss distribution approach (LDA), the most widely used approach in operational risk measurement, the modeling of dependencies across different risk cells has been extensively studied. However, the authors observe that the dependencies between high-frequency, low-impact (HFLI) and low-frequency, high-impact (LFHI) operational risk losses differ. They propose a novel approach, the loss distribution approach with segmented dependence (LDA-SD), which can model the different dependencies of HFLI and LFHI losses within the LDA framework. LDA-SD divides the losses into HFLI and LFHI parts, fits their frequency and severity distributions separately, and models the segmented dependencies with a copula. In their empirical study, the proposed LDA-SD is applied to measure the operational risk of the Chinese banking sector using the Chinese Operational Loss Database, the largest operational risk data set in China. The empirical results reveal that the dependencies are indeed different between HFLI and LFHI losses. The operational risk capital calculated by LDA-SD is significantly smaller than that calculated by the LDA under holistic dependence (LDA-HD), but larger than that calculated under tail dependence alone (LDA-TD).
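As a rough illustration of the segmented-dependence idea, here is a minimal simulation sketch, not the authors’ implementation: the two loss populations are simulated directly rather than split from data, and the Poisson frequencies, lognormal severities and Gaussian copula correlation are all illustrative assumptions.

```python
# Toy sketch of segmented dependence: simulate HFLI and LFHI annual
# aggregate losses separately, then couple them with a Gaussian copula.
# All parameters are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n_years, rho = 50_000, 0.4   # simulated years; assumed copula correlation

def annual_aggregate(lam, mu, sigma, size):
    """Annual aggregate losses: Poisson frequency, lognormal severity."""
    counts = rng.poisson(lam, size)
    return np.array([rng.lognormal(mu, sigma, n).sum() for n in counts])

hfli = annual_aggregate(lam=100, mu=8.0, sigma=1.0, size=n_years)  # frequent, small
lfhi = annual_aggregate(lam=2, mu=13.0, sigma=2.0, size=n_years)   # rare, large

# Couple the two aggregates via a Gaussian copula: draw correlated
# normals, convert each column to ranks and reorder the sorted samples.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_years)
ranks = np.argsort(np.argsort(z, axis=0), axis=0)
total = np.sort(hfli)[ranks[:, 0]] + np.sort(lfhi)[ranks[:, 1]]

print(f"99.9% VaR of total annual loss: {np.quantile(total, 0.999):,.0f}")
```

Varying the copula (or its parameters) for the two segments separately is what distinguishes this setup from imposing a single holistic dependence structure across all losses.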

Our third paper, “Introducing a novel system-of-systems axiomatic risk management technique for production systems” by Asif Mahmood, presents a novel approach for managing risks in a large, complex system of systems (SoS). The author’s system-of-systems axiomatic risk management (SoSARM) methodology is an axiomatic, risk-based decomposition for resolving hierarchical coupled risks. In the first part of the paper, the author develops a new methodological concept to understand and quantify the relationships between higher- and lower-level risks, as well as the contextual countermeasures adopted for each individual risk. In the second part, he shows how SoSARM works in practice, offering a step-by-step hierarchical approach.
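By way of illustration only, here is a toy sketch of a hierarchical risk roll-up in the spirit of coupled higher- and lower-level risks; it is not SoSARM itself, and the scores, mitigation factors and aggregation rule (weighted by the largest residual child risk) are invented for the example.

```python
# Generic hierarchical risk roll-up sketch; NOT the paper's SoSARM method.
# Each risk carries an inherent score and a countermeasure effectiveness;
# a parent inherits the worst residual risk among its children.
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    score: float = 0.0        # inherent risk score in [0, 1] (leaf risks)
    mitigation: float = 0.0   # countermeasure effectiveness in [0, 1]
    children: list["Risk"] = field(default_factory=list)

    def residual(self) -> float:
        """Leaves mitigate their own score; parents take the largest
        residual child risk, then apply their own countermeasures."""
        base = max((c.residual() for c in self.children), default=self.score)
        return base * (1.0 - self.mitigation)

line = Risk("production line", mitigation=0.2, children=[
    Risk("machine cell", mitigation=0.1, children=[
        Risk("tool wear", score=0.6, mitigation=0.5),
        Risk("sensor drift", score=0.4, mitigation=0.25),
    ]),
    Risk("logistics", score=0.5, mitigation=0.3),
])
print(f"residual risk of '{line.name}': {line.residual():.2f}")
```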

FORUM SECTION

In “An alternative approach for the operational risk assessment of a new product”, Andrea Giacchero, Jacopo Moretti, Francesco Cesarone and Fabio Tardella argue that the risk assessment of a new product is one of the most critical activities performed by a financial institution’s operational risk management function. The authors note that a new financial product usually has very few benchmarks against which a risk manager may assess its operational risk levels, owing both to a lack of historical operational loss data and to the inexperience of process owners in handling the new operation. To overcome these limitations, the authors propose a method to identify and prioritize the riskiest operational risk events when introducing a new product in a financial institution. Their methodology starts with a checklist based on risk factors (causes) to assess the operational riskiness of a new product before its launch. After the launch, and with particular reference to the management of the new product, they propose using the analytic hierarchy process to prioritize operational risk events, and the “80/20 rule” to allocate these events to appropriate risk-rating classes. Finally, the authors develop two optimization models to minimize the total cost of the capital required to cover all of the important risks and to study the relationship between the total cost of capital and the exposure coverage.
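For a flavor of the prioritization step, the following is a minimal sketch, not the authors’ method in full: the event names, the pairwise comparison matrix (on Saaty’s 1-9 scale) and the reading of the “80/20 rule” as a cumulative-priority cutoff are all hypothetical.

```python
# Toy AHP prioritization of operational risk events plus an 80/20 cutoff.
# The comparison judgments and class assignment rule are invented.
import numpy as np

events = ["mis-selling", "settlement error", "system outage", "fraud"]
# A[i, j]: how much riskier event i is judged than event j (A[j, i] = 1/A[i, j])
A = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   2,   1/2],
    [1/5, 1/2, 1,   1/3],
    [1/2, 2,   3,   1  ],
])

# AHP priorities: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# "80/20" reading: events covering the first 80% of cumulative priority
# go to the high-risk class; the rest go to the low-risk class.
order = np.argsort(w)[::-1]
cum = np.cumsum(w[order])
for rank, i in enumerate(order):
    cls = "high" if rank == 0 or cum[rank] <= 0.8 else "low"
    print(f"{events[i]:>16}: weight {w[i]:.2f} -> {cls}-risk class")
```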

 

Marcelo Cruz
