Journal of Operational Risk

Marcelo Cruz

Welcome to the first issue of Volume 17 of The Journal of Operational Risk.

These are surely interesting times for an operational risk manager. Cryptocurrencies have seen an increase in popularity and now make up a significant share of investment portfolios, particularly for younger investors. This rise in popularity is pushing central banks to develop their own digital currencies and also to consider the best way to regulate this new industry. Cryptocurrencies bring significant operational risk challenges, eg, money transfers happen beyond the supervision of central banks and financial authorities, and transfer mistakes are most likely irrecoverable. As The Journal of Operational Risk is the leading publication in operational risk in the banking industry, we are planning a special issue on the subject to show what practitioners are doing to manage cryptocurrency risk.

We are also interested in receiving more papers on other subjects. Operational risk resilience is one of the key topics of interest in the industry right now, and we would welcome more submissions relating to it. In addition to resilience, we would also welcome more papers on cyber and IT risks – not just on their quantification, but also on better ways to manage them. We would also like to publish more papers on important subjects such as enterprise risk management and everything this broad subject encompasses, eg, establishing risk policies and procedures, implementing firm-wide controls, risk aggregation and revamping risk organization. As I have said before, analytical papers on operational risk measurement are still gratefully received, and forthcoming issues will include papers focusing on stress testing and on the practical management of these risks.

These are certainly exciting times! The Journal of Operational Risk aims to be at the forefront of discussions on such subjects and we would welcome papers that shed light on these topics.

In this issue we present four very interesting research papers. Two come from teams in Greece, who use local public data to propose models for assessing internal audit quality and for detecting fraud, respectively. The other two papers are more technical: one proposes a new approach to assessing backtesting quality, and the other a two-step approach to modeling g-and-h distributions.


In the issue’s first paper, “Revisiting the linkage between internal audit function characteristics and internal control quality”, Iakovos Michailidis, Kiriaki Alexandridou, Michail Nerantzidis and George Drogalas investigate the relationship between the characteristics of the internal audit function and internal control quality. Using the responses of 48 chief audit executives from Greek listed companies, they develop a random polynomial regression model that builds on the approach presented in a 2018 paper by Oussii and Taktak. They demonstrate that the proposed model is a valid, reliable and appropriate method for assessing internal control quality, and that its estimation performance is three times better than that of a linear regression model. Their findings suggest that the model can serve as a starting point for companies and practitioners seeking to improve internal control quality through the assessment of certain independent variables. On that basis, their study offers insights to regulatory bodies, auditors and scholars into the contribution of the internal audit function’s constituents to internal control quality, and it should inspire follow-on research on internal control quality assessment in other countries with similar settings.

In our second paper, “Preventing the unpleasant: fraudulent financial statement detection using financial ratios”, Michail Pazarskis, Grigorios Lazos, Andreas Koutoupis and George Drogalas investigate financial fraud in companies listed on the Athens Stock Exchange between 2008 and 2018. Pazarskis et al apply several statistical tests to both the primary and control samples to create a model that uses 30 different financial indicators as predictors of possible fraud. The data used in the research were obtained from the financial statements of the listed companies, from reviewing auditors’ reports and from the data and information available in the reports of the Athens Stock Exchange. The proposed model achieves an accuracy of 78.4% in correctly classifying the total sample, showing that it works effectively in detecting fraudulent financial statements when the economy is operating in crisis conditions. By using financial ratios, the model signals red flags in the audit process, and it could serve as an effective tool for the banking system, internal and external auditors, tax authorities and other government authorities.
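Since the paper’s exact specification is not reproduced in this editorial, the sketch below only illustrates the general idea of ratio-based red flags; the two ratios, the thresholds and the sample statement are hypothetical (the paper itself uses 30 indicators estimated from Athens Stock Exchange data):

```python
# Hedged sketch of a ratio-based red-flag screen for fraud detection.
# The ratios, thresholds and figures below are illustrative assumptions,
# not the model estimated by Pazarskis et al.

def financial_ratios(stmt):
    """Compute two common financial ratios from a statement dict."""
    return {
        "debt_to_equity": stmt["total_debt"] / stmt["equity"],
        "receivables_to_sales": stmt["receivables"] / stmt["sales"],
    }

def red_flag_score(stmt, thresholds):
    """Count how many ratios breach their (illustrative) thresholds."""
    ratios = financial_ratios(stmt)
    flags = {name: ratios[name] > limit for name, limit in thresholds.items()}
    return sum(flags.values()), flags

THRESHOLDS = {"debt_to_equity": 2.0, "receivables_to_sales": 0.5}

stmt = {"total_debt": 300.0, "equity": 100.0,
        "receivables": 80.0, "sales": 200.0}
score, flags = red_flag_score(stmt, THRESHOLDS)
print(score)  # prints 1: only the leverage threshold is breached
```

In practice an auditor would calibrate such thresholds (or a full classifier) on labeled fraud and control samples, which is essentially what the paper’s statistical tests accomplish.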

In the third paper in the issue, “Evaluation of backtesting on risk models based on data envelopment analysis”, Grigorios Kontaxis and Ioannis E. Tsolas compare different value-at-risk (VaR) models, which are used to measure market risk, under different estimation approaches, and they backtest them using an alternative strategy. The methodologies examined include filtered historical simulation, extreme value theory, Monte Carlo simulation and historical simulation. Autoregressive moving average (ARMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models are used to estimate VaR. Selected VaR functions, marginal distributions and different horizons are combined over a set of extreme probability levels using time series of the FTSE 100 (Financial Times Stock Exchange) index. Data envelopment analysis, which investigates the efficiency of VaR models using a number of different parameters, is carried out in lieu of standard backtesting techniques such as Kupiec’s test and the Basel traffic light test. A model that seems perfect under certain settings is commonly outperformed by another model when the parameters change, and no model is superior under all configurations. Thus, the standard data envelopment analysis model is used by Kontaxis and Tsolas to evaluate several market risk models over different sets of probability levels and horizons. They find that, for short horizons, some approaches underestimate VaR; however, a sufficient number of models produce violation estimates that almost converge to the desired ones. Surprisingly, aside from historical simulation and some extreme value theory models, overlapping returns tend to yield conservative 10-day VaR estimates for most models; for nonoverlapping returns, the results of the data envelopment analysis are satisfactory.
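For readers unfamiliar with the standard technique that data envelopment analysis replaces here, a minimal sketch of Kupiec’s proportion-of-failures test follows; the sample size and violation count are illustrative numbers, not results from the paper:

```python
import math

def kupiec_pof(num_obs, num_violations, p):
    """Kupiec's proportion-of-failures likelihood ratio statistic.

    Compares the observed VaR violation rate x/T against the nominal
    rate p; under the null hypothesis the statistic is asymptotically
    chi-squared with one degree of freedom. Assumes 0 < x < T.
    """
    T, x = num_obs, num_violations
    phat = x / T
    # Log-likelihood under the null (violations occur with probability p)
    ll_null = (T - x) * math.log(1 - p) + x * math.log(p)
    # Log-likelihood under the alternative (probability x/T)
    ll_alt = (T - x) * math.log(1 - phat) + x * math.log(phat)
    return -2.0 * (ll_null - ll_alt)

# 250 trading days with 5 violations of a 99% VaR: a mild excess
print(round(kupiec_pof(250, 5, 0.01), 2))  # prints 1.96
```

A statistic above the chi-squared critical value (about 3.84 at the 5% level) rejects the model’s stated coverage; in this illustrative case 1.96 falls below it, so the excess of violations is not significant.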

In the issue’s fourth paper, “Modeling multivariate operational losses via copula-based distributions with g-and-h marginals”, Marco Bee and Julien Hambuckers propose a family of copula-based multivariate distributions with g-and-h marginals. After studying the properties of the g-and-h distribution, they develop a two-step estimation strategy and simulate the sampling distribution of the estimators. Their methodology is used for the analysis of a seven-dimensional data set of 40,871 operational losses. The empirical evidence suggests that a distribution based on a single copula is not flexible enough, so Bee and Hambuckers model the dependence structure by means of vine copulas and show that the approach based on regular vines improves the fit. Moreover, even though losses corresponding to different event types are found to be dependent, the assumption of perfect positive dependence is not supported by their analysis. As a result, the VaR of the total operational loss distribution obtained from the copula-based technique is substantially smaller at high confidence levels than that obtained using the common practice of summing the univariate VaRs.
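As background, the g-and-h family is obtained by transforming a standard normal variate, with g controlling skewness and h controlling tail heaviness. The sketch below shows its quantile function with illustrative parameter values, not those estimated by Bee and Hambuckers:

```python
import math
import statistics

def gh_quantile(u, g, h, a=0.0, b=1.0):
    """Quantile function of Tukey's g-and-h distribution.

    T(z) = a + b * (exp(g*z) - 1)/g * exp(h*z**2 / 2),
    where z is the standard normal quantile of u. Assumes g != 0
    (as g -> 0 the first factor tends to z) and h >= 0.
    """
    z = statistics.NormalDist().inv_cdf(u)
    return a + b * math.expm1(g * z) / g * math.exp(h * z * z / 2.0)

# The median (u = 0.5) maps to the location parameter a
print(gh_quantile(0.5, g=0.5, h=0.2))  # prints 0.0
# A larger h fattens the upper tail, raising extreme quantiles
print(gh_quantile(0.99, g=0.5, h=0.4) > gh_quantile(0.99, g=0.5, h=0.1))
```

Because the transformation is monotone in z, quantiles can be inverted numerically to evaluate the density, which is what makes the two-step estimation of marginals and copula parameters tractable.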

