Welcome to the second issue of the sixth volume of The Journal of Operational Risk. As I write this letter, I have recently returned from two of the leading operational risk conferences, OpRisk USA and OpRisk Europe. It is interesting to see the level of attention that the industry has attracted and the progress that has been made. When these conferences started, in the late 1990s, attendance was no more than twenty people or so; now there are often more than three hundred attendees. This represents significant progress in terms of leveling the playing field with market and credit risk. In many firms, operational risk departments now have more personnel than their market risk and credit risk counterparts.
However, a key challenge still remains with regard to developing a measurement and management framework that will allow senior management to use the outputs of an operational risk system to make business decisions on a day-to-day basis. Progress in this area has been slow, and we would really welcome papers on this subject. In this issue we provide a balanced view between technical and more qualitative papers. There are two technical papers that deal with popular topics in the industry: namely, robust estimation and single-loss approximation. We also continue our informal series on the status of operational risk across the world by showing how Tunisian financial institutions are handling operational risk disclosures. As the journal reaches a certain maturity after its five years of existence, the board, the publishers and I are now reflecting on how the journal can be improved in terms of scope and breadth of research.
Regarding the state of operational risk research, I ask potential authors to continue to submit to the journal. Again, I would like to emphasize that the journal is not solely for academic authors. We at The Journal of Operational Risk would be happy to see submissions containing practical, current views of relevant matters as well as papers focusing on the technical aspects of operational risk.
In this issue we bring you two research papers. Both touch on very popular current issues in the industry. The first, “Robust estimation of operational risk”, by Nataliya Horbenko, Peter Ruckdeschel and Taehan Bae, claims that robust estimation methods may provide stable estimates where classical methods have failed. The authors introduce optimally robust procedures (such as the optimal mean squared error estimator, the most bias-robust estimator and the radius-minimax estimator) to operational risk by applying them to parameter estimation of a generalized Pareto distribution using external data provided by a vendor. They provide a number of supportive diagnostic plots adjusted for this context: influence plots, outlier plots and QQ plots with robust confidence bands. The authors discuss a key issue that has been a struggle for most modelers: how to model at the 99.9% confidence level when operational risk events themselves are large and rare and the distributions that best fit the data (such as generalized Pareto distributions) are very heavy tailed. This is a good and very practical paper.
The second research paper, “Can the single-loss approximation method compete with the standard Monte Carlo simulation technique?”, is by Christian Hess and it evaluates the single-loss approximation method for high-quantile loss estimation using external data. Given that banks are gaining momentum in terms of data collection and that the frequency of losses is increasing, the computing run time for aggregating frequency and severity distributions is also increasing. In a large bank, these calculations are usually run by supercomputers with more than a hundred servers. Of course, the cost of maintaining this framework is very high. This explains the interest in a “holy grail” type of solution, ie, a magic formula that can approximate the result of a large simulation in a few minutes. These approximations would never replace full-blown simulations but could offer significant benefits for a quick estimation of how capital would behave in different situations. In his paper Hess claims that the value-at-risk estimates from the single-loss approximation method are more accurate than the quantile estimates computed by a Monte Carlo simulation with one million losses. However, he finds a significant 99.9% value-at-risk underestimation for a medium-tailed gamma loss severity model. This is very interesting stuff!
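To make the comparison Hess studies concrete, the idea of the single-loss approximation can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's own experiment: it assumes a compound Poisson frequency with lognormal severity (all parameter values are illustrative) and compares the closed-form single-loss approximation of the Böcker–Klüppelberg type, which reads one quantile off the severity distribution, against a brute-force Monte Carlo estimate of the 99.9% aggregate-loss quantile.

```python
# Illustrative sketch (not from the paper): single-loss approximation (SLA)
# vs. Monte Carlo for a compound Poisson / lognormal loss model.
# All parameters below are assumptions chosen for illustration.
import math
import random
import statistics

random.seed(42)

lam = 25.0             # assumed annual loss frequency (Poisson mean)
mu, sigma = 10.0, 2.0  # assumed lognormal severity parameters
alpha = 0.999          # regulatory confidence level

# SLA: VaR_alpha is approximated by F^{-1}(1 - (1 - alpha) / lam),
# a single quantile of the severity distribution F.
z = statistics.NormalDist().inv_cdf(1 - (1 - alpha) / lam)
sla_var = math.exp(mu + sigma * z)

def poisson(rate: float) -> int:
    """Draw a Poisson variate (Knuth's method; fine for moderate rates)."""
    limit, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Brute-force Monte Carlo of the annual aggregate loss.
n_sims = 50_000
aggregates = sorted(
    sum(random.lognormvariate(mu, sigma) for _ in range(poisson(lam)))
    for _ in range(n_sims)
)
mc_var = aggregates[int(alpha * n_sims)]

print(f"SLA VaR_99.9%: {sla_var:,.0f}")
print(f"MC  VaR_99.9%: {mc_var:,.0f}")
```

For a severity this heavy tailed, the two numbers typically land in the same ballpark, while the approximation runs in microseconds versus the millions of random draws the simulation needs; this is precisely the run-time trade-off discussed above, and Hess's point is that the accuracy of the shortcut depends on how heavy the severity tail is.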
This issue contains two forum papers, which serve to balance the very technical papers in the first part of the issue. In the first paper, “Information technology at the forefront of operational risk: banks are at a greater risk”, Mohammad Ibrahim Fheili covers information technology (IT) risk, which is an important operational risk issue. Fheili claims that IT risks cannot be considered independently of other risks, such as people risk and process risk. The introduction of any form of technology to a given production process, or merely the modification of an existing IT environment, necessitates changes to staff skills, workflows, policies and procedures, and much else besides, which dramatically alters the risk profile of a department or process. Recognizing these challenges and categorizing them under IT-related risks will put management in better control of the risks.
In the second paper, “The disclosure of operational risk in Tunisian insurance companies”, Wael Hemrit and Mounira Ben Arab continue our series on the status of operational risk across the world, not just in the main financial centers. This time the authors give us an interesting view of how operational risk has been established in the Tunisian financial system. I hope you enjoy these excellent papers and find them useful.