Editor: Marcelo Cruz
Published: 30 Jun 2014
Papers in this issue
"Dissecting the JPMorgan whale: a post-mortem" by Patrick McConnell
"Disentangling frequency models" by Erika Gomes and Henryk Gzyl
"Fitting operational risk data using limited information below the threshold" by Christopher M. Cormack
"Specification test for threshold estimation in extreme value theory" by Lourenco Couto Miranda
I have recently participated in a number of operational risk conferences, the most important of which was OpRisk USA 2014. OpRisk USA has taken place annually for the past fifteen years, and I have spoken at every event. In the early days we would struggle to attract twenty to thirty attendees, but this year we had over 350 participants from across the Americas and even some from Europe. That growth is an indicator of how much more importance has come to be placed on operational risk in the intervening years: firms are clearly deciding it is worth investing in sending their risk managers to see what is happening in the industry and to discuss it with their peers.

With that in mind, it is frustrating to see how lost the operational risk community still is compared with the market and credit risk communities. During the conference a number of speakers complained about their lack of opportunity to participate in and provide input to strategic decisions, certainly compared with their market and credit counterparts, and they also bemoaned the disconnect between operational risk quants and risk managers. This is something I have been very vocal about over the years: operational risk will only evolve when its models, like those in market and credit risk, become sensitive to the environment in which they operate. In The Journal of Operational Risk we have published a number of papers that can help drive the shift in this direction, and we would very much like to see more contributions of this type. I encourage authors who would like to write about the state of operational risk research to continue to submit to the journal.

Again, I would like to emphasize that the journal is not solely for academic authors. Please note that we do publish papers that do not have a quantitative focus; indeed, there are examples in the current issue.
We at The Journal of Operational Risk would be happy to see more submissions containing practical, current views on relevant matters, as well as papers focusing on the technical aspects of operational risk.
There are three very interesting research papers in this issue. One paper deals with measuring frequency when data modeling is in its initial stages and it is hard to separate data by risk types and sources. The remaining two papers tackle the issue of data thresholds in operational risk modeling.
In our first paper, "Disentangling frequency models", Erika Gomes and Henryk Gzyl tackle a problem that is common in firms in the early stages of operational risk data modeling: the data collection procedure does not distinguish between subpopulations of risk sources when the frequency of losses in a given time period is calculated. The authors devise techniques to determine an appropriate frequency model for each source of risk. For frequency models in the (a, b, 0) class, there are several possible ways of disentangling a mixture of distributions; here, the authors apply the expectation-maximization algorithm, as well as k-means clustering, to solve the problem when the number of risk sources is known.
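To make the idea concrete, the sketch below runs the expectation-maximization algorithm on a mixture of two Poisson frequency distributions. This is a minimal illustration of the general technique, not the authors' actual procedure: the data, rates, and initialization are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical example: monthly loss counts pooled from two risk sources
# with different Poisson rates; the source label is not recorded.
counts = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 300)])

def em_poisson_mixture(x, k=2, iters=200):
    """EM for a k-component Poisson mixture; returns (weights, rates).

    The log(x!) term of the Poisson pmf is constant across components,
    so it cancels in the responsibilities and is omitted.
    """
    x = np.asarray(x, dtype=float)
    # Initialize rates from spread-out sample quantiles, weights uniformly.
    lam = np.quantile(x, np.linspace(0.2, 0.8, k)) + 1e-3
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        # (computed in log space for numerical stability).
        logp = (x[:, None] * np.log(lam[None, :]) - lam[None, :]
                + np.log(w[None, :]))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and Poisson rates.
        w = r.mean(axis=0)
        lam = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return w, lam

w, lam = em_poisson_mixture(counts)
print("weights:", w, "rates:", np.sort(lam))
```

When the sources are well separated, as here, the recovered rates sit close to the true values; k-means on the counts can play the same role as a cheaper, non-probabilistic alternative or as an initializer for EM.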
In the issue's second paper, "Specification test for threshold estimation in extreme value theory", Lourenco Couto Miranda deals with one of the key issues in operational risk severity modeling: deciding where to split the loss data set, above and below a certain threshold, for modeling purposes. The author assumes that, above a certain threshold, extreme loss events are explained by an extreme value distribution, and he applies the classical peaks-over-threshold method from extreme value statistics to determine that threshold. Under this approach, a generalized Pareto distribution asymptotically describes the data in excess of the threshold, so establishing a mechanism to estimate the threshold is of major relevance. Current methods rely on subjective inspection of mean excess plots or on other statistical measures, such as the Hill estimator, which introduces an undesirable level of subjectivity. In this paper the author proposes an innovative mechanism that increases the objectivity of threshold selection, moving away from subjective and imprecise eyeballing of charts. The proposed algorithm is based on the properties of the generalized Pareto distribution and treats the choice of threshold as an important modeling decision that can have a significant impact on model outcomes. It uses the Hausman specification test to determine the threshold that maintains a specification under which the remaining distribution parameters can be estimated without compromising the balance between bias and variance. As is strongly encouraged in this journal, the author applies his method to real loss data, so that readers can see a practical example of the improvements the process brings.
Results suggest that the Hausman test is a valid mechanism for estimating the generalized Pareto distribution threshold and can be seen as a relevant enhancement in the objectivity of the entire process.
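The Hausman-based algorithm itself is developed in the paper; as a hedged illustration of the peaks-over-threshold mechanics it builds on, the sketch below fits a generalized Pareto distribution to the exceedances over a grid of candidate thresholds and reports the fitted shape parameter at each. Stability of the shape across thresholds is a common informal diagnostic; the synthetic loss data and the candidate quantiles are assumptions for the example only.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
# Hypothetical severities: a heavy-tailed lognormal loss sample.
losses = rng.lognormal(mean=10.0, sigma=2.0, size=5000)

def gpd_shape_by_threshold(x, quantiles):
    """Fit a GPD to exceedances over each candidate threshold.

    Returns (threshold, fitted shape, number of exceedances) triples.
    """
    out = []
    for q in quantiles:
        u = np.quantile(x, q)
        exc = x[x > u] - u
        # floc=0: exceedances start at zero by construction.
        shape, _, scale = genpareto.fit(exc, floc=0)
        out.append((u, shape, len(exc)))
    return out

rows = gpd_shape_by_threshold(losses, [0.80, 0.85, 0.90, 0.95])
for u, xi, n in rows:
    print(f"u={u:14.0f}  shape={xi:+.3f}  n={n}")
```

The familiar bias-variance trade-off is visible in the output: raising the threshold makes the GPD approximation more accurate but leaves fewer exceedances, so the shape estimate becomes noisier. A specification test such as Hausman's formalizes the point at which that trade-off is acceptable.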
In our third paper, "Fitting operational risk data using limited information below the threshold", Christopher M. Cormack presents a methodology for calibrating the distribution of losses observed in operational risk events. His method is specifically designed for the situation where individual event information is available only above an approved threshold, and the below-threshold information is limited to the number of events and the sum of the total losses. Studies demonstrate the improved stability of the fitted distribution parameters as the number of events above the threshold is reduced, compared with the conventional truncated maximum likelihood estimator. To demonstrate this improved stability, a series of fits is performed on m samples of N events drawn from a lognormal distribution, and the results are compared with truncated maximum likelihood estimates. The below-threshold information fitting methodology produces a better estimator of the population distribution parameters, with reduced bias and dispersion. In addition, an estimation of the 99.9th percentile of the severity distribution suggests that the below-threshold information yields a better estimator of that percentile, with reduced uncertainty.
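A simplified sketch of the contrast at issue is below. It compares a conventional truncated (conditional) lognormal likelihood, which uses only the above-threshold losses, with an augmented likelihood that also uses the below-threshold event count via a censored term. This is not Cormack's estimator: his method additionally exploits the sum of below-threshold losses, which is omitted here for brevity, and all parameter values and the threshold are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(1)
mu_true, sig_true, u = 8.0, 1.5, 3000.0   # assumed lognormal and threshold
data = rng.lognormal(mu_true, sig_true, 2000)
above = data[data >= u]
n_below = int((data < u).sum())

def nll_truncated(theta):
    """Truncated MLE: conditional likelihood of above-threshold losses."""
    mu, sig = theta
    if sig <= 0:
        return np.inf
    logpdf = lognorm.logpdf(above, s=sig, scale=np.exp(mu))
    logsf = lognorm.logsf(u, s=sig, scale=np.exp(mu))   # P(X > u)
    return -(logpdf - logsf).sum()

def nll_augmented(theta):
    """Adds the below-threshold count via the censored term
    n_below * log F(u); the below-threshold loss sum could enter as a
    further moment constraint (omitted in this sketch)."""
    mu, sig = theta
    if sig <= 0:
        return np.inf
    logpdf = lognorm.logpdf(above, s=sig, scale=np.exp(mu))
    logcdf = lognorm.logcdf(u, s=sig, scale=np.exp(mu))  # P(X <= u)
    return -(logpdf.sum() + n_below * logcdf)

fit_t = minimize(nll_truncated, x0=[7.0, 1.0], method="Nelder-Mead")
fit_a = minimize(nll_augmented, x0=[7.0, 1.0], method="Nelder-Mead")
print("truncated (mu, sigma):", fit_t.x)
print("augmented (mu, sigma):", fit_a.x)
```

Even this partial use of below-threshold information anchors the body of the distribution and tends to stabilize the fit, which is the effect the paper quantifies as the above-threshold sample shrinks.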
In this section we publish papers that report readers' experiences in their day-to-day work in operational risk management.
In this issue's forum paper, "Dissecting the JPMorgan whale: a post-mortem", Pat McConnell discusses the infamous "London whale" scandal from an operational risk perspective, using the late Professor Barry Turner's framework for analyzing organizational disasters. The paper also makes suggestions as to how model risk may be managed to prevent similar losses in the future. In many respects, the London whale scandal at JPMorgan Chase is similar to other "rogue trading" events, in that a group of traders took large, speculative positions in complex derivative securities that went wrong, resulting in over US$6 billion of trading losses for the firm. As in other rogue trading cases, there were desperate attempts to cover up the losses until they became too large to ignore and eventually had to be recognized in the financial accounts of the bank. However, the whale case, so called because of the sheer size of the trading positions involved, differs in several important respects from other rogue trading cases, not least because the size and riskiness of the positions were well known to many executives within JPMorgan, a firm that prided itself on having advanced risk management capabilities and systems. The role of model risk in this scandal, though not the primary cause, is also important, in that at least part of the impetus to take huge positions was due to incorrect risk modeling. Various external and internal inquiries into the events have concluded that critical risk management processes in the bank broke down, not only in the Chief Investment Office (the division in which the losses occurred) but across the whole bank. In particular, deficiencies in the firm's model development and approval processes allowed traders to deal while underestimating the risks that they were taking.
Under Basel II regulations, losses due to process failure are classified as operational risk losses and this case therefore demonstrates a significant failure of operational risk management within JPMorgan.