I have attended a few industry meetings in recent months and have left them really enthused, having seen a number of firms, particularly in the United States, working on the development of more risk-sensitive models. In general, these models use control environment information to influence frequency and/or severity distributions (for example, as covariates in the maximum likelihood estimation of severity parameters, or in regressions used to estimate frequency), and they bring new hope to operational risk modeling. I write a monthly column in a sister publication of this journal - Operational Risk & Regulation - and every time I cover the use of control environment factors in modeling I receive many emails and messages with comments and questions. These risk-sensitive models are crucial for the future of operational risk as a discipline because they help to explain why, for example, a large volume of trades impacts operational risk, and by how much. The subject is so interesting and popular that we are considering a special issue of the journal that will bring together accounts of practical experience of implementing these models. This special issue should appear in 2013: we would welcome submissions.
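To give a flavor of the regression-based frequency approach mentioned above, the sketch below fits a toy Poisson regression of loss counts on a single simulated control environment score, using Newton-Raphson on the log-likelihood. Everything here - the model, the score, the parameter values - is my own illustrative assumption, not taken from any of the firms' models.

```python
import math
import random

def fit_poisson_glm(x, y, iters=25):
    """Newton-Raphson fit of the model  y_i ~ Poisson(exp(b0 + b1 * x_i))."""
    b0 = math.log(sum(y) / len(y) + 1e-12)  # start from the intercept-only fit
    b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # score vector (gradient of the log-likelihood)
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # Fisher information (negative Hessian), a 2x2 matrix
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

def poisson_draw(lam, rng):
    """Knuth's method; fine for the small rates used here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

# Simulated data: a worse control score (higher x) means more loss events.
rng = random.Random(0)
x = [rng.uniform(0.0, 1.0) for _ in range(500)]
y = [poisson_draw(math.exp(0.5 + 1.5 * xi), rng) for xi in x]

b0, b1 = fit_poisson_glm(x, y)  # b1 should land near the true slope of 1.5
```

The fitted slope quantifies, in this toy setting, exactly the kind of statement these models aim at: how much the expected number of loss events rises as the control environment deteriorates.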
Regarding the state of operational risk research, I would like to ask potential authors to continue to submit to the journal. Again, I would like to emphasize that the journal is not solely for academic authors. Please note that we do publish papers that do not have a quantitative focus, and indeed there is a good example of this in this issue. At The Journal of Operational Risk we would be happy to see more submissions containing practical, current views of relevant matters alongside papers that focus on the technical aspects of operational risk.
In this issue we have two Research Papers and one paper in the Forum section.
In the first paper of the issue, "Estimating operational risk capital: the challenges of truncation, the hazards of maximum likelihood estimation, and the promise of robust statistics", John Douglas ("J.D.") Opdyke and Alexander Cavallo cover a very important and popular topic in operational risk: parameter estimation for severity distributions. The authors claim that maximum likelihood estimation (MLE) does not adequately meet this challenge because of its lack of robustness to modest violations of idealized textbook model assumptions. This is particularly true when the data is not independent and identically distributed, which is usually the case with operational loss data. Truncation also increases the correlation between a distribution's parameters, and this exacerbates the lack of robustness of MLE. To overcome these known problems, the authors derive influence functions for MLE for a number of severity distributions - both truncated and untruncated - to analytically demonstrate its lack of robustness and sometimes counterintuitive behavior under truncation. Empirical influence functions are then used to compare MLE with robust alternatives such as the optimally bias-robust estimator and the Cramér-von Mises estimator. The results show that optimally bias-robust estimators are very promising alternatives to MLE for use with actual operational loss event data, whether truncated or not, when the ultimate goal is to obtain accurate (unbiased) and robust capital estimates. This is a very interesting paper that I highly recommend to operational risk modelers.
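For readers less familiar with the robust-statistics machinery the paper builds on, the influence function of a statistical functional $T$ at a distribution $F$ has the standard (Hampel) definition - this is the textbook definition, not a formula reproduced from the paper:
\[
\operatorname{IF}(x; T, F) \;=\; \lim_{\varepsilon \downarrow 0} \frac{T\big((1-\varepsilon)F + \varepsilon\,\delta_{x}\big) - T(F)}{\varepsilon},
\]
where $\delta_{x}$ is a point mass at $x$. It measures the sensitivity of the estimator to an infinitesimal amount of contamination at $x$: an estimator whose influence function is unbounded in $x$ can be driven arbitrarily far by a single extreme loss, which is precisely the fragility of MLE that the authors document for truncated severity distributions.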
In the second paper in the issue, "Asymptotics for operational risk quantified with a spectral risk measure", Bin Tong and Chongfeng Wu analyze (as the title suggests) the asymptotic results for operational risk measured with spectral risk measures (SRMs) for a single cell at very high confidence levels. An SRM is a risk measure given as a weighted average of outcomes, where undesired outcomes typically receive larger relative weights. In effect, the SRM is a weighted average of the quantiles of a loss distribution, with weights that depend on the user's risk aversion. SRMs therefore enable us to link the risk measure to the user's attitude toward risk, so we would expect that if a user is more risk averse, other things being equal, then the value of the SRM that user computes should be higher. The authors also extend their results to the multivariate case, where the dependence structure between different cells is characterized by a Lévy copula. (This is the same approach used by Böcker and Klüppelberg in their paper published in this journal in 2010.) They derive first-order asymptotic results for operational spectral risk measures in various dependence scenarios. The asymptotic results documented in this study may give us further insights into the quantification of operational risk, and we hope they might also stimulate the interest of practitioners and academics in the field of operational risk.
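To make the "weighted average of quantiles" idea concrete, here is a small numerical sketch. The lognormal severity, the exponential risk spectrum and all parameter values are my own illustrative assumptions, not taken from the paper; the point is only that increasing the risk-aversion parameter shifts weight toward high-loss quantiles and raises the SRM.

```python
import math
from statistics import NormalDist

def exponential_spectrum(p, k):
    """Exponential risk spectrum: nonnegative, nondecreasing in p, integrates to 1.

    Larger k corresponds to a more risk-averse user, i.e. more weight on
    the high-loss quantiles."""
    return k * math.exp(-k * (1.0 - p)) / (1.0 - math.exp(-k))

def spectral_risk_measure(quantile, spectrum, n=50000):
    """Approximate SRM = integral over (0,1) of spectrum(p) * quantile(p) dp,
    via the midpoint rule (which also keeps us off the endpoints, where the
    quantile function of a heavy-tailed loss blows up)."""
    total = 0.0
    for i in range(n):
        p = (i + 0.5) / n
        total += spectrum(p) * quantile(p)
    return total / n

# Illustrative severity: losses are lognormal, i.e. log-losses ~ Normal(10, 2).
log_loss = NormalDist(mu=10.0, sigma=2.0)
q = lambda p: math.exp(log_loss.inv_cdf(p))  # quantile function of the loss

srm_mild = spectral_risk_measure(q, lambda p: exponential_spectrum(p, 1.0))
srm_averse = spectral_risk_measure(q, lambda p: exponential_spectrum(p, 25.0))
# srm_averse exceeds srm_mild: greater risk aversion yields a higher SRM
```

This is only a single-cell, finite-sample toy; the paper's contribution is precisely the asymptotic behavior of such measures at extreme confidence levels and under dependence between cells.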
We present only one paper in the Forum section of this issue: "Systemic operational risk: smoke and mirrors" by Patrick McConnell. This paper tells an interesting story of events that took place in 2009 in New Zealand, when large Australian banks settled a long-running case with the New Zealand Inland Revenue Department, the result of which was payments of unpaid tax and interest by the banks totaling some NZ$2.2 billion. The author argues that the losses incurred by the Australian banks as a result of the New Zealand tax scandal were, for the most part, a result of systemic operational risk - and, in particular, legal risk. Using examples from published court cases, the paper identifies some of the legal risks that arose from these transactions. The author then suggests proactive approaches to systemic risk management that should help to detect similar scandals in the future and minimize their impact.