Ruben Cohen is a London-based operational risk consultant. He holds a PhD in mechanical engineering and previously spent 10 years working in operational risk analytics at Citi.
The proposed scrapping of the advanced measurement approach (AMA) for operational risk capital and its replacement with a simpler standardised measurement approach is nothing if not controversial.
In a recent article on Risk.net, an anonymous former regulator said: "Are you going to go to Nasa and tell them, 'The way you do all your rocket launches is really too complicated. The average American doesn't understand it. You need to simplify it so that we can all understand how you're launching those rockets?' That's insanity."
This analogy, which suggests the Basel Committee on Banking Supervision shouldn't question the complexity of the AMA, is completely irrelevant. Rocket launches are constrained by the laws of nature, which are what they are, regardless of whether they accord with common sense or whether the average person understands them. The scientists and engineers behind these rocket launches obviously have a solid understanding of these laws and so manage to apply them successfully in designing rockets capable of performing such astonishing feats.
Financial and economic modelling is a different ball game altogether. There are no rules except the rules of common sense, which are most useful when articulated with the support of appropriate analytical tools. Operational risk should not be regarded any differently, which is why one frequently comes across the phrase, "if it doesn't feel right, it probably isn't". For this reason, the AMA models that have been developed over approximately the past 10 years are very much subject to the same litmus test.
I am a well-qualified and experienced analytics modeller, with a PhD in mechanical engineering. I served as a member of faculty in the same field, before spending 17 years in the financial industry, including 10 years in operational risk analytics. I am fully adept at reading and deciphering the mathematics behind very sophisticated models. But I must admit that in the case of the AMA, the models are so needlessly convoluted that, even to me, they are mind-boggling.
It isn't the mathematics that I struggle with, but the fairy-tale stories that are used to justify all the adjustable parameters, which are generously stuffed into the typical AMA model. I call these 'fudge factors', because that's exactly what they are.
At a minimum, a bank's AMA model must incorporate the following four elements: internal loss data, external loss data, scenario analysis, and business environment and internal control factors. Other inputs are usually included, a crucial one being correlation.
For now, let's concentrate on correlation and internal loss data. The current industry approach to dealing with internal losses is the loss distribution approach, which works by segregating the institution into 'independent' units of measure (UoMs). My observation is that the logic behind this segmentation is highly subjective, inconsistent and far from convincing.
Once the UoMs are defined and formally selected, the loss data in each is force-fitted into one of many available choices of theoretical distribution curve. Determining the best choice of curve is a long and winding road, passing through an enchanted forest of thresholds, splices, truncations, statistical tests and confidence levels, each backed up by a host of personal opinions and hand-waving arguments. Moreover, any of these distribution choices needs at least two, and sometimes up to five, adjustable parameters for fitting purposes. Therefore, for an institution made up of 15 UoMs, at least 30 fudge factors have to be introduced from the very beginning.
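To put a number on that claim, here is a back-of-the-envelope sketch in Python. The UoM count and the parameters-per-curve range simply restate the figures quoted above; they are illustrative, not drawn from any particular bank's model.

```python
# Rough count of distribution-fitting parameters ('fudge factors').
# Figures are illustrative, restating the ranges in the text.

def fitting_parameters(n_uoms: int, params_per_curve: int) -> int:
    """Each UoM gets its own fitted severity curve, so the
    parameter count grows linearly with the number of UoMs."""
    return n_uoms * params_per_curve

# 15 UoMs with the minimum of two parameters per curve:
print(fitting_parameters(15, 2))  # 30
# ...and with a five-parameter curve for every UoM:
print(fitting_parameters(15, 5))  # 75
```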
The fudging is not over yet. Next is the question of the correlations across the different UoMs, which must also be modelled. Given the very limited history of available loss data, the empirical correlations that one gets from these losses tend to be very unreliable. Whether they're aggregated on a monthly, quarterly or annual basis, or used as part of a moving average, they are governed almost exclusively by noise, which makes them completely worthless in their contribution to any AMA model.
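That claim about noise is easy to demonstrate. The following Monte Carlo sketch (pure Python; the ten-year annual history and the trial count are illustrative assumptions, not any bank's data) draws pairs of loss series that are independent by construction and shows how widely their sample correlations scatter around zero:

```python
import math
import random

def sample_corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(42)
n_years = 10          # a typical, very short, annual loss history
trials = 2000

corrs = []
for _ in range(trials):
    xs = [rng.gauss(0, 1) for _ in range(n_years)]   # independent...
    ys = [rng.gauss(0, 1) for _ in range(n_years)]   # ...by construction
    corrs.append(sample_corr(xs, ys))

# Root-mean-square of the 'measured' correlations: the typical size
# of the pure-noise estimate, even though the true correlation is zero.
spread = math.sqrt(sum(c * c for c in corrs) / trials)
print(f"typical sampling error of the measured correlation: {spread:.2f}")
```

With only ten annual observations, the sampling error is of the order of 0.3, which is why such estimates tell you almost nothing about the true dependence.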
To get around this, the correlations are replaced, one by one, by some number, typically a by-product of someone's creative imagination influenced strongly by discussions with regulators. For an entity with 15 UoMs, we could therefore end up with 105 cross-correlations, all of them nothing but additional fudge factors generated by some ad hoc process and brought in under the guise of 'expert judgement'.
In fact, 15 is among the lowest number of UoMs I've seen in my career. Some firms comprise 40 UoMs and others more. So I'll leave it to you to figure out the number of cross-correlations needed for an entity with 40 or more.
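For anyone who prefers not to do the arithmetic by hand, the count of distinct pairwise correlations is simply 'n choose 2', that is, n(n-1)/2, which grows quadratically with the number of UoMs:

```python
from math import comb

def n_correlations(n_uoms: int) -> int:
    """Distinct off-diagonal entries of an n x n correlation
    matrix: n * (n - 1) / 2."""
    return comb(n_uoms, 2)

print(n_correlations(15))  # 105, as quoted above
print(n_correlations(40))  # 780
```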
Then there is the choice of copula. Should one use a t copula or a Gumbel copula, for example, or one of the many other possibilities? If it's a t copula, you'll first have to explain why you chose it and, second, come up with the optimal degrees of freedom for it. The most popular defence for all this, which seems to be universally and unconditionally acceptable for reasons beyond my understanding, is "it's the industry standard".
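To see where the parameters live, here is a minimal two-dimensional Gaussian copula sampler in pure Python, a sketch for illustration only, not any bank's model. Even this simplest of copulas consumes one correlation parameter per pair of UoMs; a t copula would add its degrees of freedom on top.

```python
import math
import random

def std_normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_sample(rho: float, rng: random.Random):
    """One draw from a 2-d Gaussian copula with correlation rho.

    Returns a pair of uniforms on (0, 1) whose dependence is
    governed entirely by the single parameter rho."""
    z1 = rng.gauss(0.0, 1.0)
    # Correlate the second normal with the first (2-d Cholesky step).
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return std_normal_cdf(z1), std_normal_cdf(z2)

rng = random.Random(7)
draws = [gaussian_copula_sample(0.6, rng) for _ in range(5000)]
# Each marginal is uniform on (0, 1); rho alone drives the joint behaviour.
print(draws[0])
```

Every one of those rho values is a fudge factor that has to come from somewhere, and, as argued above, it rarely comes from the data.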
By now, at a minimum, we are up to our neck in 135 fudge factors, not counting the types of copulas, statistical tests and confidence levels that are still to be decided upon – and all for a measly 15 UoM AMA model. And we still haven't moved beyond internal loss data, the first of the four required elements for the AMA. So imagine trying to incorporate the other three elements as well, each followed by its own closet full of skeletons, such as idiosyncrasies, complications and imaginative opinions from different people on how they should be formulated. Can you envisage the mess?
Every fudge factor added to a model effectively degrades the model's credibility by a certain amount. So by the time the final capital numbers are yanked out of an AMA model, their credibility has already diminished to nearly zero. Using all my common sense, technical abilities and past experience, I have tried very hard to make some sense of it all, but I keep falling flat on my face.
In my opinion, the problem with the AMA began when the Basel Committee set very firm rules at a stage when operational risk was not well understood and there wasn't sufficient data. This was further exacerbated by patchwork fixes, which were introduced by supervisors to close the gap, individually and one by one.
Now, it appears that the Basel Committee has finally decided to admit that the emperor has no clothes, by throwing away this mess, and replacing it with one consolidated equation. I agree with the philosophy of using a single, simple equation, although not necessarily the choice of equation itself. The simplicity it offers brings a breath of fresh air and should be greeted with open arms by operational risk analysts and managers.
Getting back to the rocket analogy, the reason for the Basel Committee's proposal isn't that the AMA has become too complicated for the average person to understand. It is that the typical AMA model doesn't make any flipping sense – either to me or anyone else. Logically and technically, there isn't a grain of truth or an ounce of common sense in it.