Don’t let the SMA kill op risk modelling

The SMA is not a good response to the AMA’s failings – but don’t throw the baby out with the bathwater

This article is the second in a series focusing on recent criticism of the operational risk capital framework; the first can be found here.

One doesn’t have to look far for critics of the Basel Committee’s proposed standardised measurement approach (SMA) to calculating operational risk capital requirements – a messy compromise that has pleased no-one.

Peter Sands, the former chief executive of Standard Chartered, recently claimed that the SMA, as proposed, provides few – if any – incentives for banks to improve their management of operational risks.

The industry’s frustration with the operational risk capital framework is clear – and justified, given patchy implementation of the current advanced measurement approach (AMA), and the ill-conceived attempt to address that problem with the SMA. We agree with Sands’ characterisation of the proposed SMA as being too simplistic, exclusively backward looking and conceptually flawed. We documented our concerns when the first SMA draft was released by the Basel Committee; at the OpRisk North America conference; and in the Journal of Operational Risk.
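What does “backward looking” mean concretely? In schematic form – our own sketch, suppressing the bucket thresholds and calibration coefficients, which differ between the consultative text and any final standard – the SMA computes capital as:

$$
K_{\mathrm{SMA}} = \mathrm{BIC} \times \mathrm{ILM}, \qquad \mathrm{ILM} = g\!\left(\frac{\mathrm{LC}}{\mathrm{BIC}}\right)
$$

where the business indicator component (BIC) is a piecewise-linear function of income-statement aggregates, the loss component (LC) is a multiple of the bank’s average historical annual losses, and g is an increasing function. Every input is a realised accounting or loss figure; nothing in the formula responds to changes in controls, scenarios or forward-looking risk drivers – hence the charge that the approach is exclusively backward looking.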

But the current debate has become mired in confusion between the concept of capital for operational risk and the execution of that concept through the AMA. The fact that existing efforts have failed should not be misconstrued as an indictment of the concept of op risk capital itself; it is simply a failure of a particular approach.

Operational risk capital is necessary to address the negative externalities generated by banks’ risk-taking activities. The question of the right level of operational risk capital is a fair one; op risk modelling is still a relatively young discipline, and has yet to mature. Current models ignore risk drivers and assume the risks banks face are carved in stone – this is what many in the industry mean when they complain about models being backward looking.

But the flaws of the SMA, however numerous, are no argument against regulatory capital. The SMA was an ill-thought-out answer to the shortcomings of existing approaches. Regrettably, it appears regulators are minded to proceed with the SMA in a revised form: after several delays which encouraged rumours the SMA could be abandoned, William Coen, the committee’s secretary-general, surprised many by suggesting the approach would be included in the package of revisions to the regulatory capital framework dubbed Basel IV.

The still-to-be-agreed package would see a generalised curtailing of banks’ ability to use bespoke models to measure risk across different elements of the capital framework, with standardised approaches acting as an aggregate floor to the output achieved using an own-model approach. What the revised framework will look like – and when and whether it gets agreed by the central bank supervisory chiefs who oversee the Basel Committee’s work – remains to be seen.
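Schematically – our reading of the proposals, with the calibration still open at the time of writing – the floor would operate as:

$$
\mathrm{RWA}_{\mathrm{total}} = \max\!\left(\mathrm{RWA}_{\mathrm{internal\ models}},\ \alpha \times \mathrm{RWA}_{\mathrm{standardised}}\right)
$$

where α is the floor percentage under negotiation. Even a bank permitted to use its own models could not report aggregate risk-weighted assets below that fraction of the standardised figure.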

We certainly hope that better regulation will be proposed – regulation that addresses the weaknesses of the current framework without denying the benefits of models, as long as those models are rooted in the observation of reality rather than treated as an elegant challenge for mathematicians. To quote academic and author Nassim Nicholas Taleb: “When and if we model, we go from reality to models, not from models to reality.”

The AMA is by no means perfect; certainly, it has been imperfectly and inconsistently implemented. It has suffered from regulatory requirements that were too vague – around distribution choices, units of measurement and aggregation rules, to name just a few examples – which partly explains the inconsistency observed across the industry today.

Vagueness was a necessary evil when the rule was written, given how little experience most firms had in quantifying and modelling the operational risks they faced. However, op risk practitioners now have more than a decade of research to rely on, and while our understanding of operational risk is not perfect, it is significantly better than when the AMA was written.

Often, advocates of op risk management deny the benefit of modelling, while modellers look on qualitative risk managers as amusing distractions. Others, like Sands or JP Morgan’s Jamie Dimon, imagine one can be substituted for the other: if you have good management, you shouldn’t need to set aside capital for operational risk – or vice versa, the argument goes. But good op risk management is not optional: it is a mandatory requirement. The BCBS’s Principles for the sound management of operational risk have been part of the Pillar 1 regulatory framework for operational risk since 2001, just as operational risk capital has.

Even an institution that manages operational risk perfectly, if such perfection is achievable, should not dispense with holding dedicated capital – and no amount of op risk capital will be enough to spare reckless institutions from bankruptcy. Op risk management and capital are not substitutes: they are complements.

In the US, the focus has shifted from capital estimation to stress testing. Models developed for the Federal Reserve’s Comprehensive Capital Analysis and Review (CCAR) programme have refocused on forecasting losses that are more tangible than what is required under the AMA, which seeks to capture the 99.9th percentile of total annual losses – the worst year in 1,000.

Stress testing requires institutions to estimate expected losses under adverse economic conditions, in addition to considering significant yet plausible events that could materialise in the stress window – the so-called idiosyncratic scenarios. The focus on expected losses rather than extreme percentiles has made quantification significantly more useful from a risk management point of view. Not only is it extremely difficult, if not impossible, to estimate percentiles as extreme as the 99.9th, as the AMA requires; doing so also yields few risk management benefits, since such extreme loss levels are difficult to mitigate.
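The difficulty of the 99.9th percentile is easy to demonstrate. Below is a minimal loss distribution approach simulation – a toy sketch, not any regulatory model, with purely illustrative Poisson frequency and lognormal severity parameters – that re-estimates both the expected annual loss and the AMA-style tail quantile on independent samples:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_losses(n_years, freq=25.0, mu=10.0, sigma=2.0):
    """Toy loss distribution approach: Poisson(freq) loss counts per year,
    lognormal(mu, sigma) severities. Returns the total loss for each year."""
    counts = rng.poisson(freq, size=n_years)
    return np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

# Re-estimate the expected annual loss and the AMA-style 99.9th
# percentile on five independent samples to expose estimator noise.
for trial in range(5):
    losses = simulate_annual_losses(100_000)
    print(f"trial {trial}: mean = {losses.mean():.3e}, "
          f"99.9th pctile = {np.quantile(losses, 0.999):.3e}")
```

In this toy setup, the sample mean is comparatively stable from trial to trial, while the 99.9th percentile – pinned down by only a handful of tail observations – swings far more. And a real bank has perhaps a decade of internal loss data, not 100,000 simulated years.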

Expected losses, even conditional on a stressed environment, are the losses risk managers actively try to mitigate, and understanding them better can only help. Still, we cannot rule out the unexpected, as our political and social environment demonstrates every day.

Addressing this shortcoming will be critical not only to producing more credible operational risk capital numbers, but also to establishing a clearer risk-return trade-off. Current quantification efforts for stress-testing purposes are moving in that direction.
