
Ridiculous, flawed, too diverse: op risk models under fire
Regulators want more consistency - and some are questioning AMA

If two banks were given the same operational loss data, and asked to calculate a capital requirement on the back of it, the results would be different – a natural consequence of the make-do-and-mend modelling framework that is the advanced measurement approach (AMA). But how different would they be – and how far can divergence go before the rules become meaningless?
Critics claim it has gone too far already, and the bigger question now is what to do about it.
“The diversity that exists in the employment of the AMA is ridiculous. It’s impossible to get any level of comparability between the models and datasets being used from one bank to the next – with the end result that two banks with the exact same datasets could have radically different capital numbers depending on the distribution they choose. It’s a mess,” says one operational risk capital expert at a large consulting firm in London. He claims to have seen the same dataset generate capital numbers that are billions of dollars apart when using different banks’ loss distributions.
One operational model risk manager at a US bank in New York agrees the approach is “fundamentally flawed”.
He puts the blame on the degree of precision banks are being asked to attain. The Basel Committee on Banking Supervision requires AMA models to calculate capital using a 99.9% confidence interval – a tough ask when the industry has been racking up record-breaking, multi-billion dollar regulatory penalties and settlements on an almost quarterly basis over the past couple of years, a run of extreme losses that keeps stretching the tail any model has to capture (Risk March 2013).
“Our view is that it is basically impractical to use the AMA because of the quantile required. You can’t reliably estimate at 99.9% because you don’t have enough empirical data. And because of this, when you try out two different distributions with different tails and plug in identical data sets, you can have massive differences in the final capital numbers,” he says.
Other critics say a bigger problem is a lack of consistency in how banks classify their losses, meaning that even if two banks build similar models and have similar risk profiles, they might still end up with vastly different numbers.
Even some regulators privately admit a rethink might be needed: “Operational risk models are no different from other types of models, but there is a lot of scope for gaming the system. There are certainly more than a few degrees of freedom and there are concerns around that flexibility and whether the AMA is still the right approach to operational risk modelling,” says one international regulator responsible for supervising operational risk models.
Rumours have been circulating for years that the Basel Committee, which is currently recalibrating the three standardised approaches, will revisit the AMA. To date, nothing has been said publicly. Mitsutoshi Adachi, chair of the working group on operational risk at the Basel Committee and director of the examination planning and review division at the Bank of Japan, declined to comment on the possibility of changing the AMA, but did say the committee is keen to ensure consistency between capital charges calculated by banks’ internal models, including the AMA (Risk June 2012).
That is a reference to benchmarking projects the committee has been running for market and credit risk modelling, in which banks were given identical test portfolios and asked to calculate risk-weighted asset numbers (Risk February 2013). The aim is to find out how much divergence there is between banks, and to pinpoint the source of the differences. The outcome could be a more prescriptive approach to modelling, or disclosure of standardised capital numbers in order to give analysts and investors an apples-to-apples comparison – and observers say something similar could work for the AMA.
“There needs to be more guidance, governance and direction around the modelling approaches. If a regulator can come up with a more defined framework and transparent industry-wide benchmarking, and eliminate the scope for game-playing, the AMA is a much better way of calculating operational risk capital requirements than any of the standardised approaches,” says Tim Thompson, partner at accounting firm Deloitte in London.
It’s an approach favoured by regulators themselves. “This is not easy, but we shouldn’t necessarily shut up shop because it’s difficult. There is still room for the AMA, but governance and process matters. There is scope for making this work. The kneejerk reaction is to go with something simplistic, but that would be a shame,” says the international regulator responsible for supervising operational risk models.
When designing the AMA, the Basel Committee did not specify an approach or distributional assumptions that should be used to calculate the capital requirements. It only required that banks capture severe tail-loss events, and meet a soundness standard comparable to that of the internal ratings-based approach for credit risk – a one-year holding period and a 99.9% confidence interval. Each bank’s model is also allowed to draw on four different inputs: internal data, external data, scenario analysis and business environment and internal control factors – which can be weighted according to the philosophy of each institution or the stance taken by national supervisors.
Most banks are using some variant of what’s known as the loss distribution approach (LDA), in which severity and frequency distributions are fitted to a combination of internal and external loss data, then combined – typically via Monte Carlo simulation – into an aggregate annual loss distribution, from which the capital requirement is read off at the 99.9% confidence interval.
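To make the mechanics concrete, here is a minimal sketch of that simulation step in Python – with an arbitrary Poisson frequency and lognormal severity standing in for the distributions a bank would actually fit to its own loss history, so the number it prints means nothing in itself:

```python
import numpy as np

# Minimal loss distribution approach (LDA) sketch.
# All parameters are invented for illustration, not fitted to real loss data.
rng = np.random.default_rng(42)

n_years = 100_000              # simulated one-year periods
freq_lambda = 25               # assumed Poisson event frequency per year
sev_mu, sev_sigma = 10.0, 2.2  # assumed lognormal severity parameters

# Number of loss events in each simulated year, then the aggregate annual loss.
event_counts = rng.poisson(freq_lambda, size=n_years)
annual_losses = np.array([
    rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in event_counts
])

# Under the AMA, the capital figure is read off at the 99.9th percentile
# of this aggregate annual loss distribution.
capital = np.percentile(annual_losses, 99.9)
print(f"Simulated 99.9% annual loss quantile: {capital:,.0f}")
```

Everything contentious in an AMA model happens before this step – in the choice and calibration of those frequency and severity distributions.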
The key issue for all LDA approaches is how to draw the tail of the distribution, which contains sparse data. In some cases, banks extrapolate from the data they have in the body of the distribution; in others, they might use scenario analysis to populate the tail and historical data to draw the body. And a variety of techniques can be used in each case. These choices have a disproportionately large impact on the resulting capital number – hence the concerns that feeding the same data into two different banks’ models would produce vastly different capital numbers.
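How sensitive the result is to that tail choice is easy to reproduce. The sketch below is purely illustrative – it uses a simulated loss sample and skips the threshold selection a genuine extreme value treatment would require – but it fits two plausible severity candidates, a lognormal and a generalised Pareto, to the same data and compares the 99.9th percentile each implies:

```python
import numpy as np
from scipy import stats

# Same data, two tail assumptions: how far apart the 99.9% severity quantile
# can land depending on the fitted distribution. Purely illustrative.
rng = np.random.default_rng(7)
losses = rng.lognormal(mean=11.0, sigma=2.0, size=500)  # a sparse, skewed sample

# Candidate 1: lognormal severity
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(losses, floc=0)
q_lognorm = stats.lognorm.ppf(0.999, ln_shape, loc=ln_loc, scale=ln_scale)

# Candidate 2: generalised Pareto severity, a common heavy-tailed alternative
gp_shape, gp_loc, gp_scale = stats.genpareto.fit(losses, floc=0)
q_pareto = stats.genpareto.ppf(0.999, gp_shape, loc=gp_loc, scale=gp_scale)

print(f"99.9% severity quantile, lognormal fit:          {q_lognorm:,.0f}")
print(f"99.9% severity quantile, generalised Pareto fit: {q_pareto:,.0f}")
print(f"Ratio: {max(q_lognorm, q_pareto) / min(q_lognorm, q_pareto):.1f}x")
```

Depending on the sample drawn, the two estimates can sit far apart – the same effect, once frequency and aggregation are layered on top, that critics say leaves capital numbers billions of dollars apart.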
Banks also make more granular choices. As with all models, a lot depends on the data used, and the AMA gives banks the ability to mould their own data sets – or, putting it bluntly, cherry-pick the losses they use. This can take two forms. In the first, banks argue they should be able to ignore certain losses or adjust their severity – an argument that can be applied to both internal and external data.
Most regulators are dogmatic when it comes to the use of internal data – banks should not be allowed to exclude any of it – but are more lax when it comes to external data, which institutions use to supplement their internal loss histories.
Banks argue internal losses are a direct reflection of the risk profile and risk management practices at the institution, while losses suffered by another bank might reveal something about the industry’s exposures but need to be used with discretion.
“We conduct similarity analysis of external data. It is important to select data points that are relevant and applicable to the business model we are running. If you don’t have a massive trading operation, the probability of incurring large fraudulent trading losses is minimised, so it makes sense to either exclude certain data points or at least dampen the severity or frequency,” says one head of operational risk modelling at a European bank in Frankfurt.
Conflict
That seems like a sound argument, but it instantly introduces subjectivity to loss estimations and an obvious potential conflict. “We’ve seen situations where the largest fraudulent trading loss at one large investment bank is many multiples of that used by another. And these are banks with similar-sized trading operations and, arguably, similar risk profiles – is that the right outcome?” asks Deloitte’s Thompson.
Regulators say they are aware of the dangers and claim to be taking a hard line. “We have a very high bar for exclusions of internal data, but are a little more lenient for external data. But it’s something we keep a very close eye on. It’s a big deal and we monitor closely how banks choose data and justify those choices,” says Robert Stewart, risk management consultant for supervision and regulation at the Federal Reserve Bank of Chicago.
When banks do exclude internal losses, the reason is often that controls have improved, reducing the risk of a repeat. Regulators say they take a dim view of this. According to the international regulator, one bank that tried to exclude a major fraudulent trading loss from its data set – using exactly this argument – was told it could not.
“We hear arguments from banks that the controls are improved and are spectacular, but we don’t buy that. It happens a lot, and while we appreciate the enthusiasm, it is horribly deluded. It is not right that a bank completely ignore a large fraudulent trading loss,” he says.
Banks see it differently, and argue that if a trading loss has prompted the organisation to improve its risk management processes and controls, those improvements should be recognised in the modelling process.
“It is certainly possible that you can demonstrate effective controls of prevention. This is legitimate. Otherwise you would punish the loss, but undermine all the positive effects. There should be a forward-looking element to the modelling process. Some data should be excluded – or, if not, the severity or probability of the loss should be tweaked to reflect the reduced exposure,” says the head of operational risk modelling at the European bank in Frankfurt.
The second way in which banks can filter or shape their operational loss datasets is by putting the losses into a different category altogether. As a benign example, a bank’s mortgage lending business might suffer a loss because it was not able to lay claim to the property securing the loan – which might in turn be because of a clerical error when the loan was made. One bank could argue that the clerical error is the primary cause of the event, and classify it as an operational loss; another might decide the default is the principal risk and throw it into its credit model.
A more worrying example might see a loss categorised as the result of a business risk instead – an exposure type that does not attract regulatory capital.
“This is a major problem area,” says Mike Finlay, chief executive at consulting firm RiskBusiness International. “I could go to a bank, apply for a mortgage, and put in a forged letter of employment. If I then default on the loan, that is a credit loss to the bank. But the reality is that the credit decision was based on fraudulent information. The fraud occurred prior to the credit loss. Now, some firms will classify it as a credit loss and some as an operational risk loss. But you have to get this right, because if you classify this as a credit loss, no corrective action is likely to be taken, and it becomes a self-perpetuating problem.”
Banks recognise there might be an incentive to play games with the classification of losses if either operational or credit risk is capitalised more punitively, but they say there is little room to wriggle out of capturing the loss altogether.
“The room for manoeuvre is limited, especially when it comes to classifying losses as risks that don’t fall under the capital framework. We have heard of it happening, but the Basel rules are pretty strict when it comes to the definition of operational risk losses,” says one operational risk modelling expert at a US bank in New York.
Regulators say they give short shrift to arguments that a loss should be classed as a business risk, but admit work needs to be done on the boundary between operational and credit losses. “I’ve heard the arguments, but they are misguided – banks shouldn’t try to class losses as the result of business risk if they want to maintain credibility with their regulator. They don’t stand up for very long. But one of the big areas is the boundary between credit risk and operational risk, and while the areas are well-defined, you don’t need to scratch the surface hard to find a lot of operational risk losses being classified as credit losses. A lot of work still needs to be done here,” says the international regulator.
Wrapping everything up is the problem of precision, which the operational model risk manager argues regulators can solve easily – by lowering the confidence interval and applying a brute force multiplier to the modelled numbers to make them sufficiently conservative.
The current approach is like trying to determine the salaries of the US public using a database of 1,000 people, he says. “If you use a confidence interval of around 90% you would be able to observe the data, and could determine with reasonable assurance you have captured the broad base. But if you’re asking for a 99.9% interval – depending on whether or not you have Bill Gates in the distribution – you can be very far from the truth. And the same applies to operational risk data. It is a limited set and you don’t know how heavy the tail is. Requiring a 99.9% quantile is just pretending to tell the truth. Regulators should instead lower the confidence interval, and employ the use of multipliers. It is statistically more relevant, and more transparent,” he says.
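His salary analogy maps directly onto a simple resampling experiment. The sketch below is illustrative only – an arbitrary Pareto population stands in for operational losses – but it shows how much more the empirical 99.9th percentile of a sample of 1,000 wanders from draw to draw than the 90th does:

```python
import numpy as np

# How stable are the 90th and 99.9th percentiles of a sample of 1,000 drawn
# from a heavy-tailed population? Parameters are arbitrary and illustrative.
rng = np.random.default_rng(0)
n_trials, sample_size = 2_000, 1_000

q90, q999 = [], []
for _ in range(n_trials):
    sample = rng.pareto(a=1.5, size=sample_size)  # heavy-tailed stand-in for losses
    q90.append(np.percentile(sample, 90))
    q999.append(np.percentile(sample, 99.9))

for name, estimates in (("90th", q90), ("99.9th", q999)):
    estimates = np.asarray(estimates)
    # Coefficient of variation: spread of the estimate relative to its own mean.
    print(f"{name:>6} percentile estimate: mean {estimates.mean():10.1f}, "
          f"coefficient of variation {estimates.std() / estimates.mean():.2f}")
```

The tail estimate comes out far noisier than the body estimate – the statistical core of the argument for a lower quantile combined with a supervisory multiplier.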