
Party’s over as more banks drop internal models for market risk

At least three systemic banks in Europe intend to ditch IMA for capital requirements


  • Banks are planning to ditch internal models for calculating market risk capital requirements once new trading book rules kick in. Instead, they will rely on regulator-set standardised approaches, whilst using their own models for internally measuring risk.
  • Banks explain the shift by arguing that the potential reduction in capital requirements doesn’t justify the extra cost of internal modelling.
  • However, some risk experts are concerned that dumping internal models will spark a divergence between the way that banks calculate capital requirements and internal risk management.
  • “I don’t agree with the regulator thinking that the standardised model is safer,” says a senior risk manager at a global investment bank with internal model ambitions. “Yes, it gives more capital. But if you make it too simple, how do you check it’s not missing risk?”
  • Others feel that the move to standardised models can improve risk management, arguing that the own-model approach is “overengineered” and unnecessarily complex.

In the risk management nightclub, there’s a VIP room at the back reserved for the select group of banks that are allowed to use internal models for regulatory capital. But the lights are going up and the crowd is thinning, as more of the exclusive occupants join the rest of the partygoers on the main dancefloor, in the standardised section.

Last year, Risk.net reported that it knew of one European global systemically important bank (G-Sib) that was planning to stop using its own models to calculate market risk altogether. Now, that number has risen to three.

One risk technology vendor says that, based on client conversations, it expects the number of banks using the internal models approach (IMA) for market risk capital requirements to shrink from around 80 to fewer than 20 once new trading book capital rules fully apply.

However, as the trickle of banks walking away from IMA turns into a flow, banks are questioning the alternative: the regulator-set standardised approach. Some say this method is imprecise and could promote distorted incentives. Others warn that ditching the IMA could lead to a decoupling of internal risk measurement from capital requirements.

“I don’t agree with the regulator thinking that the standardised model is safer,” says a senior risk manager at a global investment bank that does still have internal model ambitions. “Yes, it gives more capital, but if you make it too simple and you don’t have the onus of validating it and recalibrating, what if it goes wrong? How do you check it’s not missing risk?”

Those who have already chosen to leave behind the IMA, though, say the standardised method offers improved risk measurement.

“There is a lot of what I would consider overengineered stuff in the IMA,” says a senior risk modeller at a European bank that is not a G-Sib and is not planning to seek internal model approval. “I don’t think that the IMA model is suitable for internal risk management because of all these strange moving parts in the model.”

There will always be new market dynamics, new products, new underlyings in the market. Can the standardised approach really capture this?
Risk manager at a European G-Sib

There’s an irony to the debate because in the US, market risk is the only category that could still be capitalised with internal models under the so-called Basel III endgame proposals by regulators last July. By contrast, US banks will be forced to ditch their internal models for credit risk under the same plans.

Central to the Basel III reforms is the Fundamental Review of the Trading Book. The latest version of FRTB imposes stringent tests on banks wishing to use internal models, while the fallback standardised approach has been recalibrated with less punitive risk weights than previously.

As a result, a growing number of banks are finding that entry to Club IMA is too expensive, and the drinks are watered down. In other words, the costs of passing the tests and maintaining models simply don’t justify the small savings in capital requirements that IMA offers versus the standardised approach.

“What we will get is a lot of complexity and extra cost,” says the senior risk modeller at the smaller European bank. “For us, it was almost an easy choice [to use the standardised approach].”

A senior risk modeller at a European G-Sib that is abandoning the IMA suggests that those staying in Club IMA are doing so to avoid writing off the “sunk cost” of preparations already made to meet the FRTB’s model changes.

Hitting the (dance)floor

The process of applying FRTB’s internal model approach is complex and time-consuming. IMA banks are “expected” to ensure they continually meet their market risk capital requirements, “including at the close of each business day”, in the words of the Basel standards. The European Union’s implementation of the regime stipulates that the calculation must occur “on a daily basis”. Planned UK rules are worded similarly.

The daily grind of calculating regulatory capital is just one element contributing to the overall workload of internal modelling. A regulatory expert at the first European G-Sib says the IMA makes demands on banks in three areas: the frequency of computations, expectations around controls, and access to eligible data.

The complexity of the operation manifests itself in reams of red tape from authorities. In Europe, overarching laws are written by the three EU legislative bodies. Those lawmakers mandate Europe’s finance watchdogs to further specify the technical details of certain aspects of the rules. For capital requirements, the relevant watchdog is the European Banking Authority, whose own rules for IMA span 193 pages – counting only the pages containing articles. That’s aside from almost a hundred more pages of rules from the main legislators and the EU’s primary supervisor.

Banks are already feeling the heavy hand of EU regulators in the form of recent proposals by the EBA to force model validators at banks to perform a backtest on expected shortfall models. Meanwhile, European banks have complained that EBA rules on non-modellable risk factors are too prescriptive.

The extra rules in the EU’s implementation of the FRTB were the last straw for another G-Sib.

You don’t want two separate worlds in your trading activity where the front office looks at a certain way of risk-managing to a certain set of metrics, and the entire official chain for capital purposes is running on something different
Risk manager at a European G-Sib

For all the extra work, the capital rewards are uncertain. The introduction of an anticipated output floor for banks will cap the amount of capital savings that internal models can produce. The new floor will require internally modelled risk-weighted assets – the key input into final capital requirements – to go no lower than 72.5% of those produced by alternative standardised approaches. Internal ratings for calculating default risk within banks’ lending businesses usually take up most of the available room.
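The floor mechanics can be sketched in a few lines, with hypothetical numbers. This is a simplification – in the final rules the floor applies to total RWAs across risk categories, not desk by desk – but it shows why modelling benefits are capped:

```python
# Illustrative sketch of the Basel III output floor (hypothetical figures).
# Internally modelled RWAs cannot fall below 72.5% of the standardised number.

def floored_rwa(modelled_rwa: float, standardised_rwa: float,
                floor: float = 0.725) -> float:
    """Return the RWA figure that feeds into capital requirements."""
    return max(modelled_rwa, floor * standardised_rwa)

# A bank whose models produce 60 of RWA against a standardised 100
# is floored at 72.5 -- the last 12.5 of modelling benefit is lost.
print(floored_rwa(60.0, 100.0))  # 72.5
```

If modelled RWAs already sit above the floor – say 80 against a standardised 100 – the floor does not bind, which is why banks whose credit models consume most of the headroom see little remaining benefit from market risk models.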

The output floor was the primary reason cited by one G-Sib for ditching the IMA.

Another bank says its decision to drop the IMA was rooted in non-modellable risk factors, or NMRFs. A risk factor is deemed to be non-modellable if a bank fails to have enough observations to pass one of the Basel Committee’s tests. Banks must capitalise NMRFs with a surcharge.
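The observation-count test can be sketched roughly as follows. This is a simplified reading of the Basel risk factor eligibility test – broadly, at least 24 real-price observations over the past year with no 90-day stretch containing fewer than four – and it omits the alternative 100-observation criterion and other details in the standards:

```python
# Rough sketch of the Basel risk factor eligibility test (simplified).
from datetime import date, timedelta

def passes_rfet(obs: list, asof: date) -> bool:
    """True if the risk factor has >= 24 real-price observations in the
    past year, with no 90-day period containing fewer than 4 of them."""
    year_ago = asof - timedelta(days=365)
    window = sorted(d for d in obs if year_ago <= d <= asof)
    if len(window) < 24:
        return False
    start = year_ago
    while start + timedelta(days=90) <= asof:
        end = start + timedelta(days=90)
        if sum(1 for d in window if start <= d < end) < 4:
            return False
        start += timedelta(days=1)
    return True
```

A factor quoted weekly all year passes; 24 observations bunched into a single month fail the 90-day condition, so the factor would attract the NMRF surcharge despite meeting the raw count.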

The senior risk modeller at the smaller European bank says they were forced to split their trading portfolio into two, owing to the number of risk factors that were found to be non-modellable. Desks with few NMRFs were capitalised on the IMA whilst those with many were capitalised under the standardised approach. As a result, there was less diversification within the trading book: risk factors that previously offset each other ended up on different sides of the internal-versus-standardised divide, thereby pushing up capital requirements.

“You will lose a lot of diversification between the two parts and that eliminated all the benefits that we got for using IMA,” says the senior risk modeller.

Narrowing down IMA desks can also make capital requirements more volatile as internal models must use a stressed period that uses the most severe losses for the bank’s IMA portfolio. A reduced IMA perimeter makes it more likely that new volatility events will become a candidate for the stress period.

Fear of missing out

Widespread adoption of the standardised approach is a concern for some, however. Bank risk models are meant to provide a more accurate view of risks than the standardised approach. But, sceptics argue, the standardised approach was calibrated by regulators based on observations during the 2008 financial crisis, which makes it less effective for internal risk management and potentially capital allocation.

“There will always be new market dynamics, new products, new underlyings in the market,” says a risk manager at a second European G-Sib, which does have internal model ambitions. “Can the standardised approach really capture this? Can it capture potential correlations if you have a new hedging regime? The standardised approach is hard coded; it won’t be able to capture that.”

A further difference between the IMA and standardised approach is the use of backtesting, the process of checking model results against real life outcomes, and subsequently correcting the model if its estimates prove to be inaccurate. Banks routinely backtest their internal models whereas the same does not apply to the standardised approach.
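In its simplest regulatory form, a VAR backtest is just a count of "exceptions" – days on which the realised loss exceeded the model's forecast. The sketch below uses made-up numbers; under the Basel traffic-light approach, too many exceptions over 250 trading days pushes a model into the amber or red zone:

```python
# Minimal sketch of a VAR backtest: count the days on which the realised
# loss breached the day's 99% VAR forecast. Figures are hypothetical.

def count_exceptions(pnl: list, var_99: list) -> int:
    """pnl[i] is the day's P&L (negative = loss); var_99[i] is that day's
    VAR forecast, expressed as a positive loss threshold."""
    return sum(1 for p, v in zip(pnl, var_99) if -p > v)

pnl = [1.2, -0.4, -3.1, 0.8, -2.9]
var = [2.5, 2.5, 2.5, 2.5, 3.0]
print(count_exceptions(pnl, var))  # 1 exception: the -3.1 loss breaches 2.5
```

It is this feedback loop – breach, investigate, recalibrate – that has no counterpart in the standardised approach.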

“The very fact that it is not backtested means that banks rely on ideas that were theoretically well founded at some point in time, but they are not subject to a reality check any more,” says Carlo Acerbi, a consultant at risk advisory Larix and a lecturer at Swiss university EPFL.

Banks can perform this backtest and update process relatively quickly. Changing standardised models set by regulators takes much longer. Regulators need to agree to perform an update, run the analysis, consult on the change and publish their final standards at the Basel Committee on Banking Supervision. Jurisdictions then need to implement those rules into their own laws. All of which can take years.

Critics say the standardised approach is a blunt tool, lacking the necessary sensitivity that internal models can offer. They fear the regulator-set approach may underestimate or overestimate risk, meaning capital requirements aren’t set appropriately. Banks may also be tempted to arbitrage any exposures that are supposedly underestimated in the standardised approach.

“You may discover the capital charges for some asset classes are underestimated, so banks fill their portfolios with those asset classes – exactly the ones that are riskier than the model penalises,” says Acerbi. “When one is wrong, you only realise it when it’s too late.”

Overestimating risk can have a nasty side-effect, too. Activities that attract a higher capital charge mean banks will allocate more capital to that desk. With increased headroom, the desk may then take excessive risks that are not adequately capitalised.

“If you give a desk high limits, then they could in some of the directions where the model is not that conservative get a lot of real risk into the organisation,” says the senior risk modeller at the smaller European bank.

However, regulators are confident that recent updates to the standardised approach introduce sufficient sensitivity to the model.

“The standardised approach is a significant improvement in terms of risk sensitivity compared to the previous approach,” says a European regulatory source. “There has been a very deliberate effort by the Basel Committee to provide more sensitivity into the standardised approaches. That is why it is a lot more credible alternative today to using the IMA.”

A split in the room

A shift to the standardised approach for calculating regulatory capital under FRTB does not signal the death of internal modelling. Banks will still use internal models to measure and manage market risk. Indeed, this is one of the reasons for banks to apply the IMA: to keep all areas of the business looking at a single metric.

Differences between the output of models for risk management and capital requirements are small today. Ideally, banks would like them to stay that way.

“You don’t want two separate worlds in your trading activity where the front office looks at a certain way of risk-managing to a certain set of metrics, and the entire official chain for capital purposes is running on something different,” says the risk manager at a third European G-Sib. “How do you then officially risk-manage? The second line of defence or capital, which of the two do you use? Or do you have a third flavour in between the two?”

There have been cases in the past where differences resulted in costly mistakes: not least, JP Morgan’s infamous London Whale episode in 2012. Here, the bank manipulated its internal models in an attempt to reduce risk-weighted assets, resulting in losses of more than $6 billion.

 

A widening gap between the outputs of risk management and capital requirements models could lead to banks exposing themselves to risks that the regulator hadn’t originally intended, suggests Thomas Obitz, director at consultancy RiskTransform.

“There is a danger that there will be increasing divergence between the risk measurement and the regulatory capital measurement,” he says. “In such a situation, you are running into a different set of skewed incentives which might not be what the regulator wants.”

Obitz sees the reluctance to adopt IMA as an ominous sign that banks are not necessarily willing to invest in enhancing market risk modelling. For example, FRTB makes greater use of a measure of risk known as expected shortfall instead of the classic risk metric, value-at-risk. For a firm to switch its internal models to expected shortfall would entail a sizeable cost – a cost that may be beyond a lot of banks.

“Many people are still in the VAR world and they’re not even thinking about moving to expected shortfall for management decision making and limit setting,” says Obitz.
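The gap between the two metrics is easy to illustrate. Using a toy historical-simulation convention (the loss sample and quantile mechanics below are illustrative, not any bank's methodology), VAR reads off a single quantile of the loss distribution, while expected shortfall averages the losses beyond it – which is why ES reacts to how bad the tail is, not just where it starts:

```python
# Illustrative historical-simulation VAR vs expected shortfall at 97.5%.

def var_es(losses: list, alpha: float = 0.975):
    """Losses are positive numbers. VAR is the empirical alpha-quantile
    observation; ES is the average of losses at or beyond the VAR."""
    s = sorted(losses)
    idx = int(alpha * len(s))   # index of the quantile observation
    var = s[idx]
    tail = s[idx:]              # the losses VAR ignores, ES averages
    return var, sum(tail) / len(tail)

losses = [float(i) for i in range(1, 101)]  # toy sample of 100 daily losses
print(var_es(losses))  # (98.0, 99.0)
```

Doubling the single worst loss in this sample would leave VAR unchanged but move ES – the property that makes the switch consequential for limit setting, and costly to retrofit.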

As a result, Obitz is pessimistic over whether FRTB will improve internal market risk measurement.

The senior risk modeller at the smaller European bank does not rule out his firm adopting expected shortfall in the future for internal risk measurement. But he sees advantages in the pre-FRTB model framework, which uses two separate measures – VAR and stressed VAR. This allows the bank to set two limits: a lower one based on the actual current environment, and a higher one that applies if the environment suddenly becomes riskier.

New bouncers

Market risk managers may derive some comfort from changes to the operational risk capital regime. The latest Basel III capital rules require all banks to use a standardised approach for op risk, which has led many banks to abandon the so-called advanced measurement approach.

Anecdotally, Obitz says op risk managers have noticed greater freedom to tailor their risk management models now that the regulatory environment is less restrictive. For market risk, banks can decouple their internal risk management models from FRTB, which should give them more autonomy over the quality of those models.

“Who controls the quality of that internal model? That’s in a way our main motivation behind considering ourselves an internal model bank,” says the risk manager at the third European G-Sib.

FRTB requires banks to pass a series of tests. One is the so-called profit-and-loss attribution test, which consists of two statistical measures comparing the P&L values generated by front-office and risk systems. One measure assesses the correlation between the two P&Ls; the other assesses the similarity of their distributions.
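The two statistics can be sketched in miniature. The Basel text specifies a Spearman rank correlation and a Kolmogorov–Smirnov distance between the front-office and risk-model daily P&L series; the bare-bones implementations below (no tie handling, toy data only, thresholds omitted) are illustrative rather than the regulatory formulas in full:

```python
# Sketch of the two FRTB P&L attribution statistics: Spearman rank
# correlation and the Kolmogorov-Smirnov distance between two P&L series.

def ranks(xs):
    """Rank positions of each value (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Pearson correlation of the rank series."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

def ks_distance(x, y):
    """Largest gap between the two empirical distribution functions."""
    def cdf(s, t):
        return sum(1 for v in s if v <= t) / len(s)
    return max(abs(cdf(x, t) - cdf(y, t)) for t in sorted(set(x) | set(y)))
```

A desk passes when correlation is high and the KS distance is small; a well-hedged desk's P&L hovers near zero, so small system differences swamp both statistics – the design quirk critics point to below.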

“The fact is the checks and balances are becoming more rigorous. You have to do P&L attribution. It’s extremely hard to pass it,” says the senior risk modeller at the global investment bank.

The second test is a VAR backtest.

Although these tests are more rigorous, banks do still find fault with them, including possible perverse incentives. For example, the P&L attribution test is harder to pass for well-hedged desks purely by its design. The tests also don’t directly check the quality of the expected shortfall model used for capital.

Whichever route banks choose, the senior risk modeller says the model risk function has gained in importance and independence in recent years.

“Model risk management as a risk discipline has developed quite significantly during the past five to 10 years,” says the risk modeller.

The numbers at the IMA party may be dwindling, but the music is still playing – for now at least.

Editing by Alex Krohn
