Basel group shake-up has banks hoping for FRTB changes

Barger and Durand replaced by BoE's Nesbitt; banks want fresh look at P&L test

The Bank for International Settlements, Basel: banks hope the new Basel Committee group will be more "open-minded" (Photo: Ulrich Roth)

Banks are hoping to get a fresh hearing for their complaints about new market risk standards, after a change of leadership at the regulatory group that drew up the regime.

Top of the industry’s wish list for the new rules, known as the Fundamental Review of the Trading Book (FRTB), is a review of the profit-and-loss (P&L) attribution test that acts as a gateway to the use of internal models. Banks spent much of last year arguing the test was badly designed and would prove almost impossible to pass, leaving many trading desks stuck on the tougher standardised approach to regulatory capital.

A long-awaited FAQ document published by regulators in January addressed few of the industry’s concerns – but dealers hope the change of leadership on Basel’s market risk policy body could see them reappraised.

“The change of leadership has created a slight wind of optimism that maybe we will have an opportunity to discuss the practical issues with the rules and hopefully lead to us being able to find a better place for them – somewhere where we’re clearer on what the rules are and where they can actually be implemented,” says a source at one European bank, who has been involved in negotiations with regulators over the regime.

Four industry sources confirm the Basel Committee on Banking Supervision’s trading book group, which drafted the rules, has been merged with its sub-group, which focused on implementation. The new chair is Derek Nesbitt, head of market and counterparty risk policy at the Bank of England (BoE). He replaces co-chairs Norah Barger of the Federal Reserve and Philippe Durand of the Banque de France. The merged group is called the market risk group (MRG).

The Basel Committee and BoE declined to comment. The Banque de France’s Durand did not respond to an email requesting comment.

Banks have welcomed Nesbitt’s appointment: “The lead is now the BoE, which should be positive in terms of openness to negotiate with the Street,” says a market risk specialist at a second European bank.

A risk policy source at a third bank describes Nesbitt as “a pragmatist”, adding: “In our discussions with him, he’s been less dogmatic, less set in his views. I hope the MRG will be more open-minded under his chairmanship and will try to find a solution that works.”

A first meeting between banks and the MRG is scheduled for late March, sources say.

For banks, the stakes are high. Industry studies have found the FRTB’s standardised approach would produce a 2.4-times jump in capital, relative to current numbers – with the increase far higher for certain asset classes, such as foreign exchange, where standardised capital would leap 6.2 times. Under the internal models approach (IMA), capital would be 1.5 times higher than current levels.

Concerns over P&L attribution test

Criticisms of the P&L attribution test were a feature of discussions between regulators and banks after the FRTB was finalised in January last year. Any trading desk that wants to use the IMA to calculate its capital has to pass the two-part test, as well as a daily backtesting regime. The aim is to ensure a bank’s risk models closely track the actual performance of its trading business, and can therefore be trusted as the basis for regulatory capital numbers.

The P&L attribution test does this by asking banks to compare two measures of a desk’s daily performance – the hypothetical P&L and the risk-theoretical P&L, with the former generated by a bank’s front-office pricing models and the latter generated by its risk models.

Even at this level, the test has caused some confusion. While the glossary to the final rules describes the test as a comparison between the outputs of these two different models, the appendix – where the test is described in full – defines risk-theoretical P&L as the output of a bank’s front-office models when fed only the more limited set of risk factors captured in its risk model. In other words, two different sets of inputs are fed into a single model, and the outputs compared.

Big banks believe this would be easier to pass, and have come to know the appendix definition as the ‘risk factor coverage’ approach. One of the industry’s hopes last year was that this approach would be officially endorsed in the FAQ document. In the end, the FAQs were silent on the point, and the European Commission’s draft version of the FRTB text, published in late November, went the opposite way – telling banks to produce risk-theoretical P&L with their risk models.

Understandably, the industry’s hopes of Basel-level backing for the easier approach have subsequently dwindled: “It would be challenging to go back to risk factor coverage now. I think that’s lost,” says the third bank’s risk source.

Some are hopeful that more technical concerns around the workings of the tougher of the two forms of P&L attribution test could now be reviewed, however. The second test takes the variance of the so-called unexplained P&L – the difference between hypothetical and risk-theoretical measures – and divides it by the variance of the hypothetical P&L. If the ratio of the two exceeds 20%, the desk suffers a breach; four breaches in a 12-month period will cause the bank to lose IMA approval.

In practice, banks say, this means a desk will only pass the test if its front-office and risk estimates of desk-level P&L are very close.
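For readers who want the mechanics spelled out, the test boils down to a ratio of variances. The following is a minimal Python sketch of that calculation, not the regulatory specification: the monthly bucketing, function names and use of population variance are our own assumptions, while the 20% threshold and four-breach limit come from the rules as described above.

```python
# Illustrative sketch of the variance-based P&L attribution test described above.
# The 20% threshold and four-breach limit are taken from the text; the monthly
# bucketing and variance estimator are assumptions made for illustration.
import numpy as np

def variance_ratio(hypothetical_pnl: np.ndarray, risk_theoretical_pnl: np.ndarray) -> float:
    """Variance of unexplained P&L divided by variance of hypothetical P&L."""
    unexplained = hypothetical_pnl - risk_theoretical_pnl
    return np.var(unexplained) / np.var(hypothetical_pnl)

def count_breaches(hyp_by_month: list[np.ndarray], rtpl_by_month: list[np.ndarray],
                   threshold: float = 0.20) -> int:
    """Count months in which the variance ratio exceeds the 20% threshold.
    Four breaches in a 12-month period would cost the desk IMA approval."""
    return sum(variance_ratio(h, r) > threshold
               for h, r in zip(hyp_by_month, rtpl_by_month))
```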

Over-sensitivity

Exactly how close is shown in recent analysis by the market risk team at Intesa Sanpaolo, which simulated a 1,000-year time series of the two P&L numbers in order to find the implied level of correlation required for a desk to pass the variance test. The analysis found that a desk’s chances of success are practically zero at less than 90% correlation. Once it reaches 90%, a desk would suffer 8.9 breaches a year, on average, but might suffer as few as one. It was only at a correlation of 97% that the average number of breaches dropped below the failure threshold of four.

The work is referenced by the first European bank source: “The 97% correlation level isn’t actually the test metric, but everyone knows what it means – it’s an easier way to convey just how hard it is to pass the test.”
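The broad shape of that finding can be reproduced with a rough Monte Carlo exercise. The sketch below simulates jointly normal daily hypothetical and risk-theoretical P&L at a chosen correlation and counts monthly variance-ratio breaches; the distributional assumptions, the 21-day window and the other parameters are ours, not Intesa Sanpaolo’s, so the output should only be expected to echo the qualitative pattern – near-certain failure below 90% correlation, comfortable passes only in the high 90s.

```python
# Rough Monte Carlo sketch in the spirit of the analysis described above: simulate
# correlated daily hypothetical and risk-theoretical P&L, then count how often the
# monthly variance ratio breaches 20%. The joint normality, unit volatility and
# ~21 trading days per month are assumptions for illustration only.
import numpy as np

def expected_breaches_per_year(correlation: float, years: int = 1000,
                               days_per_month: int = 21, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    months = years * 12
    cov = [[1.0, correlation], [correlation, 1.0]]
    breaches = 0
    for _ in range(months):
        hyp, rtpl = rng.multivariate_normal([0.0, 0.0], cov, size=days_per_month).T
        unexplained = hyp - rtpl
        if np.var(unexplained) / np.var(hyp) > 0.20:
            breaches += 1
    return 12 * breaches / months  # average breaches per 12-month period

# e.g. compare expected_breaches_per_year(0.90) with expected_breaches_per_year(0.97)
```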

Because of the test’s sensitivity, banks have argued they might fail because of differences in the time at which the two P&L numbers are calculated – risk figures tend to be calculated globally once per day, while front-office P&L is calculated at the end of each region’s trading day.

Another worry is that differences in the underlying data used by the two models could also cause a desk to fail. Alignment of data and systems between the front office and risk is rare, banks claim, so the practical consequence of the P&L attribution test is that the industry will have to spend a lot of time and money making the two match up.

“To get to that level of correlation, it’s less a question of how well risk is being measured and managed, and simply becomes a huge alignment exercise, because you need to have the exact same data, the same sources of data, the same timings. Everything has to be perfectly aligned and any small operational differences that have been OK in the past are likely to cause you to fail, just because it is so sensitive to small deviations,” says the first European bank source.

The design of the test means it is also more difficult to pass if a desk’s portfolio consists of offsetting positions – as would be typical for a market-maker. Variation in the P&L on each side of the hedge is captured and magnified by the variance test, producing ratios far in excess of 20%. In one simplified industry test, shared with regulators last year, a portfolio made up of bought and sold call options produced variance ratios of up to 1,025.4%.
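A toy calculation shows why hedged books fare so badly. In the sketch below, a long call is 99% offset by a short call, leaving only a small net hypothetical P&L, while a modest mismatch between front-office and risk models produces unexplained P&L of a similar or larger size. The numbers are invented purely to illustrate the effect seen in the industry example above.

```python
# Toy illustration of why offsetting positions inflate the variance ratio.
# All figures are hypothetical: a long and a short call that almost perfectly
# hedge each other, plus a small front-office/risk model mismatch.
import numpy as np

rng = np.random.default_rng(1)
days = 250
leg_pnl = rng.normal(0.0, 100.0, size=days)      # P&L on the long call
hyp = leg_pnl - 0.99 * leg_pnl                   # short call offsets 99% of it
model_noise = rng.normal(0.0, 2.0, size=days)    # small front-office/risk mismatch
rtpl = hyp + model_noise

ratio = np.var(hyp - rtpl) / np.var(hyp)
print(f"variance ratio: {ratio:.1%}")            # far above the 20% threshold
```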

The third bank’s risk policy source says these problems are recognised by the MRG, and claims the industry has been told a review will take place. The regulators’ options, though, are limited.

“They are trying to get a closer match between risk models and front-office numbers. Intuitively, that makes a whole lot of sense – it is laudable – I think everyone agrees on that. But how much can they change in the confines of a rule that is written in black and white? We hope they throw away the variance test, or put new ratios in there. They could go back to basics, but that kind of fundamental change would be embarrassing for them. I’m just hopeful we will see more flexibility introduced into the test,” says the source.

The Intesa Sanpaolo analysis suggests one option that would preserve the mechanics of the test. Switching to annual sampling of the variance ratio, rather than monthly, produces a higher chance of success, the bank found – a 70% success rate could be achieved with correlation in the low-90s, rather than the high-90s. 
