The P&L attribution mess

FRTB model approval regime dogged by confusion and controversy


  • Model approval will be granted at the level of individual trading desks when the Fundamental review of the trading book (FRTB) comes into force in 2019.
  • That requires each desk to show its risk models closely track the desk’s actual performance – via backtesting, and the so-called profit-and-loss (P&L) attribution test.
  • When the FRTB was finalised in January, it gave conflicting instructions for the P&L attribution test. One would be harder to pass and require more systems changes, banks claim.
  • In a March conference call, banks were told to use the hardline version; their protests resulted in an April meeting with regulators in London, and at least one more call took place in June.
  • An FAQ document – tentatively slated for September – will provide the definitive answer. A regulatory source expects it to insist on the tougher version of the test.
  • Banks are primarily worried about whether they can align the disparate models and systems that regulators want to compare. One current focus is on valuation adjustments.

At first, it looked like a simple mistake. When the Basel Committee on Banking Supervision published new trading book capital rules in January, after five years of work, a crucial test was described in two different, contradictory ways. The strong version of the test – which the industry had been lobbying against – was replaced with a weaker version, but the change had not been copied across to the three-page glossary at the end of the document.

Bankers assumed there was a mundane explanation.

"My impression was they must have forgotten. They did some work on the main text, but didn't get round to changing the glossary," says the head of quantitative analytics at one dealer.

Compliance efforts can't be founded on impressions and guesses, of course, and the profit-and-loss (P&L) attribution test is a big deal. It has to be conducted for each of a bank's trading desks, with failure barring the desk from modelling its own capital requirements. Instead, the desk would have to apply a standardised approach, which generates a capital bill anywhere from 2 to 6.2 times higher than current levels; the revised internal models approach (IMA) would see a smaller 1.5-times jump, according to industry estimates.

So, at least one bank asked its regulator for clarification. At that point, the apparent mistake became a very real mystery – one that still hangs over the Fundamental review of the trading book (FRTB) today, six months after the rules were supposedly finalised.

A conference call was organised in March between the industry and the regulators who drew up the rules, the Basel Committee's trading book group (TBG). Banks were told – "to our horror", according to one participant – that the hardline definition in the glossary was correct.

"We all felt a bit ambushed by that. And misguided," says the head of risk analytics at a second dealer.

It felt like an ambush to some banks because, in the weeks leading up to the publication of the rules, they had been given a steer by contacts within their domestic regulators that the watered-down test had prevailed. After publication, messages from regulators were mixed.

"Different people heard different things from their regulators: 'Yes, they had meant to change the text; no, they had not meant to change the text'. So a call was arranged, but it wasn't the world's most productive meeting because on the banks' side, we entered the meeting hoping regulators had meant to make this change, and we found out they had not," says a market risk manager at a third bank.

That surprise set tongues wagging. Many in the industry see the mix-up as evidence the TBG itself is split, while others paint it as the result of industry lobbying gone awry (see box: Who made the change?).

The Basel Committee secretariat did not respond to a request for comment.

The March conference call was followed by a face-to-face meeting in London on April 12, where the industry made its case in detail – and at least one more call between banks and regulators happened in late June. The list of grievances is long: banks have already complained the test would be impossible to pass for hedged portfolios, for example.

There are a lot of reasons why front-office and risk numbers might diverge. That is the real concern – that you might fail the test because of reasons that have nothing to do with the quality of your modelling
Bank head of risk analytics

But all of this has taken place out of the public eye, between a relatively small group of insiders – one attendee estimates 60 to 80 people attended the April meeting, split evenly between regulators and bankers.

The TBG is now working on an FAQ document that will provide a definitive answer, but one industry source and one regulatory source with knowledge of the process expect the strong version of the test to prevail – possibly with some kind of concessions on timing, and more forgiving thresholds for what would count as a failing grade.

"The version in the glossary is currently the way this is going. It hasn't been fully and formally confirmed yet, but I believe we're in the process of going that way," says the regulatory source.

Publication of the FAQs is expected in September, but as conversations between industry and regulators have continued over the past six months, the simple question about which version of the test to apply has been joined by more complicated ones about its scope. The regulatory source describes this work as "messy" and says there is currently no obvious solution.

"Am I surprised we're still talking about this? Yes. We knew there were some open areas, but this language had been in the text for a long time, and some of the discrepancies we're now looking at were first mentioned to us in April. You can't necessarily forecast that kind of thing," the source says.

The answers will determine how easy it is to pass the test, with billions of dollars of capital – and potentially years of IT and model development work – riding on the outcome.

Right way, wrong way

In outline, the test requires two different measures of P&L to be compared: hypothetical and risk-theoretical. Both reflect the profit or loss generated by revaluing yesterday's portfolio using today's end-of-day prices. The measures are then compared in two different ways, looking at the gap between the two, and the variance of that gap: too big a gap, or too unpredictable a gap, and a breach is counted. Four breaches within any 12-month period will force a desk on to the standardised capital approach.
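The mechanics can be sketched in a few lines of code. This is a minimal illustration, not a compliance implementation: the function names are invented, and the thresholds follow the ratio tests in the final January 2016 text – mean unexplained P&L within ±10% of the standard deviation of hypothetical P&L, and the variance of unexplained P&L within 20% of the variance of hypothetical P&L – which the FAQ process may yet soften.

```python
import statistics

def attribution_breach(hypo, theo, mean_limit=0.10, var_limit=0.20):
    """Return True if a month's P&L attribution test is breached.

    hypo: daily hypothetical P&L series for the desk.
    theo: daily risk-theoretical P&L series for the desk.
    Thresholds are the ratio limits in the final FRTB text; the FAQ
    process may make them more forgiving.
    """
    unexplained = [t - h for t, h in zip(theo, hypo)]
    mean_ratio = statistics.mean(unexplained) / statistics.stdev(hypo)
    var_ratio = statistics.variance(unexplained) / statistics.variance(hypo)
    # A breach is either too big a gap (mean) or too unpredictable a gap
    # (variance) between the two P&L measures.
    return abs(mean_ratio) > mean_limit or var_ratio > var_limit

def loses_model_approval(monthly_breaches, max_breaches=4):
    """Four breaches within any rolling 12-month window force the desk
    on to the standardised capital approach."""
    n = len(monthly_breaches)
    return any(sum(monthly_breaches[i:i + 12]) >= max_breaches
               for i in range(max(1, n - 11)))
```

The key point the industry is arguing over is what goes into `hypo` and `theo` – not the arithmetic itself, which is simple.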

In both the strong and weak versions of the test, hypothetical P&L is calculated by the bank's front-office pricing models, which contain more risk factors and are generally more precise. The difference between the two versions lies in how the risk-theoretical P&L is calculated: the strong version tells a bank to use its risk models, while the weak version requires a bank to use the front-office model, applying only the more limited set of risk factors that exist in the risk models.

FRTB chart

To put it another way, the strong version involves comparing the outputs of two different models, each using their own sets of inputs, while the weak version requires those inputs to be run through a single model. Either way, the test is supposed to reveal whether modelled capital accurately reflects the factors that drive P&L for a desk.
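The distinction can be made concrete with toy pricers. Everything here is a hypothetical stand-in – `front_office_price`, `risk_model_price` and the factor names are invented for illustration and do not come from the rules text – but the structural difference between the two versions is faithful: the strong version runs a second model, the weak version runs the same model on fewer inputs.

```python
def front_office_price(factors):
    # Toy front-office pricer: richer factor set (here, rate plus basis).
    return factors.get("rate", 0.0) * 100 + factors.get("basis", 0.0) * 10

def risk_model_price(factors):
    # Toy risk-model pricer: coarser model approximating the same payoff.
    return factors.get("rate", 0.0) * 98

def hypothetical_pnl(today, yesterday):
    # Revalue yesterday's portfolio at today's prices, front-office model.
    return front_office_price(today) - front_office_price(yesterday)

def strong_rtpl(today, yesterday, risk_factors):
    # Strong (glossary) version: the risk model prices its own factor set.
    sub = lambda f: {k: f[k] for k in risk_factors if k in f}
    return risk_model_price(sub(today)) - risk_model_price(sub(yesterday))

def weak_rtpl(today, yesterday, risk_factors):
    # Weak (main-text) version: the front-office model, restricted to the
    # risk model's more limited factor set.
    sub = lambda f: {k: f[k] for k in risk_factors if k in f}
    return front_office_price(sub(today)) - front_office_price(sub(yesterday))
```

In the strong version, divergence can come both from missing factors and from differences between the two models; in the weak version, only from the missing factors – which is why banks argue it is the cleaner test of risk-factor coverage.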

Whether the strong version really is the tougher of the two depends on who you ask. Smaller banks have largely been absent from the FRTB debate, and may be using off-the-shelf pricing models in which the set of risk factor inputs is hard to modify, says Jonathan Berryman, senior vice president for risk strategy at software vendor FIS in London: "In a front-office system today, nobody envisaged a need for flags you could switch off to remove a certain set of risk factors. You wouldn't think of that in advance. You want pricing models that produce the most accurate number possible."

Some large banks also see the weak version of the test as the most challenging: "If you are a bank with one system for your front-office valuations and another for the risk, the glossary tells you to use them as they are – it's then a matter of aligning the underlying data as much as possible. But the body of the text says you somehow have to do the risk-theoretical calculation using the front-office system. I think that requires quite a bit of change. I might be in the minority, though," says the quantitative analytics head at the first bank.

He is correct: the five other large banks that spoke to Risk.net for this article prefer the weak version of the test.

"There are a lot of reasons why front-office and risk numbers might diverge. That is the real concern – that you might fail the test because of reasons that have nothing to do with the quality of your modelling. And to pass by design, you would really need to bring everything together. You might be modelling in a certain way because you think it's the most prudent thing to do – well, that doesn't matter, because you have to converge with what the front office is doing. It may not be correct, but at least you can pass the test," says the second bank's risk analytics head.

One potential source of divergence is timing: a big bank will typically calculate a global risk number once a day, while trading desks will calculate P&L at the end of the day in each region. The P&L calculated at the end of the Asian trading day will therefore be different from the risk number calculated at the end of the US day, critics of the test claim.

Another problem is the data that sits behind the numbers, dealers add. Because the risk and front-office systems are separate, they may use data from different sources – another potential cause of divergence.

These problems could be addressed by applying the weak version of the test, bankers argue. If the two P&L measures are calculated in the same system, using two sets of risk factors, there would be no need to try to align models that currently run separately and have separate priorities and uses.

In fact, depending on the scope of hypothetical P&L, the work required of the industry could be even greater, says the third bank's market risk manager. One of the concerns aired at the April meeting – and later fleshed out in a call between regulators and industry in late June – is whether hypothetical P&L should also include valuation adjustments that are in some cases calculated by product controllers, potentially dragging in a third system.

"Why would anyone explain front-office P&L with their capital model? No-one does that, so it means you've got to start building some really complicated system that aligns the front-office numbers – and even the adjustments placed on top by product control – with the capital model. And these things typically belong to three different departments, live in three different systems, with different people looking after them – and while they need to align for backtesting today, they don't need to align perfectly," says the market risk manager.

He adds: "There are huge questions here about which of these systems you even build the infrastructure in. Is it the case that we need better capabilities to explain P&L, coupled with a small amount of capital model improvement? Or do we need a completely new infrastructure, front-to-back, that aligns all these different quantities that have never had to align before? That's why the industry is flapping about this. Those two or three words that vary between the strong and weak versions of the test make a huge difference."

The TBG has got the message, but will not be bounced into acting, says the regulatory source: "It's a chicken-and-egg situation at this point. The industry wants the test to be watered down before they make the necessary investment in overhauling their systems. So we need to understand that – and the question is do we really have good data on the impact, and can we expect to get good data? There is a possibility we may look at a staggered approach, so maybe there is initially a more forgiving threshold than the one in the rules text."

Time to adjust

Valuation adjustments are add-ons that might not be included in either the risk or front-office systems. They are managed by different groups and appear at different stages in the life of a trade. Examples include independent price verification (IPV) – tweaks made to trade valuations by product controllers after checking third-party pricing sources – as well as concepts such as funding and capital valuation adjustment, which are handled differently across the industry, and the prudent valuation adjustment required of European banks as a reflection of pricing uncertainty.

"These adjustments have their own sets of controls and criteria," says the second bank's risk analytics head. "The adjustment might be made on a monthly basis, to ensure valuations are correct in month-end books and records. But if you are calculating risk on a daily basis, then the two sets of values may start to diverge, and it's not clear how the risk calculation could capture a valuation adjustment that is not part of the risk management process."

To illustrate the challenges, the industry spent time after the April meeting surveying banks on which adjustments they apply, and whether they are currently included in P&L forecasts. The survey found around half of the participating banks incorporated IPV, for example, and that European banks tend to make IPV updates more frequently than US banks. Results were discussed in late June with the TBG and are now being considered as part of the FAQ work.

Excluding valuation adjustments from hypothetical P&L would essentially leave it as a pure measure of market risk, which is how the industry would like to treat it. The regulatory source says that "may be going a bit far. The problem is that actual P&L includes all of these things, so if that is the only place you see these adjustments, then you're less likely to get a backtesting breach. Ever."

The industry wants the test to be watered down before they make the necessary investment in overhauling their systems. So we need to understand that – and the question is do we really have good data on the impact, and can we expect to get good data?
Regulatory source

There is a sort of precedent, the industry source points out, in the form of backtesting rules in the US, which require the comparison of 250 days of actual trading data with the corresponding market risk measures on each day "excluding fees, commissions, reserves, net interest income and intraday trading". Valuation adjustments would be included under 'reserves', he claims. The TBG is co-chaired by one regulator from the Board of Governors of the Federal Reserve System – Norah Barger – and a second from the Banque de France, Philippe Durand.

Ultimately, regulators have a few options. They could rule that all adjustments should be included; or that all should be excluded. A third response would be to select adjustments that should be in scope: "There is a spectrum of possibilities and one of those is to roll up our sleeves and pull out the particular adjustments we want included," says the regulatory source.

The decision has implications for the IT work banks will need to do. The message in the June meeting was that banks did not like the idea of including adjustments in hypothetical P&L, but if they were forced to "there's no way they could get it done by 2020. So, it's possible, but it would take more time," says the regulatory source.

The FRTB text calls for national regulators to finalise their own versions of the regime by January 2019, with banks due to start reporting under the new rules in December that year.

Once the TBG has made its decisions on how the P&L attribution test should work, it will need approval from a separate oversight body – the Basel Committee's policy development group – says the regulatory source.

That could delay attempts to clearly define the test. The FAQ document was initially expected to be published in August. Sources on both sides of the debate now say the document is more likely to appear in September – and the regulatory source predicts one FAQ will not be enough.

"It will probably have to be an evergreen document, refreshed every six months or so," he says.

Who made the change?

Once regulators have taken an official position on the profit-and-loss (P&L) attribution test, life will continue: banks will have a rule to follow, technology changes to make, capital implications to calculate.

But the final text of the Fundamental review of the trading book (FRTB) will still contain the contradiction that triggered months of controversy and confusion.

In trying to understand what happened, both banks and regulators make the point that the glossary definition – the hardline version of the test – is in line with instructions given to the industry in July 2015, when carrying out the last impact study before the text was finalised (see below: What regulators have said so far). The appendix was a departure from what had been the official line, but given it sat within the nine-page section describing the testing regime, the change in language appeared to indicate a change of mind.

When regulators on the trading book group (TBG) denied that during a conference call in March – insisting the glossary was the correct version – it answered one question, but raised another. Where had the amended text come from?

One possible answer is the industry itself. With regulators in a rush to get the rules out, a draft copy of the FRTB text was sent to a working group of big banks convened and organised by the International Swaps and Derivatives Association, with instructions to review the language and suggest changes where necessary. One regulator with knowledge of the process says this step was taken in lieu of a fresh round of consultation: "We were highly reluctant to go back and ask the Basel Committee for another consultation. Any time you open something up for consultation, it's tricky in itself, so the choice was taken not to open it up."

The industry group ran through the text, marked changes and sent it back to regulators, who then reviewed the industry's wish list. During this process, then, suggested changes to the workings of the P&L attribution test may have been inadvertently accepted into the final text.

Two other bits of information seem to argue against that. First, a source on the industry working group says the wording submitted to the regulators was far more complex than the final text: "When I read the final description of the P&L attribution test, I thought it was brilliant – they managed to achieve so much more than we did by only changing two or three words. I was impressed with the way they edited it."

Second, some banks claim to have been given advance notice from their domestic regulator that the softer version of the text would appear in the final version of the FRTB.

If true, it suggests the change to the definition of risk-theoretical P&L was not inadvertent; it also suggests the change was not approved by the TBG as a whole.

This is where the trail runs cold. Two bankers claim the editing and approval of the text was done in cloak-and-dagger fashion by a regulator who was sympathetic to the industry's complaints about the test. "You can't do that. This text has been the product of very heavy negotiation to get to this point. You can't just come in and make changes without consultation," says the global head of market risk at one European bank.

The regulator with knowledge of the process speculates the change might have been made by someone not on the TBG itself: changes to the text were handled by a variety of different TBG members and then aggregated and finalised by the secretariat of the Basel Committee on Banking Supervision. "It could have happened above our heads," he says.

Either way, it indicates a difference of opinion within the group. Five industry sources agree regulators were split earlier this year on how the P&L attribution test should work; the question is whether those splits still exist, and which camp will prevail. "At the moment, they are at pains to resolve those issues themselves and it's why, I think, they are not commenting on it in meetings. They are in lockdown until they have a common view," says the industry working group source.

What regulators have said so far

July 2015 impact study instruction, page 109:
The calculation of the risk-theoretical P&L should be based on the pricing models embedded in the firm's ES [expected shortfall] model and not front-office pricing systems.

Final FRTB text, appendix B, page 71:
This 'risk-theoretical' P&L is the P&L that would be produced by the bank's pricing models for the desk if they only included the risk factors used in the risk management model.

Final FRTB text, glossary, page 87:
Risk-theoretical P&L: The daily desk-level P&L that is predicted by the risk management model conditional on a realisation of all relevant risk factors that enter the model.
