Solving the FRTB puzzle

Sponsored FRTB forum: IHS Markit


The Basel Committee on Banking Supervision published the final rules at the start of last year, envisaging a start date of January 2019, but the standards still need to be transposed into national rule books. The European Union made a start on this in December, including the FRTB text in its second Capital Requirements Regulation. That proposal provides a three-year transition period for EU banks, during which capital requirements would be scaled down to 65% of the new total, and it leaves many questions unanswered. In Europe, the three-year period can be extended, something some members of the European Parliament have already opposed. Other jurisdictions may decide to follow the EU's lead, and banks in those countries will certainly urge them to do so. It is also not certain that the US will implement the framework at all under the current administration, which is perceived as being hostile to international regulation, and indeed to regulation in general.

Meanwhile, institutions are expected to proceed with implementation despite significant concerns about the functioning of some of the regime’s key components, particularly the profit-and-loss (P&L) attribution test, which acts as the gateway to the internal models approach (IMA), and the application of non-modellable risk factors (NMRFs) within the IMA.

 

THE PANEL

Andrew Aziz, Global Head of Markit Analytics, IHS Markit

Steven Jamieson, FRTB Programme Manager, Nomura International

John Mitchell, FRTB Lead, Credit Suisse

Risk: What are your FRTB implementation priorities for 2017 and what do you see as the biggest challenges?

John Mitchell, Credit Suisse: The priority for 2017 is building standard rules. We see some clarification and calibration issues with the sensitivity-based approach (SBA), but these are not really preventing us from working on the strategic build.

For the internal model, it’s about what to do and where to do it. The biggest challenge is caused by uncertainty around P&L attribution, the specifics of that test and how it will be calibrated. Banks are worried a substantial number of desks would be unlikely to pass the current test, so why build it if you do not think you can pass?

Steven Jamieson, Nomura International: The focus this year is going to be on building out our underlying risk infrastructure to enable us to support the delivery of FRTB, which introduces a lot of complexities, including new models we have to implement and additional data requirements. Fundamentally, it will require a more agile and flexible infrastructure to support that. At the same time, we are also focusing on the development of the standardised model – something we are quite comfortable with at this point – and delivering regulatory compliance as a result of that. There are still a number of challenges we have to deal with along the way, but on the implementation side we have a clear way forward.

 

Risk: Is this typical of what you’re seeing from your clients?

Andrew Aziz, IHS Markit

Andrew Aziz, IHS Markit: Yes. When FRTB first became a serious issue, the questions people had were about the overall capital impact – more high-level issues, such as what does it mean to support two environments, the standardised approach (SA) and the IMA – and a lot of emphasis on the 63 expected shortfall calculations and the associated compute requirements. Then the discussions evolved to governance and operational issues, such as the requirements of desk-level reporting and the performance tests. Recently there has been much more focus on the specific decisions that can materially impact overall capital, most notably around risk factors such as the trade-off between P&L attribution and NMRFs, and even backtesting.
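The expected shortfall runs behind those compute requirements are conceptually simple even if the sheer number of them is not. The sketch below is a minimal illustration, not the prescribed implementation: the function names are ours, and the liquidity-horizon aggregation follows the square-root-of-horizon combination in the Basel text.

```python
import numpy as np

def expected_shortfall(pnl, alpha=0.975):
    """Average loss beyond the alpha quantile (losses are negative P&L)."""
    losses = -np.asarray(pnl, dtype=float)
    threshold = np.quantile(losses, alpha)          # 97.5% VaR level
    return losses[losses >= threshold].mean()       # mean of the tail

def scaled_es(es_terms, horizons, base=10):
    """Combine partial ES terms across liquidity horizons.

    es_terms[j] is the ES with only risk factors of horizon >= horizons[j]
    shocked; terms are scaled by sqrt((LH_j - LH_{j-1}) / base) and combined
    in quadrature, as in the Basel formula (the first term is unscaled).
    """
    prev = [0] + list(horizons[:-1])
    return np.sqrt(sum(
        (es * np.sqrt((lh - lp) / base)) ** 2
        for es, lh, lp in zip(es_terms, horizons, prev)))
```

Multiplying a handful of risk classes by five liquidity horizons, and again by current, stressed and reduced risk-factor sets, is how the daily run count climbs towards 63.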

 

Risk: Are there any specific businesses or products where you are worried about ending up on the SBA?

Steven Jamieson: I would take more of a macro view and say that one of the things we’ve noticed within the standardised model is a degree of additional complexity, which we might not initially have expected. That poses some challenges from a systems perspective, a modelling perspective and for the methodology. We also see some unexpected economic behaviour, such as long gamma attracting additional curvature charges.

Andrew Aziz: Isn’t there still a perceived bias towards currency pairs denominated in US dollars that wasn’t addressed in the recent FAQ?

John Mitchell: It’s an issue where the most liquid currency pairs – which tend to be dollar crosses – get a lower liquidity horizon versus the others. This creates the bias and strange results from a foreign-exchange triangulation point of view.
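The scaling effect behind that bias can be shown numerically. This is a simplified sketch under stated assumptions: a 10-day liquidity horizon for pairs on the specified liquid list, 20 days otherwise, square-root-of-horizon ES scaling, and purely illustrative exposure numbers.

```python
import math

BASE_LH = 10  # base liquidity horizon in days

def horizon_scaled(es_10day, lh_days):
    """Scale a 10-day ES to a longer liquidity horizon (sqrt-of-time)."""
    return es_10day * math.sqrt(lh_days / BASE_LH)

# A cross outside the specified liquid-pair list carries a 20-day horizon
# when held directly...
direct = horizon_scaled(1.0, 20)       # ~1.41x the 10-day ES
# ...but an equivalent exposure synthesised from two liquid dollar pairs
# keeps the 10-day horizon on each leg.
synthetic = horizon_scaled(1.0, 10)    # 1.0x the 10-day ES
assert direct > synthetic              # the triangulation bias
```

The same economic position attracts different capital depending on whether it is booked as one non-dollar cross or as two dollar legs, which is the strange triangulation result described above.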

 

Risk: Looking at some of the IMA issues, one of the things the banks need to do – which is incredibly important in the new framework although it was immaterial in the old one – concerns the desk structure. That’s because model approval will be granted at the desk level. Where are you on this?

John Mitchell: We are working our way through it. We have gone through the rules, looked at things like Volcker and looked at other requirements on desk structures. We see quite a lot of overlap with regulations like this, but of course the scope is different. Maybe not for a large US bank, as presumably most of their business is covered by Volcker, but for other banks the Volcker rule will only apply to a subset of their risk. You need to look at the requirements for a different scope. FRTB also talks about a policy per desk, which might limit flexibility versus Volcker, because once you have set a desk’s trading mandate in a policy it is probably harder to change.

 

Risk: Would you end up with dozens of desks?

John Mitchell: We think it will be similar to Volcker, and studies showed somewhere between 50 and 100 was the sweet spot for most of the Tier 1 and Tier 2 investment banks. We are expecting to see similar numbers as a baseline. If you’re on standard rules, we do not think that would change a lot, but to get to internal models – that’s where it gets interesting. If you thought that half of what you do would pass P&L attribution on a desk and you don’t think the other part would pass these tests, it might lead to splitting the desk in two and restructuring – that will increase the desk count past the Volcker baseline.

Steven Jamieson: Nomura is at a slightly earlier stage. We have thought about it a lot and picked up ideas about changes we might need to implement as a result of our capital analysis. We haven’t been subject to Volcker, which puts us at both an advantage and a disadvantage in that we are not constrained by any existing regulations and requirements to conform to a similar structure, but at the same time we are very much starting from scratch. As to where exactly we will end up, that is something we are still trying to figure out.

 

Risk: With a view across the industry – are you seeing anybody with a very clear sense of how they are going to structure themselves at desk level?

Andrew Aziz: We are more at arm’s length from these discussions, but I have the sense that people are still thinking about it a lot. In some of our capital impact studies with clients, they have looked at the consequences of restructuring desks. But I can’t really say I have determined any trend one way or another.

 

John Mitchell, Credit Suisse

Risk: Regarding the uncertainty over deadlines in different jurisdictions, and the known deadlines for implementing FRTB at the national level, is it your expectation that other jurisdictions are going to follow the European Commission’s (EC) proposal and offer a three-year transition period, or do you think others will stick to 2019?

John Mitchell: There are going to be different approaches. January 2019 is when the rules have to be finalised in law according to the Basel text, and banks should start reporting by end-2019, which only leaves one year to implement post-finalisation. The EC’s proposal gives banks two years from when the rules are finalised to implement. We think the European approach is pragmatic. It makes sense to anchor to a fixed period from when the law is finalised to give banks time to code this up, test it and put it live.

 

Risk: Have you seen people delaying or postponing spending or building decisions as a result of uncertainty around implementation?

Andrew Aziz: It’s a bit of a mix; there is much uncertainty about what local regulators will do. I have seen some organisations pause their FRTB programmes, or at least vendor selection. Others say: “No, let’s just carry on. We’re down this road now, there are still issues we have to sort out, so we’re going to continue doing it.”

 

Risk: Is that specifically due to the uncertainty introduced by the EC?

Andrew Aziz: Exactly, and whether local regulators will follow suit.

 

Risk: What do you make of the P&L attribution test? Have you been able to look at pass/fail rates for desks under it?

Steven Jamieson: We have and the pass rate is low. While there are very few desks passing today, we have a fairly good understanding of why, and that has presented us with a substantial book of work. At its core, we are looking at some fundamental data inconsistencies; historically we’ve been mandated by regulations to have this independence between risk and the front office, which has led to the sourcing of different data and the construction of different models, but we’re now being asked to reverse that to some extent.

Andrew Aziz: We have completed a few studies with clients on P&L attribution and what it would take to pass. The results were fairly similar to what has been reported, in that differing snap time was a big issue, although that has been mitigated by the recent FAQ to some extent. We found that, even as you align the granularity of risk factors between the front- and middle-office models, you still have challenges trying to reconcile. We found reasonable pass rates for the mean-standard deviation test, but not very good pass rates at all for the variance-variance test, particularly when you have hedge positions. It would make sense to look at whether these tests are too onerous and how the variance-variance test could be restructured.
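The two test metrics themselves can be written down in a few lines. This sketch uses the thresholds from the January 2016 text (±10% for the mean test, 20% for the variance test); the function names are ours.

```python
import numpy as np

def pla_metrics(rtpl, hpl):
    """P&L attribution metrics for one month of daily P&L series.

    rtpl: risk-theoretical P&L from the desk's risk model
    hpl:  hypothetical P&L from front-office revaluation
    """
    rtpl = np.asarray(rtpl, dtype=float)
    hpl = np.asarray(hpl, dtype=float)
    unexplained = rtpl - hpl
    mean_ratio = unexplained.mean() / hpl.std(ddof=1)      # mean / std-dev test
    var_ratio = unexplained.var(ddof=1) / hpl.var(ddof=1)  # variance test
    return mean_ratio, var_ratio

def is_breach(mean_ratio, var_ratio):
    """January 2016 thresholds: |mean ratio| > 10% or variance ratio > 20%;
    four breaches within 12 months fails the desk out of the IMA."""
    return abs(mean_ratio) > 0.10 or var_ratio > 0.20
```

Note the denominators: a desk that hedges almost perfectly leaves hypothetical P&L with a very small standard deviation and variance, so even modest data mismatches between the two models can push the ratios through the thresholds.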

 

Risk: This brings us back to the tension between NMRFs and P&L attribution testing – if you have more granularity in the risk models, do you face the issue of higher NMRFs?

Andrew Aziz: Exactly. The focus of most of the capital impact studies with our client base has been to look at those trade-offs. The P&L attribution test pushes you towards more risk-factor granularity, while the NMRF criteria push you towards broader buckets – perhaps that is a desired tension. I’m not sure if that was considered, but it creates challenges for institutions to quickly assess the decisions they have to make, to determine the materiality of the impact and, ultimately, to demonstrate to regulators why those decisions were made.

 

Risk: If the P&L attribution test and the variance test are issues, let’s discuss the prospects for change. The rules are final, but there are national implementations to be passed, so there will be some further debate. What are your hopes?

John Mitchell: The door is slightly ajar at the Basel level, so we hope to go back and explain some of the remaining issues. The problems are solvable, but timing is another issue once we have something that is workable. Banks need clarity in terms of which P&Ls will be compared, exactly what hypothetical P&L is – whether it is the same thing as backtesting or if it’s a cleaner version – and clarity is needed around the definition of the risk-theoretical P&L. Once you know these, you need to select test metrics that actually give meaningful information and do not fail for some of the reasons that have been identified previously. It shouldn’t be the case that a desk fails to have internal model approval because it hedges too well, for instance.

Andrew Aziz: Perfect hedging is good. Almost perfect hedging is bad.

Steven Jamieson: Timing is one of the key issues. Given all the time in the world, we could solve these problems – but of course we don’t have that long. In addition, we hope there is some room for flexibility in the test thresholds themselves. One concern for us is the binary nature of the passes and fails.

John Mitchell: We ran some studies on this very issue. It’s very concerning because of the in-out switch and the instability in the test metrics – at any month-end any number of desks could lose internal model approval and move into standardised. The portfolio under IMA or SBA has lots of netting and diversification benefits within it at the group-wide level. If half of your desks move out of one model and into another, you have split one portfolio into two smaller ones, and that can have a very wide range of outcomes, some of which exceed standard-rules capital.
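That loss of netting can be shown with a toy example. This is purely illustrative, with made-up desk exposures, a single common market factor and a simple empirical expected shortfall.

```python
import numpy as np

def empirical_es(pnl, alpha=0.975):
    """Empirical expected shortfall: mean of the worst (1 - alpha) losses."""
    losses = np.sort(-np.asarray(pnl, dtype=float))
    return losses[int(alpha * len(losses)):].mean()

rng = np.random.default_rng(0)
shocks = rng.normal(size=10_000)   # one common market factor
desk_a = 100.0 * shocks            # long exposure
desk_b = -95.0 * shocks            # near-offsetting hedge desk

combined = empirical_es(desk_a + desk_b)             # netting retained
split = empirical_es(desk_a) + empirical_es(desk_b)  # capitalised separately
assert combined < split   # splitting the portfolio forfeits the netting benefit
```

The combined book nets to a small residual exposure, while capitalising the two desks separately sums two large stand-alone charges – the wide range of outcomes described above.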

 

Risk: On the subject of NMRFs, there’s an issue here of observability criteria – is it 24 observations for a particular factor over a 12-month period with no more than a one- or two-month gap?

John Mitchell: One month.

 

Risk: As an outsider, somebody who doesn’t trade, that doesn’t sound too bad. What is it like in practice?

Steven Jamieson, Nomura International

Steven Jamieson: From the experience we’ve had and the analysis carried out, the 24 observations are generally not much of an issue, but it’s the one-month gap that often catches us out. In specific jurisdictions we are finding seasonality, where liquidity goes down quite significantly at certain times of year, and going forward there might effectively be an obligation to maintain observations through these down periods.

Andrew Aziz: We have seen exactly the same seasonality effect in some of the analysis and surveys we have carried out. We have also seen the benefits of data pooling initiatives, potentially providing as much as 30–40% capital benefit by reducing the number of risk factors that are classified as non-modellable.
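The observability criterion is straightforward to encode. The sketch below assumes a 365-day window and takes 31 days as a working definition of "one month" – both our assumptions – and treats a long silent stretch at either end of the window as a gap.

```python
from datetime import date, timedelta

def is_modellable(obs_dates, as_of, max_gap_days=31):
    """Real-price observability sketch: at least 24 observations over the
    preceding 12 months, with no gap longer than max_gap_days."""
    start = as_of - timedelta(days=365)
    obs = sorted(d for d in obs_dates if start <= d <= as_of)
    if len(obs) < 24:
        return False
    # bracket with the window edges so a silent stretch at either end also fails
    points = [start] + obs + [as_of]
    return all((b - a).days <= max_gap_days
               for a, b in zip(points, points[1:]))
```

A seasonal market that goes quiet for two or three months fails on the gap condition even when it comfortably clears 24 observations over the year – the effect described above.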

 

Risk: The finding for the IMA was a 1.5-times increase in capital, so for foreign exchange under the IMA, do things look okay, as long as you can get a reading?

John Mitchell: Yes, generally speaking SBA is much higher than IMA. The regulators seem to be basing their commentary on a presumption that most desks get internal model approval; for instance, you mentioned a 50% increase for IMA – the same study has a 140% increase for SBA across the board. It just shows the absolute importance of banks maintaining their IMA approvals and, to achieve that, we need resolution of these issues with NMRFs and P&L attribution. Then you can at least move towards solving the FRTB puzzle.

Andrew Aziz: With a couple of pieces still missing.

 

Risk: What are the benefits of going through all this work?

Andrew Aziz: There is a dichotomy in the industry in how firms approach this. It can be viewed as an opportunity to do something transformational in terms of risk architectures, around aligning the front and middle offices, putting a better data infrastructure in place, evolving to newer technologies and, thus, better managing risk overall. However, while some may take this approach, others take the view that we do not know what is really going to happen, so how far can we stretch what we already have in place today? I think the reality may be a blend of both, transforming yet working around the existing ecosystem.

 

Risk: You may not have chosen to do this work but do you think it’s going to leave you in a better place?

Steven Jamieson: Absolutely, and some of the work we have already scoped out as a requirement for FRTB is part of our risk strategy. We have intended for some time to build out our capability to create better systems for risk management, so this certainly supports that delivery. The challenge we have is the timing and the obligation to deliver complex new models at the same time as we are delivering some fundamental infrastructure changes. Doing everything at the same time is going to be very difficult.

Andrew Aziz: To some extent, the way the regulations have evolved and the way technology has advanced mean there is now more of an opportunity to offload some of the complexity of this effort, whether through pooling initiatives, big data technology or leveraging more standardised solutions and utilities. For the most part, the analytics required to comply with the regulation are not really IP that people need to hold onto tightly, so firms do not need to build everything themselves. Today, there is a real opportunity to outsource much more.

 

Risk: Is there consensus that going live in 2020 is not achievable? Obviously it depends on whether you are going for the SBA or the IMA, but are you committed to going IMA as far as possible? If you end up getting 20% of your desks onto it, for example, would you still bother?

John Mitchell: We will go live with FRTB when we are told to go live, and the key point is how much IMA we have when we do. SBA build-out is already in flight, so the worst-case scenario is that you go live at the end of 2019 or 2020 with only standard rules in place. The question then is how much IMA you can maintain given budgets and time frames versus the infrastructure upgrades needed to pass P&L attribution. For an end-2019 or end-2020 date with the uncertainty in the rules we currently have, I think you would see a large amount of internal model desks moving to the SBA across the industry.

 

Risk: Are you committed to IMA at all costs?

Steven Jamieson: Our book is weighted more towards the trading book than the banking book, so we have more of an incentive to go with an internal model. We are committed to delivering regulatory compliance on time according to the Basel deadline, but how much we can put on to IMA at that point is still an open question.

 

Watch the full webinar proceedings, Solving the FRTB puzzle
