This article was paid for by a contributing third party.

The FRTB data management challenge

Sponsored forum: Asset Control

The Fundamental review of the trading book (FRTB) requires dramatic change to the banking industry's existing market risk management practices. One of the underlying challenges is the collection and management of market data and other information, especially for banks that want to use internal models. In a webinar convened by Risk and sponsored by Asset Control, a panel of market risk practitioners and data specialists examines how much work the industry has to do in the coming months, and offers advice on how to collate, proof and secure the integrity of risk factor data sets.

THE PANEL

Britta Achmann, Director, Corporate & Investment Banking Risk, RBS

Ed Duncan, Director, Risk, Barclays

Paul Burnett, Head of Traded Risk Analytics, Global Risk Analytics, HSBC

Martijn Groot, Vice President, Product Management, Asset Control

Moderator: Duncan Wood, editor-in-chief, Risk.net

After a struggle of some years, the FRTB was finalised in January. Now, the implementation race begins. The clue is in the name – it is a fundamental overhaul of the existing market risk capital rules, which were always intended to be a ‘quick fix’ for problems that emerged during the financial crisis. Unlike those rules, the FRTB is intended for the long term, so the industry will be living with this framework for some time to come. It overhauls the standardised approach to market risk, forcing big banks to calculate and report it for the first time, radically alters the way that modelling approval is granted and policed, replaces value-at-risk (VAR) with expected shortfall (ES) as the standard risk measure, redefines the boundary between banking and trading books, and affects many other areas that require data management solutions.
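
To make the headline change from VAR to ES concrete, below is a minimal sketch of the two measures computed on the same historical profit-and-loss series, at the 97.5% confidence level the FRTB prescribes for ES. It is illustrative only: the actual ES charge also involves stressed calibration and liquidity-horizon scaling, which the sketch omits, and the tail convention used here (including the quantile point in the average) is one of several in use.

```python
import numpy as np

def var_and_es(pnl, alpha=0.975):
    """Historical-simulation VAR and expected shortfall (ES).

    pnl: daily profit-and-loss observations (positive = gain).
    Returns both measures as positive loss amounts; ES averages the
    losses at and beyond the alpha quantile, so ES >= VAR always.
    """
    losses = np.sort(-np.asarray(pnl))      # ascending losses
    k = int(np.ceil(alpha * len(losses)))   # index of the alpha quantile
    return losses[k - 1], losses[k - 1:].mean()

# Toy example on 500 simulated trading days
rng = np.random.default_rng(0)
var, es = var_and_es(rng.normal(0.0, 1.0, 500))
print(f"97.5% VAR: {var:.2f}, 97.5% ES: {es:.2f}")
```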

Risk: What are the biggest changes in terms of the data requirements being introduced by the FRTB?

Paul Burnett: I would start with the non-modellable risk factors (NMRFs) concept, which has been introduced to address a concern that models have been used where there is either insufficient data or data with inadequate quality. The NMRF looks to address that concern by putting down some requirements around the sufficiency of the data and the quality criteria, most notably that firms need to demonstrate that the data going into the models is real and derived from actual transactions. That will introduce a big step change in terms of how firms are looking at the data that is currently going into their modelled frameworks. And we're not just working with the internal models approach (IMA) – we'll also be working with an upgraded and enhanced standardised approach. With the standardised approach, the structure of the data – in particular, sensitivities – is defined and determined by the regulators. So there's a question mark in terms of what more a firm needs to do to enhance the scope and form of the sensitivities it currently produces. Finally, with approvals being granted at desk rather than entity level, there is likely to be a material increase in the amount of data being produced, analysed, validated and reported.
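
To illustrate the test Burnett describes, here is a minimal sketch of the real-price count check. It encodes the headline criterion from the January 2016 text – at least 24 real price observations over the preceding 12 months, with no more than one month between consecutive observations – approximating 'one month' as 31 days. What qualifies as a 'real' price (own transactions, arm's-length transactions, committed quotes) is a separate test the sketch assumes has already been applied.

```python
from datetime import date, timedelta

def is_modellable(observation_dates, as_of):
    """Headline FRTB real-price test: >= 24 observations in the trailing
    12 months and no gap longer than one month between consecutive
    observations ('one month' approximated here as 31 days)."""
    window_start = as_of - timedelta(days=365)
    obs = sorted(d for d in observation_dates if window_start <= d <= as_of)
    if len(obs) < 24:
        return False
    return all((b - a).days <= 31 for a, b in zip(obs, obs[1:]))

# Example: 26 roughly fortnightly observations pass the check.
dates = [date(2016, 1, 4) + timedelta(days=14 * i) for i in range(26)]
print(is_modellable(dates, as_of=date(2016, 12, 30)))  # True
```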

 

Risk: What is at stake for the industry here?

Ed Duncan: Poor data will lead to higher capital requirements, and data quality could determine whether a desk survives or not, since it drives the desk's ability to run efficient risk-weighted assets (RWAs) under the new framework.

In both the sensitivity-based approach (SBA) and the IMA you have many more inputs into the regulatory capital calculation and, because you're running many more calculations, efficient data processes are critical to avoid long queues delaying your calculations. You want the data accessible at all times to enable calculations at many different levels of the firm. It's about calculation efficiency, and bottlenecks and missing data are obviously going to slow you down. When these numbers move day on day, week on week, month on month or quarter on quarter, you're going to need to be able to explain what has happened to your ES and to your SBA charges.

 

Risk: Can it all be done in time?

Britta Achmann: With all the challenges we have, and even though reporting on the FRTB at the end of 2019 seems far away, the timeline is actually very tight because the regulator is going to need some time to approve all the desks for the IMA, which is what the larger banks are hoping to do. 

The timeline is very challenging for all of the institutions – data projects are not usually a fast turnaround. They are time-consuming because you need to ensure that the correct data is feeding all of the systems. We will all face different challenges within our respective institutions, but everybody will aim to have a single-source, consistent use of data, although having reference data in one place and ensuring it is all high quality is challenging. 

 

Risk: Where do you think the biggest gaps are at the moment? 

Martijn Groot: From a data management perspective, finance and risk are the most interesting places because that is where all the different threads come together across product silos, organisational entities and various risk buckets. What we see is the lack of a common data foundation, a consistent basis on which to create or derive the various risk factors. Part of this is process – different parallel threads of data collection and verification across different risk categories and product silos – and part of it is the availability of information. There is some over-the-counter (OTC) information available from some of the trade repositories that our clients use, but it's not on the same scale as you could have expected from the Markets in Financial Instruments Directive II (Mifid II). One part of the process that needs to be improved is what happens in the data space as soon as you start storing something multiple times: the introduction of ambiguity, another need for reconciliation and a drag on the process. So that is one. Then there is the availability – not only trade prices but, in some cases, the historical information, sufficient breadth and depth of history.

 

Risk: What are ‘real prices’? Are they actual transactions to which you are a party – firm, committed quotes – or data on those that are provided to you by a data vendor?

Paul Burnett: That is correct. The big challenge is for each firm to then sweep over its current data sources and ask: 'I believe I have sufficient data, but does it meet these criteria?' More likely than not, you'll find there are some gaps, which will need to be plugged somehow.

 

Risk: Is there a minimum number of sensitivities required in the standardised approach? Is it an easy exercise to do a sweep of what you already have and to see where the gaps are? 

Britta Achmann: For the larger institutions, I don't think it is a huge problem. Looking at the sensitivities, they may be defined differently, but we have enough internal data, resources and systems to produce them according to the regulation. But I can see where a challenge could come in for a small bank that, for example, doesn't really have a trading book properly defined in the FRTB sense and may have to go from having everything in a banking book to running a trading book under the SBA, because the standardised approach we're currently using is vastly different.

 

Risk: Can you give us a sense of what that means in practice? 

Ed Duncan: It's the sheer volume of data we're talking about, required as inputs into your regulatory capital calculations. So big banks are dealing with a data volume challenge – data that will need to be stored every day and maintained for a period of time because it feeds the regulatory capital calculations. These are not exactly the same sensitivities that we report for risk purposes; they are prescribed and defined very clearly by the regulation, and there is the potential for them to differ ever so slightly from our own internal sensitivities.
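
The prescribed structure Duncan refers to can be sketched. Under the sensitivities-based method, risk-weighted sensitivities are aggregated with prescribed correlations within each bucket, and the bucket-level charges are then aggregated with prescribed cross-bucket correlations. A minimal sketch of that two-level aggregation follows; the weights and correlations here are toy values, not the regulatory ones, and the fallback the rules specify when the outer square-root term goes negative is omitted.

```python
import numpy as np

def bucket_charge(ws, rho):
    """Within-bucket charge K_b = sqrt(max(0, WS' * rho * WS)), where
    WS are the risk-weighted sensitivities of the bucket's risk factors."""
    ws = np.asarray(ws)
    return float(np.sqrt(max(0.0, ws @ rho @ ws)))

def delta_charge(bucket_ws, bucket_rho, gamma):
    """Across-bucket aggregation: sqrt(sum_b K_b^2 + sum_{b!=c} gamma_bc S_b S_c),
    with S_b the sum of weighted sensitivities in bucket b. The regulatory
    fallback for a negative total is omitted in this sketch."""
    K = [bucket_charge(ws, rho) for ws, rho in zip(bucket_ws, bucket_rho)]
    S = [float(sum(ws)) for ws in bucket_ws]
    total = sum(k * k for k in K)
    for b in range(len(K)):
        for c in range(len(K)):
            if b != c:
                total += gamma[b][c] * S[b] * S[c]
    return float(np.sqrt(max(0.0, total)))

# Toy example: two buckets with illustrative (non-regulatory) parameters.
ws1, ws2 = [10.0, -4.0], [6.0]                 # weighted sensitivities
rho1 = np.array([[1.0, 0.5], [0.5, 1.0]])      # within-bucket correlations
rho2 = np.array([[1.0]])
gamma = [[0.0, 0.25], [0.25, 0.0]]             # cross-bucket correlations
print(delta_charge([ws1, ws2], [rho1, rho2], gamma))  # ~11.4
```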

Paul Burnett: What is going to be a consistent challenge with large or small institutions is: who within the organisation is going to perform this calculation? In terms of the current standardised approach, inputs are based on balance-sheet items, such as mark-to-markets and notionals, which are typically readily available within finance departments. As such, we see many banks employ their finance functions to run these standardised approaches. The new standardised approach is a model; it requires risk information and pricing calculation components. Therefore, there is a genuine question about whether that best resides within the finance function. Within the larger institutions you could see a reallocation of responsibility, with the standardised approach being run by the risk function. And, within the smaller banks, there will be a question around who is qualified to run this. 

Britta Achmann: We looked at whether it would be better positioned in finance or in risk when we started thinking about FRTB. We decided it should come to risk and we have now transitioned it into risk. 

Martijn Groot: It might work for smaller banks. Like a lot of regulation, it raises the barrier to start trading certain products, and the proportional cost of complying is higher for a small bank. A lot will depend on what can be done cost-effectively, which, in turn, will depend on the availability of sets of data that may not currently be present in those institutions.

Ed Duncan: There is also the challenge of the calculation itself: many of the smaller banks are not necessarily going to have the teams of quantitative analysts that larger banks have to build the calculators and to verify, analyse and run them on a regular basis.

 

Risk: Is the door now open to a simplified standardised approach? Is this something we might still see?

Paul Burnett: I'm not aware of anything that has been confirmed, but I expect there will be an appetite for a simplified approach. Developing a risk-sensitive standardised approach was key to delivering a credible framework; however, there is inevitably going to be a degree of complexity built in. But there is still a feeling that, although that complexity is necessary, it is going to create a significant hurdle for some of the smaller institutions. Could there be room for a simplified approach? My expectation is that Basel will look to introduce something.

Ed Duncan: Go back six years to when the FRTB was set up and the objectives for the framework were restated through a number of consultation papers. They were looking for enhanced comparability and transparency but, as you layer in additional calculations, you lose an element of comparability across banks and transparency in your financial reporting, where it gets more and more difficult to work out how market risk RWAs are being calculated. It’s a challenge in terms of how they can introduce another layer of calculations while retaining – or achieving – some of the objectives from the outset of the framework. 

Britta Achmann: 'Simplified' in regulatory speak could mean 'more expensive' because, when we looked at the approaches for credit valuation adjustment, the costliest was the simple approach. While I would concur that the current standardised approach could be quite difficult to implement for a lot of smaller players, a simplified approach – if it again comes at an increased cost – could be just as unattractive. It boils down to your decision on how you allocate your resources. Do you spend on building the better calculator and getting a risk-sensitive calculation? Or do you opt for cheaper implementation and higher regulatory capital costs?

Paul Burnett: You could argue that they’re creating the right incentive. If you want to get the benefit then you need to work harder and demonstrate you can get that with better data and increased quality of calculation. It’s a question of prioritisation, and it depends on the firm. Having only these two calculation options on the table could prove challenging for some. 

Ed Duncan: You'd expect smaller organisations to have smaller, more straightforward trading books. Therefore, the SBA in those cases isn't so complex that you couldn't imagine being able to run it. Complexity really comes with the multi-asset products, the index products and the curvature charges and add-ons that came to the FRTB late in the day. If you have a relatively vanilla portfolio, then the SBA shouldn't be too complex.

 

Risk: When it comes to the SBA, what should banks be aiming to get done this year, realistically? What are the near-term things they should be trying to get out the way?

Martijn Groot: Get the foundation straight, get the underlying reference data sorted out and harmonise specifications for products. 

Ed Duncan: I would agree. This year, it's all about data, all about the inputs, and you need to get those sorted, locked down and stored. You need to be able to run the impact analysis more rapidly through 2017 and beyond, so this year it's all about plugging the gaps and having the ability to source the data accurately.

Paul Burnett: Improve the data to enhance the impact analysis and, with the topic of standardised floors looming large, there will be appetite from the firms to better understand and assess the impact of the standardised approaches. 

 

Risk: What proportion of RWAs will be affected by the FRTB? What proportion is market risk or traded risk? 

Britta Achmann: For a bank with a large investment banking trading business, 25%–30% sounds about right. 

Paul Burnett: It depends on the business. For HSBC it is lower but, for some of the more investment banking-heavy firms, I think you get a slightly higher proportion.

Ed Duncan: I would agree. You can see the proportion of market risk that makes up our total RWAs, but it's the RWA consumption at a desk level and how that drives activity in the capital markets that we need to focus on, because it will drive liquidity in certain directions. The FRTB has the potential to penalise less liquid markets, so it is going to have a dramatic effect on banks' activity in capital markets, and I think that is more of a concern for us than the overall proportion of RWAs that market risk contributes.

Paul Burnett: It’s not the absolute number, it’s the return. Senior management is probably asking which activity generates the best return. 

Britta Achmann: It also depends on whether you count the credit valuation adjustment under the market risk umbrella because it is going to be altered by the FRTB at some later stage as well. So, even though that’s a counterparty credit risk charge, it is linked to and impacted by the FRTB so that could bring your proportions up. 

 

Risk: Looking at the FRTB rules, there are lots of different qualitative standards. How difficult is it going to be for banks to satisfy these qualitative elements of the IMA?

Martijn Groot: There has been much more focus on the supply chain into the risk models – all the keywords like 'data lineage', 'verification' and 'clarity on the provenance', whether you have clear ownership of data definitions, a single data dictionary, and so on. That is quite difficult to put into practice. For some of the processes you can organise your departments to put certain workflows in place, but bringing the organisation and the underlying risk data infrastructure up to par is harder. So I would say, pretty difficult. Consistency at the end of a complex process is no mean feat to achieve.
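
What a single data dictionary entry might minimally capture is easy to sketch: an agreed definition, a named owner, a source hierarchy and the transformations applied, matching the lineage and provenance themes Groot mentions. The record layout and field names below are illustrative assumptions only; nothing of this shape is prescribed by the rules.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskFactorRecord:
    """Illustrative data-dictionary entry for one risk factor series."""
    factor_id: str                                     # e.g. "EUR_SWAP_10Y"
    definition: str                                    # the single agreed definition
    owner: str                                         # accountable data owner
    sources: List[str] = field(default_factory=list)   # source hierarchy, in order
    lineage: List[str] = field(default_factory=list)   # transformations applied

record = RiskFactorRecord(
    factor_id="EUR_SWAP_10Y",
    definition="10-year EUR interest rate swap par rate, end of day",
    owner="market-data-team",
    sources=["vendor_a", "vendor_b", "trader_marks"],
    lineage=["17:00 CET snap", "outlier filter", "gap fill from prior day"],
)
```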

 

Risk: When it comes to IMA, what can and what can’t we model? And, with very prescriptive standards around that, much depends on the desk structure that you choose because model approvals will be granted at the desk level. Do you have a desk structure already, and will it be different for the FRTB?

Britta Achmann: We all have desk structures already. We also recently 'Volckerised' all the desks, so there was a recent transformation of the set-up of all the trading desks. But the challenge is: are they fit for purpose for the FRTB? We need to see that these desks, in their current set-up, can pass the profit-and-loss attribution tests, which is where the qualitative problems come in.

The FRTB is really a multi-dimensional problem, and not just for risk; it's really a business optimisation problem, because it's the business that needs to decide the structure that is the best fit and can provide the most stable capital charge. These decisions need to be made early on: can the desk qualify for IMA approval, or is it on the cusp? And, if its charge is looking too volatile, is there an advantage to actually applying for it? There are a lot of business decisions that need to be taken. We're all working on it.
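
The profit-and-loss attribution tests Achmann mentions compare two daily P&L series per desk: the hypothetical P&L from front-office repricing and the risk-theoretical P&L from the risk model. Below is a minimal sketch using the two metrics and thresholds in the January 2016 text (a mean ratio within plus or minus 10%, a variance ratio below 20%); under that text, a desk that breaches the thresholds repeatedly falls back to the standardised approach.

```python
import numpy as np

def pla_metrics(hypothetical_pnl, risk_theoretical_pnl):
    """P&L attribution metrics: unexplained P&L is the gap between the
    risk model's P&L and the front office's hypothetical P&L."""
    hyp = np.asarray(hypothetical_pnl, dtype=float)
    unexplained = np.asarray(risk_theoretical_pnl, dtype=float) - hyp
    mean_ratio = unexplained.mean() / hyp.std(ddof=1)
    variance_ratio = unexplained.var(ddof=1) / hyp.var(ddof=1)
    return mean_ratio, variance_ratio

def passes_pla(mean_ratio, variance_ratio):
    """Thresholds from the January 2016 FRTB text."""
    return abs(mean_ratio) <= 0.10 and variance_ratio <= 0.20
```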

Paul Burnett: It really speaks to the increasing dichotomy between the risk and capital frameworks, to the extent that the Volcker rule is being configured to risk. Desks are configured to the mandates and the risk that is being run. You don’t necessarily get the same outcome if you look at it from a capital view in the FRTB. Therefore the desk structure that is optimal for capital could be very different to the desk structure that is optimal for risk. I think we’ll see an increasing divergence between risk and capital under the FRTB.

 

Risk: Will capital efficiency and capital stability drive the decisions that are made around desk structure?

Britta Achmann: If you end up switching from IMA to SBA on an ongoing basis with your desks, you then have a ‘cliff effect’ of capital increase, which is less beneficial for the business because you need to start pricing that into your trades. What kind of customer service are you providing if one day you’re quoting this price because you are on internal model approval, and a week later, your price is vastly different because your capital charge just went up? That’s why, in my opinion, stability gives you a better footing, at least from a customer service perspective.

Ed Duncan: I see it as well – it’s extremely challenging. The FRTB is much more prescriptive in terms of its definition of a trading desk than the Volcker regime was, and you need a lot of paperwork to support your concept of a trading desk in the FRTB. So there isn’t a great deal of flexibility here, but model performance is going to be a critical element to the decision-making. 

 

Risk: What proportion of your risk factors are modellable?

Paul Burnett: We are a UK-based firm and we operate the risks-not-in-VAR (RNIV) framework. There is still analysis to be done to determine to what extent that covers the non-modellable component of the FRTB. My expectation is that a significant portion of the data currently going through our modellable framework may be captured by these non-modellable criteria. I couldn't say exactly how much, but I certainly feel there's work to be done here. There is a gap and we need to reduce that gap over the coming two to three years.

Britta Achmann: That, again, is where the data challenge comes in, because none of us individually, as banks, will have sufficient data to identify whether risk factors are modellable or non-modellable – you only have the data you trade on, and we have ours.

Paul Burnett: Even without a percentage, we could have a sense of where the burden might fall in terms of the businesses and the types of activities. For example, emerging markets could struggle to meet the criteria. 

 

Risk: The real data supporting all of this is supposed to be transactional data. As an individual institution, you may not have quite enough to clear the threshold but, as an industry, presumably there is enough out there to model many things – perhaps more than any one institution can do. What is your view on the role of a pooled, utility-type data solution? How much promise is there for this kind of solution?

Ed Duncan: It’s critical; this is my biggest data gap at the bank. So, as a risk function, I don’t routinely store real price data today and it’s anyone’s real price data that qualifies as evidence for modellability. It’s a big gap that needs to be filled by a utility-type solution because, if all of the banks seek to solve this data gap individually and independently, then costs will spiral and I don’t think any of us will end up with as much data or coverage as we would if we were to do it together. We’re looking at all of the vendors and data providers today. I think they sense a commercial opportunity, and it’s important all of the banks are on board. We need to solve the problem as an industry, rather than as individual banks. 
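
Mechanically, the utility Duncan describes is, at its simplest, a union of observation dates per risk factor across contributors, against which a check like the `is_modellable` sketch earlier can be run. The sketch below shows why pooling helps: duplicate dates collapse, so a factor that is too sparse at any single bank may still clear the 24-observation threshold industry-wide. The identifiers are hypothetical, and agreeing that two contributors mean the same factor is the hard part, as the panel discusses next.

```python
def pooled_observations(contributions):
    """Union of real-price observation dates per risk factor across
    contributors (banks, vendors); only distinct observation days count."""
    pooled = {}
    for contribution in contributions:      # one dict per contributor
        for factor, dates in contribution.items():
            pooled.setdefault(factor, set()).update(dates)
    return {factor: sorted(dates) for factor, dates in pooled.items()}

# Two banks with 15 distinct observation days each for the same factor
# may jointly clear a threshold that neither clears alone.
```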

 

Risk: It seems like there’s an obvious need and a somewhat obvious solution. What would prevent it?

Paul Burnett: There is a need to look at this collectively as an industry, not just for cost, but because it is best resolved collectively. There are going to be some challenges. In particular, there needs to be a certain degree of harmonisation and some consolidation in terms of how we look at and how we represent our risk. If we are pooling our information together, then we need to be able to compare data on a like-for-like basis. I need to be able to interpret data provided by another firm, understand it and consume it within my own institution.

Martijn Groot: There were some data points in the November Quantitative Impact Study (QIS) on the proportion of total market risk capital due to the NMRFs. That would give some indication as to the potential upside there. I don't think it could be prevented, but I think it could be slowed down. It's often hard enough to achieve internal consistency of definition so, across the industry, standardisation – a joint definition or common understanding of risk factors – is the complicating factor. Some OTC trades are publicly reported, so there is some data available for certain product classes, and there are integration services that speak to that, but that covers only a subset of asset classes.

Ed Duncan: We need to keep the problem small, because there are other challenges involved in adhering to the new modellability criteria, and the smaller we define the problem, the more likely we are to solve it within the allotted time frame. Barclays is not necessarily looking to harmonise risk factor definitions in the internal model space. We have our risk factors defined today; we have them mapped to time series and so forth. Each bank will do it slightly differently, but I would expect them to be in a similar position. I don't think having banks agree on definitions of risk factors is necessarily achievable or even desirable at this point.

Martijn Groot: It is indeed a mapping issue. 
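
Keeping it a mapping issue, as Duncan and Groot suggest, means each bank retains its own risk factor definitions and maintains a local translation to whatever keys the shared data set uses, rather than renaming anything internally. A minimal sketch, with hypothetical identifiers on both sides:

```python
# Local map from internal risk factor names to the shared data set's keys;
# both sets of identifiers below are hypothetical.
INTERNAL_TO_SHARED = {
    "IR.EUR.SWAP.10Y": "EUR-IRS-10Y",
    "FX.EURUSD.SPOT": "EURUSD-SPOT",
}

def shared_observations(internal_id, pooled, mapping=INTERNAL_TO_SHARED):
    """Fetch pooled observation dates for an internal risk factor,
    surfacing factors with no agreed mapping instead of guessing."""
    key = mapping.get(internal_id)
    if key is None:
        raise KeyError(f"no shared-data mapping agreed for {internal_id}")
    return pooled.get(key, [])
```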

 

Risk: Could the IMA prove too much hassle for some banks? At the margins, could some of the second-tier, regional and national banks decide to switch to the SBA? 

Martijn Groot: Probably, yes. On that threshold it’ll be a cost-benefit analysis on the potential gain versus the extra cost of complying. 

Britta Achmann: This is a big problem. The barrier to entry into a modellable permission has just risen by a lot. If we are struggling, and these are banks with a lot of quant, IT and change resources, how are smaller players meant to be able to adapt to this? It could be very challenging for them.

Ed Duncan: There’s still an incentive, so, from a capital perspective, I think banks will still see the appeal of the IMA. That said, we haven’t yet seen how the Basel floor discussion is going to play out, so they could easily remove or reduce that incentive to the point where it no longer pays for what is described as a significant investment. 

And the complexity is again ratcheted up under the FRTB. The incentive that exists today could be reduced by the outstanding piece – the Basel floor discussions still to play out for the market risk space. Although it's labelled the IMA, there are lots of rules in there that will make it very different from our internal model, so we're starting to see a divergence between what we'll model internally for risk management and what we'll model for regulatory capital – real price data being one example. We don't have that divergence to the same degree with VAR modelling today. That will be interesting in terms of where your incentives lie: you will still want to invest in the risk model process because it drives risk management, but whether you want internal modelling under the FRTB depends on the capital incentive left after the Basel IV discussion.

Paul Burnett: Agreed. We could see firms think this is too much hassle and it might depend on where the floors end up.  

The panellists were speaking in a personal capacity. The views expressed by the panel do not necessarily reflect or represent the views of their respective institutions.
