Compliance preparations amid uncertain rules

A forum of industry leaders discusses how banks will define individual trading desks under FRTB, whether BCBS 239 compliance projects can help banks meet FRTB risk data challenges, which model validation obstacles banks still face and other key topics

The panel

  • Martijn Groot, Vice President, Marketing and Strategy, Asset Control
  • Adolfo Montoro, Director, Market Risk Management and Risk Methodology, Deutsche Bank
  • Shaun Abueita, Partner, Financial Services Risk, EY
  • Anna Holten Møller, Senior Analyst, Market Risk, Nykredit

How will the Basel changes affect profit-and-loss attribution (PLA) and other validation tests for internal models?


Shaun Abueita, EY: The March 2018 Basel Committee on Banking Supervision consultation on FRTB introduced a number of welcome changes to the framework – in particular to the prescribed eligibility tests. For PLA, the alignment of risk-theoretical and hypothetical profit-and-loss (P&L) data inputs, treatment of valuation adjustments, revisions to the frequency of the tests, the penalty function, and proposals for new and more consistent test metrics were all very warmly received.

In other areas, however, uncertainty remains – particularly around the calibration of PLA thresholds and, in the case of non-modellable risk factors (NMRFs), the treatment of certain market transaction frequency and seasonality characteristics. So, while elements of the eligibility tests have been refined and made clearer under the consultation, the bar to use internal models at trading desk or risk factor level remains high.

Adolfo Montoro, Deutsche Bank: The changes will improve the chances of desks passing the revised eligibility test. The regulators did a good job improving the framework in the new proposal included in the consultative paper published in the first quarter of 2018. The proposal to introduce the new ratios was a step in the right direction, fixing a methodological flaw present in the January 2016 framework and increasing the stability of the model and its outputs. The new penalty function will also, hopefully, smooth the capital volatility derived from the PLA. The amber zone gives banks a period in which they can still fix any problems. It is now a calibration issue – the red and amber thresholds need to be adequately calibrated using results from real portfolios, and banks will need enough time to build the newly established risk-theoretical P&L framework.
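
For illustration, the two test metrics proposed in the consultation were a Spearman rank correlation and a Kolmogorov-Smirnov (KS) test between the hypothetical and risk-theoretical P&L series. Below is a minimal sketch of how a desk-level check might look; the thresholds are placeholders only, since – as the panellists note – calibration was still an open question:

```python
# Sketch of the PLA test metrics from the March 2018 consultation:
# Spearman rank correlation and the two-sample Kolmogorov-Smirnov
# statistic between hypothetical P&L (HPL) and risk-theoretical P&L
# (RTPL). Threshold values are placeholders, not calibrated figures.
import numpy as np
from scipy import stats

def pla_metrics(hpl: np.ndarray, rtpl: np.ndarray):
    """Return (spearman_rho, ks_statistic) for two daily P&L series."""
    rho, _ = stats.spearmanr(hpl, rtpl)
    ks, _ = stats.ks_2samp(hpl, rtpl)
    return rho, ks

def traffic_light(rho: float, ks: float,
                  amber=(0.80, 0.09), red=(0.70, 0.12)) -> str:
    if rho < red[0] or ks > red[1]:
        return "red"    # desk falls back to the standardised approach
    if rho < amber[0] or ks > amber[1]:
        return "amber"  # capital surcharge; desk can still remediate
    return "green"

# Simulated example: RTPL tracks HPL with some model noise
rng = np.random.default_rng(42)
hpl = rng.normal(0.0, 1.0, 250)           # one year of daily HPL
rtpl = hpl + rng.normal(0.0, 0.2, 250)    # RTPL = HPL plus noise
rho, ks = pla_metrics(hpl, rtpl)
print(f"rho={rho:.3f}, ks={ks:.3f}, zone={traffic_light(rho, ks)}")
```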

Anna Holten Møller, Nykredit: As is typical for FRTB, there are outstanding questions and uncertainties around the PLA tests, but the proposed changes are definitely a step in the right direction, and the Basel Committee's co-operation with the industry is very positive. The chances of internal models consistently passing validation tests have gone from impossible to difficult. One difficulty is passing tests at a much more granular level and, since the consequences of failing can be significant, banks have to be on top of their models at all times, even though the test frequency has been lowered.


What is the outlook for data pooling and model validation?

Martijn Groot, Asset Control: The outlook is for banks to source an increasing breadth and depth of market data, especially those using the internal models approach (IMA). The best way to limit the impact of NMRFs is to cast the widest possible net in terms of sourcing real price information.

This can include a bank's own trades combined with industry solutions. In terms of industry solutions, Asset Control expects a much richer and more diverse market data landscape, with traditional data providers looking to extend their products to fit more closely with the FRTB NMRF use case, and new contributor-owned data products entering the market. Independently of FRTB requirements, many banks are starting to look for ways to pool their market data to lower data costs, which can include end-of-day valuation marks or trade-level information.

Because of post-trade transparency regulation – the revised Markets in Financial Instruments Directive (Mifid II) this year and the Securities Financing Transactions Regulation next year – there has also been an increase in the amount of public data to work with.

Industry solutions are going to remain at raw-trade or risk-bucket level (with some mapping and cleansing) – not a complete risk factor assessment push service – because there is too much variation between IMA banks in how they have set up their risk factors. In preparing market data for models and risk factor modellability assessment, what is important for firms is to have not only sourcing capabilities but also integration and deduplication capabilities, and an easy way to link the trade data and/or buckets to the bank's own risk factors. Real trade data can also play an important role in complying with prudent valuation requirements.
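
As a rough sketch of the integration and deduplication step described above – field names and structures are hypothetical, not a description of any particular product – pooled real-price observations could be deduplicated and linked to a bank's own risk factor buckets as follows:

```python
# Hypothetical sketch: deduplicate real-price observations pooled from
# several sources, then group them by the bank's own risk factor buckets.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Observation:
    instrument_id: str   # e.g. ISIN or an internal identifier
    trade_date: date
    price: float
    source: str          # own trades, vendor feed, data pool, ...

def deduplicate(observations):
    """Keep one observation per (instrument, date, price) triple, so the
    same trade reported by two sources is only counted once."""
    seen, unique = set(), []
    for obs in observations:
        key = (obs.instrument_id, obs.trade_date, round(obs.price, 8))
        if key not in seen:
            seen.add(key)
            unique.append(obs)
    return unique

def link_to_risk_factors(observations, instrument_to_bucket):
    """Group observations by risk factor bucket via a mapping the bank
    maintains from instruments to its own risk factor setup."""
    by_bucket = {}
    for obs in observations:
        bucket = instrument_to_bucket.get(obs.instrument_id)
        if bucket is not None:
            by_bucket.setdefault(bucket, []).append(obs)
    return by_bucket
```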

Shaun Abueita: While, conceptually, data pooling is a sensible way to maximise risk factor modellability, a number of practical hurdles need to be considered to fully realise its benefits. 

These include the obvious confidentiality considerations and constraints around sharing data, however sanitised. They also include the economics of pooling, particularly the size of certain firms' trading activities relative to others, and the resulting divergence between the contributions firms would make to a data pool and the benefits they would reap from it.

There are also concerns that firms may compromise competitive advantage through broader disclosure of transaction data. This is particularly the case where a firm may dominate trading activity in a specific product type or market segment, whereby sharing transaction data may serve only to benefit competitors.

Notwithstanding these challenges, the industry needs a clear, collective understanding of the specific transaction data to be pooled for mutual benefit. Achieving this requires each bank to map its universe of FRTB risk factors to traded instruments, and in turn to existing repositories of transaction data, to identify residual modellability gaps. That process is complex and time-consuming, with many firms yet to move beyond tactical and targeted analyses to strategic identification of modellability gaps.


How much preparatory work can be completed while the rules remain unclear?


Adolfo Montoro: A lot can be done. If you look at the IMA, at this stage it is very important to prioritise the development of the risk-theoretical P&L framework. Such development will generate a feedback loop within an organisation, enabling front-office quants, model developers and market data teams to collaborate closely on working out why, for example, a certain model configuration is not producing a sufficiently accurate outcome to pass the tests. Additionally, banks could make a start on building the expected shortfall calculation, with reduced and full risk factor sets, for various configurations.
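
To make the reduced set/full set interplay concrete, here is a minimal sketch – simulated P&L vectors, with liquidity horizon scaling and other details of the standard ignored – of how the stressed expected shortfall (ES) is scaled between the two risk factor sets:

```python
# Sketch of the IMA ES construction: ES is measured on the reduced risk
# factor set over the stress period and scaled by the ratio of full-set
# to reduced-set ES over the current period (the standard floors that
# ratio at 1). Inputs here are simulated placeholders.
import numpy as np

def expected_shortfall(pnl: np.ndarray, alpha: float = 0.975) -> float:
    """Average loss in the worst (1 - alpha) tail of a daily P&L vector."""
    losses = np.sort(-pnl)                    # ascending; worst at the end
    n_tail = max(1, int(np.ceil(len(losses) * (1 - alpha))))
    return float(losses[-n_tail:].mean())

def scaled_es(pnl_reduced_stress, pnl_full_current, pnl_reduced_current):
    """ES_{R,S} * max(ES_{F,C} / ES_{R,C}, 1)."""
    ratio = (expected_shortfall(pnl_full_current)
             / expected_shortfall(pnl_reduced_current))
    return expected_shortfall(pnl_reduced_stress) * max(ratio, 1.0)

rng = np.random.default_rng(0)
es = scaled_es(rng.normal(0, 2.0, 250),   # reduced set, stress period
               rng.normal(0, 1.0, 250),   # full set, current period
               rng.normal(0, 0.9, 250))   # reduced set, current period
print(f"Scaled ES: {es:.3f}")
```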

The foundation can also be built for the standardised approach (SA). Work on a full valuation framework in the risk model, fully aligned with the pricing models used for P&L, as well as improvements in front-to-back-office alignment, could start before the rules are complete. For FRTB, it is critical that institutions focus heavily not only on what to build, but also on laying down the foundational business procedures and business models that will define how the various old and new suites of risk models within the FRTB framework will communicate with each other on a daily basis – what is known as 'FRTB interplay'.

Anna Holten Møller: The changes banks are required to make to be FRTB-compliant are quite comprehensive. With uncertainty around PLA, NMRFs and so on, the business case for internal models is not crystal clear. When you add capital floors and implementation timeline uncertainty to the equation, it is clear why banks might await clarification before building fully fledged internal models. But the dust is slowly settling on the SA, so efforts spent calculating and understanding those numbers are not wasted.

Shaun Abueita: While there are elements of the framework yet to be clarified and finalised, many banks are progressing with strategic implementation and compliance initiatives. These are either in the form of dedicated FRTB programmes or aligned initiatives that will support FRTB compliance once rules are clarified.

While most firms are prioritising strategic SA development and implementation, wider foundational initiatives typically include acceleration and broadening of full revaluation coverage, PLA and NMRF-failure driver analysis and remediation, improving risk factor granularity and coverage, and data consolidation, cleansing and remediation efforts.

In the context of a rules set that is subject to change and potentially different jurisdictional implementations, such initiatives have been viewed as ‘no regrets’ and a benefit to businesses regardless of rule outcomes.


Are regional banks likely to opt to scrap plans for internal models?

Adolfo Montoro: Pursuing internal models under FRTB is a business decision based on the risk capital of the bank, so some will scrap their plans while others will see value in building an internal model.

Shaun Abueita: This remains to be seen; however, the capital impact of the IMA, which currently includes a material anticipated NMRF component at many firms, has certainly prompted broader internal debate regarding the cost-benefit of becoming, or remaining, an IMA firm. That being said, there may be a regulatory expectation that firms engaging in complex and sophisticated trading activity adopt an IMA framework to align with the sophistication of their businesses.

Anna Holten Møller: I don’t think size will necessarily decide whether banks go for an internal model – rather their existing risk modelling ‘machinery’ and infrastructure. It is a given, though, that banks without internal models today will not build internal models under FRTB. Even for banks that do, FRTB is a big challenge and an investment that has to be carefully considered.


How can banks reduce the number of NMRFs?


Martijn Groot: By expanding their scope for sourcing real prices. This includes internal information sourced from the bank's trading systems, public data from trade repositories and post-trade transparency obligations, and industry solutions created by firms pooling their trades. Cross-referencing this data to risk factors, combined with frequent (re-)evaluation of the NMRF metrics via appropriate dashboards and an early warning system for looming non-modellability, must also be part of the approach.
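
For illustration, a minimal sketch of such an early warning check, using the January 2016 criterion of at least 24 real-price observations over the past year with no more than one month between consecutive observations (the 2018 consultation sought feedback on alternatives to better handle seasonality):

```python
# Sketch of a modellability check per the January 2016 criterion, plus a
# simple early warning flag. The 31-day gap is an approximation of the
# "one month" wording; the buffer size is an arbitrary choice.
from datetime import date, timedelta

def is_modellable(obs_dates, as_of):
    """At least 24 observations in the past year, with no gap of more
    than roughly one month between consecutive observations."""
    start = as_of - timedelta(days=365)
    dates = sorted(d for d in set(obs_dates) if start <= d <= as_of)
    if len(dates) < 24:
        return False
    return all(curr - prev <= timedelta(days=31)
               for prev, curr in zip(dates, dates[1:]))

def near_breach(obs_dates, as_of, buffer=4):
    """Early warning: currently modellable, but the observation count
    is within `buffer` of the 24-observation minimum."""
    start = as_of - timedelta(days=365)
    count = len({d for d in obs_dates if start <= d <= as_of})
    return is_modellable(obs_dates, as_of) and count < 24 + buffer
```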

Solution providers such as Asset Control supply the market data management infrastructure to source, integrate and master the market data required for FRTB compliance. Its business rules and user interface allow for quick configuration to derive additional data, including curves, proxies, stress scenarios and modellability scores. Stress scenarios can include historical, regulatory and hypothetical scenarios created by the bank. When it comes to modellability assessment, clients control the granularity of their risk factors and can set up links between them and 'real prices'. Early warning is given on, for instance, looming non-modellability or changes in the liquidity horizon due to developments in market caps or credit ratings.

Adolfo Montoro: If you look at the spirit of the NMRF charge, you could say certain products should simply be traded more frequently; banks can then leverage the marking and pricing of the various products for end-of-day purposes to extend the risk factor coverage within risk models. This would increase the chances of passing the risk factor eligibility test by aligning the risk factor-to-product mappings used for end-of-day purposes with those used for other functions, such as risk and finance. There is also an expectation that banks will undertake price recovery – recovering historical trading prices – so there are clear links between the marking process and producing a historical time series. This requirement will force institutions to rethink their market data strategies and strictly align them with the requirements of PLA and NMRF.

Shaun Abueita: Firms must first understand the drivers of modellability failure. Failure may be due to the risk factor definition, the firm's risk factor-to-traded instrument mapping approach or assumptions, or the scarcity or seasonality of the underlying transaction data currently available to the firm.

Having a clear understanding of modellability-failure drivers will enable firms to take corrective action, by refining their risk factor definition or mapping approaches and assumptions, or by sourcing data externally – from vendors or a future data pool.

As a last resort, firms may choose to exit certain positions or activities if deemed costly from a modellability perspective.

Anna Holten Møller: Data pooling is the answer that comes to mind, along with convincing regulators that seasonality is real.


What is the outlook for regimes for banks with small trading desks and non‑banks?

Shaun Abueita: While non-banks and firms with small trading desks will typically have simpler systems and infrastructure, and a basic trading book structure and framework, adoption of FRTB may not be straightforward for them. This is particularly the case for currently standardised firms evolving to the new FRTB SA or IMA frameworks, rather than the FRTB simplified approach. 

At a bare minimum, adoption of the SA would necessitate the computation of risk factor sensitivities, the availability of attributes and metadata to process the calculation, and analytics to assess and manage model outcomes. While vendor solutions are available, implementation would be a non-trivial undertaking for most firms.
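
To illustrate the kind of computation involved, here is a minimal sketch of the delta aggregation at the heart of the SA's sensitivities-based method; the risk weights and correlations are illustrative placeholders, not the calibrated values from the standard:

```python
# Sketch of within-bucket delta aggregation under the sensitivities-based
# method: WS_k = RW_k * s_k, then
# K_b = sqrt(max(0, sum_k WS_k^2 + sum_{k!=l} rho_kl * WS_k * WS_l)).
# Risk weights and correlations below are placeholders.
import numpy as np

def bucket_delta_charge(sensitivities, risk_weights, rho):
    ws = risk_weights * sensitivities     # weighted sensitivities WS_k
    k_sq = ws @ rho @ ws                  # rho has ones on the diagonal
    return float(np.sqrt(max(k_sq, 0.0)))

# Two risk factors in one bucket, illustrative numbers only
s = np.array([1_000_000.0, -400_000.0])   # delta sensitivities
rw = np.array([0.015, 0.015])             # placeholder risk weights
rho = np.array([[1.0, 0.65],
                [0.65, 1.0]])             # placeholder correlation
print(f"Bucket delta charge: {bucket_delta_charge(s, rw, rho):,.0f}")
```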


Will national jurisdictions be able to align implementation?

Shaun Abueita: It is hard to say, as most national jurisdictions are waiting for final Basel Committee rules to be published before starting transposition into their own legislative frameworks. That being said, a lack of jurisdictional alignment would certainly complicate implementation for impacted firms. Most importantly, however, it would undermine one of the primary objectives of the FRTB initiative: to drive broader global standardisation and remove scope for cross-jurisdictional capital arbitrage.


In which areas will banks need more guidance from national supervisors?

Adolfo Montoro: A lot has already been covered in the published FAQs. Banks will need to bring regulators on the journey while they implement FRTB so that, if an interpretation issue arises, the regulators will be there to help. Regulators will need to be co-operative and proactive, making sure what is actually implemented shapes supervisory expectations.

Anna Holten Møller: In cases where local market issues are not significant enough to make lobbyists' agendas and are therefore not raised as industry-wide concerns. FRTB aims to standardise market risk and capital calculations and, to do so, a lot of things are bucketed and classified in a one-size-fits-all manner. However, one size does not fit all, and hopefully national supervisors will allow some latitude to accommodate local banks in these special cases.


Can compliance projects for the Basel Committee’s principles for effective risk data aggregation and reporting (BCBS 239) help to meet the FRTB risk data challenge?

Martijn Groot: BCBS 239 has forced banks to think differently about data quality. Similar to developments in other industries – for example, life sciences data becoming findable, accessible, interoperable and reusable, known as FAIR – financial services firms have started to look more critically at the quality, comparability and accessibility of data. BCBS 239 introduced principles for risk data aggregation, including data governance, architecture and infrastructure.

In terms of accuracy, integrity, completeness, timeliness and adaptability, if banks comply with the spirit of BCBS 239, they should have the data infrastructure in place that gives them a clear grasp of current data inventory and quality, prepares them for evolving regulatory requirements and makes it possible to cut different reports by looking at risk from various angles. Prerequisites for this include a grasp of data inventory, streamlined sourcing and distribution processes, a clear change process for data standards, and a common understanding of terminology and required critical data elements for different reports.

Prerequisites also include audit and bi-temporal capabilities to reproduce and analyse results, and to support root-cause analysis of data issues for internal audit and regulatory reviews. Data lineage and traceability provide the ability to explain valuation and risk metrics. Apart from BCBS 239, banks in the eurozone are subject to the European Central Bank's Targeted Review of Internal Models (Trim) process. Trim goes into further detail regarding data lineage, traceability and specific data-quality metrics, such as the prevalence and suitability of proxies. The Trim timelines fall in between those of BCBS 239 and FRTB.
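
To make 'bi-temporal' concrete, here is a hedged sketch (field and function names are hypothetical) of a record type that distinguishes the date a price applies to from the time it was recorded, so that a past risk run can be reproduced exactly even after later corrections:

```python
# Sketch of a bi-temporal market data record: business_date is the valid
# time (the date the price is for), recorded_at the transaction time
# (when the system learned it). As-of queries ignore later restatements.
from dataclasses import dataclass
from datetime import date, datetime

@dataclass(frozen=True)
class PriceRecord:
    risk_factor: str
    business_date: date      # valid time
    recorded_at: datetime    # transaction time
    value: float

def as_of(records, risk_factor, business_date, knowledge_time):
    """Return the price for a business date as it was known at
    knowledge_time, for reproducing and auditing past results."""
    candidates = [r for r in records
                  if r.risk_factor == risk_factor
                  and r.business_date == business_date
                  and r.recorded_at <= knowledge_time]
    return max(candidates, key=lambda r: r.recorded_at, default=None)
```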

If banks have reacted to BCBS 239 requirements by adding manually created reports, they have done themselves a disservice. However, if they improved their data infrastructure, they will be much better prepared for future risk regulation, including FRTB and the market data requirements of annex D in the most recent consultative paper.

Asset Control believes in flexible but rigorous market data management to take the risk out of risk data. Our solutions are often brought in to replace internally built systems that can no longer cope with volumes and/or audit or lineage requirements. Often they are too costly to maintain and, in many cases, simply do not scale. Asset Control specialises in market data management solutions for the middle office and operations. Our highly scalable solutions deliver quick return on investment, can be on-premises or cloud-deployed, satisfy the BCBS 239 principles and focus on business user enablement, easy access and workflow integration.

Adolfo Montoro: Yes – there are significant links in terms of improving the quality, accuracy and timeliness of the data underpinning the framework's outputs.

Anna Holten Møller: Complying with BCBS 239 will definitely have some positive side effects on FRTB projects. But, since FRTB is so specific on what data is needed for internal models and SAs, a focused FRTB effort is necessary.


What model validation challenges remain for banks?


Anna Holten Møller: Depending on the quality of banks’ current setups, there is a need to align front-office and risk management models. Some alignment may come as a side effect of other FRTB activities, such as aligning data, but ultimately it comes down to the models. For some banks, that means tweaking existing models, but for many it means building new models and even a new infrastructure. At the same time, banks have to rethink their trading desk structures. That the FRTB framework is still subject to change makes it even more challenging.

Adolfo Montoro: The model validation effort will increase. If banks are aiming to push various desks towards the IMA, they will probably need to move to a full valuation framework, which brings the challenge of tightly aligning the valuation models used for pricing and P&L purposes with those used for risk. FRTB will raise the bar in terms of the accuracy of internal models: more risk factors will need to be evaluated, included in the risk models, backtested, and so on. The FRTB PLA test will encourage closer alignment between valuation for P&L purposes and valuation for risk, so there could be some synergies in this activity. The challenge will be to make sure validation becomes an opportunity, not an obstacle.

Shaun Abueita: Effective model risk management requires new or modified models or methodologies to undergo rigorous independent validation, and FRTB is no exception. Robust validation is also a prerequisite for a firm’s regulatory model application and approval process. The finite timeline and resources required to process the volume of new and modified models and methodologies under FRTB will be a real challenge at most institutions, and particularly in the context of already stretched enterprise model risk management functions.


To what extent are banks embracing the opportunity for transformational change?

Shaun Abueita: Given the broad business impact of the proposed FRTB rules, and their potentially material capital impact, a number of banks are using compliance efforts to drive transformational methodology and infrastructure change. This is evidenced by efforts around enterprise-wide methodology and risk factor coverage and alignment, data consolidation and remediation, and the broadening of the use of full revaluation for risk measurement.

In addition, a number of firms have sought to leverage innovative new technologies to optimise their measurement and management of market risk. Efforts have included deployment of machine learning and artificial-intelligence techniques to analyse and cleanse market data at source, cloud technology for rapid incremental compute capacity, and the development of intelligent predictive analytics capabilities to better manage risk and capital outcomes. This includes the prediction and anticipation of PLA and NMRF test outcomes such that remedial action may be taken to avoid test failure, where feasible to do so.
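
As a hedged sketch of the simplest form such predictive analytics might take – function names and thresholds are hypothetical – a desk could track the PLA correlation metric on a rolling window and flag drift towards the amber zone before the formal test is run:

```python
# Sketch of a rolling early-warning monitor on the PLA Spearman metric.
# The 0.80 amber threshold is a placeholder, as calibration was open.
import numpy as np
from scipy import stats

def rolling_spearman(hpl, rtpl, window=250):
    """Rolling-window Spearman correlation of HPL versus RTPL."""
    rhos = []
    for end in range(window, len(hpl) + 1):
        rho, _ = stats.spearmanr(hpl[end - window:end],
                                 rtpl[end - window:end])
        rhos.append(rho)
    return np.array(rhos)

def drifting_towards_amber(rhos, amber=0.80, buffer=0.03):
    """Flag when the latest correlation is within `buffer` of amber."""
    return len(rhos) > 0 and rhos[-1] < amber + buffer
```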

Adolfo Montoro: A lot of banks will use this as a catalyst, but different institutions are at different stages. FRTB is not just a change in how market risk capital is calculated; it is a catalyst for improving risk models – for example, the valuation functions used within them or their data inputs – with tighter alignment with the front office in terms of architecture, business processes, streamlined process controls, and so on. It is a catalyst for aligning the market data strategy and related controls front to back. For example, the end-of-day data used for pricing a product or calculating the P&L of a desk should be reused to build the historical time series entering the risk models. The level of accuracy and transparency required means you don't want people doing a job machines could be doing, and the feedback loop to the developer or the bank will need to be quicker than it is today in explaining why the PLA model isn't in line with expectations.

Anna Holten Møller: By now, everyone understands that smaller adjustments and fixes will not do the job and that a transformational change is needed. The mentality seems to be that something good may as well be made of this; perhaps even turning it into a competitive advantage.

