# We need a different approach to supervisory stress-testing

## Confusing processes turn tests into template-filling exercises, says Garp’s Jo Paisley

If you Google ‘stress test’, you’ll most likely see a person on a treadmill, hooked up to electrodes, with an ECG monitoring his or her heart. A stress test for a financial institution is similar: it puts the firm under strain and tests whether it is healthy enough to cope.

From a risk management point of view, stress-testing is a good thing. It’s forward-looking and provides insights into the risk-taking of a bank in a way that is impossible using static financial accounting data. Moreover, when done at an enterprise level, it brings together different disciplines within a bank – risk and finance, for starters – to look at their business in a holistic way.

But banks are subject to a glut of stress tests. These exercises ask for a variety of information in a variety of formats at a variety of times. A lack of co-ordination between regulators is heaping greater demands on banks, and producing results that cannot be easily compared across jurisdictions.

To remedy this, we would encourage supervisors to create an agenda for developing more harmonisation across stress tests. A common set of standards would benefit banks and regulators alike, and help create a more resilient financial system.

Often, the conversations that stress tests yield are as useful as the numbers themselves. A bank’s board should never, in fact, sign off on the corporate plan until it has seen how that plan behaves under stress.

But banks haven’t always proved very good at this. In the run-up to the financial crisis, the Basel Committee on Banking Supervision found that banks’ stress-testing practices were deficient, being poorly executed and neither sufficiently comprehensive nor severe. Consequently, the BCBS created a set of principles to guide banks and supervisors on how to organise and execute stress-testing.

These stress-testing principles, recently updated, remain as pertinent now as they were in 2009.

It’s hard to argue against them – indeed, they are excellent.

According to the BCBS, stress tests must: have clear objectives and effective internal governance; be used by banks as a risk management tool to inform business decisions; cover material and relevant risks; and be based on scenarios that are sufficiently severe.


The principles also note that the stress-testing process should be built on a foundation of sufficiently granular data and robust IT, and that the models/methodologies used in stress-testing must be fit for purpose. Furthermore, stress-testing practices and findings should be communicated between authorities/supervisors, both within and across jurisdictions.

So, how close are we to achieving these objectives? Certainly, stress-testing has come a long way since that first Basel report. Banks have spent a lot of time and money improving all aspects of their stress-testing, including modelling, data, governance and execution.

But what is most striking has been the intensification and proliferation of supervisory stress-testing both across and even within jurisdictions, each with its own approach and operating details. The most resource-intensive of these are the so-called concurrent capital exercises – such as those run by the US Federal Reserve, the European Banking Authority and the Bank of England – where multiple banks must run the same scenario at the same time.

Global banks operating in multiple jurisdictions have felt the brunt of this, having to meet multiple unco-ordinated demands across different regulators. Even beyond this inefficiency, there are some undesirable consequences for both supervision and risk management.

### Negative ramifications

The fact that supervisory stress tests are on completely different bases makes them extremely hard to compare and, more importantly, undermines the supervisors’ ability to communicate with each other about the risks that they see in their jurisdictions. The differences are too numerous to cover in detail, but we can offer a few examples.

Stress-testing scenarios cover different horizons, from nine quarters (the US Comprehensive Capital Analysis and Review, or CCAR) to three years (EBA) to five years (BoE). Each test, moreover, rests on different assumptions about how firms’ balance sheets evolve in response to stress: prescribed lending paths (BoE), a static balance sheet (EBA) and a constant balance sheet or no prescription (CCAR, depending on whether the stress test is company-run or modelled by the Fed) are among the various approaches.

The EBA takes things a step further by building in many constraints (such as various caps and floors) that do not feature in other tests. What’s more, the treatment of management actions and hurdle rates differs across the tests. Indeed, it is probably easier to say how the tests differ than how they are similar.

The final Basel principle encourages the sharing of results and insights across supervisors, but this inability to ‘join the dots’ between stress tests is a missed opportunity; in an increasingly dynamic and interconnected world, supervisors need to be able to understand the risks across institutions and at a systemic level. Moreover, analysts trying to understand how these tests relate to each other are more likely to misinterpret them, which hardly helps market discipline.

As each stress test has its own instructions/templates/methodology, banks are in danger of focusing more on template filling than on the risk insights from the stress tests. Accordingly, stress-testing becomes a compliance exercise, rather than a risk management exercise.

The scale of documentation required, and the granularity of projections asked for, are often very high. What’s more, the resources that are spent on these template-filling exercises are then not available for firms’ own internal risk management stress-testing; the regulators, in effect, ‘crowd out’ risk management, with firms then using the regulators’ scenarios rather than designing their own.

Overall, banks and supervisors face higher costs and lower-quality outputs than they would with better co-ordination and harmonisation of approach between regulators.

### Starting a dialogue

The purpose of stress-testing, of course, is to see how resilient firms are in the face of stress. We must not forget, however, that any exercise that involves looking into the future is inherently uncertain.

Take, for example, the significant range of uncertainty (even at a two-year horizon) in the Bank of England’s inflation projection ‘fan chart’. That chart, keep in mind, is for a projection of something that we expect to happen.

Forecasting the impact of extreme events is even more uncertain, as there may be little history to act as a guide. Indeed, the fan chart on a stress test would be enormous.

So, we think it’s time to inject a bit more common sense and order into the world of stress-testing.

Recently, the Garp Risk Institute published a code of practice for supervisory stress-testing, which provides a framework to promote the co-ordination and harmonisation of supervisory stress tests.

The goal is to start a dialogue between risk practitioners and regulators. But how can we reach a more unified approach? I have a few ideas.

For starters, supervisors could publish the schedule for the stress tests that they intend to run and discuss this at the college of supervisors. This would help banks facing multiple supervisory demands to plan their resources accordingly. Of course, it’s up to regulators themselves to plot the best course with respect to harmonising their regimes; Basel could play a role in co-ordinating a global calendar.

Supervisors should also be wary of requiring highly granular projections, which arguably raise the risk of banks being ‘precisely wrong’ rather than ‘roughly right’. The granularity required for regulatory stress tests should depend on the materiality of the risks, the time horizon of the projections and the costs and benefits involved.

Almost as important as what steps firms should take is what they should avoid. To a certain extent, stress-testing is educated guesswork, so banks and regulators should not take comfort from asking for an inordinate amount of documentation. Since plausibility and reasonableness are probably the most one can hope for, it also doesn’t make sense to talk about the ‘accuracy’ of projections.

This is not about weakening standards. Rather, it is about being proportionate and coherent, organising stress-testing in a way that adds meaningfully to both supervision and risk management.

Jo Paisley is the co-president of the Garp Risk Institute, part of the Global Association of Risk Professionals. She worked as global head of stress-testing at HSBC from 2015 to 2017, and has served as a stress-testing adviser at two other UK banks. As the director of the supervisory risk specialists division at the Prudential Regulation Authority, Paisley was also closely involved in the design and execution of the UK’s first concurrent stress test in 2014.

Editing by Alex Krohn