Modelling interrelated shocks will improve stress tests – research

Call for regulators to ditch standard scenarios for more sensitive approach

Regulators should scrap standard stress scenarios in favour of a set of tests that measure how economic shocks interact – an approach that would also help them model the likely effect of their own intervention, two researchers say. Although the approach is computationally intensive, the academics say rapid advances in technology should enable this more comprehensive method to model real-world stress events.

In a paper due to be published in the Journal of Risk later this year, Dror Parnes of Texas A&M University and Michael Jacobs of PNC Financial Services argue that regulators should use historical data to derive the sensitivities of macroeconomic indicators to each other in order to produce more realistic parameters.
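
The article does not spell out how those sensitivities would be estimated, but the idea can be illustrated with a minimal sketch: derive pairwise sensitivities by relating quarterly changes in each indicator to changes in the others. The data layout and the simple OLS-slope estimator below are assumptions for illustration, not the authors' specification.

```python
# Illustrative sketch only: the article does not specify how the dependencies
# are estimated, so this shows one plausible approach -- pairwise OLS slopes
# between quarter-on-quarter changes of macroeconomic indicators.
import numpy as np

def dependency_matrix(history: np.ndarray) -> np.ndarray:
    """history: T x N array of indicator levels (T quarters, N indicators).
    Returns an N x N matrix D where D[i, j] is the estimated sensitivity of
    indicator i to a unit move in indicator j (diagonal left at zero)."""
    changes = np.diff(history, axis=0)          # quarter-on-quarter changes
    n = changes.shape[1]
    D = np.zeros((n, n))
    for j in range(n):
        x = changes[:, j]
        denom = np.dot(x, x)
        if denom == 0:
            continue                            # indicator j never moved
        for i in range(n):
            if i != j:
                D[i, j] = np.dot(changes[:, i], x) / denom   # OLS slope of i on j
    return D
```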

The research raises doubts over the existing calibration of the US Federal Reserve’s stress test regime, suggesting banks may be inadequately capitalised to withstand market shocks. “Within these supervisory bank stress tests…the degrees of economic shocks are arbitrarily selected,” the authors note.

At present the Fed’s supervisory stress tests consist of three scenarios – baseline, adverse, and severely adverse – each described in terms of changes in 28 macroeconomic variables, such as GDP growth, unemployment and interest rates, over a nine-quarter period. But Parnes and Jacobs believe these scenarios underestimate the degree to which crises tend to spread over time, with different variables affecting each other.

In a hypothetical example, Parnes and Jacobs use two macroeconomic metrics with asymmetrical dependencies of 0.4 (A to B) and 0.7 (B to A). A shock that decreases metric A would therefore lead to a shock to metric B that was 40% as large; it would then reverberate back on to metric A (via the 0.7 B to A dependency). The end result, they calculate, would be a 47.2% capital loss – from an initial shock of just 20%.
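
The article reports only the headline figures, so the exact mechanics are an assumption, but the numbers are consistent with letting the two shocks feed back on each other until they settle. The sketch below iterates that loop with both metrics absorbing the initial 20% shock; under those assumptions metric A ends roughly 47.2% down.

```python
# Hedged reconstruction of the two-metric example: the article gives only the
# headline numbers, so this setup is an assumption. Both metrics take the
# initial 20% hit, and the cross-dependencies (0.4 from A to B, 0.7 from B to A)
# are applied repeatedly until the feedback loop converges.

def propagate(shock_a, shock_b, dep_a_to_b, dep_b_to_a, tol=1e-9):
    """Iterate the mutual-dependency loop until the shocks stop growing."""
    a, b = shock_a, shock_b
    while True:
        a_next = shock_a + dep_b_to_a * b   # A feels its own shock plus spillover from B
        b_next = shock_b + dep_a_to_b * a   # B feels its own shock plus spillover from A
        if abs(a_next - a) < tol and abs(b_next - b) < tol:
            return a_next, b_next
        a, b = a_next, b_next

a_total, b_total = propagate(0.20, 0.20, dep_a_to_b=0.4, dep_b_to_a=0.7)
print(f"metric A total shock: {a_total:.1%}")   # ~47.2% under these assumptions
print(f"metric B total shock: {b_total:.1%}")   # ~38.9%
```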

The same approach can also be used for sensitivity analysis, to discover which shocks would reduce bank capital below acceptable levels once knock-on effects were taken into account. And regulators could use the same technique to model the effects of their regulatory interventions, for example under the annual CCAR supervisory regime for the largest US banks.
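
A rough sketch of how such a screen might work is shown below; the authors' implementation is not public, and the dependency matrix, capital sensitivities and minimum capital ratio used here are all hypothetical.

```python
# Illustrative sketch of the sensitivity-analysis idea (not the authors' code):
# amplify candidate shocks through a dependency matrix D, map the amplified
# shocks to a capital impact, and flag the scenarios that breach a minimum ratio.
import numpy as np

def amplified(shock: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Total shock once knock-on effects settle: solve (I - D) x = shock,
    valid when the spectral radius of D is below one."""
    n = len(shock)
    return np.linalg.solve(np.eye(n) - D, shock)

D = np.array([[0.0, 0.7],
              [0.4, 0.0]])                  # hypothetical dependency matrix
capital_sensitivity = np.array([0.3, 0.2])  # capital lost per unit of each shock (assumed)
starting_ratio, minimum_ratio = 0.12, 0.08  # hypothetical capital ratios

for name, shock in {"mild": np.array([0.05, 0.0]),
                    "severe": np.array([0.20, 0.20])}.items():
    loss = capital_sensitivity @ amplified(shock, D)
    breach = starting_ratio - loss < minimum_ratio
    print(f"{name}: post-shock ratio {starting_ratio - loss:.2%}, breach={breach}")
```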

The models used by regulators such as the Fed are not publicly available, the authors point out, but Jacobs adds: “The consensus is that the quantitative components of the Fed models are rather primitive as well as highly inaccurate and prone to severe over-fitting…and it is unlikely that they mathematically model interventions, although it is possible that they bake that in subjectively in their process for overlaying their models.”

In practice, the authors say, their method would require considerable effort: it should use a matrix of the dependencies between all 28 of the Fed’s macroeconomic indicators – which would not be straightforward to produce. Though historical data on the indicators is available, “collecting reliable figures on their dependency structure is tricky”, Parnes says, adding that it would require significant computer power.

But the task could be made easier by using mathematical techniques to identify subsets of the matrix that have minimal interaction with the rest. Parnes and Jacobs suggest borrowing techniques the manufacturing industry uses to simplify the design of complex products such as gas turbines. By drawing up ball-and-stick “molecular” diagrams or matrices of dependencies between components, it becomes easier to see which components are most closely related – where a change in one will mean changes in others – and which can be dealt with in isolation.
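
The partitioning idea can be sketched as follows: ignore dependencies below some threshold and group the indicators that still interact, so each group's feedback loop can be analysed on its own. The matrix, threshold and grouping method here are illustrative assumptions, not the authors' procedure.

```python
# Rough sketch of the partitioning idea: drop dependencies below a threshold,
# then find groups of indicators that no longer interact with the rest, so each
# group can be processed in isolation. Matrix and threshold are hypothetical.
import numpy as np
from scipy.sparse.csgraph import connected_components

def weakly_coupled_blocks(D: np.ndarray, threshold: float = 0.05):
    """Treat |dependency| >= threshold as an edge and return indicator groups."""
    adjacency = (np.abs(D) >= threshold).astype(int)
    n_groups, labels = connected_components(adjacency, directed=False)
    return [np.flatnonzero(labels == g) for g in range(n_groups)]

# Toy 5-indicator matrix: indicators 0-2 interact strongly, 3-4 form their own block.
D = np.zeros((5, 5))
D[0, 1], D[1, 0], D[1, 2] = 0.4, 0.7, 0.3
D[3, 4], D[4, 3] = 0.5, 0.2
print(weakly_coupled_blocks(D))   # e.g. [array([0, 1, 2]), array([3, 4])]
```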

Jacobs says: “These [techniques] sometimes bump up against issues with interpretability. That said, the technology is out there, and now with the advent of alternative data sources and advances in computing power, there are opportunities to leverage high-dimensional data to calibrate models for stress testing.”

Improvements in available processing power and AI technology would also be vital if regulators were to use the matrix method to increase the number of stress scenarios used. “Even today stress tests run for some significant time on the Federal Reserve Bank machines,” Parnes admits, and the burden would be greater if stress tests involved 20 or 30 scenarios rather than three. But, Parnes adds, “overall we strongly believe that as long as a practical model can be formalised and coded – and likely tuned over time – in our age computers will take care of the rest”.

Editing by Alex Krohn
