Banks should account for – and capitalise against – the risk that the models they use to forecast loan losses are wrong, says forthcoming research.
In Quantification of model risk in stress testing and scenario analysis, the SAS Institute’s Jimmy Skoglund proposes a method of quantifying model risk using stressed transition probability matrices for credit loss impairment forecasting, in a bid to put a dollar value on the amount banks should set aside against the risk of their models offering inaccurate forecasts.
Two major strands of post-crisis reform – the shift to expected credit loss accounting, and the introduction of stress tests that rely on macroeconomic loss forecasting – have pushed banks to develop more accurate credit loss forecasting models. Most obviously, the new international accounting standard, IFRS 9, and its sister US standard, current expected credit loss, ask banks to provision against the risk of loans turning sour based on forecast future losses, rather than waiting until losses are incurred.
Regulatory stress-testing regimes, meanwhile – most obviously the US Federal Reserve’s Comprehensive Capital Analysis and Review programme, as well as Europe-based ones run by the European Banking Authority – require banks to measure their capital adequacy against a series of adverse scenarios over a given period of time.
That makes the assumptions that underpin these models crucial from a capital standpoint – but it is rare that banks actively capitalise against the risk of their outputs being wrong, says Skoglund.
“Current analysis of model risk does not translate into actual numbers that allow explicit quantification of a model risk buffer,” argues Skoglund. “Even under best-estimate impairment forecasts, stressed macroeconomic scenarios are frequently being included to mitigate overly positive scenario selection bias in impairments estimation.”
Banks have complained that regulatory guidance on expected credit loss modelling under the new accounting regimes has been vague. IFRS 9, for instance, asks banks to base their estimates of a loan’s lifetime expected credit losses on ‘reasonable and supportable’ information – a requirement that banks argue lacks a clear definition. Others have pointed out this leeway could encourage banks to pick and choose the forecasts they feed into their models – for instance, by shortening time horizons to avoid including data where downturns are expected.
With a focus on loss underestimation, Skoglund’s research seeks to account for the model risk he argues is inherent in credit loss projections, demonstrating how portfolio loss forecasts can change when this is considered. Skoglund posits that, since financial crises are – statistically speaking – rare events, models developed for stressed-loss forecasting are prone to significant model risk.
Proposing a ‘robustness approach’, the paper argues that model risk analysis can be broadened beyond parametric uncertainty: by applying relative entropy techniques, loss forecasts are “tilted” to account for model risk.
Skoglund uses proportional hazard models to generate macroeconomic and idiosyncratic, loan path-dependent transition probabilities between delinquency states. Using relative entropy techniques, the “distance” between a set of alternative models and a base model is measured. Model risk is then given as the worst case, for a given payoff, over all models within a given distance of the base model. This allows robustness to model mis-specification to be quantified numerically using exponential tilting towards an alternative probability law: rather than specifying explicit alternative models, robustness is sought against all alternatives within a certain distance.
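The worst-case idea can be illustrated with a minimal Monte Carlo sketch – not Skoglund’s actual implementation. The simulated loss distribution, the entropy budget `eta` and the bisection search below are all illustrative assumptions; the only ingredient taken from the paper’s description is the exponential tilting of a base loss distribution, constrained by relative entropy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated portfolio losses from a base credit loss model
losses = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

def tilted_stats(theta, losses):
    """Exponentially tilt the empirical loss distribution by exp(theta * L)."""
    shifted = np.exp(theta * (losses - losses.max()))  # shift for numerical stability
    w = shifted / shifted.sum()
    worst_case = np.dot(w, losses)  # tilted (worst-case) expected loss
    # Relative entropy of the tilted measure vs the empirical measure:
    # R(theta) = theta * E_theta[L] - log E[exp(theta * L)]
    log_mgf = theta * losses.max() + np.log(shifted.mean())
    rel_entropy = theta * worst_case - log_mgf
    return worst_case, rel_entropy

def worst_case_loss(losses, eta, theta_hi=50.0, tol=1e-8):
    """Largest tilted expected loss among measures within relative entropy eta."""
    lo, hi = 0.0, theta_hi  # rel_entropy is increasing in theta, so bisect
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        _, r = tilted_stats(mid, losses)
        if r < eta:
            lo = mid
        else:
            hi = mid
    return tilted_stats(lo, losses)[0]

base = losses.mean()
robust = worst_case_loss(losses, eta=0.1)  # eta is an assumed entropy budget
print(base, robust)  # the robust estimate sits above the plain average
```

A larger entropy budget admits alternative models further from the base model, and so produces a more conservative (higher) worst-case loss.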
Skoglund says: “The worst-case obtained represents in general an upward scaling of the term-structure consistent with the exponential tilting adjustment. The relative entropy approach to model risk we use has its foundation in economics with robust forecasting analysis, and has recently started to be applied in risk management.”
Skoglund’s robust estimator of portfolio loss is tilted away from the plain average loss using a loss ratio, with a parameter controlling the degree of deviation from that average.
Because a model generates a number of outcomes, posits Skoglund, it is prudent to consider not only average loss predictions, but also some measure of the worst outcome. Relative entropy provides such a prudent estimate of expected loss using the exponential tilting of the loss estimate.
Because the loss estimates tilted away from the original model are measured in currency amounts, they can be used to quantify model risk buffers under the relative entropy framework.
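In currency terms, the buffer is simply the gap between the tilted and plain average losses. The short sketch below makes that concrete; the loss distribution and the fixed tilt parameter `theta` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulated credit losses in currency units
losses = rng.gamma(shape=2.0, scale=1_000.0, size=50_000)

theta = 1e-4  # assumed tilt parameter: degree of deviation from the plain average
w = np.exp(theta * (losses - losses.max()))  # shift for numerical stability
w /= w.sum()

plain_average = losses.mean()
tilted_loss = np.dot(w, losses)                    # prudent, tilted loss estimate
model_risk_buffer = tilted_loss - plain_average    # currency amount to set aside
print(round(plain_average), round(tilted_loss), round(model_risk_buffer))
```

Because both estimates are in currency units, the difference can be read directly as a candidate model risk buffer.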
Skoglund emphasises that his robustness approach is intended to complement, rather than replace, traditional model risk management practices – the use of sensitivities-based analyses or challenger models, for example – and does not negate the need for qualitative model risk management approaches, such as the use of conservative assumptions.
Skoglund’s paper will be published in the March 2019 issue of the Journal of Risk Model Validation.
Editing by Tom Osborn