Covid chaos spurs on search for model risk aggregation

Many models failed in the pandemic, but analysing them in clusters is easier than taking a whole-bank view

  • Many bank models failed due to the unprecedented economic events during the Covid pandemic, leaving firms struggling to separate those that are still sound from those with fundamental flaws.
  • The experience has intensified efforts to analyse and validate models in bulk, spurred on by regulator expectations that board risk committees should have an aggregate view of model risk.
  • However, second-line validators say it is virtually impossible to understand the inner workings of thousands of models, let alone arrive at meaningful metrics for their aggregated risks.
  • Banks are instead focused on assessing clusters of models that are used for similar functions, such as front-office pricing or supervisory stress-testing.
  • The potential for failures in one model to spread through the modelling ecosystem is another important focus for banks and regulators.
  • Risk modellers are divided on whether a separate validation function can adequately understand models, or if the original developers need to be involved in quantifying model risk.

The Covid pandemic tested bank risk modelling beyond breaking point. The sudden and near-total economic shutdown in 2020 was accompanied by unprecedented levels of government support that cushioned the impact, and then followed by the equally sudden recovery as lockdowns were lifted in 2021.

All those events fell outside the boundaries of most model assumptions, even accounting for realistic tail risks. But banks don’t want to throw out all the models just because they could not keep up with the pace and severity of the pandemic.

“All of a sudden, you had all these models that were not working because of the economic environment we’re in. And that was not foreseeable – it was no wrongdoing of the modellers,” says Evan Sekeris, head of model validation at PNC Financial Services Group.

“We need to start capturing the reason a model falls in this in-between category – is it because of problems with the model itself or is it because of environmental problems?”

The whole experience points to the need for smarter and more streamlined approaches to model validation across potentially thousands of models in each bank, to identify in bulk those that are working, and to fix or discard those that are not.

Simple measures – such as the percentages of models approved, approved with caution, or rejected – proved unreliable during the Covid crisis, when the number of approved models dropped precipitously, while the number approved with caution rose sharply.

Regulators are already on the case. In a June 2022 consultation paper, the UK’s Prudential Regulation Authority stipulated that firms need to establish a model inventory and a risk-based tiering approach to categorise models. The paper specifically instructed board risk committees to take a view on the risks around models not just individually, but also in aggregate.

This echoes the US approach. The US Office of the Comptroller of the Currency’s model risk handbook specifies that model risk be reported to the board for individual models and in aggregate, as does the Federal Reserve’s supervisory letter, SR 11-7.

But more than a decade after SR 11-7 was first published, aggregate model risk assessment is a summit that banks are still struggling to scale.


“Model risk does not yield itself well to a traditional bottom-up risk aggregation approach, because we have so many different model categories that cannot possibly be aggregated into a single measure in a way that can be easily conveyed to senior management, boards of directors and regulators, while at the same time be transparent and actionable,” said Olga Collins, global head of model risk infrastructure and reporting at Morgan Stanley, during a panel discussion at the Model Risk Managers’ International Association in June.

Practitioners and academics have tried to come up with template approaches – for example, a paper published by two academics and a risk modeller at Santander in 2017. This advised the industry to focus on “the data used for building the model, its mathematical foundations, the IT infrastructure, overall performance and (most importantly) usage”. But other practitioners say the implementation of such an approach is not straightforward.

“The paper presents a framework for assessing model risk, but may not be applicable to a real situation,” says a senior risk methodology manager at a European bank.

“Some years ago, I tried to develop something myself, and didn’t get anywhere with it because anything you put into an automated way of assessing model risk is subjective.”

However, a couple of themes are emerging around overarching approaches to model risk. First, the focus on model usage is helpful. Even if it is not realistic to assess all models across the bank in aggregate, it is more practical to analyse groups of models that are used for similar purposes. Second, both banks and regulators are increasingly focused on problems that could cause cross-contamination between models. And there is a potential source of controversy: who is best placed to validate models accurately and sensibly?

Keep it in clusters

Most models are used for a specific purpose – front-office pricing, regulatory capital, Current Expected Credit Loss (CECL) accounting, or stress tests such as the annual US Comprehensive Capital Analysis and Review (CCAR). Banks are trying to segment models by category and use type, and tailor risk metrics accordingly.

For example, validators could look for any anomalies in the loan-loss projections generated by a group of stress-testing models for a broad segment such as commercial and industrial loans or mortgages.

“You could come up with a measure of model risk such as a stressed loss for CECL or CCAR models,” says Michael Jacobs, head of wholesale first-line model validation at PNC Financial Services Group.

“Another way is to shock inputs, or perturb assumptions, and measure how much a model output is off according to some loss measure, and then you could roll that up within or across different portfolios.”
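As a rough illustration of the input-shock idea Jacobs describes, the Python sketch below perturbs the inputs of a toy loan-loss model and averages the relative deviation of its output. The `perturbation_risk_score` function and the loss model itself are hypothetical examples for this article, not any bank’s actual methodology.

```python
import numpy as np

def perturbation_risk_score(model, baseline_inputs, shock_pct=0.05, n_trials=200, seed=0):
    """Illustrative metric: average relative deviation of the model output
    when every input is randomly shocked by up to +/- shock_pct."""
    rng = np.random.default_rng(seed)
    base = model(baseline_inputs)
    deviations = []
    for _ in range(n_trials):
        shocks = 1 + rng.uniform(-shock_pct, shock_pct, size=baseline_inputs.shape)
        deviations.append(abs(model(baseline_inputs * shocks) - base) / abs(base))
    return float(np.mean(deviations))

# Hypothetical loan-loss model: loss rate rises with unemployment, falls with GDP growth
loss_model = lambda x: 0.02 + 0.004 * x[0] - 0.003 * x[1]

# Inputs: [unemployment %, GDP growth %]
score = perturbation_risk_score(loss_model, np.array([6.0, 2.0]))
print(f"perturbation risk score: {score:.4f}")
```

A score like this could then be rolled up within or across portfolios, as Jacobs suggests, to compare sensitivity between clusters of models.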

Nomura employs a hybrid approach, creating quantitative measures by business line and by region. In addition, it runs less granular metrics, such as the number of model breaches, and reports those metrics to senior management.


“[The cluster] shouldn’t be too large, but is not one or two [models]. Depending on business activity, those models could be grouped across different model classes and risks identified for that model type,” says Slava Obraztsov, global head of model risk at Nomura.

“It’s quite a challenging exercise, but this is the future of model risk management.”

The next challenge is figuring out what to do with the results of this analysis. Many of the results reported to senior management are soft metrics, such as the number of high-risk models, or the percentage of models validated. However, the actual management of model risk often gets overlooked, compared with the detailed metrics and business-line limits applied to other risk types such as market or credit risk. To apply a similar approach to model risk management would require harder quantitative metrics.

“Model risk management is not just validation, it’s the whole governance around development, validating, approval, performance monitoring. This is what we do at Nomura for some model types, enough so that when those metrics are in place – for example, for pricing valuation models – metrics could be done by model type and segregated by asset class type and extended across all valuation models,” says Obraztsov.

Stop the infection spreading

Banks tend to rank models by risk factors such as their materiality to financial results, and whether they are inputs to other models or have the potential to infect the wider model ecosystem if they fail. The second point is drawing increasing regulatory focus as well.

The OCC handbook states: “Model risk can increase from interactions and dependencies among models, such as reliance on common assumptions, inputs, data, or methodologies.”

Similarly, the PRA consultation states: “Overall (aggregate) model risk increases with larger numbers of inter-related models and interconnected data structures and data sources.”

If one model is fed by others, it creates a conditional probability chain: what is the probability that the main model fails if one or more sub-models fail? And what is the probability of the eighth, ninth and 10th models in a chain failing, even if earlier models in the chain appear to be working?
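A minimal sketch shows how quickly such uncertainty compounds. It assumes, purely for illustration, that each model in the chain has a standalone failure probability and a fixed chance of inheriting a failure from its immediate upstream feeder:

```python
def chain_failure_probability(p_own, p_contagion):
    """Failure probability of each model in a feeder chain, where model i
    fails on its own with probability p_own[i], or inherits a failure from
    its upstream feeder with probability p_contagion[i]. Illustrative only:
    assumes failures propagate solely along the chain."""
    p_fail = p_own[0]
    probs = [p_fail]
    for own, contagion in zip(p_own[1:], p_contagion[1:]):
        # fails on its own, or the upstream feeder failed and the failure spread
        p_fail = own + (1 - own) * p_fail * contagion
        probs.append(p_fail)
    return probs

# Hypothetical 10-model chain: 2% standalone failure rate, 50% contagion chance
probs = chain_failure_probability([0.02] * 10, [0.5] * 10)
print(f"failure probability of the 10th model: {probs[-1]:.3f}")  # ~0.039
```

Even in this toy setup, the 10th model’s failure probability settles at roughly double its standalone rate; real dependency graphs, with shared data feeds and common assumptions, are far harder to parameterise.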


Tackling these risks means analysing dependencies and associating probabilities of failure with each branch in the chain.

“Even if you map out the structure, assessing probabilities in each link is incredibly difficult. These things ought to be done by long-term trials, but mostly you can’t do that,” says the senior risk methodology manager at the European bank.

Depending on how banks define models, they could end up with thousands of models spread across multiple business functions. Some banks have elected not to develop hierarchical schemes, preferring instead to tag models individually by risk type, inputs and whether the model has been approved.
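Such a flat tagging scheme can be sketched in a few lines. The `ModelRecord` structure below is a hypothetical example, but it shows how per-model tags for risk type, inputs and approval status can still surface dependency problems without a formal hierarchy:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a flat (non-hierarchical) model inventory."""
    model_id: str
    risk_type: str            # e.g. "credit", "market", "operational"
    inputs: list = field(default_factory=list)  # upstream models or data feeds
    approved: bool = False
    tier: int = 3             # risk-based tier, 1 = most material

inventory = [
    ModelRecord("CECL-CI-01", "credit", inputs=["MACRO-SCEN-02"], approved=True, tier=1),
    ModelRecord("MACRO-SCEN-02", "market", approved=False, tier=2),
]

# Flag approved models that depend on an unapproved feeder
unapproved = {m.model_id for m in inventory if not m.approved}
flagged = [m.model_id for m in inventory if m.approved and unapproved & set(m.inputs)]
print(flagged)  # ['CECL-CI-01']
```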


As a result, what is considered a single model with five regressions at one bank could be classified as five separate models for each regression at another bank.

“I know of a bank that has a different model for each scenario, then they have another model which is scenario selection in combination with the regressions. At one bank, it would be one model, and at another bank, it could be 200 models. How you define hierarchies depends on how the banks define the models,” says Sekeris.

Who’s in charge?

With so many models to handle, it is exceptionally difficult to build a validation function that has sufficient expertise to understand every model effectively.

Boards and regulators are interested as much in qualitative measures of model risk as in hard numbers. They want to see a broader portfolio view of model owners attesting to the accuracy of models, and to know that the second line is going through a discovery process that holds the first line accountable for performance monitoring.

Nomura’s Obraztsov says that, in most banks, model risk is initially quantified by model developers and then subject to challenge by validators. By contrast, Nomura performs the bulk of model risk analysis in an independent model validation function rather than in model development. However, he believes the exact team responsible for validation matters less than ensuring an overall picture of risk is in place when a model is developed, approved and in production.

“All firms are running model performance monitoring on the suitability of models under changed market conditions and changed portfolio composition,” says Obraztsov.

“That performance-monitoring process might indicate some further model limitations which have to be considered. It’s a dynamic process which involves development teams around validation, periodic review, and performance monitoring.”

Other bankers have a stronger view on who should be in charge of model validation. The senior risk methodology manager at the European bank argues that only developers have the expertise to do the job properly. If someone who knows exactly how the model is supposed to work still cannot validate the outputs, that is the clearest possible signal that the model is flawed.

“I’ve seen validators validate models which are absolute nonsense, and I’ve seen them question models which are perfectly sound. The reason is that they don’t understand what’s going on,” says the senior risk methodology manager.

Editing by Philip Alexander
