Machine learning and AI in model risk management: a quant perspective

Statistical risk models face issues of validity as unprecedented events and social phenomena occur. However, artificial intelligence (AI) and machine learning can assist models in maximising accuracy. By Tiziano Bellini, head of risk integration competence line, international markets, at Prometeia

Model risk is usually interpreted as the potential loss an institution may incur as a consequence of decisions based on the output of models. Errors might occur across the entire model lifecycle, and can arise from two main, broad sources: misspecification and calibration. Financial institutions rely on a growing number of models to inform their day-to-day decision processes. These models usually attempt to mimic economic processes and are deeply affected by social phenomena, so sudden and profound macro- and microeconomic changes undermine their effectiveness. This prompts two questions: can a model perform effectively outside the context in which it was trained? And do machine learning and AI help quantify model risk?

The unprecedented Covid-19 pandemic and energy crisis have challenged model developers and validators, as traditional statistical models fail to capture events not experienced in the past. Furthermore, climate change and environmental stress-testing put the credibility of certain financial models at risk.

In all of these cases, data availability plays a key role. How plausible are models that rely on scarce historical data? How can statistical models help when phenomena are unprecedented? Students of econometrics agree that “the past is not the future”. Nevertheless, it is common practice to look back on history to grasp what will happen next. One may be tempted to discard statistical models when completely new economic conditions kick in. Alternatively, one may enhance models’ usage by improving their economic foundations. The integration of statistical modelling and economic theory paves the way for a new generation of model management, in which it is paramount to assess how a model behaves beyond the ‘environment’ in which it was trained.

Tiziano Bellini, Prometeia

One may think of relying only on historical observations to assess a model’s uncertainty, for example via bootstrapping techniques. A major advantage of this approach to estimating model risk is that it relies on inputs and outcomes that are already available. Conversely, its main disadvantage is that those inputs and outcomes are limited to past events. When the goal is to assess uncertainty beyond the conditions in which a model was developed, the crucial step is to test the model under new circumstances. How can such an issue be tackled?
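To make the backward-looking approach concrete, the sketch below bootstraps the output distribution of a toy default-rate model driven by GDP growth. The data, the linear specification and the stress point are hypothetical assumptions introduced purely for illustration: the spread of the bootstrapped predictions is one proxy for model uncertainty, but it remains bounded by what history already contains.

```python
# Minimal sketch: bootstrapping a model's output distribution from historical
# data only. The linear default-rate model and the synthetic data are purely
# illustrative and not any institution's actual methodology.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical history: GDP growth (driver) and observed default rates (outcome)
gdp_growth = rng.normal(0.02, 0.015, size=120)
default_rate = 0.03 - 0.8 * gdp_growth + rng.normal(0, 0.004, size=120)

def fit_and_predict(x, y, x_new):
    """Fit a simple linear model y = a + b*x and predict at x_new."""
    b, a = np.polyfit(x, y, deg=1)
    return a + b * x_new

# Bootstrap: resample the historical sample, refit, collect predictions
n_boot = 5_000
x_stress = -0.05                       # an illustrative severe GDP contraction
preds = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, len(gdp_growth), size=len(gdp_growth))
    preds[i] = fit_and_predict(gdp_growth[idx], default_rate[idx], x_stress)

# The dispersion of bootstrapped predictions quantifies (backward-looking)
# model uncertainty at the stress point.
print(f"Median predicted default rate: {np.median(preds):.2%}")
print(f"95% bootstrap interval: [{np.percentile(preds, 2.5):.2%}, "
      f"{np.percentile(preds, 97.5):.2%}]")
```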

A method is therefore needed for consistently simulating inputs as well as model outputs. A few alternatives are available in the statistical literature; techniques such as generative adversarial networks (GANs) and similar generative approaches arguably represent the state of the art. Prometeia has been implementing solutions for deriving error distributions based on various machine learning and AI approaches. On this basis, Prometeia has designed model risk appetite frameworks and monitoring processes that identify the extreme events causing model risk tolerance breaches. Such approaches have also been applied to hypothetical scenarios, such as climate risk, where historical data is not robust enough to support effective projections.

The possibility of performing analyses in simulated settings, where historical structures (correlations and interdependencies, among others) are either preserved or deliberately tested, constitutes a new paradigm for model developers and validators.
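As a stylised illustration of this paradigm, the sketch below draws new macroeconomic scenarios under both the historical correlation structure and a deliberately stressed one, pushes them through a toy loss model and measures how often an illustrative model-risk tolerance is breached. The model, data and thresholds are assumptions for illustration only; in practice, richer generators, such as the GAN-type techniques mentioned above, would replace the simple multivariate normal draw.

```python
# Minimal sketch of the simulation-based paradigm: generate scenarios that
# preserve or stress the historical correlation structure, run them through a
# model, and check a model-risk tolerance. All inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical drivers: GDP growth and unemployment change
hist = rng.multivariate_normal(
    mean=[0.02, 0.00],
    cov=[[0.015**2, -0.6 * 0.015 * 0.01],
         [-0.6 * 0.015 * 0.01, 0.01**2]],
    size=120,
)
mu, cov = hist.mean(axis=0), np.cov(hist, rowvar=False)

def model_loss_rate(gdp, unemp_change):
    """Toy credit-loss model standing in for an institution's actual model."""
    return 0.02 - 0.5 * gdp + 0.8 * unemp_change

def simulate(cov_matrix, n=10_000):
    """Draw scenarios under a given dependence structure and evaluate the model."""
    draws = rng.multivariate_normal(mu, cov_matrix, size=n)
    return model_loss_rate(draws[:, 0], draws[:, 1])

# Stressed structure: flip the sign of the cross-correlation between drivers
stressed_cov = cov.copy()
stressed_cov[0, 1] = stressed_cov[1, 0] = -cov[0, 1]

tolerance = 0.05                       # illustrative model-risk tolerance
for label, c in [("historical structure", cov), ("stressed structure", stressed_cov)]:
    losses = simulate(c)
    breach = (losses > tolerance).mean()
    print(f"{label}: 99th pct loss {np.percentile(losses, 99):.2%}, "
          f"tolerance breach probability {breach:.2%}")
```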

Machine learning and AI have been widely used in the recent past, driven by the availability of big data. Their statistical properties are now becoming crucial to supporting processes where historical data is not necessarily representative of future settings. Prometeia is pioneering applications in the model risk space and, more generally, in risk, having recently been recognised as a global leader in the Model risk quantification and Capital optimization categories of the 2022 Chartis RiskTech100® rankings.
