Bank quants have been preoccupied for several years by the Fundamental Review of the Trading Book, which will transform many aspects of market risk modelling. While there is no equivalent ‘big bang’ for credit risk, a steady flow of incremental rule changes in Europe is also transforming how bankers think about the internal ratings based (IRB) approach to credit risk capital.
Some of those rule changes are substantial in their impact, including the European Banking Authority’s introduction of a harmonised definition of default, at 90 days overdue. Banks and consultants talk of having to redevelop entire credit risk models to incorporate the new definition.
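The days-past-due backstop itself is mechanically simple, which is partly why the redevelopment burden sits elsewhere, in recalibrating every model built on the old default flag. A minimal sketch (field names are illustrative; the EBA's full rules add relative and absolute materiality thresholds, counting conventions and unlikeliness-to-pay triggers):

```python
from datetime import date

DAYS_PAST_DUE_THRESHOLD = 90  # EBA harmonised backstop

def is_defaulted(due_date: date, as_of: date, amount_overdue: float,
                 materiality_threshold: float) -> bool:
    """Flag default under the days-past-due backstop.

    Sketch only: the real EBA framework also specifies how the
    counter is started, paused and reset, and adds qualitative
    unlikeliness-to-pay triggers.
    """
    days_past_due = (as_of - due_date).days
    return (amount_overdue > materiality_threshold
            and days_past_due > DAYS_PAST_DUE_THRESHOLD)
```

Changing that flag ripples backwards through the historical data used to calibrate probability of default, which is why banks talk of rebuilding models rather than patching them.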
In aggregate, the changes make credit risk modelling in Europe more demanding, subject to tighter supervision, and yet with lower potential capital relief at the end of the process. Banks are torn between enhancing model performance to avoid supervisory censure and cutting modelling costs to reflect the smaller benefit to return on capital.
Outsourcing looks like the obvious route. If there’s less competitive advantage in credit risk models, a shared utility could be the way to cut costs while maintaining analytical rigour. But there’s a catch: to obtain IRB approval, it is still the bank that has to show its capabilities, not the outsourced modelling provider.
That means no matter how sophisticated the work done by the outsourced utility, its mechanics must remain clear to the banks that use it, and to their supervisors. David Botbol, chief executive of credit risk modelling start-up Algosave, talks about the need for a “white box” that allows banks to isolate and analyse any of the scenarios in the Monte Carlo simulations his model runs on corporate borrower balance sheets. And he’s been warned by bankers to stay away from machine learning techniques, because his product would lose the advantage of transparent explanations for model outputs.
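The “white box” requirement is easier to picture with a toy version of such a simulation. The sketch below is not Algosave's model – it is a generic Merton-style Monte Carlo in which a scenario defaults when simulated asset value falls below debt due, and every simulated path is returned so that any individual scenario can be isolated and inspected:

```python
import math
import random
import statistics

def simulate_default_scenarios(assets, liabilities, asset_vol,
                               horizon_years=1.0, n_scenarios=10_000,
                               seed=42):
    """Merton-style Monte Carlo sketch (illustrative, not any vendor's
    actual model): a scenario defaults if the lognormally shocked
    asset value ends below the liabilities due at the horizon.

    Returns the PD estimate *and* every scenario, keeping the
    simulation a 'white box' that banks and supervisors can audit.
    """
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        # lognormal asset shock with drift-free convexity adjustment
        shock = rng.gauss(-0.5 * asset_vol**2 * horizon_years,
                          asset_vol * math.sqrt(horizon_years))
        terminal_assets = assets * math.exp(shock)
        scenarios.append({"terminal_assets": terminal_assets,
                          "defaulted": terminal_assets < liabilities})
    pd_estimate = statistics.mean(s["defaulted"] for s in scenarios)
    return pd_estimate, scenarios
```

Keeping the full scenario list is what distinguishes this design from a black box: the aggregate PD can always be decomposed into the individual balance-sheet paths that produced it.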
That’s a sentiment echoed by a model developer at a second-tier eurozone lender, who says his bank is exploring artificial intelligence for credit underwriting decisions, but not for regulatory purposes.
“One or two vendors suggest they have AI models approved by the regulator, but I remain fairly sceptical. It can be a struggle to get regulators to approve some pretty vanilla models, so I don’t think there’s a huge opportunity, though it’s worth exploring,” he says.
However, regulators are also pushing banks to knit together their IRB models and their credit risk management more closely, including origination, underwriting and the whole risk governance framework.
So if an AI model implies a change in lending policy to comply with the bank’s assigned risk appetite, those changes also need to feed through into expected probability of default and loss given default in IRB models. And then the changes to the IRB inputs need to be explained to supervisors.
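The destination of those PD and LGD inputs is the Basel supervisory formula for IRB capital, which is worth setting out because it shows why small shifts in underwriting assumptions feed directly into capital. A sketch of the corporate risk-weight function (omitting the regulatory PD/LGD floors and the SME size adjustment):

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def irb_capital_requirement(pd, lgd, maturity=2.5):
    """Basel IRB capital requirement K per unit of exposure for a
    corporate borrower, using the supervisory formula. Sketch only:
    input floors and the SME firm-size adjustment are omitted."""
    pd = max(pd, 1e-9)
    # supervisory asset correlation, interpolated between 12% and 24%
    w = (1 - math.exp(-50 * pd)) / (1 - math.exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # conditional PD in a 99.9th-percentile systemic downturn
    cond_pd = N.cdf((N.inv_cdf(pd) + math.sqrt(r) * N.inv_cdf(0.999))
                    / math.sqrt(1 - r))
    # maturity adjustment
    b = (0.11852 - 0.05478 * math.log(pd)) ** 2
    ma = (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    # unexpected loss: downturn loss less expected loss
    return lgd * (cond_pd - pd) * ma
```

Because K is convex in PD at low default rates, a lending-policy change that nudges the portfolio's average PD moves capital requirements more than proportionally – which is exactly the knock-on effect supervisors want explained.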
That all adds up to the need for what one banker calls “big models” – co-ordinated modelling capability drawing on multiple data sources to fulfil several different purposes across the whole lending workflow, from setting risk tolerance to calculating capital requirements.
This is ambitious because, at the other end of the scale, banks are still grappling with some very low-tech challenges. Payment flows usually feed into automated systems that let the bank track arrears, and those systems in turn feed data into the IRB probability of default calculations. But once a loan goes into arrears, the collections process of recovering money or realising collateral is generally much more manual – and the resulting recovery forecasts can swing widely, with knock-on effects for loss given default estimates.
The regulators’ way around that is simply to require margins of conservatism on IRB calculations, pushing up overall capital requirements wherever data quality is poor. In a world where machine learning meets the bailiff’s knock on the door, those margins of conservatism are probably here to stay for a while.
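The mechanics of that add-on are straightforward: poorer data quality inflates the risk parameter before it enters the capital formula. A deliberately simplified sketch (the mapping from data quality to add-on is assumed for illustration; the EBA's actual framework quantifies identified deficiencies by category rather than from a single score):

```python
def apply_margin_of_conservatism(estimate, data_quality_score):
    """Inflate a PD or LGD estimate with a margin of conservatism (MoC).

    Illustrative only: the add-on schedule below is an assumption,
    not the EBA's methodology, which sizes the MoC against specific
    identified deficiencies. data_quality_score: 1.0 = pristine data,
    0.0 = worst.
    """
    moc = 0.5 * (1.0 - data_quality_score)  # up to a +50% add-on (assumed)
    return min(estimate * (1.0 + moc), 1.0)  # parameters capped at 100%
```

The incentive structure is visible even in this toy version: cleaning up the manual collections data shrinks the add-on directly, so better plumbing buys capital relief.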