Cutting Edge introduction: No more shortfalls?

Academics develop expected shortfall backtest to compare standardised and internal models


While the Basel Committee on Banking Supervision puts the finishing touches to its review of trading book rules, which is scheduled for release this month, the move from value-at-risk to the more risk-sensitive expected shortfall is increasingly becoming a reality. However, banks are far from prepared.

Critics complain that expected shortfall is computationally demanding and tough to backtest because it lacks a mathematical property called elicitability. Although some backtesting methods have recently been proposed to assess the quality of individual models, comparing any two models has remained tricky.

In this month's first technical, Expected shortfall is jointly elicitable with value-at-risk: implications for backtesting, Johanna F. Ziegel, an assistant professor at the University of Bern, Tobias Fissler, a PhD student in her group, and Tilmann Gneiting, a professor at the Karlsruhe Institute of Technology (KIT) in Germany, offer a new way of backtesting the risk estimates of a bank's internal model – calculated using expected shortfall – by comparing them with those produced by the standardised approach.

"That was the last outstanding hurdle," says Nick Costanzino, an investment analytics quant at AIG Asset Management in New York. "In a comparative method, one can tell which of the two models is better, whereas in a traditional backtest, one can only determine, given one single model, whether it should be accepted or rejected."

Over the years, quants have argued that expected shortfall can be backtested only if it satisfies an important property called elicitability. The property allows a measure to have a scoring function, which in turn makes comparisons possible between different models. In 2011, it was shown that expected shortfall, calculated as an average of losses exceeding a given VAR level, lacks this property, which sparked a huge debate as to whether it could be backtested at all. Despite the industry's objections, regulators insisted banks use the measure for capital calculations, but carry out backtests using their VAR model – many found this to be an odd practice.
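As the passage notes, expected shortfall at a given confidence level is the average of losses beyond the corresponding VAR. A minimal historical-simulation sketch (the function and variable names are illustrative, not taken from any of the papers discussed):

```python
import numpy as np

def var_es(losses, level=0.975):
    """Historical VaR and expected shortfall for a sample of losses.

    Losses are positive numbers; `level` is the confidence level.
    VaR is the level-quantile of the loss sample, and ES is the
    average of the losses that exceed it.
    """
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, level)            # VaR: the level-quantile
    tail = losses[losses > var]                 # losses exceeding VaR
    es = tail.mean() if tail.size else var      # ES: mean of the tail losses
    return var, es

# Toy example: 10,000 simulated daily losses from a heavy-tailed distribution
rng = np.random.default_rng(0)
sample = rng.standard_t(df=4, size=10_000)
var, es = var_es(sample, level=0.975)
print(var, es)   # ES is always at least as large as VaR
```
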

The debate seemed to have been put to rest in 2014 when MSCI's Carlo Acerbi and Balázs Székely published a paper in Risk that proposed three ways to backtest the measure, and argued elicitability was not required for testing individual models. It was a major breakthrough, but they admitted model selection would still require elicitability.

The authors of this month's first technical have a solution. They design hypothesis tests that can compare the risk estimates based on the standardised approach against those of the internal model, by using a scoring function they developed in an earlier paper to test VAR and expected shortfall jointly.

"I would hope the word 'elicitability' loses the catchiness or explosiveness that it has acquired," says Ziegel. "The debate, in my opinion, should rather be on the question of what type of backtest is most beneficial: a traditional one or a comparative one? Now we have a middle ground – we have a joint scoring function."

Thanks to the scoring function, the authors are able to perform two comparative tests in their paper, one more conservative than the other. The less conservative test checks whether one can accept or reject the hypothesis that the internal model's estimates are at least as good as those from the standardised approach. The other test checks whether the internal model's estimates are at most as good as those from the standard procedure. The quants use the Diebold-Mariano test to assess the difference in the 'scores' of the internal model and the standardised approach, calculated using the scoring function.
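The mechanics can be sketched in Python. The score below is one member of the Fissler–Ziegel joint scoring-function family for the (VaR, ES) pair; the specific choices made here (a zero first term, a logarithmic second term, and a P&L sign convention in which the left tail is negative) are illustrative assumptions rather than the paper's exact specification. The Diebold–Mariano statistic is then simply the standardised mean of the daily score differences:

```python
import numpy as np

def fz_score(var, es, x, alpha=0.025):
    """One member of the joint (VaR, ES) scoring-function family.

    Convention: x is daily P&L, `var` and `es` are the alpha-quantile
    forecasts, so both are negative numbers in the left tail (es <= var < 0).
    Lower scores are better.
    """
    ind = (x <= var).astype(float)              # indicator of a VaR breach
    return (-1.0 / es) * (es - var + ind * (var - x) / alpha) + np.log(-es)

def dm_statistic(scores_internal, scores_standard):
    """Diebold-Mariano statistic for the mean daily score difference.

    Negative values favour the internal model (it attains lower scores).
    """
    d = scores_internal - scores_standard
    return d.mean() / np.sqrt(d.var(ddof=1) / d.size)

# Toy comparison: standard-normal P&L, static forecasts for both models
rng = np.random.default_rng(1)
pnl = rng.normal(0.0, 1.0, 2000)                # evaluation-period daily P&L
v_int, e_int = -1.96, -2.34                     # roughly correct normal tail
v_std, e_std = -3.00, -3.60                     # deliberately over-conservative
s_int = fz_score(v_int, e_int, pnl)
s_std = fz_score(v_std, e_std, pnl)
t_stat = dm_statistic(s_int, s_std)
print(t_stat)   # negative here: the internal model attains the lower score
```

The deliberately over-conservative "standardised" forecasts incur a higher average score on most days, which the statistic picks up; in the paper's setting the score series would come from the two models' daily forecasts rather than static numbers.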

This allows a traffic-light approach – similar to the Basel VAR tests most banks currently use – to be used for backtesting. "One of the things we learnt while writing the paper is that this gives rise to a natural traffic-light approach, which is only possible because of the kind of null hypotheses we use," says Gneiting.
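Under standard Diebold–Mariano asymptotics the statistic is roughly standard normal, so a three-zone scheme can be sketched by mapping its one-sided p-value to zones; the cut-offs below are illustrative assumptions, not the paper's or any regulatory values:

```python
from math import erf, sqrt

def traffic_light(t_stat, amber=0.05, red=0.0001):
    """Map a Diebold-Mariano statistic to a traffic-light zone.

    A large positive t_stat is evidence that the internal model is worse
    than the standardised approach (it attains a higher average score).
    The amber and red cut-offs here are illustrative, not regulatory.
    """
    # One-sided p-value P(Z > t_stat) via the standard normal CDF
    p = 1.0 - 0.5 * (1.0 + erf(t_stat / sqrt(2.0)))
    if p < red:
        return "red"      # internal model clearly worse: reject
    if p < amber:
        return "amber"    # borderline: flag for review
    return "green"        # no evidence against the internal model

print(traffic_light(-2.0))   # "green"
print(traffic_light(1.8))    # "amber"
print(traffic_light(5.0))    # "red"
```
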

Calculating the risk estimates of both the standardised and the internal model might sound daunting, but some say it isn't so. "It is something you can sit down and simply implement. If you have an internal model running, and we have one based on historical simulations, layering this on top is not that difficult," says AIG's Costanzino.

In our second technical, Diversification benefit of operational risk, Roberto Torresetti, head of risk management at Banca Carige in Italy, and Giacomo Le Pera, a senior quantitative analyst in the model validation team at the bank, explore the diversification benefit arising out of the granularity of operational risk classes.

In this month's investment technical, Hedging error measurement for imperfect strategies, Jack Baczynski, a finance researcher at the National Laboratory for Scientific Computing in Brazil, and Allan Jonathan da Silva and Estevão Rosalino Junior, both PhD students within the team, introduce a new way of measuring hedging error.
