See the error of your VARs

The commonly used value-at-risk estimation method can underestimate risk, but there is a solution, writes one of this month’s technical paper authors


For all its limitations, value-at-risk remains a key measure in risk management and regulatory capital calculations, and forms the bedrock of many strategic business decisions. So it is disconcerting when a metric that played a part in debacles such as JP Morgan's $6.2 billion London Whale losses is estimated incorrectly.

VAR – the maximum loss that would not be exceeded at a given confidence level over a certain holding period – is one of finance’s most widely used gauges of risk. To estimate it parametrically, firms must assume an underlying return distribution and fix the holding period, which in turn requires an estimate of the distribution’s standard deviation.
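
For concreteness, the calculation under a normal assumption is only a few lines. The sketch below is illustrative – the function name and synthetic data are ours, not from the paper – and assumes normally distributed daily returns:

```python
# A minimal sketch of parametric VAR, assuming normal returns.
# Names and data are illustrative, not from Frank's paper.
import numpy as np
from scipy.stats import norm

def parametric_var(returns, alpha=0.05):
    """One-period VAR at confidence level 1 - alpha under a normal assumption."""
    mu = returns.mean()
    sigma = returns.std(ddof=1)  # sample standard deviation (n - 1 denominator)
    return -(mu + norm.ppf(alpha) * sigma)  # quoted as a positive loss

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 0.01, size=250)  # a year of synthetic daily returns
print(parametric_var(sample))  # roughly 0.016, ie a 1.6% one-day loss
```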

Typically, firms use a sample standard deviation – calculated from a window of returns, as opposed to the whole population of returns – to estimate the volatility of the distribution. The sample variance it is built on is the so-called ‘unbiased’ estimator of the true variance; its square root, the sample standard deviation, is not.

This, however, raises alarm bells for some. Using the sample standard deviation – itself only an estimate of the true volatility – to compute VAR biases the figure downwards; that is, the risk is underestimated. This can cause serious errors.

"While some firms may be quietly adjusting for this bias, the surprise is that it wasn't recognised and addressed a long time ago," says David Frank, a quant in the portfolio research team at Bloomberg in New York. "A sample standard deviation estimate is used because it is an unbiased estimator of variance, and yet here we are using an unbiased estimator of something – in this case, the variance – to estimate something else: the tail. Using an unbiased estimator of X to estimate Y is less than ideal."

In this month's investment technical, Adjusting VAR to correct sample volatility bias, Frank introduces a VAR adjustment, or bias correction, in a bid to avoid underestimating the metric.

When firms calculate VAR without adjusting for the bias introduced by the sample standard deviation, the number of exceedances – losses beyond VAR – they actually suffer is larger than the number implied by the pre-set confidence level. A 95% VAR, for instance, should be exceeded only 5% of the time; because of the bias, it behaves more like a 93% VAR, understating risk.
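
A quick Monte Carlo experiment – illustrative, not taken from the paper – makes the effect visible: estimate a 95% VAR from a short window of simulated normal returns, then count how often the next return breaches it. The breach rate comes out well above 5%:

```python
# Illustrative Monte Carlo check of the exceedance effect described above.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_window, n_trials, alpha = 20, 200_000, 0.05
z = norm.ppf(alpha)

# Each row: an estimation window plus the one return used to test the VAR.
draws = rng.normal(0.0, 0.01, size=(n_trials, n_window + 1))
window, next_ret = draws[:, :-1], draws[:, -1]
var_est = -(window.mean(axis=1) + z * window.std(axis=1, ddof=1))
print((next_ret < -var_est).mean())  # noticeably above 0.05 (about 0.065 here)
```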

"You are estimating VAR using volatility, and that volatility is a point estimate; it is not a true value, so it gives you an estimation bias on your VAR. So your VAR is not a 95% VAR, but it is a 93% or 94% VAR – which means the exceedance probability of that VAR is much higher than you would expect," says Winfried Hallerbach, a senior quantitative researcher at asset management firm Robeco in Rotterdam.

Bloomberg's Frank argues that to estimate a true 95% VAR, one has to look at a smaller part of the tail: instead of the 5% tail – the obvious choice at first glance – one might have to use the 4% or 4.5% tail to account for the bias.

This is not to say firms are unaware of the bias in standard VAR estimation – many are, and already correct for it in simpler ways.

For instance, Robeco uses empirical benchmarking, looking at its backtesting results after the fact in a bid to assess what volatility multiplier it should have used in the empirical distributions to get the correct number of exceptions.
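
A minimal sketch of that idea – illustrative only, not Robeco's actual procedure – would grid-search the volatility multiplier whose backtested breach rate comes closest to the target:

```python
# Hedged sketch of empirical benchmarking: find the volatility multiplier
# that would have produced the target exceedance rate in a rolling backtest.
# Names and settings are illustrative, not Robeco's actual procedure.
import numpy as np
from scipy.stats import norm

def backtest_multiplier(returns, window=250, alpha=0.05):
    """Multiplier on sample volatility that best matches the alpha target."""
    z = norm.ppf(alpha)
    idx = np.arange(window, len(returns))
    mus = np.array([returns[i - window:i].mean() for i in idx])
    sds = np.array([returns[i - window:i].std(ddof=1) for i in idx])
    nxt = returns[idx]  # the return realised just after each window

    grid = np.linspace(0.8, 1.5, 141)  # candidate multipliers
    rates = np.array([(nxt < mus + z * m * sds).mean() for m in grid])
    return grid[np.abs(rates - alpha).argmin()]

rng = np.random.default_rng(1)
# > 1: short windows need scaled-up volatility (roughly 1.05-1.1 here)
print(backtest_multiplier(rng.normal(0.0, 0.01, size=5000), window=20))
```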

Frank's paper, on the other hand, proposes an algorithm to come up with what he calls an adjusted alpha: the exceedance level at which the adjusted VAR must be computed so that the number of VAR exceptions matches the level required by the firm. Firms can then calculate a bias-corrected VAR and use it for risk management and backtesting.
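
Frank's algorithm itself is not reproduced here, but under an i.i.d. normal assumption the correction is known in closed form: the exact finite-sample VAR quantile follows a Student-t predictive distribution, and mapping it back through the normal quantile yields the adjusted alpha. A sketch under that assumption:

```python
# Closed-form adjusted alpha under i.i.d. normal returns (standard
# prediction-interval statistics, not necessarily the paper's algorithm).
import numpy as np
from scipy.stats import norm, t

def adjusted_alpha(alpha, n):
    """Tail level to feed the normal quantile so that VAR estimated from
    n observations has true exceedance probability alpha."""
    q = t.ppf(alpha, df=n - 1) * np.sqrt(1 + 1 / n)  # exact predictive quantile
    return norm.cdf(q)

print(adjusted_alpha(0.05, 250))  # ~0.049: look slightly deeper into the tail
print(adjusted_alpha(0.05, 20))   # ~0.038: short windows need a bigger correction
```

Shorter estimation windows push the adjusted level towards the 4–4.5% range quoted above; with a full year of daily data the correction is small.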

There has been a recent push towards more tail-risk-sensitive, and arguably more complex, risk measures such as expected shortfall – partly because of the known limitations of VAR, and partly because of regulatory reforms such as the Fundamental Review of the Trading Book. But backtesting expected shortfall remains a practical challenge, despite a recent string of academic papers showing it can be done. For that reason, it pays to get the simpler and more widely used VAR measure right; JP Morgan's London Whale losses are a testament to that.

See also this month's other technical papers:

Gap risk KVA and repo pricing

‘Hot-start’ initialisation of the Heston model