VAR limits: dislocations put focus on other lines of defence
Big dislocations in the Swiss franc and the US Treasury market have highlighted – again – the limitations of standard regulatory capital models. But banks insist their other lines of defence are up to the task
There are many different ways to calculate value-at-risk – the measure regulators require banks to use when setting aside capital for trading losses – but none of them would have prepared an institution for what happened on January 15, when the Swiss National Bank stopped trying to enforce its Swiss franc floor, and the exchange rate fell almost 40% in a matter of minutes.
"Regulatory models are designed to manage day-to-day risk. They are not suitable for tail risk in any way," says Jon Danielsson, an associate professor of finance at the London School of Economics (LSE).
In a numerical study, Danielsson – who is also director of the LSE's European Systemic Risk Centre – tried to predict the Swiss franc move with six different statistical risk models that are used in regulatory VAR calculations. The best-performing model, extreme value theory, put the likelihood of such a shock at once in a hundred years; the Student-t Garch model reckoned it would happen once every 2.1 million years. Two of the others effectively deemed it impossible.
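The mechanism behind those wildly different answers is easy to reproduce: the same extreme move is assigned vastly different frequencies depending on the tail assumptions of the distribution used. The Python sketch below is illustrative only – it is not Danielsson's study, and the inputs are invented (a 0.5% daily volatility and a 15% one-day shock) – but it shows how a thin-tailed normal model and a fat-tailed Student-t diverge:

```python
# Illustrative sketch (not Danielsson's actual study): the same extreme move
# gets wildly different implied frequencies under different tail assumptions.
# Inputs are invented: 0.5% daily volatility, a -15% one-day shock.
from scipy.stats import norm, t

daily_vol = 0.005
shock = -0.15
z = shock / daily_vol            # the shock in standard deviations (-30)

p_normal = norm.cdf(z)           # thin-tailed normal distribution
p_student = t.cdf(z, df=3)       # fat-tailed Student-t, 3 degrees of freedom

def once_every_years(p, trading_days=250):
    # convert a daily probability into a "once every N years" statement
    return float("inf") if p == 0 else 1 / (p * trading_days)

print(f"normal:    once every {once_every_years(p_normal):.3g} years")
print(f"student-t: once every {once_every_years(p_student):.3g} years")
```

The normal distribution effectively rules the move out, while the fat-tailed alternative treats it as rare but possible – the same kind of divergence Danielsson found across the six regulatory models.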
A similarly ‘impossible’ move was seen just three months earlier, when 10-year US Treasury yields collapsed by more than 30 basis points after the US market opened on October 15. Again, any firm relying on a regulator-endorsed VAR model would have been flying blind.

None of this will surprise banks or supervisors – VAR is not supposed to be a coalmine canary, they say – but these two big dislocations in the space of three months are raising questions about whether markets have become more fragile and focusing attention on the industry's other lines of defence. That means stressed risk measures, event-specific stress tests and capital buffers.
Opinions are split on how these should be deployed and whether they are sufficient, with critics arguing the tests – like the prescribed models – are often too rigid and backward-looking. In contrast, some buy-side firms relied more on experience and judgement to inform their management of Swiss franc risk (see box, "Common sense tells you the floor could change...").
"Capturing tail risk means one needs more information specific to the asset or state of the market. People with sophisticated models – people who are not bound by regulatory requirements – will factor those in," says Danielsson.
Banks claim they still had the tail risk covered – just not through VAR. A managing director at a global bank says specific risks such as peg-breaks can be accounted for in many ways: the stressed VAR measure that bolsters standard regulatory risk estimates, buffers to both Pillar I and Pillar II capital – that is, standardised and discretionary capital requirements – and stress tests.
"Peg breaks are familiar to most of us," says the managing director. "We do have Pillar II capital in excess of our Pillar I minimum capital requirements to reflect the kind of moves we can get if such a break happens, and we look at it across all pegged currencies. In addition, many firms, like us, have explicit capital buffers designed to absorb such losses."
Although the stress test itself does not directly feed into capital calculations, it is an important part of ensuring the capital buffer covers the losses. "You look at a point in time, figure out how much you would lose in an extreme scenario and see if you have still got sufficient capital. You can check that with and without the explicit buffer – that is one way of testing the capital add-on," argues the managing director.
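The check the managing director describes boils down to a simple comparison – stressed loss against available capital, run with and without the explicit add-on buffer. A minimal sketch, with every figure invented for illustration:

```python
# Sketch of the buffer check described above: compute the stressed loss and
# test capital adequacy with and without the explicit add-on buffer.
# All figures are invented for illustration (in millions).
capital_base = 500.0      # hypothetical Pillar I + Pillar II capital
explicit_buffer = 150.0   # hypothetical peg-break buffer
stressed_loss = 480.0     # hypothetical loss under the extreme scenario

def survives(capital, loss, minimum=100.0):
    # assumed regulatory minimum that must remain after absorbing the loss
    return capital - loss >= minimum

print("without buffer:", survives(capital_base, stressed_loss))               # False
print("with buffer:", survives(capital_base + explicit_buffer, stressed_loss))  # True
```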
Two sources say the peg-break-event buffer held by some banks was as high as twice the VAR of their portfolios. Both declined to share details about the nature of the portfolio considered.
Everything depends on the scenario used, of course, and market participants say few would have anticipated the violence of the Swiss franc move. Aaron Brown, a risk manager at hedge fund AQR Capital, says many firms applied individual stress tests to their exposures, using them to reprice a portfolio of assets.
"Every time there is a discrete major market event like a peg being put in place, people immediately add a stress test for it being reversed. People knew euro/Swiss franc was much riskier than its recent volatility would indicate," he says.
How much riskier, though? One way of estimating the severity of the event would be to trawl through the history of the currency and use its biggest moves as a starting point; another would be to ask what fair value for the Swiss franc would be, absent the SNB's floor.
Based on conversations with other risk managers, AQR's Brown says firms did both. As a result, scenarios used for the Swiss franc jump varied from a 10% one-day spike for those looking at historical peg-break data, to 25% for those who modelled it based on the reversal of the Swiss central bank's decision to place a peg in the first place.
Those counting on the 25% scenario may have been closer to the actual move on the day, but most firms modelled it as a move over 10 to 15 days and not over a matter of minutes. "That version of the stress test was not too bad at predicting how things ultimately turned out, but didn't catch the chaos in the first half hour of trading. Normally there are pressures and rumours, and often temporising measures like moving the peg or switching to a band or basket," Brown says.
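The two sizing approaches – worst historical move versus a judgement about what reversing the peg would mean – can be sketched in a few lines. The returns and exposure below are hypothetical:

```python
# Sketch of the two scenario-sizing approaches: worst historical one-day move
# versus a judgement-based peg-reversal shock. Returns and exposure invented.
returns = [-0.012, 0.004, -0.031, 0.009, -0.095, 0.015, -0.007]

historical_scenario = min(returns)   # worst observed one-day move: -9.5%
judgement_scenario = -0.25           # assumed full reversal of the peg

position_value = 10_000_000          # hypothetical EUR/CHF exposure
for name, shock in [("historical", historical_scenario),
                    ("judgement", judgement_scenario)]:
    print(f"{name}: stressed P&L = {position_value * shock:,.0f}")
```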
Some argue this is all too hit-and-miss – an unavoidable consequence of the historical models that still have to be used to generate stressed outcomes: "Stress testing is done through models that are statistical, which is a paradox if the next crisis is different from the last one – and that is most probably going to be the case," says Alexander Denev, founder of consulting firm Graphrisk.
He argues the two market breaks should encourage banks to move away from statistical and historical models and focus instead on cause-and-effect-based models. These causal models, as they are called, use Bayesian techniques to treat a major event as the result of causal relationships among a small set of events, rather than as the product of correlation among thousands of market variables that may have nothing to do with one another – correlation, after all, does not imply causation.
Denev illustrates by pointing to what he sees as the main culprits for the financial crisis: the Federal Reserve's decision to raise rates, the bursting of the mortgage bubble, structured credit downgrades, the freezing of the wholesale lending market and the failure of a few systemic players. Some of these factors are not macroeconomic in nature, he says. Standard statistical models might look at the correlations between thousands of names in a loan pool, for example, and macroeconomic variables, without trying to understand the relationship between them. Causal models try to reduce the dataset to a few key variables and look at cause and effect.
"Causal models are forward-looking, as each component can be calibrated to a different source that reveals the expectations of the agents in the economy as of today, such as market prices, Twitter analytics, expert surveys and so on. It may not be a crystal ball, but it can at least avoid obvious pitfalls of classical econometric techniques, which reduce the richness of the world we live in to a set of macroeconomic aggregates and mystical error terms," Denev says.
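At its simplest, the causal idea replaces a large correlation matrix with a short chain of conditional probabilities, each of which can be calibrated to a different source. A toy illustration – the chain structure and every probability are invented:

```python
# Toy illustration of the causal-chain idea: a severe outcome modelled as a
# short chain of conditional probabilities rather than a correlation matrix
# over thousands of variables. Chain structure and probabilities are invented.
p_rate_rise = 0.30                   # P(central bank raises rates)
p_burst_given_rise = 0.40            # P(bubble bursts | rates rise)
p_freeze_given_burst = 0.50          # P(funding freezes | bubble bursts)

# Each factor could be calibrated to a different forward-looking source
# (market prices, surveys, news analytics), as Denev suggests.
p_crisis = p_rate_rise * p_burst_given_rise * p_freeze_given_burst
print(f"P(crisis via this chain) = {p_crisis:.3f}")  # 0.060
```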
He adds that one major central bank is currently exploring the benefits of causal models – a project Denev is part of.
The US Treasury move, while just as alarming as the Swiss franc dislocation, had a very different trigger. Last December, Risk revealed it was flight to quality and heavy short gamma positions that caused the yield on the 10-year US Treasury to take a 10-minute dive to 1.87% from its previous close at 2.21% (Risk December 2014). That has implications for risk managers.

"If a dislocation is caused by trading activity, models can capture that," says Robert Jarrow, a professor of finance at Cornell University.
The trick is to make sure the models are taking their cue from the right market variables – which, in the case of the October 15 event, might have been options markets. In the weeks preceding the dislocation, some market participants noticed there was something strange happening with the skew on US Treasury options, which flipped round to make calls more expensive than puts.
Robert Almgren, co-founder of execution firm Quantitative Brokers, and a fellow in financial mathematics at New York University, says his firm believes the implied volatility skew may be useful as an indicator for fixed-income moves in the future.
"You have to find out where things are building up. You have to ask, if banks were short gamma, then what would that cause in the market, and then look for signs of that," says Almgren.
This kind of modelling could be powerful, but it might also produce extremely volatile capital numbers, making it potentially unsuitable as a prescribed approach for banks.
"Traditionally, regulators have been looking at historical information. They have refrained from option information, which is the only information that has real content as far as forecasting the future," says Claudio Albanese, London-based chief executive of advisory and analytics firm Global Valuation.
He says the reason implied volatility, derived from option prices, has not caught on in VAR calculations is its high reactivity to the market, which would make capital too pro-cyclical. As an illustration, the one-month implied volatility of foreign exchange options on the euro/Swiss franc exchange rate jumped from 2.7% to 55% on January 15, Albanese says. If that jump had been reflected immediately in market risk capital numbers for an options portfolio, he estimates the capital figure would have leapt 20-fold overnight.
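Albanese's arithmetic is easy to check: under the simplifying assumption that an options book's capital number scales roughly in line with implied volatility, the January 15 repricing implies a multiplier of about 20.

```python
# Back-of-the-envelope check of Albanese's estimate, assuming the capital
# number for an options book scales roughly with implied volatility.
vol_before = 0.027    # one-month EUR/CHF implied vol before January 15
vol_after = 0.55      # implied vol after the floor was abandoned

scaling = vol_after / vol_before
print(f"capital number multiplied by roughly {scaling:.0f}x")
```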
"Common sense tells you the floor could change..."
Buy-side firms are not subject to regulatory capital requirements, prescriptive modelling regimes or supervisory stress tests. Given that extra flexibility, judgement played a far bigger role in the way some firms approached Swiss franc risk.
Systematica Investments – a quant fund with $9 billion of assets under management – got out of the market altogether. The fund calibrates its models to price history, so after the Swiss National Bank imposed its floor in 2011, it decided the models would no longer work.
"We stopped trading the Swiss franc in our systematic models in September 2011, when the peg to the euro was introduced. The dynamics of the history would be very different to the future price evolution. In effect, the models would have no way of understanding the nature of the peg, so we removed it from the programme," says David Kitson, the fund's Geneva-based chief investment officer.
Many other models have the same weakness, says Donald Van Deventer, chief executive of Honolulu-based Kamakura: "The models that would miss are models that ignored the fact that markets are being manipulated by central banks. They use historical volatility and get surprised when the move is outside the bounds of history."
Other funds carried on trading the franc – but were doing things they felt would give them a better grasp of the possibilities.
Matthew Beddall, chief investment officer of Winton Capital in London, says looking at short, fixed periods of history, such as the one year of data prescribed for stressed value-at-risk, might cause a firm to hold too narrow a view of the risk of sharp currency moves.
"We use 40 years of data. When we look back, we know you do see those types of large moves. In the case of the Swiss franc, because of the floor on the currency, volatility at certain points reached a very low level, which did not mean risk has suddenly disappeared. Common sense tells you the floor could change and you shouldn't trust the volatility of a market like that. So we expected it to increase," he says.
The fund also does not assume normal distributions in its models and looks at many different risk measures at the same time, to avoid falling victim to the weaknesses of any specific measure. In addition, it adds caps or hard limits to all metrics to stay within reasonable exposure bounds.
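Beddall's point about calibration windows can be shown in a few lines: a short window drawn entirely from a pegged, calm period reports a small fraction of the volatility visible over a longer history. The returns below are invented purely for illustration:

```python
# Illustration of the calibration-window point: a short window drawn from a
# pegged, calm period reports a fraction of the long-run volatility.
# All returns are invented.
import statistics

history = [0.02, -0.03, 0.015, -0.025, 0.01] * 100   # long, "normal" history
pegged = [0.0005, -0.0004, 0.0003, -0.0006] * 60     # calm period under the floor

full_vol = statistics.stdev(history + pegged)         # long calibration window
recent_vol = statistics.stdev(pegged)                 # short window: peg era only
print(f"long-window vol {full_vol:.4f} vs short-window vol {recent_vol:.4f}")
```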
Stressing over stress tests
In the banking industry, regulators and quants may never agree on the best models to use, so stress testing looks like a good compromise. In Europe and the US, supervisors are now using widescreen, standardised testing to get a second opinion on the robustness of risk models and capital adequacy – to plaudits from some observers.
"There has long been a debate in the regulatory world over whether to prescribe the risk calculations or oversee them to ensure integrity and quality. I think what the Federal Reserve has done in its stress-testing regime is a much more sophisticated approach. They specify the parameters, leave the calculations to the institutions and audit them heavily. They are not claiming to know more than the banks they are regulating," says Donald Van Deventer, chief executive of Hawaii-based software vendor, Kamakura.
Banks also continue to conduct their own specific tests of businesses and portfolios, but practical limitations mean these are no panacea.
"There are methodological complexities around consistent enterprise-wide stress testing that have not been fully recognised or resolved yet," says Satyam Kancharla, senior vice-president and chief strategy officer at software vendor Numerix.
The firm sees a lot of labour-intensive Excel spreadsheet operations for stress testing at banks, he says, which institutions are now seeking to automate. The current process can limit the number of possible events included in the tests – peg-break scenarios are often lower on the priority list.
The limitations arise from the fact that the systems within banks are not connected, meaning they have to extract positions and run calculations with a lot of manual intervention. "These are typically run in spreadsheets, so we are talking about a few dozen scenarios at most. Alternatively, approximations can be made in order to run more scenarios, but these may reduce the accuracy of the results," says Kancharla.
Most stress guidelines are based on historical events that need to be translated into values that make sense in the current environment – taking into account low interest rates and central bank intervention, for example. "There is a lot of effort in being able to do that, which requires real-world modelling and Monte Carlo simulations with accurate, time-dependent risk factor evolutions. In order to create stress tests reflective of current market conditions, you need to take the right lessons from history and not apply them too literally to today's markets," says Numerix's Kancharla.
Banks also have to adapt the stresses for different economies. "Even if the next crisis is different, the stressed value-at-risk figures can be conservative enough to be within its limits when the next storm comes. Sometimes this does not work either. Consider Australia, which was not hit heavily in the 2007–2008 period. A VAR calibrated on those years in that country will give a very benign figure and could give Australian banks an incentive to take on more risk," says Alexander Denev, founder of consulting firm Graphrisk.
Copyright Infopro Digital Limited. All rights reserved.