Recent market turmoil has put risk management firmly in the spotlight, with regulators, lawmakers, industry practitioners, senior management, and the press all scrutinizing current risk management practices. Standard measures such as value at risk (VaR), sensitivity analysis, and historically based stress tests have formed the backbone of risk management for a number of years, but have fallen short in terms of rigorously analysing the extreme events that have swept through the global marketplace.
This market perspective introduces a new risk calculation that complements existing methodologies. Specifically, we address the potentially devastating effects on the overall value of a firm's positions following unforeseeable and extreme movements in key risk factors.
The measures mentioned above - VaR, sensitivity analysis, historical scenario analysis and stress testing - are useful but have limitations. VaR measures the minimum loss expected from a portfolio under evaluation, assuming relatively normal market conditions. Even the backtesting around VaR checks only that this minimum loss would indeed occur one day in a hundred (assuming a 99th percentile measure). The problem is that the actual loss could be far deeper than that minimum, but these losses are buried deep within the 'tail' of the profile and go largely unobserved.
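The gap between the VaR number and the losses hiding in the tail can be made concrete with a minimal sketch (not from the article; the fat-tailed Student-t P&L series and dollar scaling are illustrative assumptions): compare the 99% historical VaR with the average loss actually suffered on the days that breach it.

```python
# Illustrative sketch: simulate a fat-tailed daily P&L series and compare
# the 99% historical VaR with the average loss on the days beyond it
# (the expected shortfall). All parameters are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(42)
# Student-t returns (df=3) give fatter tails than a normal distribution
pnl = rng.standard_t(df=3, size=10_000) * 10_000  # daily P&L in dollars

# 99% VaR: the loss threshold exceeded on roughly one day in a hundred
var_99 = -np.percentile(pnl, 1)

# Expected shortfall: mean loss on the days that breach the VaR threshold
tail_losses = -pnl[pnl < -var_99]
expected_shortfall = tail_losses.mean()

print(f"99% VaR:            {var_99:,.0f}")
print(f"Expected shortfall: {expected_shortfall:,.0f}")
# The shortfall is materially larger than the VaR figure: the losses
# 'buried in the tail' exceed the reported minimum.
```

The expected shortfall (average tail loss) is always at least as large as the VaR threshold itself, which is precisely the information a VaR-only report discards.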
A second problem with VaR is that it embeds correlations between risk factors as observed over the preceding window (typically two years' worth of data). These correlations tend to break down in times of stress.
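A hypothetical sketch of this lag (the regime correlations of 0.2 and 0.9 and the 60-day stress window are assumptions, not figures from the article): a correlation estimated from a trailing two-year window is dominated by calm-period data and understates the correlation that actually prevails during the stress episode.

```python
# Hypothetical sketch: estimate correlation from a trailing two-year
# window (~504 trading days) and show how it lags a regime change,
# as happens when correlations break down in a crisis.
import numpy as np

rng = np.random.default_rng(0)
n_calm, n_stress = 504, 60
z = rng.standard_normal((n_calm + n_stress, 2))

rho_calm, rho_stress = 0.2, 0.9  # assumed regime correlations
rho = np.r_[np.full(n_calm, rho_calm), np.full(n_stress, rho_stress)]
x = z[:, 0]
y = rho * x + np.sqrt(1 - rho**2) * z[:, 1]  # y correlated with x per regime

# Correlation estimated from the full two-year trailing window
window_corr = np.corrcoef(x[-504:], y[-504:])[0, 1]
# Correlation over the stress period alone
stress_corr = np.corrcoef(x[-n_stress:], y[-n_stress:])[0, 1]

print(f"trailing 2y estimate: {window_corr:.2f}")
print(f"stress-period actual: {stress_corr:.2f}")
```

Because only 60 of the 504 days in the window come from the stressed regime, the trailing estimate sits far below the correlation the portfolio is actually experiencing.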
The third issue is the use of normal distributions to create the scenarios. This is a huge topic in and of itself, but there is considerable evidence against the use of normal distributions for this analysis; still, opinion remains very much split across the market. VaR has its uses, but its limitations must be taken into account. Unfortunately, many practitioners, senior managers and regulators have viewed the measurement of VaR as the cornerstone of prudent risk management practice, and risk reports offered a false sense of security with respect to knowledge of financial risks at the organization and system levels.
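The fat-tail objection can be quantified with a short sketch (the choice of a Student-t with three degrees of freedom as the fat-tailed alternative is an illustrative assumption): compare how often each model predicts a four-standard-deviation daily move.

```python
# Sketch of the fat-tail argument: under a normal distribution a 4-sigma
# daily move is a roughly 1-in-16,000 event, while fat-tailed Student-t
# returns with the same variance produce them orders of magnitude more often.
import numpy as np
from scipy.stats import norm, t

p_normal = 2 * norm.sf(4)            # P(|move| > 4 sigma) under the normal

df = 3                               # assumed degrees of freedom
scale = np.sqrt((df - 2) / df)       # rescale t(3) to unit variance
p_t = 2 * t.sf(4 / scale, df)        # P(|move| > 4 sigma) under t(3)

print(f"normal: 1 in {1 / p_normal:,.0f} days")
print(f"t(3):   1 in {1 / p_t:,.0f} days")
```

The normal model treats a 4-sigma day as a once-in-decades curiosity; the fat-tailed alternative makes it a regular occurrence, which is much closer to what markets actually deliver.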
The replay of Historical Scenarios runs into a similar issue. The analysis allows the impact of past extreme events to be assessed against the current portfolio composition. Unfortunately, the factor correlations are those of the period in which the event occurred and are unlikely to have relevance to the current environment. The danger with such analysis is that it may give risk managers and senior management undue comfort regarding the impact of extreme events and disturbances.
Sensitivity Analysis is the final plank of standard analysis, but is inadequate as far as institutional risk management is concerned. The small movements modelled, typically one or ten basis points, are excellent measures to aid traders and portfolio managers with day-to-day decisions. However, they offer very little to the risk team in terms of understanding the broader risk profile when bad things happen. Why? Many instruments have built-in break clauses, optionality, step-ups/downs, variable FX rates and other features that remove linearity and limit the predictive power of sensitivity measures in assessing portfolio behaviour under crisis.
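A minimal illustration of why linear sensitivities fail under crisis-sized moves (the capped payout, the 50bp cap level and the 200bp shock are all hypothetical): a one-basis-point delta extrapolates cleanly for small moves but badly overstates the P&L once an embedded feature kicks in.

```python
# Hypothetical illustration: a position whose gains are capped (an embedded
# feature, like a break clause or cap) behaves linearly for small moves, so
# a 1bp delta looks reliable, yet badly mispredicts a crisis-sized move.

CAP = 50.0  # assumed cap on gains, in P&L units

def pnl(move_bp: float) -> float:
    """P&L of a capped position: linear at 1 unit per bp, capped above 50bp."""
    return min(move_bp * 1.0, CAP)

# Sensitivity measured from a 1bp bump, as a trading desk would
delta = (pnl(1.0) - pnl(0.0)) / 1.0

move = 200.0  # a crisis-sized 200bp move
linear_estimate = delta * move
actual = pnl(move)

print(f"linear estimate: {linear_estimate:.0f}")  # prints 200
print(f"actual P&L:      {actual:.0f}")           # prints 50: the cap binds
```

The 1bp sensitivity is exactly right locally and exactly wrong globally, which is why it serves the trading desk well and the risk team poorly in a crisis.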