The power of the portfolio

To observers, credit portfolio modelling appears particularly dependent upon making approximations. Derivatives traders may study finite difference schemes, but at least the pricing models are finely calibrated to the market. Asset managers might have to estimate their efficient frontiers, but the price data for traded bonds and equities makes this relatively straightforward.

Credit portfolio managers, overseeing hundreds or even thousands of counterparties, have much less to go on. Default and recovery statistics exist, but how should their idiosyncrasies be untangled from the economic cycle when estimating counterparty default probabilities? And how do these probabilities interconnect as part of a big portfolio? The answers to these questions – and their impact on economic capital, asset allocation and hedging policy – can spell the difference between the success and failure of a credit business.

Lurking at the heart of the problem is the portfolio loss distribution. It is easy to write down a mathematical formulation of this beast, as the weighted sum of many stochastic variables, but very hard to calculate it for realistic portfolios. One tradition is to use Monte Carlo simulation for the entire portfolio. If run to completion, this should provide the most accurate estimate of the loss distribution. However, without the careful use of sampling techniques, the convergence of Monte Carlo is much too slow for it to be useful.
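The slow-convergence point can be seen in a few lines. Below is a minimal sketch (all portfolio parameters are illustrative, not drawn from any model discussed here): simulate independent defaults, then estimate a far-tail quantile of the loss distribution. Only a handful of the simulated losses land beyond the 99.9th percentile, which is why naive Monte Carlo tail estimates converge so slowly without variance-reduction techniques such as importance sampling.

```python
import random

random.seed(42)

# Hypothetical portfolio: 500 counterparties, each with a 2% default
# probability and unit exposure (purely illustrative parameters).
n, pd_, exposure = 500, 0.02, 1.0

def simulate_loss():
    # One Monte Carlo draw of the portfolio loss: sum of exposures
    # of the counterparties that default in this scenario.
    return sum(exposure for _ in range(n) if random.random() < pd_)

trials = 20_000
losses = sorted(simulate_loss() for _ in range(trials))

# 99.9% quantile estimate: only ~20 of the 20,000 samples lie beyond
# it, so the standard error of this estimate shrinks very slowly
# (as 1/sqrt(trials)) relative to the tail probability of interest.
var_999 = losses[int(0.999 * trials)]
print(var_999)
```

With a mean loss of 10 and a standard deviation of roughly 3, the 99.9% quantile sits around 20; doubling the number of trials reduces the sampling error by only a factor of about 1.4.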

Analytic portfolio models, particularly those which involve conditional independence, were invented as a way around such obstacles. The trick is to obtain the loss distribution by assuming conditional independence of individual defaults subject to the risk factors reflecting the overall state of the economy. Early versions of such models, which assumed a single systematic risk factor, and treated default dependence purely as pair-wise correlations between counterparties, may deserve the term approximation. As Basel II proves, such approximations play an important role in building consensus.
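The conditional-independence trick in its single-factor form can be written down compactly. The sketch below (a standard Gaussian single-factor formulation, used here for illustration only) shows how an unconditional default probability is transformed into a probability conditional on the systematic factor; given the factor, defaults are treated as independent, so the portfolio loss distribution can be built analytically.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    # Inverse normal CDF by bisection (sufficient for illustration).
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def conditional_pd(p, rho, z):
    """Default probability conditional on the systematic factor z,
    in a single-factor Gaussian model with asset correlation rho.
    Conditional on z, counterparty defaults are independent."""
    return norm_cdf((norm_ppf(p) - math.sqrt(rho) * z) / math.sqrt(1.0 - rho))

# In a downturn (systematic factor two standard deviations below zero)
# a 1% unconditional PD rises several-fold:
print(round(conditional_pd(0.01, 0.2, -2.0), 4))
```

This conditional-PD mapping is the same device that underlies the Basel II risk-weight formula, which is one reason such single-factor approximations have been effective in building consensus.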

But when applied to more recent generations of analytic portfolio models the characterisation is unfair. Researchers have made great strides in understanding the theoretical properties of credit portfolios, and a rich array of mathematical tools is being deployed. We have been fortunate to see much of this research published in these pages. The two Cutting Edge articles in the main magazine this month are part of this tradition.

In the first article, Kevin Thompson and Roland Ordovas start with the question of contributions from individual counterparties to overall portfolio loss. Rather than work within the conditional independence framework of systematic factors, they ask: what is the expected default rate of an individual counterparty, conditional on a given level of portfolio loss? This interesting question is almost impossible to answer using Monte Carlo techniques.

However, Thompson and Ordovas have made an analytical breakthrough by borrowing an old concept from statistical physics: the ensemble. They imagine a collection of copies of the portfolio, corresponding to different combinations of defaulting and non-defaulting counterparties. By counting the ways in which various default combinations appear, they find a formula for the most likely configuration in the ensemble, subject to the constraint of fixed average portfolio loss.

Thompson and Ordovas then conjecture that the most likely conditional default rate is equal to the expected conditional default rate. In physics, this corresponds to the so-called ergodic hypothesis, which states that ensemble averages can be used in place of time averages. Thompson and Ordovas concede that for credit portfolios, their conjecture is strictly a heuristic tool. However, their approach does lead to formulas identical to those derived using other methods, and appears to provide important insight into problems of dependence.

Meanwhile, among existing analytical approaches to credit portfolio modelling, CreditRisk+ has become the most popular due to its tractability. However, the model suffers from the restrictive assumption of sector independence. Moreover, the recursion relation for calculating the loss distribution is unstable for very large portfolios. In the second of this month’s articles, Götz Giese presents an improved version of CreditRisk+ with a stable recursion scheme and sector correlations, which compares favourably with other approximation techniques when used to calculate loss distributions.
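The recursion at issue is short enough to show. Below is a sketch of the basic single-sector, Poisson-approximation form of the CreditRisk+ loss recursion (exposures rounded to integer bands), using invented parameters; Giese's improved scheme addresses the numerical instability this textbook version can suffer on very large portfolios.

```python
import math

def creditriskplus_pmf(mus, exposures, max_loss):
    """Loss distribution for the basic single-sector CreditRisk+ model.
    mus: expected number of defaults per obligor (Poisson approximation);
    exposures: integer exposure bands. Returns P(loss = 0..max_loss)
    via the standard recursion, whose repeated small differences can
    lose accuracy for very large portfolios."""
    p = [0.0] * (max_loss + 1)
    p[0] = math.exp(-sum(mus))              # probability of zero loss
    for n in range(1, max_loss + 1):
        # p[n] depends on all obligors whose exposure fits in n units.
        p[n] = sum(mu * v * p[n - v]
                   for mu, v in zip(mus, exposures) if v <= n) / n
    return p

# 50 hypothetical obligors, each with expected default count 0.02 and
# unit exposure; the loss count then reduces to a Poisson(1.0) variable.
pmf = creditriskplus_pmf(mus=[0.02] * 50, exposures=[1] * 50, max_loss=20)
print(round(pmf[0], 4), round(pmf[1], 4))
```

The unit-exposure case gives an easy sanity check: P(0) and P(1) should both equal exp(-1), the Poisson(1) values.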

Finally, our third Cutting Edge article appears in this month’s Hedge Fund supplement (see page S18). Investors in hedge funds need to be cautious when making capital allocation decisions due to problems of survivorship bias, autocorrelation and hidden optionality. Here, Hari Krishnan and Izzy Nelken show how to quantify such caution. By analysing the incentive structure of hedge fund managers using an option pricing approach, they derive a liquidity haircut to compensate for lockup periods, and an illiquidity premium that effectively increases volatility.
