Mean-variance optimisation is one of the foundations of portfolio management. Developed by Harry Markowitz more than 50 years ago, it is still widely used to select efficient portfolios that maximise expected return for a given amount of risk. However, the technique is highly sensitive to its inputs – small changes in estimated returns or covariances can produce very different portfolios – and it becomes cumbersome when the number of assets grows large.
Markowitz's mean-variance optimisation uses the estimated returns and volatilities of assets, and the correlations between them, as inputs to construct portfolios that are less risky than their individual components. It introduces the concept of the 'efficient frontier' – the set of portfolios that earn the most return for a chosen level of risk.
For portfolios consisting of a large number of assets – say, 500 – the covariance matrix is 500 by 500, so the number of parameters that must be fed into the optimisation is massive. In addition, the inputs are typically estimated from historical data, so when the length of the historical sample is close to the number of assets, the estimates are dominated by noise.
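To make the scale of the problem concrete, here is a minimal sketch – not Ritter's method, just the textbook unconstrained mean-variance solution w* = (1/γ) Σ⁻¹ μ – with toy inputs, along with a count of the distinct parameters a 500-asset covariance matrix requires:

```python
import numpy as np

n = 500
cov_params = n * (n + 1) // 2   # distinct entries of a symmetric 500x500 covariance
print(cov_params)               # 125250 parameters to estimate

# Toy 4-asset example of the unconstrained Markowitz solution.
rng = np.random.default_rng(0)
mu = rng.normal(0.05, 0.02, size=4)     # toy expected returns
A = rng.normal(size=(4, 4))
sigma = A @ A.T + np.eye(4)             # toy positive-definite covariance
gamma = 3.0                             # risk-aversion coefficient (assumed)
w = np.linalg.solve(gamma * sigma, mu)  # maximises mu'w - (gamma/2) w'Sigma w
```

With real data, the 125,250 covariance entries must all be estimated from history, which is where the noise problem above comes from.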
None of this is ideal. For portfolio managers making use of mean-variance optimisation to spot market opportunities that might quickly disappear, the speed of the optimisation matters.
In an article published recently on Risk.net, Stable linear-time optimisation in APT models, Gordon Ritter, a senior portfolio manager at GSA Capital Partners in New York, shows how combining two fundamental theories – the arbitrage pricing theory (APT) and mean-variance optimisation – can help overcome this problem.
APT is a commonly used way of understanding the relationship between risk and return. Essentially, it is a multifactor model that establishes a linear relationship between a security's return and broader risk factors, such as market capitalisation, growth, inflation and exchange rates. The factors are used to explain returns, instead of having to estimate a huge covariance matrix directly. In APT, the number of factors is usually much lower than the number of assets, so the models deal with far fewer parameters.
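The parameter saving can be sketched as follows. This is a generic factor-model decomposition of the covariance matrix – Σ = BFB' + D, with loadings B, factor covariance F and idiosyncratic variances D – used here purely to illustrate the count, not as Ritter's specific model:

```python
import numpy as np

n, k = 500, 10                                  # 500 assets, 10 factors (assumed)
full_params = n * (n + 1) // 2                  # direct covariance: 125250 params
factor_params = n * k + k * (k + 1) // 2 + n    # loadings + factor cov + idiosyncratic
print(full_params, factor_params)               # 125250 vs 5555

# Build the implied asset covariance from the factor structure.
rng = np.random.default_rng(1)
B = rng.normal(size=(n, k))                     # factor loadings
F = np.diag(rng.uniform(0.01, 0.04, k))         # factor covariance (diagonal here)
D = np.diag(rng.uniform(0.001, 0.01, n))        # idiosyncratic variances
sigma = B @ F @ B.T + D                         # Sigma = B F B' + D
```

With 10 factors, the model needs roughly 5,500 parameters instead of 125,250 – a reduction of more than twenty-fold.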
"One of the reasons arbitrage pricing theory and factor models were developed is that they reduce the number of parameters quite considerably," says Ritter. "In some sense, that is the key to making Markowitz optimisation work."
In his paper, Ritter takes the APT model and incorporates it into the utility function that is maximised in Markowitz optimisation. As a result, the utility function is simplified and can be expressed in terms of variables in the APT model. That helps create an optimised portfolio on the efficient frontier.
What makes this possible is a mathematical tool called the Moore-Penrose pseudoinverse, developed independently by mathematicians EH Moore in the 1920s and Roger Penrose in the 1950s. One of the problems one can run into in portfolio optimisation is that the covariance matrix cannot always be inverted so as to arrive at the solution. The pseudoinverse sidesteps this, and Ritter applies it to the optimisation problem.
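The failure mode, and the pseudoinverse fix, can be shown in a few lines. In this toy example – again an illustration of the general technique, not of Ritter's derivation – two assets are perfectly correlated, so the covariance matrix is singular and a plain inverse does not exist; NumPy's `np.linalg.pinv` still returns the minimum-norm solution to the optimality condition:

```python
import numpy as np

# Two perfectly correlated assets (identical rows/columns) make Sigma singular.
sigma = np.array([[0.04, 0.04, 0.01],
                  [0.04, 0.04, 0.01],
                  [0.01, 0.01, 0.09]])
mu = np.array([0.05, 0.05, 0.07])   # toy expected returns
gamma = 2.0                         # risk-aversion coefficient (assumed)

# Moore-Penrose pseudoinverse: gives the minimum-norm w satisfying
# gamma * Sigma @ w = mu, even though Sigma has no ordinary inverse.
w = np.linalg.pinv(sigma) @ mu / gamma
```

By symmetry, the pseudoinverse splits the position equally across the two identical assets – a sensible answer where an ordinary inverse would simply fail.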
Existing optimisation techniques are slower – by as much as a factor of a hundred. Ritter claims his technique produces a numerically stable solution and generates all the numbers in anywhere between 10 milliseconds and a few seconds.
One key aspect of the paper is that it applies to a simplified state of the world without any costs, or in the presence of only very simple transaction costs. In actual trading, optimisation techniques have to incorporate many real-world problems, such as transaction costs, borrowing costs and liquidity.
'New age' quants who want post-crisis models that account for all sorts of real-life costs and constraints might frown at this. Nonetheless, Ritter's method can be used as a research tool for quickly identifying potential opportunities: it allows firms to run an idealised case of portfolio optimisation, without too many messy costs.
In a market constantly being hit by sharp moves and uncertainty, especially in recent years, speed in performing calculations is essential for identifying investment opportunities and ideas before they disappear. Once the ideas are developed, more complex techniques can then be used in actual portfolio management to incorporate a number of real-world constraints. This way, speed can be balanced against accuracy.
For that reason, elegant mathematical solutions to idealised problems are not a thing of the past – at least, not yet.
Also out this month: Interest rate models enhanced with local volatility, by Lingling Cao and Pierre Henry-Labordère