Journal of Investment Strategies

Welcome to the winter 2012/13 issue of The Journal of Investment Strategies. This issue opens the second volume and the second year of our publication. This milestone comes at a time when we are seeing a steady increase in the number of papers being submitted for publication. We are looking forward to sharing the best of these with our readers in the coming issues.

This issue contains five contributions: three research papers and two discussion papers in the Investment Strategy Forum section. The research papers cover topics from portfolio construction to algorithmic trading. The discussion papers highlight practical issues related to the impact of some well-known portfolio construction methodologies on the long-term performance of investment portfolios.

In the first paper of the issue, "A least discrimination method for portfolio optimization: an alternative to the Black-Litterman approach", Jacques Pézier follows up his earlier publication on a new methodology for assessing risk-adjusted returns (The Journal of Investment Strategies 1(3), 3-65) with an article on a new portfolio construction method that is particularly well-suited to addressing nontrivial investor views on returns and other characteristics of the investment set.

The paper presents an alternative to the well-known Black-Litterman approach, while trying to achieve essentially the same objective: namely, to find a portfolio that is simultaneously consistent with the investor's views on the relative (or absolute) merits of its constituents and well-anchored on a sensible benchmark or reference portfolio. As with the Black-Litterman method, Pézier starts with the notion of implied views derived from the benchmark portfolio. He then takes a more general approach, however: instead of postulating a mixture model for implied and investor views, he proposes using the least discriminatory forecast, obtained via an iterative procedure, as a more consistent way of blending these views. The results, at least in theory, are more consistent and should be more "optimal" for a typical investor. Pézier also establishes other important relationships between this new approach and others, including the correspondence between the improvement in the certainty-equivalent return and the generalized relative entropy distance between the investor's views and the implied views.

The paper demonstrates how the least discrimination procedure can result in both linear optimal portfolios, when the investor's views concern only the returns of the constituents, and nonlinear (option) portfolios, when the investor's views concern characteristics such as volatility or correlation. This feels natural from a practitioner's perspective. Indeed, if I predict that the volatility of a particular asset will be lower than the market-implied level, then not only can I allocate more to that asset from a risk-return balance perspective, but if at all possible I should also find a way to sell the market-implied volatility to gain extra premium. This approach lies at the heart of many derivatives-based portfolios, although I doubt that many investors are trying to come up with the optimal capital allocation to such volatility (or correlation) premium trades. Perhaps the paper by Pézier will open the door to more comprehensive portfolio construction practices.

In the second paper in the issue, "Leveraged exchange traded funds: admissible leverage and risk horizon", Tim Leung and Marco Santoli present a thorough investigation into the risk and return characteristics of leveraged exchange-traded funds (LETFs) and discuss the limitations of their use in investor portfolios.

LETFs have faced constant criticism, from both the academic community and practitioners, for what is considered to be a fatal design flaw: the embedded decay of returns due to volatility exposure, which arises as a consequence of the daily rebalancing process. Leung and Santoli go one step further: not only do they demonstrate this return decay characteristic, they also estimate the dependence of the leveraged ETF performance on the leverage and the investment horizon.
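The decay effect can be made concrete with a stylized numerical sketch. The figures and the alternating-return path below are hypothetical and are not taken from the paper; they simply show how daily rebalancing at leverage β turns index volatility into a compounding drag, roughly β² times the index's own volatility drag in log terms.

```python
# Sketch (hypothetical numbers, not from Leung and Santoli): volatility
# decay of a daily-rebalanced leveraged ETF on a flat but volatile index.

def terminal_value(daily_returns, leverage=1.0, start=1.0):
    """Compound a daily-rebalanced position at the given leverage."""
    value = start
    for r in daily_returns:
        value *= 1.0 + leverage * r
    return value

# The index alternates +5% / -5% for 20 days, so it ends slightly below
# flat; the 2x fund loses roughly four times as much, since its log drag
# per period scales like leverage squared.
returns = [0.05, -0.05] * 10
index_end = terminal_value(returns)         # (1.05 * 0.95)**10
letf2x_end = terminal_value(returns, 2.0)   # (1.10 * 0.90)**10
```

The same mechanism, run over longer horizons and higher leverage, is what motivates the paper's notion of admissible leverage for a given risk constraint.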

They find, for each investment horizon, a level of "admissible leverage" that should not be exceeded if the investor wishes to constrain the portfolio to a particular value-at-risk or conditional value-at-risk level. Moreover, they propose a specific stop-loss/take-profit strategy that is optimal for a given leverage level. This paper sheds some light on the still quite confusing market segment of leveraged ETFs and helps investors use them more appropriately in their portfolios.

Our issue's third paper, "Universal algorithmic trading" by Vladimir V. V'yugin and Vladimir G. Trunov, presents an information-theory-based approach to the design of algorithmic strategies. Following the pioneering works of Kelly, Cover and others, they design a "universal" strategy that is asymptotically at least as good as any other strategy within the "not too complex" set of choices. The new contribution that the authors make in this regard is the use of a so-called well-calibrated forecasting method, which avoids many of the usual stochastic dynamics assumptions regarding stock prices.

The approach that the authors take is very practical, even if it sounds somewhat unfamiliar to many finance practitioners. Their technique, using the randomization of forecasts, is shown to satisfy the criteria of well-calibrated forecasting, in particular achieving the target probability of forecast occurrence. A given strategy and a benchmark strategy are compared by running a trading game, where the benchmark strategy plays the role of the dealer and the strategy being tested plays the role of the price taker, and both of them use the same randomized forecast for a market observable (such as a stock price). After a rigorous derivation, the authors prove that the proposed universal strategy does indeed beat all the other benchmark strategies in this trading game.

The authors conclude the paper with a set of examples showing that the proposed technique does indeed appear to outperform the preset benchmarks, if transaction costs are neglected. I must mention that this assumption is not as innocuous as it might sound, because universal algorithms such as those derived in this paper tend to have very high turnover, and because of this the transaction costs can in fact be prohibitive in practice. Still, I believe that information-based portfolio models are underutilized in practice, and research such as that reported in this paper will serve to promote this very interesting approach.

The first discussion paper in the Investment Strategy Forum is "Mean reversion in stock prices: implications for long-term investors" by Laura Spierdijk and Jacob A. Bikker. This continues the theme of optimal portfolio construction methodologies, this time in the dynamic setting. Specifically, the authors study the impact of the mean-reversion assumption on portfolio allocation decisions.

The mean-reversion assumption is deeply ingrained in the psyche of investors. Whether they explicitly use it, or implicitly hope to benefit from "rules of thumb" such as maintaining 60:40 stock:bond policy portfolios or dollar cost averaging, the belief in the reversion to the mean is probably responsible for a good deal of portfolio rebalancing and reallocation decisions.

Spierdijk and Bikker try to spell out the mean-reversion assumption and see whether it should affect the optimal capital allocation for long-term investors. They find that the assumed mean reversion does not actually result in a substantial reduction in portfolio volatility. However, they do find that mean reversion results in the significant dependence of the optimal allocation on the investment horizon across asset classes. This is not surprising, since the mean reversion puts an upper limit on the growth of variance over time. When comparing two investments, therefore, even a small difference between their expected returns results in statistically significantly better performance for a sufficiently long horizon.
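The point that mean reversion caps the growth of variance can be made explicit with a generic Ornstein-Uhlenbeck sketch. The parameter values below are hypothetical and the model is illustrative, not the authors' estimated specification: the OU variance is bounded by σ²/(2κ), whereas a random walk's variance grows linearly with the horizon.

```python
# Illustrative sketch (generic OU model with hypothetical parameters,
# not the authors' estimates): mean reversion bounds the variance of
# cumulative returns; a random walk's variance grows without limit.
import math

def ou_variance(t, kappa, sigma):
    """Variance of an OU process at horizon t (zero initial variance)."""
    return sigma**2 * (1.0 - math.exp(-2.0 * kappa * t)) / (2.0 * kappa)

def rw_variance(t, sigma):
    """Variance of a driftless random walk at horizon t."""
    return sigma**2 * t

kappa, sigma = 0.15, 0.17   # hypothetical reversion speed and volatility

# At long horizons the OU variance flattens out near its ceiling, so even
# a small difference in expected return eventually dominates the risk.
ceiling = sigma**2 / (2.0 * kappa)
long_run = ou_variance(50.0, kappa, sigma)
```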

However, the statistical estimates of the mean-reversion process parameters are not highly accurate, while the optimal portfolio allocation itself for a given horizon is not very sensitive to the strength of mean reversion. The authors conclude that even longer-horizon asset-liability-optimizing investors should not trust their estimates of the mean-reversion coefficient too much, and should take a conservative view when using them in asset allocation.

Our second discussion paper, "Alternative indexing methods: point of reference - does it matter?" by Patrick Gander, Daniel Leveau and Thomas Pfiffner, considers the impact of benchmark selection on the investment process and its results. The authors give an overview of benchmarks, from the most commonly used market-capitalization-weighted index to the more recently introduced fundamental indexing, maximum diversification indexing and dynamic indexing. One important observation is that, for most alternative indexing methods, the resulting portfolio exhibits a significant tilt toward value stocks. This makes the long-term performance comparisons between the indexes less clear, in the sense that it is difficult to say whether the better performance of, say, the maximum diversification index is due to index construction or due to the resulting value tilt. If it is the latter, we cannot be sure whether the same index construction applied in a different market environment in which the value tilt is insignificant would, in fact, still yield the expected improved performance.

The authors argue that the selection of the benchmark index (a point of reference) alters investors' conclusions not only about relative returns but also about factor biases and other characteristics of the portfolios. They also conclude that the equal-weighted indexes should be considered the "clean slate" benchmarks, as they contain almost no assumptions.

Unfortunately, the paper does not analyze the portfolio rebalancing requirements when selecting a benchmark like this. In effect, selecting this benchmark necessitates selling the gaining stocks and buying the losing stocks at each rebalancing interval; the portfolio will therefore gain or lose if there is a realized mean reversion in the stock prices. Implementing what is essentially a short volatility strategy, such a benchmark will in fact have positive alpha relative to a no-rebalancing (ie, market-cap-based) benchmark precisely because of this volatility exposure. In my view, it is this exposure that drives its performance, and the value tilt happens to be coincidental: because the investment gains in growth companies are concentrated by definition, there are actually always more value companies in the market than growth companies (unless, of course, the entrepreneurs and venture capitalists succeed in constantly creating new growth companies at a rate that exceeds the effect of this runaway concentration, which seems unlikely).
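The rebalancing argument can be illustrated with a deliberately extreme two-asset sketch. The return paths below are hypothetical and not from the paper: when relative performance fully reverses each period, rebalancing back to equal weights pockets the realized mean reversion, while buy-and-hold suffers the volatility drag.

```python
# Hypothetical two-asset sketch (not from the paper): with perfectly
# mean-reverting relative returns, rebalancing to fixed equal weights
# harvests a premium over buy-and-hold -- a short-volatility payoff.

def buy_and_hold(return_paths, weights):
    """Terminal value with no rebalancing (initial total value 1.0)."""
    values = list(weights)
    for t in range(len(return_paths[0])):
        values = [v * (1 + path[t]) for v, path in zip(values, return_paths)]
    return sum(values)

def rebalanced(return_paths, weights):
    """Terminal value, rebalancing back to target weights each period."""
    value = 1.0
    for t in range(len(return_paths[0])):
        period = sum(w * path[t] for w, path in zip(weights, return_paths))
        value *= 1 + period
    return value

# Asset relative performance fully reverses every period.
a = [0.10, -0.10] * 5
b = [-0.10, 0.10] * 5
w = [0.5, 0.5]

bh = buy_and_hold([a, b], w)   # each asset compounds (1.1 * 0.9)**5
rb = rebalanced([a, b], w)     # each period nets to zero, value stays 1.0
```

Of course, if prices trend instead of reverting, the same rule sells winners too early; the premium exists only to the extent that the mean reversion is realized.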

On behalf of the Editorial Board I would like to thank our contributing authors for their excellent papers, and our readers for their keen interest and feedback. I look forward to receiving more insightful contributions and to continuing to share them with eager audiences worldwide.
