Journal of Investment Strategies

Welcome to the fourth issue of The Journal of Investment Strategies. This issue completes the first volume and the first year of our publication. We on the editorial board are proud of this achievement, and we are now even more certain that our journal is useful and appreciated both in the academic community and among practitioners.

In this issue you will find a diverse set of strategies discussed, each of which highlights a different manner in which quantitative analysis can be applied in investment management.

In the first article of the issue, "Jointly modeling the prices of American depository receipts, the local stock and the US dollar", Dilip Madan, a well-known academic expert on the pricing of derivatives, presents a consistent framework for pricing options on American depository receipts (ADRs) and for valuation-based investment strategies trading such options. Madan's model imposes a certain structure on asset price dynamics and, importantly, on the interdependence between the stock price and foreign exchange rate processes that can be gleaned from comparing the three markets, while remaining rich enough to permit many interesting dynamic features. He argues that, while the volatility surface of ADR options encodes the distribution of the joint process, the local (foreign) volatility surfaces for stock and foreign exchange options encode the marginal distributions of each variable. This enables the author to fully decompose the joint dynamics, to calibrate the model parameters to a relatively small number of liquidly traded instruments spanning each of the three markets, and finally to obtain consistent prices for ADR options at all strikes and maturities.
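
To fix intuition (in my own shorthand, not necessarily the paper's notation, and ignoring the ADR ratio): an ADR is the local stock repackaged in US dollars, so its price is simply the product of the local stock price and the exchange rate, and its log returns are the sum of the two components' log returns.

```latex
% A_t is the ADR price in US dollars, S_t the local stock price in local
% currency, X_t the dollar price of one unit of local currency.
\[
  A_t = S_t\,X_t, \qquad \log A_t = \log S_t + \log X_t .
\]
% Options on A_t therefore reflect the joint law of (S_t, X_t), while local
% stock options and FX options reflect the two marginal laws separately.
```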

Madan then proceeds to devise an investment strategy based on "convergence of market to model" as a measure of how well the model works. While the strategy does not always work (in the cases of UBS and Barclays, for example), when it does (see the case of Santander in the paper) it does so consistently over time. The strategy, while admittedly simplified and disregarding important practical questions such as the cost of trading, is nevertheless informative, and not only as a metric for evaluating a pricing model.

It represents a quantitative example of what I would call a fundamental value strategy. There is no contradiction in terms here: one can indeed have a "quantitative fundamental strategy". In my view, a fundamental strategy is one that starts with an absolute valuation of an asset based on some set of consistent, verifiable assumptions - be that a quantitative model or a disciplined analyst's opinion of a company and its prospects - and then defines a convergence strategy based on the difference between market prices and model prices: namely, buying when the market price is low compared with the absolute model price and selling when it is too high. In contrast, in a relative value arbitrage strategy the valuation of assets as expensive or cheap is relative to some other related asset, and the strategy itself is formulated via long/short hedged positions in these related assets. In other words, a relative value strategy bets on convergence of asset prices to each other, while a fundamental strategy bets on convergence of asset prices to model predictions. Of course, when it comes to any quantitative model, its parameters are fitted to some market observables, such as the more liquid options in the case of Madan's ADR model, so one could claim that the valuation obtained in this manner is relative with respect to the valuation of those reference securities. But if the resulting strategy is formulated in an absolute/directional manner, then I would still deem it a fundamental strategy.
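
To make the distinction concrete, here is a deliberately stylized sketch; the function names, thresholds and band sizes are my own illustration, not rules taken from Madan's paper.

```python
# Stylized contrast between a fundamental value rule and a relative value
# rule; all parameters here are hypothetical.

def fundamental_signal(market_price: float, model_price: float,
                       band: float = 0.02) -> int:
    """Fundamental value: trade the asset outright against an absolute model
    valuation. Returns +1 (buy: market cheap), -1 (sell: market rich), 0 (flat)."""
    mispricing = market_price / model_price - 1.0
    if mispricing < -band:
        return 1
    if mispricing > band:
        return -1
    return 0

def relative_value_signal(price_a: float, price_b: float, hedge_ratio: float,
                          mean_spread: float, band: float = 0.5) -> int:
    """Relative value: trade a long/short hedged spread between two related
    assets, betting on their convergence to each other rather than to a model."""
    deviation = (price_a - hedge_ratio * price_b) - mean_spread
    if deviation < -band:
        return 1   # long A, short B: bet the spread reverts upward
    if deviation > band:
        return -1  # short A, long B: bet the spread reverts downward
    return 0
```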

In the issue's second article, "Balanced baskets: a new approach to trading and hedging risks", David H. Bailey and Marcos López de Prado present a set of interesting and very practical solutions to a familiar portfolio construction problem. Imagine a portfolio of N assets whose risk factor exposures and covariance structure are known. What should the portfolio allocation to each asset be? Ever since the advent of Markowitz's mean-variance model, this problem has been under consideration by many academics and practitioners, and somewhat surprisingly still generates novel solutions as we gradually come to understand the relevance of different objectives and the importance of various assumptions.

Bailey and López de Prado highlight, in particular, three of the more advanced solutions to the optimal portfolio construction problem. All of them have the distinction of trying to achieve optimality in a somewhat intuitive manner, by first formulating the desired quality of the portfolio and then finding a solution that has that quality and is more or less independent of the other assumptions. These three approaches - the equal-risk contribution, the maximum diversification ratio and the mini-max subset correlation (the latter introduced in a paper by López de Prado and Leinweber that was published in our journal's spring issue this year) - all result in what the authors call "balanced baskets". This is a name for optimal portfolios that emphasizes the objective of not just reducing the risk but also maintaining a balanced portfolio composition, according to some criteria which quantify the degree of such balance.

In essence, the balanced basket approach replaces the minimization of a single objective function, as in the mean-variance framework, with multi-objective optimization, where the additional objectives can appear either as constraints (eg, all risk contributions must be equal) or as separate functions to be minimized (eg, the risk contributions from any subportfolio). Either way, the proper choice of these additional optimization targets can allow one to avoid spurious "corner solutions", which are known to be a serious problem for traditional minimum-risk optimization. The authors demonstrate the differences between the three viable balanced basket approaches and even provide sample implementations of the optimization procedures. I believe this paper will help many practitioners improve their portfolio construction methods, and in the process it might end up influencing how a great deal of money is invested in the future.
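
As a concrete illustration of one balanced-basket construction, here is a minimal equal-risk-contribution sketch in Python. It is my own toy example, built on a generic SLSQP optimizer, not the authors' published implementation.

```python
# Minimal equal-risk-contribution (ERC) weighting: find long-only weights
# whose contributions to total portfolio risk are (nearly) equal.
import numpy as np
from scipy.optimize import minimize

def erc_weights(cov: np.ndarray) -> np.ndarray:
    n = cov.shape[0]

    def risk_contributions(w):
        port_vol = np.sqrt(w @ cov @ w)
        return w * (cov @ w) / port_vol   # each asset's contribution to vol

    def objective(w):
        rc = risk_contributions(w)
        return np.sum((rc[:, None] - rc[None, :]) ** 2)  # spread of contributions

    x0 = np.full(n, 1.0 / n)              # start from equal dollar weights
    res = minimize(objective, x0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

# Example: three assets with unequal volatilities (hypothetical covariances)
cov = np.array([[0.04, 0.006, 0.0],
                [0.006, 0.09, 0.01],
                [0.0,   0.01, 0.01]])
print(erc_weights(cov).round(3))  # lower-vol assets receive larger weights
```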

In the third article of the issue, "The role of diversification risk in financial bubbles", a team of researchers from ETH Zurich - Wanfeng Yan, Ryan Woodard and Didier Sornette - present the most recent variation of the Johansen-Ledoit-Sornette (JLS) model of bubbles and crashes, which has been extensively applied to the analysis of both market-wide bubbles (such as the 1999 Nasdaq/internet bubble and the 2007 Chinese stock market bubble) and those in narrower market segments (eg, the US repo market), and even to individual asset prices.

The JLS model - a brief overview of which is given in the article before its extension is presented - is based on the belief that bubbles result from the endogenous behavior of market participants: in particular, noise traders whose actions amplify deviations from fundamentals in an accelerating manner. Thus, the dynamics of bubbles and crashes are believed to be a self-excited complex-system phenomenon, and the JLS model essentially postulates phenomenological log-periodic power-law dynamics that are believed to hold in many such systems.
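
For reference, the standard JLS specification, in the generic form in which it usually appears in the literature (quoted from that literature rather than from the article's own equations), posits for the expected log-price during a bubble:

```latex
% t_c is the critical (crash) time, 0 < m < 1, and B < 0 for a positive bubble;
% the cosine term produces the characteristic log-periodic oscillations.
\[
  \ln E[p(t)] \;=\; A \;+\; B\,(t_c - t)^{m}
  \;+\; C\,(t_c - t)^{m}\cos\bigl(\omega\,\ln(t_c - t) - \phi\bigr).
\]
```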

The model presented in the paper extends this basic framework by adding an additional explanatory variable, called a Zipf factor, which measures the difference between the return on cap-weighted and equal-weighted portfolios of stocks. This factor is somewhat similar to the Fama-French large-minus-small factor, or to the market cap factor in the Barra multi-factor model. However, it represents a very particular weighting choice that is dependent on the market dynamics of stock capitalization and is therefore sensitive to disproportional run-ups of stock prices in some segments of the market. Thus, it measures the concentration of "bubble-like" behavior, or the lack of diversification in the market portfolio.
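
In code, the factor is just the spread between the two portfolio returns. The sketch below is a hypothetical illustration; the data layout and the sign convention (cap-weighted minus equal-weighted) are my own assumptions, not the authors'.

```python
# Zipf factor as described above: return spread between a cap-weighted and
# an equal-weighted portfolio of the same stocks.
import numpy as np

def zipf_factor(returns: np.ndarray, market_caps: np.ndarray) -> np.ndarray:
    """returns: (T, N) single-period asset returns; market_caps: (T, N)
    capitalizations at the start of each period. Returns a length-T series."""
    cap_weights = market_caps / market_caps.sum(axis=1, keepdims=True)
    cap_weighted = (cap_weights * returns).sum(axis=1)
    equal_weighted = returns.mean(axis=1)
    return cap_weighted - equal_weighted
```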

The authors estimate the expanded JLS-Zipf model and find that, while the addition of the Zipf factor does not significantly improve the predictability of the bubble/crash regime change, it nevertheless has important explanatory power. In particular, they demonstrate that the returns of a simple "bubble-riding" strategy are improved when it is augmented by a switch from an equally weighted portfolio to a market-cap-weighted portfolio depending on the value of the Zipf factor. One observation I would make, in the light of some of the other papers in this issue that highlight the importance of risk weighting in optimal portfolio construction, is that the definition of the Zipf factor in this paper could perhaps be improved. If the objective is to measure the lack of diversification, then I would imagine that an "adjusted Zipf factor", equal to the difference in returns between a market (cap-weighted) portfolio and an optimally diversified one, would be a better choice. Since the best diversification is achieved not by equal dollar weights but by some sort of equal risk weights (with or without taking correlations into account), the proper candidate for the alternative portfolio could be, for example, a managed volatility portfolio (where weights are inversely proportional to stock volatility), an equal risk contribution portfolio (where weights are such that the marginal contribution to risk is equal for all stocks), or another of the more complex balanced baskets presented in the previous paper in this issue. I would not be surprised if these alternative choices resulted in more significant statistical evidence for improving the JLS model.
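
A sketch of what such an "adjusted Zipf factor" might look like, using the simplest of the alternatives just listed (inverse-volatility, ie, managed volatility, weights). This is purely my own illustration, not a construction from the paper.

```python
# Adjusted Zipf factor: cap-weighted return minus the return of an
# inverse-volatility-weighted (managed volatility) portfolio.
import numpy as np

def inverse_vol_weights(vols: np.ndarray) -> np.ndarray:
    """Weights inversely proportional to each stock's volatility."""
    inv = 1.0 / vols
    return inv / inv.sum()

def adjusted_zipf_factor(returns: np.ndarray, market_caps: np.ndarray,
                         vols: np.ndarray) -> np.ndarray:
    """(T, N) returns and caps, (N,) volatility estimates."""
    cap_w = market_caps / market_caps.sum(axis=1, keepdims=True)
    cap_ret = (cap_w * returns).sum(axis=1)
    diversified_ret = returns @ inverse_vol_weights(vols)
    return cap_ret - diversified_ret
```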

The discussion paper in the Investment Strategy Forum, "A proof of the optimality of volatility weighting over time" by Winfried G. Hallerbach, continues the theme of optimal portfolio construction methodologies, this time in a dynamic setting. The author demonstrates that when volatilities are time-varying, volatility smoothing (or volatility targeting) of portfolios improves their Sharpe ratio and their information ratio. Therefore, he concludes that in both absolute (managed against a risk-free benchmark) and active (managed against a risky benchmark) strategies, volatility weighting is preferable to constant-exposure portfolios. This is a widely held view among practitioners, and the paper provides a much-needed theoretical proof for this commonly used technique.
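
For concreteness, here is a minimal sketch of portfolio-level volatility targeting. The estimation window, the volatility target and, in particular, the explicit leverage cap (which anticipates my second caveat below) are hypothetical choices of mine, not parameters from the paper.

```python
# Scale next-period exposure by (target vol / trailing realized vol),
# with an explicit cap on leverage.
import numpy as np

def vol_targeted_returns(portfolio_returns: np.ndarray,
                         target_vol: float = 0.10,
                         window: int = 60,
                         max_leverage: float = 3.0,
                         periods_per_year: int = 252) -> np.ndarray:
    r = np.asarray(portfolio_returns, dtype=float)
    scaled = np.zeros_like(r)
    for t in range(window, len(r)):
        realized = r[t - window:t].std() * np.sqrt(periods_per_year)
        leverage = min(target_vol / realized, max_leverage)  # cap the leverage
        scaled[t] = leverage * r[t]
    return scaled
```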

Two limitations are worth mentioning. The first is that the paper considers the portfolio as a whole, not its individual instruments. While volatility weighting (targeting) for the whole portfolio does indeed improve the risk-adjusted performance, we still have to answer the question of how best to achieve such targeting: by scaling the whole portfolio or by somehow adjusting the portfolio's composition. The examples in the paper represent the types of strategies for which this is a less relevant distinction: namely, futures-based strategies, which typically involve only a few instruments. Fortunately, for broader multi-asset portfolios readers can consult another of the papers in this issue to learn how to build balanced baskets. Such optimization would solve the portfolio composition question, while the overall portfolio volatility could still be managed by adjusting its leverage.

This provides a good segue into my second cautionary point for readers. If volatility targeting is undertaken by adjusting the leverage of the portfolio, then at times of predicted low volatility it will invariably cause an increase in leverage. At the same time, an unexpected spike in volatility is likely to be relatively greater when starting from a low-volatility state than from an already elevated one, resulting in amplified losses. The effect is not trivial, because the potential feedback effects of deleveraging can cause even greater volatility spikes and cascading losses: an example can be seen in the August 2007 quant crunch. This does not mean that one should not try to increase the Sharpe ratio by volatility targeting; it simply means that, by doing so, one makes a trade-off with tail risk and must pay greater attention to maintaining manageable levels of leverage and liquidity.

On behalf of the Editorial Board I would like to thank our contributing authors for their excellent papers, and our readers for their keen interest and feedback. I look forward to receiving more insightful contributions and to continuing to share them with eager audiences worldwide.
