Random matrix theory provides a clue to correlation dynamics
A growing field of mathematical research could help us understand correlation fluctuations, says quant expert
Harry Markowitz famously quipped that diversification is the only free lunch in investing. What he did not say is that this is only true if correlations are known and stable over time.
Markowitz’s optimal portfolio offers the best risk-reward trade-off – for a given set of predictors – but requires the covariance matrix of a potentially large pool of assets to be known and representative of future realised correlations.
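For reference, the recipe itself is simple once the inputs are taken as given. The sketch below – with purely illustrative expected returns and a toy covariance matrix, neither taken from real data – computes the classic unconstrained mean-variance weights, proportional to Σ⁻¹μ; everything that follows is about how hard it is to get Σ right.

```python
# A minimal sketch of the unconstrained mean-variance solution, w ∝ Σ⁻¹μ.
# The expected returns 'mu' and covariance 'sigma' are illustrative placeholders;
# in practice both must be estimated, and the covariance estimate is the hard part.
import numpy as np

rng = np.random.default_rng(0)
n_assets = 5
mu = rng.normal(0.05, 0.02, n_assets)                    # hypothetical expected returns
a = rng.standard_normal((n_assets, n_assets))
sigma = 0.04 * (a @ a.T / n_assets + np.eye(n_assets))   # a well-conditioned toy covariance

raw = np.linalg.solve(sigma, mu)      # Σ⁻¹ μ
weights = raw / raw.sum()             # normalise to a fully invested portfolio
print(np.round(weights, 3))
```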
The empirical determination of large covariance matrices is, however, fraught with difficulties and biases. But the vibrant field of random matrix theory (RMT) has provided original solutions to this big-data problem – and has a host of possible applications in econometrics, machine learning and other large-dimensional models.
Correlation is not stationary, however. Even for the simplest, two-asset bond/equity allocation problem, being able to model forward-looking correlation has momentous implications. Will this correlation remain negative in years to come, as it has been since late 1997 – or will it revert to positive territory?
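A back-of-the-envelope illustration – with hypothetical volatilities of 15% for equities and 6% for bonds and a 60/40 weighting, none of which are figures from this column – shows what is at stake: flipping the correlation from −0.3 to +0.3 pushes portfolio volatility from roughly 8.6% to 10%.

```python
# Back-of-the-envelope: the effect of the bond/equity correlation sign on a
# 60/40 portfolio.  The volatilities (15% equity, 6% bonds) are illustrative
# assumptions only.
import numpy as np

w = np.array([0.6, 0.4])        # equity, bond weights
vol = np.array([0.15, 0.06])    # hypothetical annualised volatilities

for rho in (-0.3, 0.3):
    cov = np.array([[vol[0] ** 2,           rho * vol[0] * vol[1]],
                    [rho * vol[0] * vol[1], vol[1] ** 2]])
    print(f"rho = {rho:+.1f}: portfolio vol ≈ {np.sqrt(w @ cov @ w):.1%}")
```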
Compared to our understanding of volatility, our grasp of correlation dynamics is remarkably poor. And, surprisingly, the hedging instruments that can mitigate the risk of bond/equity correlation swings are nowhere near as liquid as instruments linked to the Vix volatility index.
There are in effect two distinct problems in estimating correlation matrices: one is a lack of data; the other is non-stationarity over time.
Consider a pool of N assets, where N is large. We have at our disposal T observations – daily returns, say – for each of the N time series. The paradoxical situation is this: even though every individual off-diagonal covariance is accurately determined when T is large, the covariance matrix as a whole is strongly biased – unless T is much larger than N. For large portfolios, where N is a few thousand, the number of days in the sample should be well over 10,000 – say, 50 years of data.
But this is absurd: Amazon and Tesla, to name but two, did not exist 25 years ago. So, perhaps we should use five-minute returns, say, and increase the number of data points by a factor of 100. Except that five-minute correlations are not necessarily representative of the risk of much lower-frequency strategies, and other biases can creep into the resulting portfolios.
So, in what sense are covariance matrices biased when T is not very large compared to N? The best way to describe such biases is in terms of eigenvalues: empirically, the smallest eigenvalues come out much too small and the largest much too large. This skews the Markowitz optimisation programme towards a substantial over-allocation to combinations of assets that happened to have small volatility in the past – with no guarantee that this will continue to be the case. The Markowitz construction can therefore lead to a considerable underestimation of the realised risk in the next period.
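A minimal simulation makes the effect visible. Assume returns that are genuinely independent, so every true eigenvalue of the correlation matrix equals one; with N = 500 assets and only T = 1,000 observations, the sample eigenvalues nevertheless spread out between the Marchenko–Pastur edges (1 ± √(N/T))², roughly 0.09 and 2.9. The figures below are from a toy simulation, not market data.

```python
# Toy illustration of the eigenvalue bias: the true covariance is the identity
# (all true eigenvalues equal 1), yet with T only twice N the sample eigenvalues
# spread out between the Marchenko-Pastur edges (1 ± sqrt(N/T))².
import numpy as np

rng = np.random.default_rng(42)
N, T = 500, 1000                        # q = N/T = 0.5, far from the T >> N regime
returns = rng.standard_normal((T, N))   # independent returns: true covariance = identity
sample_cov = returns.T @ returns / T
eigvals = np.linalg.eigvalsh(sample_cov)

q = N / T
print(f"sample eigenvalues span [{eigvals.min():.2f}, {eigvals.max():.2f}]")
print(f"Marchenko-Pastur edges:  [{(1 - q ** 0.5) ** 2:.2f}, {(1 + q ** 0.5) ** 2:.2f}]")
```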
Out-of-sample results are, of course, always worse than expected, but RMT offers a guide to at least partially correcting these biases when N is large. In fact, RMT gives an optimal, mathematically rigorous recipe to tweak the value of the eigenvalues so that the resulting, cleaned covariance matrix is as close as possible to the true but unknown one, in the absence of any prior information.
Such a result, first derived by Ledoit and Péché in 2011,1 is already a classic and has been extended in many directions. The underlying mathematics, initially based on abstract free probability theory, are now in a ready-to-use format – much like Fourier transforms or Ito calculus.2 One of the more exciting and relatively unexplored directions is to add some financially motivated prior, such as industrial sectors or groups, to improve upon the default agnostic recipe.
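As a flavour of what "cleaning" means in practice, the sketch below implements eigenvalue clipping – a cruder cousin of the optimal Ledoit–Péché shrinkage, not the recipe itself – in which eigenvalues below the Marchenko–Pastur edge are treated as noise and flattened, while the eigenvectors are left untouched.

```python
# A crude stand-in for RMT cleaning: "eigenvalue clipping", where eigenvalues
# below the Marchenko-Pastur upper edge are treated as noise and flattened,
# keeping the eigenvectors.  The Ledoit-Péché recipe referred to above shrinks
# each eigenvalue optimally instead of clipping; this is only a sketch.
import numpy as np

def clip_eigenvalues(returns: np.ndarray) -> np.ndarray:
    """Cleaned correlation matrix from a (T, N) array of asset returns."""
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)

    mp_edge = (1 + np.sqrt(N / T)) ** 2          # Marchenko-Pastur upper edge
    noise = vals < mp_edge
    if noise.any():
        vals = vals.copy()
        vals[noise] = vals[noise].mean()         # flatten the noise band

    cleaned = (vecs * vals) @ vecs.T
    d = np.sqrt(np.diag(cleaned))
    return cleaned / np.outer(d, d)              # restore a unit diagonal

# usage sketch, assuming 'daily_returns' is a (T, N) array of daily returns:
# cleaned_corr = clip_eigenvalues(daily_returns)
```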
Stationarity, still
Now that we’ve addressed the data problem, the stationarity problem pops up. Correlations – like volatility – are not set in stone but evolve over time. Even the sign of correlations can suddenly flip, as was the case with the S&P 500 and Treasuries during the 1997 Asian crisis, after 30 years of correlations being staunchly positive. Ever since this trigger event, bonds and equities have been in so-called flight-to-quality mode.
More subtle but significant changes of correlation can also be observed between single stocks and between sectors in the stock market. For example, a downward move of the S&P 500 leads to an increased average correlation between stocks. Here again, RMT provides powerful tools to describe the time evolution of the full covariance matrix.3
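A rough way to see this effect in data – sketched below under the assumption that one has a T × N array of stock returns and the matching index returns, which are not supplied here – is simply to compare the average pairwise correlation on days when the index falls with that on days when it rises.

```python
# Rough check of the "index leverage effect": average pairwise correlation of
# stocks on index down-days versus up-days.  'stock_returns' (shape T x N) and
# 'index_returns' (length T) are assumed inputs, not data from the column.
import numpy as np

def average_pairwise_corr(returns: np.ndarray) -> float:
    """Mean off-diagonal entry of the correlation matrix of a (T, N) block."""
    c = np.corrcoef(returns, rowvar=False)
    n = c.shape[0]
    return (c.sum() - n) / (n * (n - 1))

def down_vs_up_correlation(stock_returns: np.ndarray, index_returns: np.ndarray):
    down = index_returns < 0
    return (average_pairwise_corr(stock_returns[down]),
            average_pairwise_corr(stock_returns[~down]))

# usage sketch:
# corr_down, corr_up = down_vs_up_correlation(stock_returns, index_returns)
# the leverage effect shows up as corr_down > corr_up
```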
As I discussed in my previous column, stochastic volatility models have made significant progress recently and now encode feedback loops that originate at the microstructural level. Unfortunately, we are very far from having a similar theoretical handle to understand correlation fluctuations – although in 2007, Matthieu Wyart and I proposed a self-reflexive mechanism to account for correlation jumps like the one that took place in 1997.4
Parallel to the development of descriptive and predictive models, the introduction of standardised instruments that hedge against such correlation jumps would clearly serve a purpose. This is especially true in the current environment, where inflation fears could trigger another inversion of the equity/bond correlation structure, which would be devastating for many strategies that – implicitly or explicitly – rely on persistent negative correlations.
As it turns out, Markowitz’s free lunch could be quite pricey.
Jean-Philippe Bouchaud is chairman of Capital Fund Management and a member of the Académie des Sciences.
References
1. Ledoit, O, & Péché, S (2011). Eigenvectors of some large sample covariance matrix ensembles. Probability Theory and Related Fields, 151(1–2), 233–264.
2. Potters, M, & Bouchaud, JP (2020). A first course in random matrix theory: for physicists, engineers and data scientists. Cambridge: Cambridge University Press. doi:10.1017/9781108768900
3. Reigneron, PA, Allez, R, & Bouchaud, JP (2011). Principal regression analysis and the index leverage effect. Physica A: Statistical Mechanics and its Applications, 390(17), 3026–3035.
4. Wyart, M, & Bouchaud, JP (2007). Self-referential behaviour, overreaction and conventions in financial markets. Journal of Economic Behavior & Organization, 63(1), 1–24.
Editing by Louise Marshall