Machine learning has made significant inroads into the finance literature, and this issue of The Journal of Risk opens by evaluating it against traditional parametric models. Also included are papers addressing challenges in estimating market beta and default risk, and in modeling realized volatility with long memory.
In “Forecasting the European Monetary Union equity risk premium with regression trees”, the issue’s first paper, David Cortés and Pilar Soriano use European Monetary Union data from 2000 to 2020 to show that regression tree ensemble methods such as bagging, random forests and boosting do not forecast the equity premium better than the standard AR(1) benchmark model. However, Cortés and Soriano show that boosting and random forests are well suited to identifying economic predictors that a risk-averse investor can use to form portfolios combining a risk-free asset and an equity fund.
Our second paper, “Shrinking beta”, is by David Blitz, Laurens Swinkels, Kristina Ūsaitė and Pim van Vliet, who use shrinkage to split the estimation of market beta in the capital asset pricing model into separate estimates of the correlation and of the relative volatility of returns. Through an empirical illustration, Blitz et al show that estimating the correlation is statistically and economically more critical than estimating the volatility, with significant gains from applying more shrinkage to correlations than to volatilities.
In “Distance to default based on the CEV–KMV model”, the third paper in this issue, Wen Su extends the classical Kealhofer–McQuown–Vasicek (KMV) model for credit risk estimation by accounting for nonconstant volatility through a constant elasticity of variance (CEV) specification. Using data on Chinese companies, Su shows that the CEV–KMV model clearly captures the difference in credit risk between high-risk special treatment (ST) firms and non-ST firms.
The issue concludes with “A two-component realized exponential generalized autoregressive conditional heteroscedasticity model” by Xinyu Wu, Michelle Xia and Huanming Zhang, which proposes an extension to the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) model to jointly capture the dynamics of asset returns and realized volatility. The authors incorporate the long memory of volatility through a combination of long-run and short-run components, which yields a more flexible approach to identifying the leverage effect than the standard EGARCH model. Using index data from the United States, Hong Kong and Japan, they provide empirical evidence of the superiority of their approach over standard alternatives such as realized GARCH, EGARCH and heterogeneous autoregressive EGARCH.