Trinity College, University of Cambridge
I am very pleased with the contributions to this issue of The Journal of Risk Model Validation, as they cover a range of topics relevant to risk model validation and to its close cousin, scenario testing.
The issue’s first paper is titled “Scenario design for macrofinancial stress testing”. Its author, Emanuele De Meo, provides a novel empirical approach to scenario design for selecting a stress scenario for international macrofinancial variables. This is an interesting problem, as casting risk validation in an international framework opens up a number of additional channels through which risks may arise that might appear to be absent, or at least unexplained, in a one-country model. The scenario design framework has several building blocks. First, multiple scenarios for the risk factors are generated by simulating a multi-country large Bayesian vector autoregression. Second, De Meo takes the perspective of a representative investor following a factor-investing strategy who aims to select a severe-yet-plausible scenario for a set of systematic risk factors. Moreover, the stress scenarios selected under different approaches are compared in terms of their plausibility. De Meo also compares the proposed scenario design approach with a historical scenario approach in terms of its ability to select a stress scenario in the run-up to a rare adverse event (in his example, the Covid-19 pandemic). He concludes that this novel framework is suitable for selecting a forward-looking, severe-yet-plausible macrofinancial stress scenario.
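The severe-yet-plausible idea can be illustrated with a rough sketch (not De Meo’s actual model): simulate scenario paths from a toy two-factor VAR(1) with standard normal shocks, score each path’s plausibility by its cumulative squared shock (a Mahalanobis-style distance), and pick the most severe scenario, for a hypothetical factor portfolio, among those inside a plausibility bound. The VAR coefficients, portfolio weights and bound below are all illustrative assumptions.

```python
import random

random.seed(0)

# Toy two-factor VAR(1), x_t = A x_{t-1} + eps_t with standard normal
# shocks -- a stand-in for the paper's multi-country Bayesian VAR.
A = [[0.6, 0.1],
     [0.0, 0.5]]

def simulate_scenario(horizon=4):
    """One scenario path from x_0 = 0; returns the terminal factor levels
    and the cumulative squared shock, which (with identity shock
    covariance) acts as a Mahalanobis-style plausibility distance."""
    x = [0.0, 0.0]
    d2 = 0.0
    for _ in range(horizon):
        eps = [random.gauss(0.0, 1.0) for _ in range(2)]
        x = [A[i][0] * x[0] + A[i][1] * x[1] + eps[i] for i in range(2)]
        d2 += eps[0] ** 2 + eps[1] ** 2
    return x, d2

# Hypothetical factor portfolio; severity = portfolio loss at the horizon.
w = [0.7, 0.3]
def severity(x):
    return -(w[0] * x[0] + w[1] * x[1])

scenarios = [simulate_scenario() for _ in range(5000)]

# Severe-yet-plausible selection: restrict attention to scenarios inside
# the plausibility bound, then take the most severe of those.
BOUND = 16.0
plausible = [s for s in scenarios if s[1] <= BOUND]
worst_x, worst_d2 = max(plausible, key=lambda s: severity(s[0]))
stress_loss = severity(worst_x)
print(f"stress scenario {worst_x}, loss {stress_loss:.2f}, distance {worst_d2:.2f}")
```

Without the plausibility bound, the selected scenario would simply be the most extreme simulated draw; the bound is what keeps the chosen stress scenario defensible.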
The second paper in this issue, “Forecasting the loss given default of bank loans with a hybrid multilayer LGD model by extending multidimensional signals” by Mengting Fan, Zan Mo, Qizhi Zhao, Hongming Gao, Hongwei Liu and Hui Zhu, has a record number of authors for this journal. The paper makes a useful contribution to the literature by focusing on the risk modeling of startup companies: an area that is not widely studied. The risks associated with startups could potentially be a major source of loss for lenders in the future. According to the Basel II and Basel III accords, loss given default (LGD) is an important component of credit risk assessment. To improve the accuracy of LGD prediction, this paper uses signaling theory and machine learning methods to study the LGD predictions of commercial banks with respect to venture quality and the extended level of uncertainty. Machine learning has been around for a surprisingly long time; what gives it more bite in the current climate is improved computer power and speed. We warn readers that machine learning has a language of its own and that they should be prepared to google any unfamiliar jargon that appears in the following description or in the paper itself. Fan et al use signaling theory to analyze the impacts of venture quality and extended level of uncertainty signals on LGD. Then, they propose a new hierarchical framework based on a hybrid multilayer LGD model for studying the LGD predictions of commercial banks in a high-dimensional unbalanced data context. The experimental results demonstrate that venture quality and the level of uncertainty significantly affect LGD.
Additionally, the hybrid multilayer LGD model (consisting of logistic classification, random oversampling examples combined with random forest classification, and ordinary least squares regression) not only yields high levels of predictive accuracy and interpretability in high-dimensional unbalanced data contexts but is also robust to the choice of training set and samples. The results of the study serve as an important reference for decision-making by commercial banks in their assessment of default losses and in their risk management of small and startup companies.
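The layered structure can be made concrete with a minimal two-layer sketch on synthetic data: a logistic classifier first separates full-recovery loans (LGD = 0) from loss-making ones, and a regression then predicts loss severity for the latter. The data-generating process and all parameters here are invented, and the paper’s random forest layer is replaced by plain OLS to keep the sketch short.

```python
import math
import random

random.seed(1)

# Synthetic loan sample (a stand-in for the paper's bank data): one
# "venture quality" score x per loan; strong ventures usually recover
# in full (LGD = 0), while weaker ones lose more as quality falls.
def make_loan():
    x = random.uniform(-2.0, 2.0)
    if x > 0.5 and random.random() < 0.9:
        return x, 0.0                      # full recovery
    lgd = min(1.0, max(0.05, 0.5 - 0.3 * x + random.gauss(0.0, 0.05)))
    return x, lgd

loans = [make_loan() for _ in range(2000)]

# Layer 1 -- logistic classification of LGD == 0 vs LGD > 0, fitted by
# plain batch gradient descent.
w = b = 0.0
for _ in range(300):
    gw = gb = 0.0
    for x, lgd in loans:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        err = p - (1.0 if lgd == 0.0 else 0.0)
        gw += err * x
        gb += err
    w -= 0.5 * gw / len(loans)
    b -= 0.5 * gb / len(loans)

# Layer 2 -- OLS regression of LGD on x, fitted on loss-making loans only.
pos = [(x, lgd) for x, lgd in loans if lgd > 0.0]
mx = sum(x for x, _ in pos) / len(pos)
my = sum(y for _, y in pos) / len(pos)
slope = (sum((x - mx) * (y - my) for x, y in pos)
         / sum((x - mx) ** 2 for x, _ in pos))
intercept = my - slope * mx

def predict_lgd(x):
    """Route through the layers: classify first, regress only if LGD > 0."""
    p_zero = 1.0 / (1.0 + math.exp(-(w * x + b)))
    if p_zero > 0.5:
        return 0.0
    return min(1.0, max(0.0, intercept + slope * x))

print(predict_lgd(1.5), predict_lgd(-1.5))
```

The point of the layering is that LGD data are typically bimodal, with a spike at zero; a single regression would smear that spike across the whole range, whereas the classification layer handles it explicitly.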
Our third paper, “Performance validation of representative sample-balancing methods in loan credit-scoring scenarios”, is by Ling-Jia Chen and Runchi Zhang. Their interest is in imbalanced data sets. Data sets used to construct a credit-scoring model are usually imbalanced, which biases the recognition ability of the model toward the majority, low-risk samples and away from the minority, high-risk samples. A typical problem might be UK residential mortgages: in the last few years there have been vast numbers of well-performing loans with few defaults, and indeed not enough defaults for investigating risk characteristics unless they are oversampled. In the past few decades, many sample-balancing methods have been designed to balance the two classes of samples before modeling, but they lack sufficient performance verification, especially on large data sets. This paper quantitatively validates 12 of the most representative balancing methods. The results show that, in terms of performance, a method combining the synthetic minority oversampling technique (SMOTE) and the Edited Nearest Neighbor method is optimal, followed by the SMOTE-Tomek method, whose performance differs significantly from that of the other methods tested. All 12 balancing methods can maintain stability and thus meet the relevant requirements of the regulatory authorities. The performance of each credit-scoring model is also influenced by the balancing ratio and the number of variables in the data set. In general, the user needs to determine the proper balancing ratio according to the comprehensive characteristics of the scoring model and the balancing method, and a data set containing a larger number of related variables will improve the performance and robustness of most balancing methods.
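For readers unfamiliar with these methods, the following sketch shows the mechanics of SMOTE followed by Edited Nearest Neighbor on a toy imbalanced data set: SMOTE synthesizes minority points by interpolating between minority neighbors, and ENN then removes points that lose the class vote among their nearest neighbors, cleaning the overlap region. The cluster locations and sample sizes are invented for illustration.

```python
import math
import random

random.seed(7)

# Toy imbalanced credit data (hypothetical): 200 "good" loans vs 20 "bad",
# each sample a (feature1, feature2, class) tuple.
good = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0), 0) for _ in range(200)]
bad = [(random.gauss(2.5, 1.0), random.gauss(2.5, 1.0), 1) for _ in range(20)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def smote(minority, n_new, k=5):
    """SMOTE: synthesize points on segments between a minority sample
    and one of its k nearest minority neighbors."""
    out = []
    for _ in range(n_new):
        p = random.choice(minority)
        nbrs = sorted((q for q in minority if q is not p),
                      key=lambda q: dist(p, q))[:k]
        q = random.choice(nbrs)
        t = random.random()
        out.append((p[0] + t * (q[0] - p[0]),
                    p[1] + t * (q[1] - p[1]), 1))
    return out

def enn(samples, k=3):
    """Edited Nearest Neighbor: drop any sample whose own class loses
    the majority vote among its k nearest neighbors."""
    kept = []
    for p in samples:
        nbrs = sorted((q for q in samples if q is not p),
                      key=lambda q: dist(p, q))[:k]
        vote = sum(q[2] for q in nbrs)          # count of class-1 neighbors
        if (vote > k / 2) == (p[2] == 1):
            kept.append(p)
    return kept

# Oversample the minority up to parity, then clean with ENN.
balanced = enn(good + bad + smote(bad, len(good) - len(bad)))
n_bad = sum(s[2] for s in balanced)
print(len(balanced), n_bad)
```

SMOTE alone can place synthetic minority points inside the majority region; the ENN cleaning step afterwards is what gives the combined method its edge over plain oversampling.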
The final paper in the issue is “Risk contagion and bank stability: the role of credit risk and liquidity risk” by Lei Ding, Yaming Zhuang and Hu Wang. Financial crises have shown that credit risk and liquidity risk have an important impact on the stability of the banking system. Considering both credit risk and liquidity risk, Ding et al propose a systemic risk measurement model and measure systemic risk in banking by using 2013–18 data for China’s banking sector. Their results show that taking the two risk contagion channels into account together gives a significantly higher value of systemic risk in the banking system than summing the risk from the individual credit and liquidity contagion channels. Credit risk is the main source of systemic risk in China’s banking system. Systemic risk can be lowered by reducing large single exposures. Increases in the credit guarantee ratio and the cash ratio can both reduce systemic risk in banking, and the cash ratio is the more effective of the two.
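The intuition that joint channels can exceed the sum of their parts can be seen in a toy default cascade (a generic illustration, not Ding et al’s model): losses arrive through bilateral credit exposures and through a fire-sale liquidity loss per default, and a bank fails when its losses reach its capital buffer. All the numbers below are hypothetical, chosen so that the combined channels topple a bank that neither channel topples alone.

```python
# Capital buffers for four banks; bank 0 defaults exogenously.
capital = [1.0, 4.0, 4.0, 6.0]

# exposure[i][j]: loss bank i takes if bank j defaults (credit channel).
exposure = [[0.0, 0.0, 0.0, 0.0],
            [4.5, 0.0, 0.0, 0.0],
            [1.0, 2.0, 0.0, 0.0],
            [0.0, 0.0, 0.0, 0.0]]

# Liquidity channel: every survivor loses this much per defaulted bank,
# a crude stand-in for fire-sale / funding-withdrawal effects.
FIRE_SALE_LOSS = 1.5

def cascade(credit, liquidity, seed=(0,)):
    """Iterate the default cascade to a fixed point, switching each
    contagion channel on or off, and return the set of defaulted banks."""
    defaulted = set(seed)
    while True:
        new = set()
        for i in range(len(capital)):
            if i in defaulted:
                continue
            loss = 0.0
            if credit:
                loss += sum(exposure[i][j] for j in defaulted)
            if liquidity:
                loss += FIRE_SALE_LOSS * len(defaulted)
            if loss >= capital[i]:
                new.add(i)
        if not new:
            return defaulted
        defaulted |= new

credit_only = cascade(credit=True, liquidity=False)
liq_only = cascade(credit=False, liquidity=True)
both = cascade(credit=True, liquidity=True)
print(len(credit_only), len(liq_only), len(both))
```

With these numbers the credit channel alone topples one extra bank, the liquidity channel alone topples none, yet the two together topple two: each channel's losses push a bank closer to the threshold that the other channel then breaches, which is why the channels cannot simply be summed.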
Scenario design for macrofinancial stress testing
The author presents an empirical approach to scenario design for selecting a stress scenario for international macrofinancial variables and compares this approach with a historical scenario approach.
Forecasting the loss given default of bank loans with a hybrid multilayer LGD model by extending multidimensional signals
The authors employ signaling theory and machine learning methods to investigate loss given default predictions of commercial banks and propose a method to improve the accuracy of these predictions.
Performance validation of representative sample-balancing methods in loan credit-scoring scenarios
The authors validate 12 of the most representative sample-balancing methods used for credit-scoring models, finding that a combined SMOTE and Edited Nearest Neighbor method is optimal.
Risk contagion and bank stability: the role of credit risk and liquidity risk
The authors put forward a systemic risk measurement model and measure systemic risk in China's banking sector for the period 2013–18.