Journal of Risk Model Validation
Editor-in-chief: Steve Satchell
Volume 15, Number 2 (June 2021)
Trinity College, University of Cambridge
We are living in interesting times. In fact, the expression “may you live in interesting times”, which I took to be a Chinese blessing, turns out to be both a curse and not Chinese. Clearly, it is wise to question our preconceptions from time to time and learn from our mistakes. Another preconception I have held is that machines cannot learn, but that is apparently not the case, as the first paper in this issue of The Journal of Risk Model Validation describes an application to loan data. The topic of machine learning sits within artificial intelligence, which is (and has been for a few years now) an area of intense financial research. The research in this area often looks not unlike earlier papers in the field written decades ago, but the big, and important, difference is the enhanced computing power that is now available, which makes these methods commercially viable.
Our first paper, “What can we learn from what a machine has learned? Interpreting credit risk machine learning models” by Nehalkumar Bharodia and Wei Chen, notes that machine learning algorithms are gaining popularity in financial risk management due, inter alia, to their capacity to analyze unstructured and alternative data. However, machine learning models are often viewed as lacking transparency and interpretability, which hinders model validation and prevents business users from adopting them in practice. Furthermore, such black box models are only acceptable if performance is good. If performance is bad, the modeler is vulnerable to criticism from both client and employer. Bharodia and Chen investigate a few popular machine learning models using LendingClub loan data and judge them on performance and interpretability. Their study argues that LendingClub has sound risk assessment practices. The findings and techniques used in this paper could be extended to other models and other data sets.
The Journal of Risk Model Validation tends to cater for practical people, but we have always been happy to publish high theory too. The second paper in the issue, “Nonconvex noncash risk measures” by Chang Cong and Peibiao Zhao, qualifies as such. In 2018, Cong and Zhao introduced noncash risk measures with 1-norm for weak cone-type acceptable sets. In their paper in the current issue, they take a further look at nonconvex, noncash risk measures but with more general p-norms for nonweak cone-type acceptable sets. They provide a convex extension of the nonconvex, noncash risk measures and establish a representation theorem for this extension.
Our third paper, “Empirical validation of the credit rating migration model for estimating the migration boundary” by Yang Lin and Jin Liang, develops and validates a structural model for credit rating migration. Lin and Liang derive the migration boundary through this approach. Their model is a steady-state version of a model based on their earlier work and is applied to two long-term corporate bonds. The empirical results show that the theoretical boundary values fit the empirical ones quite well. This research shows that the model offers a practicable method for identifying the credit rating migration boundary and provides preliminary evidence of the model’s effectiveness. This appears to be a useful tool in overall validation.
The issue’s final paper, “Validation nightmare: the slotting approach under International Financial Reporting Standard 9” by Lukasz Prorokowski, Oleg Deev and Jean-Daniel Guigou, looks at the implementation of what is known as the slotting approach. This is a requirement by the UK Prudential Regulation Authority for determining the regulatory capital appropriate to real estate loans, and it involves a lookup in a two-dimensional table whose dimensions are time to maturity and loan quality. Prorokowski et al. introduce the concept of mapping probability of default estimates to the slotting scores. A sequential process for deriving the correspondence between the slotting scores and the probabilities of default of a particular obligor is proposed as the solution, thus showing a way to adapt the slotting approach to the IFRS 9 rules. This paper also explains the methodology of the slotting model, discussing specific modeling choices for the real estate slotting approach aligned to the relevant regulatory framework. The methodology presented should be particularly useful for the relevant practitioners, as well as more generally.
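The two-dimensional lookup described above can be sketched in a few lines. The sketch below uses the Basel specialized-lending supervisory categories (Strong, Good, Satisfactory, Weak) and their published risk weights for illustration only; the PD thresholds in the mapping function are hypothetical, standing in for the paper's own PD-to-score correspondence, and readers should consult the current PRA rulebook for the authoritative figures.

```python
# Illustrative sketch of the slotting lookup: risk weights (%) keyed by
# remaining maturity bucket and supervisory loan-quality category.
# Figures follow the Basel specialized-lending slotting weights and are
# for illustration only, not the current PRA rulebook.
SLOTTING_TABLE = {
    "under_2.5y":   {"strong": 50, "good": 70, "satisfactory": 115, "weak": 250},
    "2.5y_or_more": {"strong": 70, "good": 90, "satisfactory": 115, "weak": 250},
}

def slotting_risk_weight(years_to_maturity: float, quality: str) -> int:
    """Look up the risk weight (%) for a loan's maturity and quality category."""
    bucket = "under_2.5y" if years_to_maturity < 2.5 else "2.5y_or_more"
    return SLOTTING_TABLE[bucket][quality.lower()]

def map_pd_to_category(pd_estimate: float) -> str:
    """Hypothetical PD-to-category mapping, in the spirit of the paper's
    proposed correspondence. The thresholds below are invented for
    illustration, not taken from the paper or any regulation."""
    if pd_estimate < 0.005:
        return "strong"
    if pd_estimate < 0.02:
        return "good"
    if pd_estimate < 0.08:
        return "satisfactory"
    return "weak"
```

For example, a five-year loan whose PD estimate maps to the "good" category would receive a 90% risk weight under this toy table.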
Finally, if you have some research that is nearing completion, we would love to see it. While in “uninteresting times”, before Covid-19, we had a steady flow of submissions, the “interesting times” we are living through seem to have created a bottleneck in the process. We urge readers and contributors to submit their research, as the journal can likely offer fairly rapid publication relative to its recent past.
Papers in this issue
What can we learn from what a machine has learned? Interpreting credit risk machine learning models
This paper studies a few popular machine learning models using LendingClub loan data, and judges these on performance and interpretability.
Nonconvex noncash risk measures
This paper looks at nonconvex, noncash risk measures with p-norm (1 ≤ p ≤ ∞) for nonweak cone-type acceptable sets.
Empirical validation of the credit rating migration model for estimating the migration boundary
In this paper, a structural model for credit rating migration is developed and validated, by which the migration boundary is recovered for the first time.
Validation nightmare: the slotting approach under International Financial Reporting Standard 9
This paper makes an important contribution to the practice of validation by focusing on an under-researched area of the slotting approach to real estate specialized lending under the International Financial Reporting Standard 9 (IFRS 9) framework.