Journal of Risk Model Validation

Before I take our readers through their staple diet of four papers, I am delighted to announce a special issue of The Journal of Risk Model Validation celebrating its tenth anniversary, for which Harald Scheule has kindly put together a selection of past contributions that I hope our readers will enjoy. Read the papers for free at: http://www.risk.net/static/10th-anniversary-virtual-special-issue

Turning to the current issue, our first paper, by Martín Egozcue and Luis Fuentes García, is entitled "Banks' expected equity-to-asset ratio bounds under foreign exchange risk"; it presents bounds for the expected equity-to-asset ratio when a bank faces foreign exchange shocks. The authors interpret these bounds as worst-case scenarios, and it is this feature that makes the paper an innovation in the model validation literature.

The issue's second paper, "An application of sensitivity analysis to hedge funds" by Greg N. Gregoriou and Razvan Pascalau, investigates a sample of 142 live hedge funds via a sensitivity analysis of data envelopment analysis (DEA) scores based on a super-efficiency model. Using regression analysis, the study then examines in depth the factors that affect each fund's efficiency status with respect to each input. The results have implications for financial risk management and risk model validation.
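For readers who want to fix ideas, here is a minimal sketch of the kind of super-efficiency model the paper builds on: the Andersen-Petersen input-oriented, constant-returns-to-scale formulation, in which the unit under evaluation is excluded from its own reference set. The choice of this particular variant, and the function name `super_efficiency`, are illustrative assumptions of mine rather than details taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, k):
    """Input-oriented CCR super-efficiency score for unit k (illustrative).

    X: (n, m) array of inputs; Y: (n, s) array of outputs.
    Unit k is excluded from its own reference set, so efficient
    units can score above 1 (Andersen-Petersen).
    """
    n, m = X.shape
    s = Y.shape[1]
    others = [j for j in range(n) if j != k]
    # Decision variables: [theta, lambda_j for each j != k]; minimise theta.
    c = np.zeros(1 + len(others))
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):
        # sum_j lambda_j * x_ji <= theta * x_ki
        A_ub.append(np.concatenate(([-X[k, i]], X[others, i])))
        b_ub.append(0.0)
    for r in range(s):
        # sum_j lambda_j * y_jr >= y_kr
        A_ub.append(np.concatenate(([0.0], -Y[others, r])))
        b_ub.append(-Y[k, r])
    bounds = [(0, None)] * (1 + len(others))
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[0] if res.success else float("nan")
```

A score above 1 flags a fund that remains efficient even after being removed from its own peer group, which is what makes super-efficiency useful for the kind of sensitivity analysis the paper performs.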

"Value-at-risk time scaling: a Monte Carlo approach" by Moepa Malataliana and Michael Rigotard is the third paper in this issue. It discusses a value-at-risk (VaR) time scaling approach based on making some innovative distributional assumptions so as to apply Monte Carlo simulation to the determination of long-term VaR. In particular, the paper uses the composite normal-Pareto distribution to better capture tail risk.

Our final paper, "Dynamic credit score modeling with short-term and long-term memories: the case of Freddie Mac's database" by Maria Rocha Sousa, João Gama and Elísio Brandão, investigates two mechanisms of memory, short-term memory (STM) and long-term memory (LTM), in the context of credit risk assessment. These components are fundamental to learning but are overlooked in credit risk modeling frameworks; as a consequence, current models are insensitive to changes such as population drifts or periods of financial distress. The authors go beyond the typical development of credit score modeling in static learning settings to the use of dynamic learning frameworks. An empirical study relying on the Freddie Mac database, covering 16.7 million mortgage loans granted in the United States from 1999 to 2013, suggests using a dynamic model with STM and LTM components to optimize current rating frameworks.
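As a purely illustrative companion to this summary, here is a minimal sketch of one way such a pairing can work: an incrementally updated classifier plays the role of LTM, a model refitted on a recent sliding window plays the role of STM, and their scores are blended. The synthetic drifting data, the six-month window and the equal-weight blend are all my assumptions, not the authors' specification.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def monthly_batch(t, n=500):
    """Synthetic loan features and default flags whose relationship
    drifts over time, mimicking a population drift."""
    X = rng.normal(size=(n, 3))
    drift = 0.5 * np.sin(t / 6.0)
    logits = X @ np.array([1.0 + drift, -0.5, 0.2]) - 2.0
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return X, y

ltm = SGDClassifier(loss="log_loss")   # long-term memory: never forgets
window = []                            # short-term memory: recent months only

for t in range(36):
    X, y = monthly_batch(t)
    ltm.partial_fit(X, y, classes=[0, 1])
    window = (window + [(X, y)])[-6:]  # keep only the last six months

# STM model refitted on the recent window alone.
X_s = np.vstack([b[0] for b in window])
y_s = np.concatenate([b[1] for b in window])
stm = SGDClassifier(loss="log_loss").fit(X_s, y_s)

# Blend the two memories when scoring new applications.
X_new, _ = monthly_batch(36)
score = 0.5 * ltm.predict_proba(X_new)[:, 1] + 0.5 * stm.predict_proba(X_new)[:, 1]
```

The intuition behind the blend is that the STM component reacts quickly to drift while the LTM component stabilizes scores through calmer periods, which matches the motivation described above.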

I find all the papers interesting, but the last holds a particular personal appeal because of my interest in agent-based modeling, to which it makes an essential contribution. Again, I remind our readers that ten years have elapsed since we launched The Journal of Risk Model Validation; we know little about the hazard functions for journal survival, but it is no doubt a topic researched in quantitative publishing.

Steve Satchell
Trinity College, University of Cambridge
