It is not usually my editorial style to quote popular music lyrics in The Journal of Risk Model Validation, but, given the current Zeitgeist, I cannot help but be reminded of The Doors' lyric: "[It's] been down so goddamn long that it looks like up to me". This closely captures the current ultra-cautious optimism, in Europe at least, as we come out of a long winter to find our institutions more or less intact and stock markets rising, whether those rises are driven by fundamentals or by the sort of psychological mechanism alluded to in the quotation.
Our first paper, "An integrated stress testing framework via Markov switching simulation", is by Wei Chen and Jimmy Skoglund. It focuses on capturing tail events, especially big-loss, low-probability events. For such events, past data is often inadequate for modeling, and procedures such as importance sampling are needed to extract information from the tails of the distribution. The paper proposes a multiperiod Markov switching simulation-based method for integrated stress testing and risk analysis that incorporates plausible events not necessarily captured in historical records or in historically stressed calibrations of risk models. The authors claim that such an integrated risk model and stress testing framework not only leads to forward-looking tail risk measurement that mitigates the "black swan" effect, but also extends stress testing into advanced risk management decision-making, such as scenario-based portfolio optimization.
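The authors' Markov switching framework is beyond the scope of this editorial, but the importance-sampling idea mentioned above can be sketched in a few lines. The example below is a minimal illustration, not the paper's method, and all parameters are invented: it estimates a deep-tail probability of a standard normal loss by sampling from a mean-shifted proposal and reweighting by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
threshold = 4.0  # a deep-tail loss level; P(Z > 4) is about 3.2e-5

# Naive Monte Carlo: almost no draws land beyond the threshold,
# so the estimate is noisy or exactly zero.
naive = (rng.standard_normal(n) > threshold).mean()

# Importance sampling: draw from N(threshold, 1) so the tail is
# well-populated, then reweight each draw by the likelihood ratio
# phi(x) / phi(x - threshold) = exp(-threshold*x + threshold**2 / 2).
x = rng.normal(loc=threshold, size=n)
weights = np.exp(-threshold * x + threshold**2 / 2)
is_estimate = ((x > threshold) * weights).mean()
```

With the proposal centred on the threshold, roughly half the draws contribute to the estimate, which is why the reweighted estimator has dramatically lower variance than the naive one.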
The second paper in the issue, by Oliver Blumke, is titled "Probability of default validation: introducing the likelihood-ratio test and power considerations". The author provides a statistical analysis aimed at improving default probability validation in small samples; in many ways, the paper addresses an issue similar to that in the first paper of the current issue. Blumke proposes a modification of default probability validation that ensures that default probabilities set too low are rejected with a certain minimum probability. He also applies the likelihood-ratio test to the likelihood function derived from the one-factor model and suggests its use for default probability validation, which allows him to weaken the assumption of independent observations.
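For readers unfamiliar with the basic machinery, a textbook binomial likelihood-ratio test of a calibrated default probability can be sketched as below. This assumes independent defaults, which is precisely the assumption the one-factor version in the paper relaxes, and it omits the power considerations Blumke develops; the parameters in the usage example are invented.

```python
import math

def lr_test_pd(defaults, n, pd0):
    """Binomial likelihood-ratio test of H0: true PD equals pd0.

    Assumes independent default events (the paper's one-factor
    likelihood weakens exactly this assumption).
    Returns the LR statistic and whether H0 is rejected at 5%.
    """
    p_hat = defaults / n

    def loglik(p):
        # Guard against log(0) when p_hat is 0 or 1.
        out = 0.0
        if defaults > 0:
            out += defaults * math.log(p)
        if defaults < n:
            out += (n - defaults) * math.log(1 - p)
        return out

    lr = 2.0 * (loglik(p_hat) - loglik(pd0))
    crit = 3.841  # chi-square(1) critical value at the 5% level
    return lr, lr > crit

# Hypothetical example: 30 defaults among 1000 obligors, against a
# calibrated PD of 2%.
lr, reject = lr_test_pd(30, 1000, 0.02)
```

In this example the observed default rate of 3% is far enough from the calibrated 2% that the test rejects, illustrating the mechanism rather than the paper's refinements.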
Andy J. Y. Yeh and Jose A. Lopez's "An algorithmic model for retail credit portfolio segmentation", our third paper, suggests a constructive procedure for segmenting loan books. As readers will know, segmentation refers to a taxonomy whereby loans are classified into groups of homogeneous risk. Such procedures are often ad hoc at best; to quote the abstract, "this new technique identifies the optimal number of segments, sorts the individual loan exposures into the various segments, and then leads to a greater degree of risk homogeneity in comparison with the baseline equal-bin and quantile-bin schemes".
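The paper's algorithmic segmentation is more sophisticated, but the two baseline schemes it is benchmarked against, equal-bin and quantile-bin, are easy to sketch. The data below is simulated purely for illustration, and the homogeneity measure is a simple share-weighted within-segment variance rather than anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.beta(0.5, 20, size=5000)  # skewed, PD-like risk scores
k = 5  # number of segments

# Equal-width bins: cut the score range into k equal intervals.
equal_edges = np.linspace(scores.min(), scores.max(), k + 1)
equal_seg = np.digitize(scores, equal_edges[1:-1])

# Quantile bins: each segment holds roughly the same number of loans.
quant_edges = np.quantile(scores, np.linspace(0, 1, k + 1))
quant_seg = np.digitize(scores, quant_edges[1:-1])

def within_var(seg):
    # Share-weighted within-segment variance; lower = more homogeneous.
    return sum(scores[seg == j].var() * (seg == j).mean()
               for j in range(k) if (seg == j).any())
```

With skewed scores, equal-width binning crowds most loans into the lowest-risk segment, which is exactly the pathology an optimal segmentation scheme tries to avoid.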
Our final paper is "Modeling value-at-risk for international portfolios in different jump-diffusion processes" by Fen-Ying Chen. This has a more academic flavor than the previous papers. It models asset returns and exchange rates under different jump-diffusion processes, so that the portfolios considered carry not only Poisson jump risk but also exchange rate risk, and it provides an analytical value-at-risk for international portfolios that could be used to manage market risk during a crisis such as the subprime mortgage crisis. The analytical value-at-risk solution is tested for reliability by backtesting with in-sample and out-of-sample fits.
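Chen's value-at-risk is analytical; the kind of simulation such formulas are checked against can be sketched as a Monte Carlo of a Merton-style jump-diffusion with an added exchange rate leg. Every parameter below is invented for illustration, and this is not the paper's model or its backtest.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
dt = 1 / 252  # one trading day

# Illustrative parameters (assumptions, not from the paper):
mu, sigma = 0.05, 0.20               # foreign-asset drift / volatility
lam, mu_j, sig_j = 3.0, -0.02, 0.05  # jump intensity and jump-size law
mu_fx, sig_fx = 0.00, 0.10           # exchange rate drift / volatility

# Merton-style jump-diffusion log-return for the foreign asset:
# diffusion part plus a compound Poisson sum of normal jumps.
n_jumps = rng.poisson(lam * dt, n)
asset = ((mu - 0.5 * sigma**2) * dt
         + sigma * np.sqrt(dt) * rng.standard_normal(n)
         + n_jumps * mu_j
         + np.sqrt(n_jumps) * sig_j * rng.standard_normal(n))

# Exchange rate log-return; log-returns add when converting to the
# domestic currency.
fx = ((mu_fx - 0.5 * sig_fx**2) * dt
      + sig_fx * np.sqrt(dt) * rng.standard_normal(n))

pnl = np.expm1(asset + fx)          # simple return of the converted position
var_99 = -np.quantile(pnl, 0.01)    # one-day 99% VaR as a positive loss
```

Because jumps arrive on only about 1% of days here, they matter almost entirely in the tail, which is why jump risk moves the 99% VaR far more than it moves the average return.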
Returning to our introductory paragraph, the cautious optimism mentioned there has not infected the papers in this issue: big losses remain our principal concern. At some future date, we may concern ourselves with modeling gains!