Editor: Farid AitSahlia
Published: 29 Jun 2014
Models idealize practical situations, and their main role is to guide decision-making. But how do we assess and compare models? There are no general rules, but identifying when a model is appropriate, or is not, for a specific environment enhances our understanding. In addition, models can be compared on the basis of specific criteria. Ultimately, numerical and empirical evidence must be provided. This issue of The Journal of Risk includes papers that confront these challenges.
Papers in this issue
"Risk evaluation of mortgage-loan portfolios in a low interest rate environment" by Masaaki Kijima, Youichi Suzuki and Yasuhiro Tamba
"Selection versus averaging of logistic credit risk models" by Evelyn Hayden, Alex Stomper and Arne Westerkamp
"Diversifying risk parity" by Harald Lohre, Heiko Opfer and Gábor Ország
"Pitfalls and solutions in current risk management methodology" by Cristina Danciulescu
The first paper, "Risk evaluation of mortgage-loan portfolios in a low interest rate environment" by Masaaki Kijima, Youichi Suzuki and Yasuhiro Tamba, makes the case for the adoption of a simple one-factor quadratic Gaussian model when interest rates are very low. The authors argue that this model addresses the likelihood of negative interest rates and the calibration of both pricing and physical probability measures over long-term horizons. It also provides analytical pricing expressions for interest-rate derivatives and a practical set-up for risk measure estimation via Monte Carlo simulation.
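The non-negativity feature of a quadratic Gaussian short rate can be sketched as follows. This is an illustrative simulation only, not the authors' exact model or calibration; all parameter values are made up.

```python
import numpy as np

# Illustrative sketch (not the authors' exact specification): a
# one-factor quadratic Gaussian short rate r(t) = x(t)^2, where the
# factor x follows Ornstein-Uhlenbeck dynamics.  Squaring the Gaussian
# factor keeps the short rate non-negative even when x dips below zero,
# the property that matters in a low-rate environment.

def simulate_short_rate(x0=0.05, kappa=0.5, theta=0.05, sigma=0.02,
                        T=10.0, n_steps=1000, n_paths=500, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    rates = np.empty((n_steps, n_paths))
    for t in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + kappa * (theta - x) * dt + sigma * dw  # OU step for the factor
        rates[t] = x ** 2                              # quadratic map to the rate
    return rates

rates = simulate_short_rate()
print(rates.min())  # non-negative on every path, by construction
```

Monte Carlo paths of this kind are the natural input to the simulation-based risk measure estimation the authors describe.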
In the issue's second paper, "Selection versus averaging of logistic credit risk models", Evelyn Hayden, Alex Stomper and Arne Westerkamp compare two model-selection methods: the commonly used heuristic, or stepwise, approach and a Bayesian scoring rule. In the particular context of logistic credit risk, the authors show through an empirical study that the Bayesian approach is the better strategy. In essence, the Bayesian approach considers all potential models concurrently, weighting each by its relative score, whereas the stepwise approach compares models only in succession.
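The averaging idea can be sketched with BIC-based weights, one common Bayesian-style scoring rule; the paper's own rule may differ in detail, and the log-likelihoods and parameter counts below are made-up numbers.

```python
import numpy as np

# Illustrative sketch of model averaging: rather than keeping the single
# "best" model, as stepwise selection does, every candidate logistic
# model receives a weight derived from its score, and predictions are
# combined with those weights.  BIC weights approximate posterior model
# probabilities; the candidate scores here are hypothetical.

def bic_weights(log_liks, n_params, n_obs):
    """Approximate posterior model weights from BIC scores."""
    log_liks = np.asarray(log_liks, dtype=float)
    n_params = np.asarray(n_params, dtype=float)
    bic = -2.0 * log_liks + n_params * np.log(n_obs)
    delta = bic - bic.min()          # rescale before exponentiating
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Three hypothetical logistic credit-risk models of increasing size:
w = bic_weights(log_liks=[-520.0, -512.0, -511.0],
                n_params=[3, 6, 10], n_obs=1000)
print(w)
```

Note how the largest model's small likelihood gain is outweighed by its complexity penalty: averaging down-weights it instead of discarding it outright.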
The practical implementation of the celebrated mean-variance model of Markowitz has long been hindered by issues of parameter estimation. In the third paper in this issue, "Diversifying risk parity", Harald Lohre, Heiko Opfer and Gábor Ország assess the empirical performance of risk budgeting that relies on principal components of the underlying assets. In a multi-asset allocation study, they compare and contrast this approach, known as diversified risk parity, or DRP, with alternatives such as 1/N and minimum variance. They illustrate in particular how DRP avoids the extremes of either 1/N, which does not exploit asset correlation, or minimum variance, which tends to concentrate strategies on low-volatility assets.
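The two extremes that DRP sits between can be made concrete on a hypothetical covariance matrix (this is an illustration, not the authors' data or their DRP algorithm):

```python
import numpy as np

# Illustrative comparison of the two benchmark extremes: equal weights
# 1/N, which ignore correlations entirely, and unconstrained
# minimum-variance weights proportional to inv(Sigma) @ ones, which
# pile into the low-volatility asset.  The covariance matrix is made up.

cov = np.array([[0.04, 0.006, 0.002],    # covariances of three
                [0.006, 0.09, 0.003],    # hypothetical assets;
                [0.002, 0.003, 0.01]])   # asset 3 has the lowest variance

n = cov.shape[0]
w_equal = np.full(n, 1.0 / n)            # 1/N: no use of correlations

inv_ones = np.linalg.solve(cov, np.ones(n))
w_minvar = inv_ones / inv_ones.sum()     # concentrates on the low-vol asset

# DRP instead budgets risk across the principal components of cov,
# i.e. across uncorrelated risk sources rather than raw assets.
eigvals, eigvecs = np.linalg.eigh(cov)

print(w_equal, w_minvar, eigvals)
```

Running this shows the minimum-variance portfolio allocating most of its weight to the lowest-volatility asset, exactly the concentration the editorial describes.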
In the fourth and final paper in the issue, "Pitfalls and solutions in current risk management methodology", Cristina Danciulescu deals with the backtesting of value-at-risk models, which is central to regulatory capital requirements. While previous literature has documented the inadequacy of continuous approximations based on asymptotic theory, the author pursues the actual implementation of discrete model tests for small sample sizes and shows in particular how to separate the effects of bad luck from those of bad models. The author also shows that simple remedies, such as modest increases in the value-at-risk level or the sample size, are not effective.
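The small-sample setting can be illustrated with the exact binomial distribution of VaR exceedances; this sketches the general issue, not the paper's specific tests.

```python
from math import comb

# Illustration of the small-sample backtesting setting: under a correct
# 99% VaR model, the number of exceedances over T days is
# Binomial(T, 0.01).  For small T the exact tail probability is
# computable directly, with no asymptotic (continuous) approximation.

def exact_tail_prob(k, n, p=0.01):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Five exceedances in one trading year (250 days) against a 99% VaR:
p_exact = exact_tail_prob(5, 250)
print(p_exact)
```

At sample sizes like 250 a normal approximation to this discrete tail can be noticeably off, which is the gap between continuous and discrete tests that motivates the paper.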
Warrington College of Business Administration, University of Florida