Value-at-risk (VaR) is increasingly replacing volatility as the main measure of risk. In this paper, we investigate the consequences of using VaR as the relevant risk constraint in portfolio optimization. In particular, we look at bond portfolios and use both the normal distribution and historical simulation (i.e., the empirical distribution). The findings show that the empirical distribution reliably predicts the VaR, but when this combination is used as the risk constraint in portfolio optimization, the approach fails dramatically. As a result, portfolios are generated with a substantial amount of hidden risk that, in real life, might easily remain disguised.
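The two estimation approaches mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the returns are simulated heavy-tailed data standing in for the bond portfolios studied in the paper, and the confidence level and sample size are arbitrary choices.

```python
import numpy as np
from statistics import NormalDist

# Hypothetical heavy-tailed daily returns (Student-t, df=4), standing in
# for the bond-portfolio returns used in the paper.
rng = np.random.default_rng(42)
returns = 0.003 * rng.standard_t(df=4, size=2500)

alpha = 0.99                          # VaR confidence level
z = NormalDist().inv_cdf(1 - alpha)   # lower-tail standard-normal quantile

# Parametric VaR under a normality assumption: fit mean and standard
# deviation, then take the corresponding normal quantile of losses.
var_normal = -(returns.mean() + z * returns.std(ddof=1))

# Historical-simulation VaR: the empirical (1 - alpha) quantile of the
# observed return distribution, with no distributional assumption.
var_hist = -np.quantile(returns, 1 - alpha)

print(f"normal VaR: {var_normal:.4%}, historical VaR: {var_hist:.4%}")
```

With fat-tailed returns such as these, the historical-simulation estimate typically exceeds the normal estimate at high confidence levels, which is one reason the empirical distribution can predict VaR more reliably than the normal assumption.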