Alexandre Antonov, a director in the quantitative model risk team at Standard Chartered in London, visited our offices to talk about his new paper, Efficient Simm-MVA calculations for callable exotics, co-authored by Andrew McClelland, a director in the quantitative research team at Numerix in New York, and Serguei Issakov, a San Francisco-based senior vice-president in the quantitative research group at Numerix.
Margin valuation adjustment (MVA), the cost of funding the initial margin on a trade, has exploded since the non-cleared margin rules came into force in September 2016. To calculate the pricing adjustment, one needs to simulate the initial margin over the life of the trade. The initial margin itself is calculated using the standard initial margin model (Simm) developed by the industry, which is based on the computation of sensitivities.
StanChart’s Antonov said existing “brute force” methods for computing future sensitivities are extremely cumbersome for callable products, because their future values are calculated using Monte Carlo simulations and regressions. Differentiating these regressions to find future sensitivities is a painful process, he said.
“If I have a hundred time steps and several thousands of the Monte Carlo paths and maybe 20 sensitivities – or even 10 – this would be a huge amount of work. This is feasible, but it’s too slow,” he added.
His technique, on the other hand, converts portfolio sensitivities such as deltas and vegas into model-based sensitivities, which are then computed using a combination of the popular algorithmic differentiation technique and the much less common tangent differentiation technique.
“This can be done algorithmically. If we have our scripting language for instrument pricing, we just overload certain operations inside the C++ classes, so this is pretty simple,” said Antonov. “It doesn’t need any tape during the pricing. The tape is only needed during model construction, but not pricing.”
The result is a technique that is “several hundred times” faster than the brute force approach.
Antonov said a new area of research he is eager to venture into is model validation, especially answering the question of how one can determine whether a model is good or not with respect to a certain payoff. This would remove the subjectivity of the assumptions surrounding pricing models from model validation, he added.
“Imagine we have a certain hedging procedure. So we hedge an option, we buy and sell, and we spend a certain amount of money. When it comes to the payoff, we receive or pay something. All this is objective; all this is money. So it’s not option price calculated by a certain model which is subjective,” said Antonov. “Of course, we can follow the classical arbitrage-free arguments, but we recalibrate this model all the time so in some sense the model price is not something objective.”
2:05 New paper in Risk
7:50 The Andersen, Pykhtin and Sokol model
11:00 An alternative method
14:48 Why is the calculation of future sensitivities complicated?
21:30 Criticism of existing techniques
23:19 Mathematical trick behind the paper
26:40 Application of algorithmic differentiation
29:32 Tangent algorithmic differentiation
32:10 The Green and Kenyon model
38:30 Future research
Future podcasts in our Quantcast series will be uploaded to Risk.net.