A new way to calculate conditional expectations
Gaussian distributions can sharpen one of the most commonly used tools in quant finance
As acronyms go, GMM-DCKE – Gaussian mixture model dynamically controlled kernel estimation – is a bit of a mouthful. Its proponents, though, consider it to be the simplest expression of conditional expectations, one of the bedrock ingredients of quantitative finance.
Conditional expectations – the expected values of random variables given a set of conditions – have all sorts of applications, ranging from the pricing of exotic and out-of-the-money options to the calculation of derivatives valuation adjustments and the calibration of volatility models.
They are typically calculated using the least squares Monte Carlo method, originally proposed by Longstaff and Schwartz in 2001 to price American options. But despite its popularity, this approach is not without drawbacks – chief among them is the need to simulate a large number of paths, especially when dealing with more complex instruments.
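As a rough illustration of the least squares idea – not Longstaff and Schwartz’s full algorithm, and with all market parameters (drift, volatility, strike, dates) chosen arbitrarily for the example – the conditional expectation of a discounted payoff given the asset price at an earlier date can be estimated by regressing the simulated payoff on a polynomial basis across paths:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed toy setup: geometric Brownian motion paths for one asset
n_paths, s0, r, sigma, t1, t2 = 50_000, 100.0, 0.02, 0.2, 0.5, 1.0
z1 = rng.standard_normal(n_paths)
z2 = rng.standard_normal(n_paths)
s_t1 = s0 * np.exp((r - 0.5 * sigma**2) * t1 + sigma * np.sqrt(t1) * z1)
s_t2 = s_t1 * np.exp((r - 0.5 * sigma**2) * (t2 - t1)
                     + sigma * np.sqrt(t2 - t1) * z2)

# Discounted payoff of a call with strike 100, paid at t2
payoff = np.exp(-r * (t2 - t1)) * np.maximum(s_t2 - 100.0, 0.0)

# Least squares step: regress the payoff on a cubic polynomial basis in s_t1;
# the fitted values approximate E[payoff | S_t1] on each path
basis = np.vander(s_t1, 4)  # columns: s^3, s^2, s, 1
coef, *_ = np.linalg.lstsq(basis, payoff, rcond=None)
cond_exp = basis @ coef
```

The quality of `cond_exp` depends on the basis choice and, as noted above, on simulating enough paths – the drawback GMM-DCKE aims to soften.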
GMM-DCKE, by contrast, is based on a combination of multiple Gaussian distributions and requires minimal input data. “All I need to apply this model is data of two time points and the Gaussian mixture model,” says Jörg Kienitz, a partner and senior quant at AcadiaSoft, with academic affiliations at the universities of Cape Town and Wuppertal, who introduced the technique in a paper published in Risk.net last month. He describes it as a semi-analytic expression of conditional expectations that is purely data driven and model-agnostic.
Gaussian mixture models can replicate nearly any distribution, given a sufficient number of Gaussian components. The first step in calculating conditional expectations with GMM-DCKE is to choose how many components to use – a number that can also be inferred from the data. Kienitz started with five to 10 Gaussians, which worked adequately. The mixture’s parameters are then estimated by maximum likelihood, and the conditional mean and covariance follow analytically from those estimated parameters.
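A minimal sketch of this recipe – using scikit-learn’s `GaussianMixture` on synthetic data, not Kienitz’s own implementation, with the five-component choice and the toy relationship y = sin(x) + noise as assumptions – fits a mixture to joint samples at two “time points” and reads off the conditional mean analytically, component by component:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Assumed synthetic two-time-point data: y depends nonlinearly on x,
# so E[y | x] is not a straight line
x = rng.standard_normal(10_000)
y = np.sin(x) + 0.1 * rng.standard_normal(10_000)

# Step 1: fit a five-component Gaussian mixture to the joint samples;
# maximum likelihood is obtained via expectation-maximisation internally
gmm = GaussianMixture(n_components=5, random_state=0)
gmm.fit(np.column_stack([x, y]))

def gmm_cond_exp(x_new, gmm):
    """Analytic E[Y | X = x] from a fitted two-dimensional Gaussian mixture."""
    mx, my = gmm.means_[:, 0], gmm.means_[:, 1]
    sxx = gmm.covariances_[:, 0, 0]
    sxy = gmm.covariances_[:, 0, 1]
    # Posterior component weights given x alone
    w = gmm.weights_ * norm.pdf(x_new[:, None], mx, np.sqrt(sxx))
    w /= w.sum(axis=1, keepdims=True)
    # Each component contributes its Gaussian conditional mean
    return (w * (my + sxy / sxx * (x_new[:, None] - mx))).sum(axis=1)

grid = np.linspace(-2, 2, 9)
est = gmm_cond_exp(grid, gmm)
```

No local bandwidth tuning or re-simulation appears anywhere: once the mixture is fitted, the conditional expectation at any point is a closed-form weighted sum.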
A control variate – a means of mitigating estimation errors – can also be incorporated, Kienitz adds, and it doubles as a proxy hedge for part of the risk.
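A generic control variate – illustrated here on a toy call-payoff estimate with the terminal asset price as the control, all parameters assumed for the example rather than taken from the paper – corrects the Monte Carlo mean using a quantity whose expectation is known exactly, cutting variance without introducing bias:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: estimate the mean call payoff, using the terminal
# price S_T (whose risk-neutral expectation is known in closed form) as control
n, s0, r, sigma, T, K = 20_000, 100.0, 0.02, 0.2, 1.0, 100.0
s_T = s0 * np.exp((r - 0.5 * sigma**2) * T
                  + sigma * np.sqrt(T) * rng.standard_normal(n))
payoff = np.maximum(s_T - K, 0.0)

known_mean = s0 * np.exp(r * T)  # E[S_T] under the risk-neutral measure

# Optimal coefficient: covariance of payoff and control over control variance
beta = np.cov(payoff, s_T)[0, 1] / s_T.var(ddof=1)

# Adjusted estimator: same expectation as the plain mean, smaller variance
cv_estimate = payoff.mean() - beta * (s_T.mean() - known_mean)

var_plain = payoff.var(ddof=1) / n
var_cv = (payoff - beta * s_T).var(ddof=1) / n
```

The coefficient `beta` also has a hedging reading – it is the size of the position in the control that offsets the most variance, which is why the control variate can serve as a proxy hedge.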
So far, the results have been promising. “We are currently using this method for Bermudan pricing and to calibrate local stochastic volatility models, and we are researching it for solving forward-backward stochastic differential equations, where we have some very promising results compared to analytic solutions,” Kienitz says.
It appears that with Kienitz’s method, we can obtain a better calibration for local stochastic volatility models
Bernard Gourion, Natixis
Applied to exposure calculation, GMM-DCKE can produce estimates more easily than other methods, he adds: the means are shifted while the shape of the distribution is preserved. The expectation-maximisation step makes it somewhat slower, but no re-simulation is necessary.
Kienitz’s paper has been well received in the industry and GMM-DCKE is expected to quickly make its way into banks’ model libraries.
“For a long time, we’ve been using the Longstaff-Schwartz model for Bermudan options, but we have been looking at the GMM-DCKE model as an alternative to price our Bermudan book quicker and potentially more accurately,” says Nicholas Burgess, an independent consultant who has collaborated with the equity derivatives quant team at HSBC. “Normally banks are wary of changing the software infrastructure to plug new models in, but GMM-DCKE is a relatively light implementation and it’s likely to end up in production.”
Bernard Gourion, a senior quant in the model validation team at Natixis in Paris, is also impressed. He sees GMM-DCKE as an alternative to the particle calibration methods developed by Guyon and Henry-Labordere and the one proposed by Aitor Muguruza.
“From the preliminary results of the tests we have performed so far, it appears that with Kienitz’s method, we can obtain a better calibration for local stochastic volatility models,” Gourion says, with the proviso that the testing process at Natixis still has a long way to go before reaching final approval.
Others are investigating whether the approach can be used to fill in missing values – for example, in incomplete time series, such as the prices of illiquid securities. The distribution fitted to the available data can be used to calculate the conditional expectation and then simulate the missing values. A senior quant at one European bank considers this a promising line of research. He views GMM-DCKE as a low-rank approximation, akin to other recently developed techniques that are emerging as an easy-to-use alternative to neural networks.
GMM-DCKE builds on another recent paper Kienitz co-authored with Gordon Lee, Nikolai Nowaczyk and Nancy Qingxin Geng, in which dynamically controlled kernel estimation (DCKE) was introduced as a way of calculating conditional expectations.
DCKE relies on a kernel method whose bandwidth must be optimised locally to approximate the conditional distribution on which the calculation is based. The Gaussian mixture model eliminates this local bandwidth optimisation entirely, replacing it with an approach that is analytically smoother and more stable.
“I had this idea in mind for a long time,” says Kienitz. “The Python code is available on GitHub and it’s just a few lines long.”
The limitation of GMM-DCKE is the dimensionality it can handle. “I have applied it to dimensions up to 20,” says Kienitz. “Beyond that it might not work well.” Addressing this limitation is Kienitz’s next challenge.