Gaussian mixture model dynamically controlled kernel estimation (GMM-DCKE), a purely data-driven and model-agnostic method to compute conditional expectations, is introduced. Joerg Kienitz applies it to the pricing and hedging of (multi-dimensional) exotic Bermudan options and to calibration and pricing within stochastic local volatility models.
Fast and accurate approximations of conditional expectations and their respective distributions are essential for many problems arising in quantitative finance. The least squares Monte Carlo (LSM) method (see Longstaff & Schwartz 2001) is the market standard for simulation-based estimation. It is well known that LSM requires a large number of paths and faces problems in estimating the tails. Tail estimates are crucial, eg, for risk management or pricing deep out-of-the-money options. Convergence depends on the choice of basis functions, but if LSM converges to the price, its derivative might not converge. Variance reduction, basis function selection and local regression have been considered for mitigation.
In this work, we build upon previous work by Geng et al (2022), but the proposed method differs from dynamically controlled kernel estimation (DCKE): it does not rely on numerical kernel density estimation, including (local) bandwidth selection, and it does not need to apply Gaussian process regression (GPR) for interpolation, extrapolation or smoothing. The key difference is that it replaces these numerical methods with analytic expressions that simultaneously yield smooth approximations.
GMM-DCKE allows us to apply further state variables in the sense of control variates (CVs). This improves upon the basic method and allows for analytical computation of proxy hedge sensitivities. Combining the GMM, CVs and properties of multivariate Gaussians, the contributions of this paper to the literature are the following:
the calculation of model-agnostic, data-driven conditional expectations on observed realisations at given time points even for multi-modal distributions with semi-analytic methods;
the GMM is fitted numerically, but all subsequent calculations are analytic, and thus they are fast; and
the method implies closed-form solutions for smooth proxy hedge sensitivities.
For illustration, the method is applied to the pricing of (multi-dimensional) Bermudan derivatives and to pricing and calibration with stochastic local volatility models.1

1 A longer exposition and pedagogical illustrations can be found in Kienitz (2021) or at https://github.com/Lapsilago/GMM_DCKE.
To fix notation we take a discrete time grid $0 = t_0 < t_1 < \dots < t_N = T$ with realisations of stochastic risk factors $X_{t_i}$, $i = 0, \dots, N$.
Let $K$ be a positive integer and let $Z_k$, $k = 1, \dots, K$, be $d$-dimensional random variables with $Z_k \sim \mathcal{N}(\mu_k, \Sigma_k)$ ($\mathcal{N}(\mu, \Sigma)$ denoting a $d$-dimensional Gaussian distribution, with $\mu_k$ and $\Sigma_k$ denoting the corresponding mean vectors and covariance matrices, respectively). Let $\phi_k$ be the corresponding probability density. We consider the $d$-variate $K$-component GMM determined by the probability distribution $p$:

$$p(z) = \sum_{k=1}^{K} \pi_k \phi_k(z; \mu_k, \Sigma_k), \qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1$$
The $\pi_k$ are called mixing weights and $\mathrm{GMM}(K)$ denotes a GMM with $K$ components. Once $\pi_k$, $\mu_k$ and $\Sigma_k$, $k = 1, \dots, K$, are fixed, the conditional distribution can be calculated analytically. Let:

$$Z_k = \begin{pmatrix} X_k \\ Y_k \end{pmatrix}, \qquad \mu_k = \begin{pmatrix} \mu_{k,x} \\ \mu_{k,y} \end{pmatrix}, \qquad \Sigma_k = \begin{pmatrix} \Sigma_{k,xx} & \Sigma_{k,xy} \\ \Sigma_{k,yx} & \Sigma_{k,yy} \end{pmatrix}$$
We denote the realisations by $z = (x, y)$ and the labels by $x$ and $y$. We set $Z_k = (X_k, Y_k)$ with $d_1$-dimensional and $d_2$-dimensional Gaussians $X_k$ and $Y_k$, respectively; the conditionals, $Y_k \mid X_k = x$ and $X_k \mid Y_k = y$, and marginals for $X_k$ and $Y_k$ are available in closed form, with:

$$X_k \sim \mathcal{N}(\mu_{k,x}, \Sigma_{k,xx}), \qquad Y_k \sim \mathcal{N}(\mu_{k,y}, \Sigma_{k,yy})$$
Taking $Y_k \mid X_k = x$, for instance, and denoting by $\phi_k(x, y)$ the joint density with regard to the $k$th component, we have:

$$\phi_k(y \mid x) = \frac{\phi_k(x, y)}{\int \phi_k(x, y) \, \mathrm{d}y}$$
and the means and variances for the conditional distribution are given by:

$$\mu_{k, y \mid x} = \mu_{k,y} + \Sigma_{k,yx} \Sigma_{k,xx}^{-1} (x - \mu_{k,x}) \qquad (1)$$

$$\Sigma_{k, y \mid x} = \Sigma_{k,yy} - \Sigma_{k,yx} \Sigma_{k,xx}^{-1} \Sigma_{k,xy} \qquad (2)$$
respectively. Using these results for each single component of the GMM we have:

$$\pi_{k \mid x} = \frac{\pi_k \phi_k(x; \mu_{k,x}, \Sigma_{k,xx})}{\sum_{j=1}^{K} \pi_j \phi_j(x; \mu_{j,x}, \Sigma_{j,xx})}$$
and for the conditional distribution of $Y \mid X = x$:

$$p(y \mid x) = \sum_{k=1}^{K} \pi_{k \mid x} \phi_k(y; \mu_{k, y \mid x}, \Sigma_{k, y \mid x})$$
Let the set $\mathcal{D} = \{z_1, \dots, z_M\}$ be of size $M$, and for $z_i \in \mathcal{D}$ we assume $z_i \in \mathbb{R}^d$. Then, $\mathcal{D}$ is represented by an $M \times d$ matrix. Assume the number of components $K$ for the GMM is given. If $\theta$ denotes the collection of all parameters $\pi_k$, $\mu_k$ and $\Sigma_k$, $k = 1, \dots, K$, then $\hat{\theta}$ is obtained by maximising the (log)likelihood:

$$\ell(\theta) = \sum_{i=1}^{M} \log \left( \sum_{k=1}^{K} \pi_k \phi_k(z_i; \mu_k, \Sigma_k) \right)$$
by expectation maximisation (EM) (Dempster et al 1977).
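Once the components are fitted, the conditional mean of the mixture follows from the closed-form Gaussian formulas above. The bivariate sketch below (pure Python, with hypothetical component parameters, not the author's implementation) reweights the mixing weights by the marginal density of $x$ and averages the per-component conditional means:

```python
import math

def normal_pdf(x, mu, var):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def gmm_conditional_mean(components, x):
    """E[Y | X = x] for a bivariate GMM.

    components: list of (weight, (mu_x, mu_y), (s_xx, s_xy, s_yy)) tuples.
    The conditional weights are the mixing weights reweighted by the
    marginal density of X; each component's conditional mean is the
    usual Gaussian formula mu_y + s_xy / s_xx * (x - mu_x).
    """
    marginals = [w * normal_pdf(x, mu[0], cov[0]) for w, mu, cov in components]
    total = sum(marginals)
    cond_means = [mu[1] + cov[1] / cov[0] * (x - mu[0]) for _, mu, cov in components]
    return sum(m * c for m, c in zip(marginals, cond_means)) / total

# Hypothetical two-component fit; with a single component this reduces
# to ordinary Gaussian linear regression of Y on X.
comps = [(0.5, (0.0, 0.0), (1.0, 0.5, 1.0)),
         (0.5, (3.0, 2.0), (1.0, -0.3, 1.0))]
print(gmm_conditional_mean(comps, 0.0))
```

In practice the component parameters come from the EM fit; only this analytic evaluation is repeated per query point, which is what makes the method fast.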
The inputs are $(x_i, y_i)$, $i = 1, \dots, M$, with labels $x_i$, $y_i$. They are joint realisations of some random variables $X$ and $Y$. The underlying risk factors are denoted by $X$, and $Y$ is a function of $X$, eg, a payoff at maturity for Bermudan derivatives or a variance value for a stochastic local volatility model.
Let $\tilde{x}_j$, $j = 1, \dots, M_{\mathrm{test}}$, be a test set.
The outputs are predictions $\hat{y}_j = \mathbb{E}[Y \mid X = \tilde{x}_j]$, $j = 1, \dots, M_{\mathrm{test}}$. For our examples, these are the conditional expected values of a Bermudan derivative, or the expected variance values given the value of the risk factor in a stochastic volatility model. For in-sample performance we take $\tilde{x}_j = x_j$.
The number of components is given by $K$ and, if we use a control variate $C$, its realisations are $c_i$, $i = 1, \dots, M$. Including $C$ into the fitting of the GMM, quantities such as the conditional mean, given by $\mathbb{E}[Y \mid X = x]$, are calculated analytically.
The first (numeric) step is to fit the GMM to approximate the joint distribution of $X$, $Y$ and $C$ using EM. The next steps are analytic, using (1) and (2). To apply the CVs in order to stabilise the model we consider:

$$Y^{\mathrm{cv}} = Y - \beta (C - \mathbb{E}[C \mid X = x]) \qquad (5)$$
with $\beta$ minimising the variance of (5):

$$\beta = \frac{\mathrm{Cov}(Y, C \mid X = x)}{\mathrm{Var}(C \mid X = x)}$$
For the GMM, with the conditional mean available in closed form, the $\beta$ are calculated analytically as a quotient of conditional covariance and conditional variance, respectively, for Gaussian distributions.
The quotient $\beta$ can be seen as a (conditional) trading strategy with respect to $C$, given the values $x$, that minimises the variance of the estimator. If $C$ is one of the risk factors, it is the (time-discrete) conditional minimal-variance delta, and thus a time-discrete, model-agnostic, data-driven hedge can be calculated analytically. Other choices, depending on the payoff, may be used to improve the estimate, eg, the maximum or minimum of the assets, when considering ‘best-of’ or ‘rainbow’ options.
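The variance-minimising coefficient is the familiar regression ratio of covariance to variance. The following sketch, with simulated toy data rather than the paper's models, illustrates why subtracting the centred control variate shrinks the variance while leaving the mean unchanged:

```python
import random

random.seed(7)

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

# Toy data: C is a noisy risk factor, Y a call-like payoff on it.
c = [random.gauss(100.0, 10.0) for _ in range(10_000)]
y = [max(s - 100.0, 0.0) + random.gauss(0.0, 1.0) for s in c]

beta = cov(y, c) / var(c)            # variance-minimising coefficient
mc = mean(c)
y_cv = [a - beta * (b - mc) for a, b in zip(y, c)]  # CV-adjusted payoff

print(var(y), var(y_cv))  # the adjusted variance is smaller
```

In GMM-DCKE the same ratio is obtained from the conditional covariance and variance of the fitted mixture, so no sample regression is needed at run time.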
Components and enhancements
The number of components can be determined by empirical results or by statistical methods, eg, a Bayesian Gaussian mixture, silhouette scores or minimising information criteria such as the Akaike and Bayesian information criteria:

$$\mathrm{AIC} = 2p - 2 \log \hat{L}, \qquad \mathrm{BIC} = p \log M - 2 \log \hat{L}$$

respectively, where $p$ is the number of parameters, $\hat{L}$ is the likelihood evaluated at the parameters maximising it, and $M$ is the number of data points. Taking the model where the AIC (or BIC) is smallest may lead to overfitting. Discrete gradients, ie, differences in AIC and BIC for $K$ and $K + 1$ components, respectively, can be considered. The number of components is chosen once this difference is no longer significant.
Silhouette scores exploit the compactness and separability of the clusters implied by the GMM. The Jensen-Shannon metric (JS) for $K$ and $K + 1$ components is calculated, and the number $K$ for which we observe a sudden jump in the JS value is chosen.
Known limitations are that the approximation is only valid for sample sizes that are much larger than the number of parameters in the model.
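The information criteria above are simple functions of the maximised log-likelihood. The sketch below applies the discrete-gradient rule; the log-likelihood values are hypothetical, purely to illustrate how the gains flatten out:

```python
import math

def aic(loglik, n_params):
    """Akaike information criterion: 2p - 2 log L."""
    return 2 * n_params - 2 * loglik

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: p log M - 2 log L."""
    return n_params * math.log(n_obs) - 2 * loglik

def n_gmm_params(k, d):
    """Free parameters of a d-variate, k-component GMM:
    k-1 mixing weights, k*d means, k*d*(d+1)/2 covariance entries."""
    return (k - 1) + k * d + k * d * (d + 1) // 2

# Hypothetical maximised log-likelihoods for K = 1..5 on M = 10,000
# bivariate observations; the improvement flattens out after K = 3.
logliks = {1: -35_000.0, 2: -33_200.0, 3: -33_050.0, 4: -33_030.0, 5: -33_020.0}
n = 10_000
for k, ll in logliks.items():
    p = n_gmm_params(k, d=2)
    print(k, round(aic(ll, p), 1), round(bic(ll, p, n), 1))
```

Following the text, one would stop at the $K$ after which the decrease in AIC/BIC becomes insignificant, rather than at the global minimum.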
A method for enhancement is ‘bagging’, an ensemble method that consists of bootstrapping and aggregation (see Bishop 2006). In our setting we perform GMM-DCKE $B$ times with observations randomly sampled from the training set with replacement and average over the number of experiments. Transformations can also be used. For instance, keeping values, especially of option prices or implied volatilities, positive is essential. (When applying Gaussian distributions, negative values are not ruled out.) If we have a set of strictly positive values $y_1, \dots, y_M$, we instead work with the set obtained by applying the transform $y_i \mapsto \log(y_i)$, $i = 1, \dots, M$, for each element. Applying the GMM to this new training set and using the inverse transform leads to strictly positive values.
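The positivity-preserving transform is a one-liner; a minimal sketch, assuming the natural logarithm is the transform intended:

```python
import math

def to_log_space(values):
    """Map strictly positive training values to log space, where a
    Gaussian fit cannot produce impossible negative predictions."""
    if any(v <= 0 for v in values):
        raise ValueError("transform requires strictly positive values")
    return [math.log(v) for v in values]

def from_log_space(values):
    """Inverse transform: outputs are strictly positive by construction."""
    return [math.exp(v) for v in values]

prices = [0.25, 1.7, 3.1]
restored = from_log_space(to_log_space(prices))
print(restored)  # round-trips to the original positive values
```

The GMM is then fitted and evaluated entirely in log space, and only the final predictions are mapped back.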
Comparison with other approaches
Our work is related to that of Geng et al (2022), which introduced a kernel density estimator with a (local) bandwidth used to approximate the conditional expected values, and Gaussian process regression used for smoothing and interpolation or extrapolation. Selecting a local bandwidth may be slow and depends strongly on the dimensionality. In contrast, our method is semi-analytic, stable and depends on only one parameter: the number of Gaussian components, which needs to be determined.
The work is also related to that of Halperin (2017), which presents a discrete-time option pricing model rooted in reinforcement learning. Our method essentially gives more direct access to this approach, circumventing the reinforcement learning part. Huge & Savine (2020) train a neural network, taking the hedge, ie, the sensitivities, into account to stabilise the calculations. Techniques such as kernel density estimation in multiple dimensions, reinforcement learning and neural networks are data- and computation-intensive when required to perform with high accuracy. GMM-DCKE derives the option price, conditional expectations and hedges accurately while applying sparse parameter sets, and has significant advantages with regard to both the amount of data and the computation time, especially when considering high-dimensional problems.
For pricing Bermudan derivatives, for each consecutive pair of exercise dates $(t_i, t_{i+1})$, we fit a GMM to the observed realisations and consider a discrete proxy hedging approach. Recall that if $S$ is the risky underlying, $r$ the risk-free rate and $B$ the bank account, then the self-financing condition of any hedging strategy $\delta$ implies that, for any time interval $[t_i, t_{i+1}]$:

$$\Pi_{t_{i+1}} = \Pi_{t_i} + \delta_{t_i} (S_{t_{i+1}} - S_{t_i}) + (\Pi_{t_i} - \delta_{t_i} S_{t_i}) \left( \frac{B_{t_{i+1}}}{B_{t_i}} - 1 \right)$$
Denote by $\Pi$ the replication portfolio of an option $V$ using a hedging strategy $\delta$ with respect to the available information, ie, the filtration $(\mathcal{F}_t)$. The hedging strategy that minimises the variance is given by:

$$\delta_{t_i} = \frac{\mathrm{Cov}(V_{t_{i+1}}, S_{t_{i+1}} \mid \mathcal{F}_{t_i})}{\mathrm{Var}(S_{t_{i+1}} \mid \mathcal{F}_{t_i})}$$
The computation of the latter can be computationally intense. Notice that the minimal-variance hedging strategy is merely the minimal-variance delta. Taking this as a proxy, we include it in the calculation of conditional values to reduce the conditional variances. With GMM-DCKE the variance minimiser is available in closed form.
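The conditional-expectation estimator slots into a standard backward induction for the Bermudan price. A schematic sketch with a pluggable estimator (the payoff, paths and constant-mean regression below are placeholders, not the paper's models; GMM-DCKE would supply `fit_cond_exp`):

```python
def bermudan_backward(paths, payoff, fit_cond_exp, df):
    """Dynamic-programming recursion V_t = max(h(x), E[df * V_{t+1} | x]).

    paths: list (one entry per exercise date) of per-path risk factors
    payoff: immediate exercise value h(x)
    fit_cond_exp: fits an estimator of E[. | X_t = x] per date and
                  returns it as a callable (GMM-DCKE in the paper)
    df: one-period discount factor
    """
    values = [payoff(x) for x in paths[-1]]          # terminal values
    for t in range(len(paths) - 2, -1, -1):
        discounted = [df * v for v in values]
        cont = fit_cond_exp(paths[t], discounted)    # continuation value
        values = [max(payoff(x), cont(x)) for x in paths[t]]
    return sum(values) / len(values)

def payoff(s):
    return max(s - 100.0, 0.0)

def mean_fit(xs, ys):
    # Crude placeholder regression: conditional mean ignored, global mean used.
    m = sum(ys) / len(ys)
    return lambda x: m

# Toy run: two dates, four paths.
paths = [[100.0] * 4, [90.0, 95.0, 105.0, 110.0]]
print(bermudan_backward(paths, payoff, mean_fit, df=0.99))
```

Replacing `mean_fit` with the analytic GMM conditional mean gives the method of the paper; the recursion itself is unchanged.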
For pricing, we need to approximate the continuation value at each possible exercise date. This involves computing a conditional expectation. To illustrate our results we focus on a given time point $t$. For the full valuation this has to be applied in a backward algorithm. We choose the rough Bergomi and Bates models for illustration (Bates 1996; Bayer et al 2016).2 Let $W^1$, $W^2$ be Brownian motions with correlation coefficient $\rho$. Take $S_0$, $V_0$ as the initial values for the asset price and the variance, respectively. The stochastic differential equations (SDEs) for the evolution of the Bates model are:

$$\mathrm{d}S_t = (r - \lambda \bar{\mu}) S_t \, \mathrm{d}t + \sqrt{V_t} S_t \, \mathrm{d}W_t^1 + J_t S_t \, \mathrm{d}N_t$$

$$\mathrm{d}V_t = \kappa (\theta - V_t) \, \mathrm{d}t + \sigma \sqrt{V_t} \, \mathrm{d}W_t^2 \qquad (7)$$

2 We have actually applied the method to diffusion, jump-diffusion and pure jump processes.
The parameters in (7) are $r$, the risk-neutral rate; $\kappa$, the mean reversion of the variance; $\theta$, the long-term variance; and $\sigma$, the volatility of variance. For the jump part, $N$ is a Poisson process with intensity $\lambda$, $J$ denotes the jump size and $\bar{\mu}$ the corresponding compensator.
For the rough Bergomi model we take the logarithmic price process $X_t = \log S_t$, $t \ge 0$. To model the instantaneous variance we consider the instantaneous forward variance $\xi_t(u)$, $u \ge t$, for date $u$ observed at $t$. Let $\alpha$ be a negative exponent depending on the Hurst exponent $H$. For $H \in (0, \tfrac{1}{2})$, we have $\alpha = H - \tfrac{1}{2}$ and $\alpha \in (-\tfrac{1}{2}, 0)$. The rough Bergomi model is given by:

$$\mathrm{d}X_t = -\tfrac{1}{2} V_t \, \mathrm{d}t + \sqrt{V_t} \, \mathrm{d}W_t^1 \qquad (8)$$

with the variance $V_t$ in (8) obtained by setting:

$$V_t = \xi_0(t) \exp \left( \eta W_t^{\alpha} - \tfrac{1}{2} \eta^2 t^{2\alpha + 1} \right)$$

with $W^{\alpha}$ being the corresponding realisation of a Volterra process. We assume that a payoff $h$ is given and approximate the corresponding option price at time $t$.
We set the parameters as follows: , , , , , and , , and for the Bates model. For the rough Bergomi model we take , , and .
We consider a call option and its delta for the Bates and rough Bergomi models. There is no obvious choice of $K$, since the number of components depends on the shape of the distribution, eg, skewness or kurtosis. Our numerical experiments have shown that the chosen numbers of Gaussian components lead to accurate results. In practice we recommend validating this assumption using cross-validation or information criteria.
For numerical illustration we consider a single time point and plot the conditional expectations for the calls and their deltas, together with quantiles, and compare them with both semi-analytical and nested Monte Carlo results (see figure 1). The absolute difference from these values, which serve as a benchmark, is only a few basis points and is much smaller than for the LSM. Some discrepancy is observed with respect to the proxy hedge, but this is due to the size of the time interval. The hedge still compares well to the instantaneously balanced hedge. Smaller time steps lead to better results and improve upon the results stated in Geng et al (2022). Since we choose the underlying as the control variate, we derive the time-discrete minimal-variance delta analytically. All the simulations were done using 10,000 paths.
Let us summarise some of the findings. First, we observe that analytic smooth option prices and hedge sensitivities can be obtained using GMM-DCKE in a model-agnostic and data-driven way. Second, we observe the impact of the control variate; essentially, by including the proxy, we control the tail behaviour of the estimator. Thus, depending on the choice of underlying, we add linear tail behaviour, which means a slope of 0 for the left tail and 1 for the right tail, since our example is a call option. Compared with those in Geng et al (2022), the resulting hedges are smoother and more accurate, even for large time steps, eg, half-yearly. The difference from the displayed hedge sensitivities calculated using the model specifics is due to the numerical fit of the GMM and because we consider time-discrete settings and not a continuous setting.
We are able to compute hedge sensitivities for the rough Bergomi model without nested Monte Carlo simulation.
For interest rate derivatives we consider the pricing of an at-the-money Bermudan swaption with the Cheyette model with and without stochastic volatility. Taking the state variables $x_t$ and $y_t$ and a stochastic variance $z_t$, the model is given by:

$$\mathrm{d}x_t = (y_t - \kappa x_t) \, \mathrm{d}t + \sigma(t, x_t) \sqrt{z_t} \, \mathrm{d}W_t^1$$

$$\mathrm{d}y_t = (\sigma(t, x_t)^2 z_t - 2 \kappa y_t) \, \mathrm{d}t$$

$$\mathrm{d}z_t = \theta (1 - z_t) \, \mathrm{d}t + \eta \sqrt{z_t} \, \mathrm{d}W_t^2$$
with $W^1$, $W^2$ being uncorrelated Brownian motions. We take , , , , , and . As a benchmark we use an alternating direction implicit (ADI) implementation with a grid of 200 points for the time component and 100 points for each space component. For the prices of a Bermudan swaption with yearly exercise, 5-year option maturity and 15-year underlying maturity, we obtain 572.4052 basis points (respectively, 552.2217bp) for the two-dimensional (2D) (respectively, three-dimensional (3D)) case. GMM-DCKE prices the options accordingly, with a maximum absolute difference of 1.284bp using 10,000 simulations. Both Python-based implementations were run on a Windows i9 machine with 16GB RAM and four cores using Python 3.6 with non-vectorised code. The runtimes are summarised in table A.
GMM-DCKE can be considered as an alternative to other numerical methods for pricing interest rate derivatives, especially for more than two dimensions.
We illustrate the performance of GMM-DCKE using a five-dimensional Heston model with asset correlation matrix and asset-variance correlation matrix (compared to a full positive definite matrix ) defined, respectively, by:
For the numerical experiments we show the performance of GMM-DCKE on the training set with 10,000 paths, as well as on 500 validation paths, and we compare it with the DCKE method (Geng et al 2022). As an example, we consider a rainbow option with payoff given by:

$$h(S_T) = \left( \max_{j = 1, \dots, 5} S_T^j - K \right)^+$$
We take , , , . For the $d$-dimensional setting we first derive representations for the random vectors $(V_{t_{i+1}}, S_{t_i}, S_{t_i}^j)$, $j = 1, \dots, d$, with $V_{t_{i+1}}$ being the option values at time $t_{i+1}$, $S_{t_i}$ being the asset prices at time $t_i$ and $S_{t_i}^j$ being the $j$th asset value, which serves as a control variate. We form the conditional distribution for each dimension on every component and calculate the conditional delta as well as conditional mean estimates for each $j$. The control variate is the weighted sum of the individual deltas estimated, and it is calculated for all samples. As can be seen from figure 2, the results of GMM-DCKE and DCKE are close, and this is also observed for other payoffs, such as for basket options.
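The multi-asset control variate described above — a weighted sum of per-asset delta proxies applied to the asset increments — can be sketched as follows (toy numbers; the helper name and equal default weights are assumptions for illustration):

```python
def weighted_delta_cv(deltas, ds, weights=None):
    """Combine per-asset conditional deltas with asset increments into
    a single control-variate realisation: sum_j w_j * delta_j * dS_j."""
    n = len(deltas)
    if weights is None:
        weights = [1.0 / n] * n  # equal weights if none given
    return sum(w * d * s for w, d, s in zip(weights, deltas, ds))

# Per-asset deltas from the per-dimension conditional fits (hypothetical)
deltas = [0.6, 0.2, 0.1, 0.05, 0.05]
ds = [1.2, -0.4, 0.3, 0.0, -0.1]   # asset price increments over the step
print(weighted_delta_cv(deltas, ds))
```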
Unlike DCKE, GMM-DCKE is based fully on the initial training set. To apply DCKE in multiple dimensions, the hedge sensitivities are computed by differentiating the payoff. Thus, DCKE depends on knowing the payoff function, and it is assumed that the corresponding derivatives can be calculated. With the hedge sensitivities calculated in this way, DCKE is not fully model-agnostic. With this additional adjustment it leads to results that are a little more accurate than those of GMM-DCKE. Fitting the full multi-dimensional distribution can also be done, but it is computationally more intense.
The effect of using different control variates (ie, single assets, the max/min of all assets) is shown in figure 3. As expected, taking the maximum of the assets as a control variate leads to the best results.
Stochastic local volatility
For pricing and calibrating stochastic local volatility models, several techniques (ie, particle methods, binning and logarithmic normal approximations) have been considered (see Guyon & Henry-Labordère 2012; Muguruza 2020; van der Stoep et al 2014). We consider the dynamics of the forward, $F_t$, given by:

$$\mathrm{d}F_t = L(t, F_t) \psi(V_t) F_t \, \mathrm{d}W_t^1, \qquad \mathrm{d}V_t = \mu(V_t) \, \mathrm{d}t + \sigma(V_t) \, \mathrm{d}W_t^2 \qquad (11)$$
where $\mu$ and $\sigma$ are the drift and volatility functions, respectively, of the instantaneous variance $V$, $L$ is the leverage function and $W^1$ and $W^2$ are one-dimensional correlated Brownian motions. The function $\psi$ represents the stochastic volatility component. For the Heston model we have $\psi(v) = \sqrt{v}$. The SDE (11) can be reframed as a nonlinear equation in the sense of McKean and Vlasov, with the volatility depending on the probability distributions of $F$ and $V$. Showing the existence and uniqueness of the solutions to such equations is an involved mathematical problem, and for certain parameters, eg, large values of the volatility of variance for the stochastic volatility component, there may not be any solution (see Guyon & Henry-Labordère 2012).
The market standard method for calibrating the model is to use a previously calibrated local volatility, $\sigma_{\mathrm{LV}}$ (Dupire 1994), and a calibrated stochastic volatility model. The leverage function from (11) is derived by Markovian projection, and can be expressed as follows (see Guyon & Henry-Labordère 2012; van der Stoep et al 2014):

$$L(t, f) = \frac{\sigma_{\mathrm{LV}}(t, f)}{\sqrt{\mathbb{E}[\psi(V_t)^2 \mid F_t = f]}} \qquad (12)$$
The conditional expectation appearing in (12) needs to be calculated. This is approximated by GMM-DCKE. For calibration we take the simulation time steps $t_i$, $i = 0, 1, \dots$, and consider:

$t_0$: no fit; use the initial value.

$t_1$: fit the GMM to the $F_{t_1}$ and $\psi(V_{t_1})$ values and store the parameters as $\hat{\theta}_1$.

$t_i$, $i > 1$: fit the GMM to the $F_{t_i}$ and $\psi(V_{t_i})$ values using $\hat{\theta}_{i-1}$ as the initial values.
Using the most recently fitted parameters as the starting point, we observed that the convergence can be improved. This relies (as do the other methods) on the input of the local volatility function $\sigma_{\mathrm{LV}}$. For widely fluctuating local volatilities, it may be necessary to fully recalibrate at each step.
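Given an estimate of the conditional expectation in (12) — from GMM-DCKE or any other estimator — the leverage update itself is one line. A minimal sketch; the local volatility values and conditional estimates below are hypothetical:

```python
import math

def leverage(local_vol, cond_exp_psi_sq):
    """Markovian-projection leverage:
    L(t, f) = sigma_LV(t, f) / sqrt(E[psi(V_t)^2 | F_t = f])."""
    return local_vol / math.sqrt(cond_exp_psi_sq)

# Hypothetical slice of a local volatility surface at a fixed time,
# with conditional expectations estimated per forward level.
forwards = [90.0, 100.0, 110.0]
sigma_lv = [0.25, 0.20, 0.22]
cond_exp = [0.045, 0.040, 0.050]   # E[psi(V_t)^2 | F_t = f]
lev = [leverage(s, e) for s, e in zip(sigma_lv, cond_exp)]
print(lev)
```

In the simulation loop this update is applied per time step, with the conditional expectation re-estimated from the current paths as described above.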
Figure 4 shows the accuracy of the method and also considers the version with bagging using only 20 runs with 10% (ie, 1,000) of the samples, leading to speed-ups of 2–5 times and increased accuracy. Part (a) of the figure illustrates the application of GMM-DCKE to the rough Bergomi model for several choices of $K$. Part (b) shows the same model but with bagging applied. In our numerical experiments it turned out that this choice of $K$ is reasonable, which is also confirmed by the AIC and BIC. We also plotted the 1% and 99% quantiles for Gaussians versus binning, the 0.5%, 1%, 5% and 10% quantiles for the model with bagging and the 99.5%, 99%, 95% and 90% quantiles for the bagging case, respectively, to further illustrate the accuracy, even in the tails.
In terms of runtime we compared our method with the particle method, which is widely applied in financial institutions, and we observed our method performs slightly better when the parameter fitting is done using the previously calculated parameters and if we store the inverses of the matrices used for calculating the control variates (see (1), (2)). The particle method is often applied in conjunction with the Silverman rule, which allows a bandwidth to be picked for the kernel. Distributions that are very different from the Gaussian cannot be handled well with the assumptions made when applying this rule. A more sophisticated choice of the bandwidth (eg, a local bandwidth) may then be necessary. Determining the latter leads to optimisation problems that increase the runtime of the particle method.
For a Heston local volatility model we investigated a stressed parameter set with a volatility of variance of 2.5 and observed that the particle method and the method of Muguruza (2020) underestimate the smile. The binning method and GMM-DCKE lead to reasonable results (see figure 4(b)).
Finally, we have considered the same example as in Muguruza (2020). Since exactly the same data were not available, we calibrated to the data we observed and plotted a Heston smile that is close to the one shown in Muguruza (2020). We observe that applying GMM-DCKE for calculating the conditional expectations and determining the implied volatility leads to nearly identical results. The largest difference was less than 4.3bp for implied lognormal volatility, with an average error of 2.1bp. Furthermore, since GMM-DCKE does not assume a certain model, it can be applied to the more complex models considered in Muguruza (2020).
Conclusions and summary
GMM-DCKE is a purely data-driven and model-agnostic method to compute conditional expectations. We applied it to the pricing of (multi-dimensional) Bermudan options and the pricing and calibration of stochastic local volatility models. It also leads to hedging strategies based on the realisations at given time points, without the need to know the underlying stochastic model. GMM-DCKE can also be used for computing control variates for standard Monte Carlo simulation in multiple dimensions, where finding appropriate controls that can be computed quickly is often difficult. In contrast to the method in Geng et al (2022), it is independent of the choice of (local) bandwidth for kernel density estimation. GMM-DCKE can be integrated into existing set-ups as an alternative to other numerical methods. Due to its analytical nature and application of the GMM it needs only a modest amount of training data. Relevant practical examples show that its performance is at least comparable with that of existing numerical methods, and it can be enhanced by careful implementation using vectorisation or efficient EM algorithms. Methods such as bagging further improve the suggested model. Further ideas – for instance, applying autoencoders with GMM-DCKE in a low-dimensional latent space or using different distributions – are the subject of ongoing research.
Jörg Kienitz is an adjunct associate professor at the University of Cape Town and the University of Wuppertal. He owns finciraptor.de and is a partner in the quantitative services unit of Acadia. He is based in Bonn. He thanks Gordon Lee and Thomas McWalter for fruitful discussions, and two anonymous referees who helped to greatly improve the paper.
Email: [email protected]
- Bates D, 1996
Jumps and stochastic volatility, exchange rate processes implicit in Deutsche mark options
Review of Financial Studies 9, pages 69–107
- Bayer C, P Friz and J Gatheral, 2016
Pricing under rough volatility
Quantitative Finance 16(6), pages 887–904
- Bishop CM, 2006
Pattern Recognition and Machine Learning
Springer
- Dempster AP, NM Laird and DB Rubin, 1977
Maximum likelihood from incomplete data via the EM algorithm
Journal of the Royal Statistical Society B 39(1), pages 1–38
- Dupire B, 1994
Pricing with a smile
Risk July, pages 18–20
- Geng Q, J Kienitz, GT Lee and N Nowaczyk, 2022
Dynamically controlled kernel estimation
Risk February, pages 110–115, http://www.risk.net/7921186
- Guyon J and P Henry-Labordère, 2012
Being particular about calibration
Risk January, pages 88–93, http://www.risk.net/2135540
- Halperin I, 2017
QLBS: Q-learner in the Black-Scholes (-Merton) worlds
Preprint, SSRN, December (https://ssrn.com/abstract=3087076)
- Huge B and A Savine, 2020
Differential machine learning: the shape of things to come
Risk October, pages 76–81, http://www.risk.net/7688441
- Kienitz, J, 2021
GMM-DCKE: semi-analytic conditional expectations
Preprint, SSRN, August (https://dx.doi.org/10.2139/ssrn.3902490)
- Longstaff FA and ES Schwartz, 2001
Valuing American options by simulation: a simple least-squares approach
Review of Financial Studies 14(1), pages 113–147
- Muguruza A, 2020
Not so particular about calibration: smile problem resolved
Preprint, SSRN (https://ssrn.com/abstract=3461545)
- van der Stoep AW, LA Grzelak and CW Oosterlee, 2014
The Heston stochastic-local volatility model: efficient Monte Carlo simulation
International Journal of Theoretical and Applied Finance 17(7), article 1450045
Copyright Infopro Digital Limited. All rights reserved.