University of Oxford
It is with great pleasure that I introduce this issue of The Journal of Computational Finance, which again provides an exciting snapshot of modern computational finance. The papers herein pick up on some of the most recent developments as well as make widely established, fundamental methodologies in quantitative finance more efficient and practically applicable.
In the issue’s first paper, “Neural stochastic differential equations for conditional time series generation using the Signature-Wasserstein-1 metric”, Pere Díaz Lozano, Toni Lozano Bagén and Josep Vives build on recent advances in generative adversarial networks for so-called market generators: algorithms that generate realistic instances of market evolutions, to be used, for example, for risk simulation or as training environments for machine learning. By using path signatures as features, their conditional neural stochastic differential equation models are more memory efficient than traditional deep learning architectures.
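To give a flavor of the path-signature features that underpin this line of work (a minimal self-contained sketch, not the authors' implementation), the signature of a piecewise-linear path truncated at level two can be accumulated one increment at a time via Chen's identity:

```python
def truncated_signature(points):
    """Level-1 and level-2 signature terms of a piecewise-linear path
    in R^d, built up increment by increment using Chen's identity.

    S1[i]    = total increment of coordinate i,
    S2[i][j] = iterated integral of dX_i dX_j over s < t.
    """
    d = len(points[0])
    S1 = [0.0] * d
    S2 = [[0.0] * d for _ in range(d)]
    for prev, cur in zip(points, points[1:]):
        delta = [c - p for c, p in zip(cur, prev)]
        # Chen: concatenating a linear segment updates S2 before S1
        for i in range(d):
            for j in range(d):
                S2[i][j] += S1[i] * delta[j] + 0.5 * delta[i] * delta[j]
        for i in range(d):
            S1[i] += delta[i]
    return S1, S2

# An L-shaped path in the plane: right one unit, then up one unit
S1, S2 = truncated_signature([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
```

The off-diagonal asymmetry of S2 encodes the Lévy area of the path, which is what lets signature features distinguish the order in which moves occurred; the symmetric part is determined by S1 through the shuffle identity S2[i][j] + S2[j][i] = S1[i]·S1[j].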
Next, Kenichiro Shiraya, Cong Wang and Akira Yamazaki give us “A general control variate method for time-changed Lévy processes: an application to options pricing”. This new control variate method systematically constructs a time-changed Lévy model that is highly correlated with an underlying price process. While a well-established characteristic function approach with a fast Fourier transform gives an efficient pricing method for the approximate model, the control variate strategy provides for significant variance reduction, and hence efficiency gains, in simulations under the original model. This advantage is demonstrated in numerical tests for lookback and barrier options.
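The general mechanism at work here can be illustrated in a much simpler setting than the paper's time-changed Lévy models. The sketch below (my own toy example, assuming plain Black-Scholes dynamics) prices a European call by Monte Carlo and uses the terminal stock price, whose expectation is known in closed form, as the control variate:

```python
import math
import random

def cv_call_price(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                  n=20000, seed=1):
    """Monte Carlo price of a European call under geometric Brownian
    motion, using the terminal stock price S_T (known mean S0*exp(rT))
    as a control variate.

    Returns (plain_estimate, cv_estimate, variance_ratio), where
    variance_ratio is Var(cv sample) / Var(plain sample).
    """
    random.seed(seed)
    disc = math.exp(-r * T)
    drift = (r - 0.5 * sigma * sigma) * T
    vol = sigma * math.sqrt(T)
    ys, xs = [], []
    for _ in range(n):
        sT = S0 * math.exp(drift + vol * random.gauss(0.0, 1.0))
        ys.append(disc * max(sT - K, 0.0))  # payoff to be priced
        xs.append(sT)                       # correlated control
    my = sum(ys) / n
    mx = sum(xs) / n
    cov = sum((y - my) * (x - mx) for y, x in zip(ys, xs)) / (n - 1)
    var_x = sum((x - mx) ** 2 for x in xs) / (n - 1)
    beta = cov / var_x              # variance-minimizing coefficient
    ex = S0 * math.exp(r * T)       # E[S_T] in closed form
    adj = [y - beta * (x - ex) for y, x in zip(ys, xs)]
    ma = sum(adj) / n
    var_y = sum((y - my) ** 2 for y in ys) / (n - 1)
    var_a = sum((a - ma) ** 2 for a in adj) / (n - 1)
    return my, ma, var_a / var_y
```

The variance ratio equals one minus the squared correlation between payoff and control, so the more highly correlated the control, the larger the gain; the paper's contribution is a systematic way of constructing such a highly correlated control within the time-changed Lévy class, priced efficiently by Fourier methods.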
Regression Monte Carlo approaches have played a central role in the solution of optimal stopping problems, such as early exercise option valuation, but they are also key to the computation of valuation adjustments. In the final paper in this issue, “Toward a unified implementation of regression Monte Carlo algorithms”, The Journal of Computational Finance’s own Mike Ludkovski presents mlOSP, a computational template for machine learning for optimal stopping problems. This includes, uniquely, multiple optimal stopping formulations, as required for the valuation of swing options. The mathematical language is that of modern statistical machine learning, and the code is available in R from a GitHub repository. The toolbox allows convenient comparison of these methods and benchmarking of new developments against state-of-the-art approaches.
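As a reminder of the basic regression Monte Carlo recipe that such a template unifies, the sketch below implements the classical Longstaff-Schwartz scheme for a Bermudan put under geometric Brownian motion (my own illustrative code in Python, not taken from the mlOSP package, which is in R): simulate paths forward, then step backward, regressing discounted continuation values on polynomial features of the spot over in-the-money paths.

```python
import math
import random

def _solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in range(2, -1, -1):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def lsm_bermudan_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     steps=50, paths=10000, seed=7):
    """Longstaff-Schwartz regression Monte Carlo price of a Bermudan put."""
    random.seed(seed)
    dt = T / steps
    disc = math.exp(-r * dt)
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    # Forward pass: simulate all paths
    S = []
    for _ in range(paths):
        path, s = [S0], S0
        for _ in range(steps):
            s *= math.exp(drift + vol * random.gauss(0.0, 1.0))
            path.append(s)
        S.append(path)
    # Backward induction: cash[p] is the path's realized payoff value
    cash = [max(K - S[p][steps], 0.0) for p in range(paths)]
    for t in range(steps - 1, 0, -1):
        cash = [c * disc for c in cash]  # discount one step back to t
        itm = [p for p in range(paths) if K - S[p][t] > 0.0]
        if len(itm) < 3:
            continue
        # Regress continuation value on basis (1, x, x^2) over ITM paths
        A = [[0.0] * 3 for _ in range(3)]
        b = [0.0] * 3
        for p in itm:
            phi = [1.0, S[p][t], S[p][t] ** 2]
            for i in range(3):
                b[i] += phi[i] * cash[p]
                for j in range(3):
                    A[i][j] += phi[i] * phi[j]
        beta = _solve3(A, b)
        for p in itm:
            x = S[p][t]
            if K - x > beta[0] + beta[1] * x + beta[2] * x * x:
                cash[p] = K - x  # exercise beats estimated continuation
    return disc * sum(cash) / paths
```

Every design choice in this sketch, the basis functions, the regression solver, the restriction to in-the-money paths, is a point where implementations in the literature differ, which is exactly the kind of variation a unified template makes easy to compare.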
Finally, I am delighted to congratulate Mike and fellow editorial board member Christa Cuchiero on their selection as Plenary Lecturers at the 12th World Congress of the Bachelier Finance Society, to be held in Rio de Janeiro on July 8–12, 2024. Mike will deliver the Louis Bachelier Lecture. I am greatly looking forward to the Congress and hope to see many of you there.
In the meantime, I wish you much inspiration in reading the present issue of The Journal of Computational Finance.
Neural stochastic differential equations for conditional time series generation using the Signature-Wasserstein-1 metric
Using conditional neural stochastic differential equations, the authors propose a means to improve the efficiency of generative adversarial networks and benchmark their model against classical approaches.
A general control variate method for time-changed Lévy processes: an application to options pricing
The authors put forward a novel control variate method for time-changed Lévy models and demonstrate, in numerical experiments, an efficient reduction of the variance of Monte Carlo estimators.
Toward a unified implementation of regression Monte Carlo algorithms
The author puts forward a publicly available computational template for machine learning, named mlOSP, which presents a unified numerical implementation of regression Monte Carlo (RMC) approaches for optimal stopping.