Journal of Computational Finance


Gradient boosting for quantitative finance

Jesse Davis, Laurens Devos, Sofie Reyners and Wim Schoutens

  • Gradient boosted regression trees are used to learn the pricing map of financial derivatives.
  • Gradient boosting models deliver fast price predictions and are easy to train.
  • Feature engineering might enhance the predictive performance of the models.
  • The structure of the trees is analyzed to explain price predictions.

In this paper, we discuss how tree-based machine learning techniques can be used in the context of derivatives pricing. Gradient boosted regression trees are employed to learn the pricing map for two classical, time-consuming problems in quantitative finance. In particular, we illustrate this methodology by reducing computation times for pricing exotic derivative products and American options. Once the gradient boosting model is trained, it is used to make fast predictions of new prices. We show that this approach leads to speed-ups of several orders of magnitude, while the loss of accuracy remains acceptable from a practical point of view. Beyond the predictive performance of these methods, we also acknowledge the importance of the interpretability of pricing models. For both applications, we therefore look under the hood of the gradient boosting model and elaborate on how the price is constructed and interpreted.
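The workflow described in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' setup: as a stand-in for a slow exotic or American option pricer it uses the closed-form Black-Scholes price of a European call, scikit-learn's GradientBoostingRegressor as the boosting implementation, and arbitrary (hypothetical) parameter ranges for the sampled inputs. The final loop prints feature importances as a first, crude look under the hood of the fitted trees.

import numpy as np
from scipy.stats import norm
from sklearn.ensemble import GradientBoostingRegressor

def bs_call(S, K, T, r, sigma):
    # Black-Scholes European call price, used here as the "expensive" reference pricer.
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# 1. Sample the input space of the pricing map to build a training set
#    (ranges are illustrative assumptions, not taken from the paper).
rng = np.random.default_rng(0)
n = 20_000
S     = rng.uniform(50, 150, n)    # spot
K     = rng.uniform(50, 150, n)    # strike
T     = rng.uniform(0.1, 2.0, n)   # maturity in years
r     = rng.uniform(0.0, 0.05, n)  # risk-free rate
sigma = rng.uniform(0.1, 0.5, n)   # volatility
X = np.column_stack([S, K, T, r, sigma])
y = bs_call(S, K, T, r, sigma)     # reference prices as labels

# 2. Learn the pricing map with gradient boosted regression trees.
model = GradientBoostingRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# 3. Fast price predictions on new inputs, with a rough accuracy check.
X_test = np.column_stack([rng.uniform(50, 150, 1000),
                          rng.uniform(50, 150, 1000),
                          rng.uniform(0.1, 2.0, 1000),
                          rng.uniform(0.0, 0.05, 1000),
                          rng.uniform(0.1, 0.5, 1000)])
y_true = bs_call(*X_test.T)
y_pred = model.predict(X_test)
print("mean absolute error:", np.mean(np.abs(y_pred - y_true)))

# 4. A first look under the hood: which inputs drive the predicted price?
for name, imp in zip(["S", "K", "T", "r", "sigma"], model.feature_importances_):
    print(f"{name}: {imp:.3f}")

In the setting of the paper, the training labels would instead come from the expensive reference pricer (for example a Monte Carlo or PDE-based method for exotic or American options), so the one-off cost of generating the training set and fitting the model is amortized over the many fast evaluations that follow.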
