Podcast: Piterbarg and Antonov on alternatives to neural networks
Two novel approximation techniques can overcome the curse of dimensionality
The rise of neural networks in finance has been rapid and nearly uncontested. The idea that this type of artificial intelligence is the best solution for all manner of problems is almost casually accepted by quants. But there are some sceptics – one such being Vladimir Piterbarg, head of quantitative analytics and development at NatWest Markets. Piterbarg jokes that his warnings about poorly thought-out applications of neural networks, which he has compared to a hammer in search of nails, have given him a bad reputation in the industry.
Despite neural networks’ many successful applications in finance and, especially, in non-financial areas, Piterbarg argues quants too often turn a blind eye to their drawbacks, such as training time, data availability, and the predictability and interpretability of outputs.
He has spent the past two years working with Alexandre Antonov, quantitative research and development lead at the Abu Dhabi Investment Authority, on alternative approaches to solving quantitative problems that do not have the same drawbacks.
In this episode of Quantcast, Piterbarg and Antonov discuss the two methods they developed to approximate functions that are cumbersome to calculate – for instance, because they involve lengthy simulations – and to compute conditional expectations.
“We’ve come up with much better and much faster methods for the typical problems that we see in finance,” says Piterbarg.
The two methods are called generalised stochastic sampling (GSS) and functional tensor train (FTT). Antonov describes GSS as “a parametric representation of the function”. The method places bell-shaped basis functions at randomly scattered points in the function’s multi-dimensional domain; randomly selected sample points are then used to fit a reconstruction of the function. The approach mimics image reconstruction and helps avoid the curse of dimensionality that classical parametric models encounter in high-dimensional spaces.
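As a rough illustration of the idea only (not the authors’ actual GSS algorithm), the sketch below approximates a smooth two-dimensional function with Gaussian “bell-shaped” bumps placed at random centres, recovering the linear coefficients by least squares on random samples. All function names, parameter values and the choice of target function are assumptions made for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a smooth 2-D function we pretend is expensive to evaluate.
def target(x):
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

# Randomly scattered Gaussian ("bell-shaped") basis centres.
n_centres, dim, width = 200, 2, 0.5
centres = rng.uniform(-3, 3, size=(n_centres, dim))

def features(x):
    # Value of each radial Gaussian bump at each sample point.
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width**2))

# Fit the linear coefficients by least squares on random samples.
x_train = rng.uniform(-3, 3, size=(2000, dim))
coef, *_ = np.linalg.lstsq(features(x_train), target(x_train), rcond=None)

# The surrogate is now cheap to evaluate on fresh points.
x_test = rng.uniform(-2, 2, size=(500, dim))
approx = features(x_test) @ coef
err = np.max(np.abs(approx - target(x_test)))
print(f"max abs error: {err:.4f}")
```

Because the representation is linear in its coefficients, fitting reduces to a least-squares solve rather than the iterative gradient-based training a neural network would need, which is one reason such parametric representations can be faster to calibrate on smooth functions.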
“If the function is relatively smooth, then our method’s precision is higher [than that of neural networks],” explains Antonov. “All the problems we have described for neural networks are more or less overcome.”
FTT reduces the dimensionality of a function. Inspired by a 2011 paper by Ivan Oseledets, the method allows a 10-dimensional function, for example, to be decomposed into a product of two-dimensional functions. “That gives structure, explainability, and much faster performance,” says Piterbarg.
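The underlying tensor-train idea can be sketched with a plain SVD-based routine in the spirit of Oseledets’ TT-SVD construction. The example below decomposes a discretised four-dimensional function into a train of small cores and reconstructs it; it is an illustrative toy, not the FTT method itself, and all names and grid choices are assumptions.

```python
import numpy as np

# Discretise a smooth 4-D function on a small grid.
n = 10
grid = np.linspace(0.0, 1.0, n)
x0, x1, x2, x3 = np.meshgrid(grid, grid, grid, grid, indexing="ij")
tensor = np.sin(x0 + x1) * np.exp(-(x2 + x3))  # shape (n, n, n, n)

def tt_svd(a, tol=1e-10):
    """Decompose tensor `a` into a train of 3-D cores via successive SVDs."""
    shape = a.shape
    cores, rank = [], 1
    mat = a.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int((s > tol * s[0]).sum()))  # drop tiny singular values
        cores.append(u[:, :r].reshape(rank, shape[k], r))
        # Carry the remainder forward, folding in the next dimension.
        mat = (s[:r, None] * vt[:r]).reshape(r * shape[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the cores back into a full tensor."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])

cores = tt_svd(tensor)
approx = tt_reconstruct(cores)
err = np.max(np.abs(approx - tensor))
ranks = [c.shape[2] for c in cores[:-1]]
print("TT ranks:", ranks, "max error:", err)
```

The point of the format is that storage and evaluation cost grow with the TT ranks rather than exponentially with the number of dimensions: here the separable structure of the target keeps the ranks very small, while the full tensor has 10,000 entries.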
The two methods can be combined to “translate the problem from the function being sampled stochastically into a linear tensor train problem”, says Piterbarg. That lightens the computation considerably, while still accommodating multi-dimensional functions.
These methods can be applied to any problem described by functions that are slow to calculate, as well as to problems involving the computation of conditional expectations, such as the payoffs of complex financial products. The computation of derivatives valuation adjustments, which requires calculating a large number of present values along simulated paths, is another use case.
It’s too early to know how widely this approach will be adopted in the industry. “NatWest Markets is looking at it,” says Piterbarg, adding the caveat that the research has only recently been published and it takes time for banks to decide whether to switch to a new model.
But he’s optimistic that the industry will see the value of GSS and FTT. “For a certain class of problems that are important in finance, this method beats neural networks hands down, there’s no doubt in my mind about it.”
Index
00:00 Intro
01:48 The drawbacks of neural networks
08:00 The need for alternative approaches
10:38 Generalised stochastic sampling
17:40 Functional tensor train
24:00 Combining the methods in practice
30:10 Comparing GSS and FTT to neural networks
31:40 Next steps
To hear the full interview, listen in the player above, or download. Future podcasts in our Quantcast series will be uploaded to Risk.net. You can also visit the main page here to access all tracks, or go to the iTunes store, Spotify or Google Podcasts to listen and subscribe.