Why machine learning quants need ‘golden’ datasets
An absence of shared datasets is holding back the development of ML models in finance
Today’s computers are able to tell the difference between all manner of everyday things – cats and dogs, fire hydrants and traffic lights – because individuals have painstakingly catalogued 14 million such images, by hand, for the computers to learn from. Quants think finance needs something similar.
The labelled pictures used to train and test image recognition algorithms sit in a publicly available database called ImageNet. It’s been critical in making those algos better. Developers are able to benchmark their progress by their success rate in categorising ImageNet pictures correctly.
Without ImageNet, it would be far tougher to tell whether one model was beating another.
Finance is no different. Like all machine learning models, those used in investing or hedging reflect the data they have learnt from. So comparing models that have been trained on different data can tell quants lots about the data, but far less about the models themselves.
Measuring a firm’s machine learning model against other known models in the industry, or even against different models from the same organisation, becomes all but impossible.
The idea, then, is to create shared datasets that quants could use to weigh models one against another. In finance, it’s a more complex task than just collecting and labelling pictures, though.
For one, banks and investing firms are reluctant to share proprietary data – sometimes due to privacy concerns, often because the data has too much commercial value. Such reticence can make collecting raw information for benchmark datasets a challenge from the start.
Second, the new “golden” datasets would need masses of data covering all market scenarios – including scenarios that have never actually occurred in history.
This is a well-known problem affecting machine learning models that are trained on historical data. In financial markets the future seldom looks like the past.
“If the dataset you train your model on resembles the data or scenarios it encounters in real life, you’re in business,” says Blanka Horvath, professor of mathematical finance at the Technical University of Munich. “If it’s significantly different, you don’t know what the model is going to do.”
The solution to both problems, quants think, could be to create some of the benchmark data themselves.
Horvath, with a team at TUM’s Data Science Institute, has launched a project called SyBenDaFin – synthetic benchmark datasets for finance – to do just that.
The plan is to formulate gold standard datasets that reflect what happened in markets in the past but also what could have happened, even if it didn’t.
Synthesising data in this way is increasingly common in finance. Horvath, in another project, carried out tests on machine learning deep hedging engines, for example, by training a model on synthetic data and comparing its output against a conventional hedging approach.
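The article does not specify how such synthetic data is generated, but the idea of a dataset covering both historical and never-observed scenarios can be sketched in miniature. Below is a hypothetical illustration, not any group’s actual method: it simulates price paths under geometric Brownian motion, with a fraction of paths assigned a stressed volatility to stand in for conditions absent from the historical record. All parameter names and values are assumptions chosen for the example.

```python
import numpy as np

def make_benchmark_paths(n_paths=1000, n_steps=252, s0=100.0,
                         mu=0.05, sigma=0.2, stressed_sigma=0.6,
                         stress_fraction=0.2, seed=42):
    """Simulate daily price paths under geometric Brownian motion.
    A fraction of paths uses a stressed volatility, standing in for
    market scenarios that never occurred historically."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 252
    n_stress = int(n_paths * stress_fraction)
    # Per-path volatilities: mostly "normal", some stressed
    vols = np.concatenate([np.full(n_paths - n_stress, sigma),
                           np.full(n_stress, stressed_sigma)])
    z = rng.standard_normal((n_paths, n_steps))
    drift = (mu - 0.5 * vols[:, None] ** 2) * dt
    log_returns = drift + vols[:, None] * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(log_returns, axis=1))
    # Prepend the common starting price so each path has n_steps + 1 points
    return np.hstack([np.full((n_paths, 1), s0), paths]), vols
```

A real benchmark generator would of course need far richer dynamics – stochastic volatility, jumps, cross-asset dependence – but the principle of deliberately including unobserved regimes is the same.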
Quants say it would be too complex to formulate a universal dataset comparable to ImageNet for all types of finance models.
The market patterns that would test a model that rebalances every few seconds, for example, would be different from events that would challenge a model trading on a monthly horizon.
Instead, the idea would be to create multiple sets of data, each designed to test models created for a specific use.
Benchmarks could help practitioners grasp the strengths and weaknesses of their models, and judge whether changes to a model actually bring improvement.
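The point of scoring models on a shared dataset can be shown with a toy example. In the sketch below – purely illustrative, with all names and data invented – two trivial return forecasters are evaluated on the same benchmark series, so any difference in score reflects the models themselves rather than differences in their training data.

```python
import numpy as np

def benchmark_score(returns, predict):
    """Mean squared error of one-step-ahead forecasts made by
    `predict`, a function of the return history seen so far,
    over a shared benchmark return series."""
    errors = [(predict(returns[:t]) - returns[t]) ** 2
              for t in range(1, len(returns))]
    return float(np.mean(errors))

# A shared benchmark series (synthetic here, for illustration)
rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.01, 500)

# Two toy forecasters scored on the SAME data, so any gap in
# scores reflects the models rather than their training sets
score_naive = benchmark_score(rets, lambda hist: 0.0)
score_mean = benchmark_score(rets, lambda hist: hist.mean())
```

Without the shared series, the two scores would be computed on different data and would say little about the models themselves – which is the problem the golden datasets are meant to solve.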
Regulators, too, stand to benefit. Potentially, they could train models using the gold standard data and see how well they perform versus the same model trained on a firm’s in-house data.
In a paper last year, authors from the Alan Turing Institute and the Universities of Edinburgh and Oxford said the industry today had little understanding of how appropriate or optimal different machine learning methods were in different cases. A “clear opportunity” exists for finance to use synthetic data generators in benchmarking, they wrote.
“Firms are increasingly relying on black-box algorithms and methods,” says Sam Cohen, one of the authors and an associate professor with the Mathematical Institute at the University of Oxford and the Alan Turing Institute. “This is one way of verifying our understanding of what they are actually going to do.”
Copyright Infopro Digital Limited. All rights reserved.