Why machine learning quants need ‘golden’ datasets

An absence of shared datasets is holding back the development of ML models in finance

Today’s computers are able to tell the difference between all manner of everyday things – cats and dogs, fire hydrants and traffic lights – because individuals have painstakingly catalogued 14 million such images, by hand, for the computers to learn from. Quants think finance needs something similar.

The labelled pictures used to train and test image recognition algorithms sit in a publicly available database called ImageNet. It’s been critical in making those algos better. Developers can benchmark their progress by how accurately their algorithms categorise ImageNet pictures.

Without ImageNet, it would be far tougher to tell whether one model was beating another.

Finance is no different. Like all machine learning models, those used in investing or hedging reflect the data they have learnt from. So comparing models that have been trained on different data can tell quants a lot about the data, but far less about the models themselves.

Measuring a firm’s machine learning model against other known models in the industry, or even against different models from the same organisation, becomes all but impossible.
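To make the problem concrete, here is a minimal sketch – not any firm’s actual workflow, and with simulated numbers standing in for a real benchmark dataset – of what a shared dataset buys: two candidate models are fitted and scored on exactly the same data, so any gap in their scores reflects the models rather than whatever data each happened to see.

```python
# Toy illustration: hold the dataset fixed and a comparison measures the
# models; let each model train on its own data and it measures both at once.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated stand-in for a shared benchmark dataset: five signals and a
# noisy next-period return.
X = rng.normal(size=(2000, 5))
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.1 * rng.normal(size=2000)

# Everyone fits and scores on exactly the same split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

candidates = {
    "ridge": Ridge(alpha=1.0),
    "boosted_trees": GradientBoostingRegressor(random_state=42),
}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: out-of-sample MSE = {mse:.4f}")
```

Train each candidate on a different in-house dataset instead and the comparison breaks down: the scores then mix up the quality of the model with the quality and coverage of its data.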

The idea, then, is to create shared datasets that quants could use to weigh models against one another. In finance, though, that’s a more complex task than just collecting and labelling pictures.

For one, banks and investing firms are reluctant to share proprietary data – sometimes due to privacy concerns, often because the data has too much commercial value. Such reticence can make collecting raw information for benchmark datasets a challenge from the start.

Secondly, the new “golden” datasets would need masses of data covering all market scenarios – including scenarios that have never actually occurred in history.

This is a well-known problem affecting machine learning models that are trained on historical data. In financial markets the future seldom looks like the past.


“If the dataset you train your model on resembles the data or scenarios it encounters in real life, you’re in business,” says Blanka Horvath, professor of mathematical finance at the Technical University of Munich. “If it’s significantly different, you don’t know what the model is going to do.”

The solution to both problems, quants think, could be to create some of the benchmark data themselves.

Horvath, with a team at TUM’s Data Science Institute, has launched a project called SyBenDaFin – synthetic benchmark datasets for finance – to do just that.

The plan is to formulate gold-standard datasets that reflect not only what happened in markets in the past but also what could have happened, even if it didn’t.

Synthesising data in this way is increasingly common in finance. In another project, Horvath tested machine learning deep hedging engines by training a model on synthetic data and comparing its output with that of a conventional hedging approach.
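The sketch below is only a hedged illustration of that workflow, not Horvath’s method: a plain Black-Scholes path generator stands in for a proper market generator, and two delta-hedging variants – rebalanced daily and weekly – stand in for the strategies being compared. The point is simply that both are scored on the same synthetic scenarios, including volatility regimes harsher than anything in the historical record.

```python
# Toy benchmark: score hedging strategies on a common set of synthetic
# scenarios. Geometric Brownian motion stands in for a proper market
# generator; two delta-hedging variants stand in for the strategies
# (e.g. a learned hedger vs a conventional one) being compared.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

S0, K, r, T = 100.0, 100.0, 0.0, 0.25      # spot, strike, rate, maturity (years)
n_paths, n_steps = 20_000, 63              # roughly daily steps over three months
dt = T / n_steps

def bs_call_delta(S, tau, sigma):
    """Black-Scholes delta of a European call (r = 0)."""
    tau = np.maximum(tau, 1e-12)
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def hedging_error(paths, sigma, rebalance_every):
    """Terminal hedging error (premium ignored) of a short call, delta-hedged."""
    cash = np.zeros(n_paths)
    delta = np.zeros(n_paths)
    for i in range(n_steps):
        if i % rebalance_every == 0:
            tau = T - i * dt
            new_delta = bs_call_delta(paths[:, i], tau, sigma)
            cash -= (new_delta - delta) * paths[:, i]   # buy/sell stock to rebalance
            delta = new_delta
    payoff = np.maximum(paths[:, -1] - K, 0.0)
    return cash + delta * paths[:, -1] - payoff

# Synthetic scenarios: volatility regimes including ones harsher than history.
for sigma in (0.15, 0.30, 0.80):
    z = rng.standard_normal((n_paths, n_steps))
    increments = (-0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = S0 * np.exp(np.cumsum(increments, axis=1))
    paths = np.hstack([np.full((n_paths, 1), S0), paths])
    daily = hedging_error(paths, sigma, rebalance_every=1)
    weekly = hedging_error(paths, sigma, rebalance_every=5)
    print(f"vol={sigma:.0%}: hedge-error std  daily={daily.std():.3f}  weekly={weekly.std():.3f}")
```

Because every strategy is run over the same paths, the spread of terminal hedging errors can be compared regime by regime, which is the role a benchmark dataset would play.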

Quants say it would be too complex to formulate a universal dataset comparable to ImageNet for all types of finance models.

The market patterns that would test a model that rebalances every few seconds, for example, would be different from events that would challenge a model trading on a monthly horizon.

Instead, the idea would be to create multiple sets of data, each designed to test models created for a specific use.

Benchmarks could help practitioners grasp the strengths and weaknesses of their models, and whether changes to a model amount to an improvement.

Regulators, too, stand to benefit. Potentially, they could train a model on the gold-standard data and see how well it performs versus the same model trained on a firm’s in-house data.
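A hedged sketch of what that check might look like – all of the data is simulated, and the “golden” and “in-house” sets are invented for the example: the same model class is fitted to each training set and the two fits are compared on one common set of stressed scenarios.

```python
# Toy version of the regulator's check: same model class, two training
# sets, one shared stress test set. The "golden" set spans a wider range
# of regimes than the calm-years-only "in-house" set.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

def make_data(n, vol_range):
    """Simulated (signal, vol) features and the realised move they drive."""
    vol = rng.uniform(*vol_range, size=n)
    signal = rng.normal(size=n)
    move = signal * vol + 0.1 * rng.normal(size=n)
    return np.column_stack([signal, vol]), move

X_golden, y_golden = make_data(5000, vol_range=(0.1, 0.8))    # broad regimes
X_inhouse, y_inhouse = make_data(5000, vol_range=(0.1, 0.3))  # calm years only
X_stress, y_stress = make_data(2000, vol_range=(0.5, 0.8))    # shared stress test

for label, (X_tr, y_tr) in {"golden": (X_golden, y_golden),
                            "in-house": (X_inhouse, y_inhouse)}.items():
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    mse = mean_squared_error(y_stress, model.predict(X_stress))
    print(f"trained on {label} data: stress-test MSE = {mse:.4f}")
```

The gap between the two stress-test scores shows how much of the model’s apparent performance comes from the breadth of the data it was trained on.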

In a paper last year, authors from the Alan Turing Institute and the Universities of Edinburgh and Oxford said the industry today had little understanding of how appropriate or optimal different machine learning methods were in different cases. A “clear opportunity” exists for finance to use synthetic data generators in benchmarking, they wrote.

“Firms are increasingly relying on black-box algorithms and methods,” says Sam Cohen, one of the authors and an associate professor with the Mathematical Institute at the University of Oxford and the Alan Turing Institute. “This is one way of verifying our understanding of what they are actually going to do.”
