The use of machine learning in building investment strategies is limited by data and fails to adapt to changes that humans could otherwise quickly pick up, argued quants at a debate held in London on March 7.
Six experts from the industry participated in a Risk.net debate as part of the Quant Europe conference held in London. The debate asked whether machine learning and artificial intelligence are a revolutionary set of tools that will fundamentally change investment strategies.
Those speaking against the motion argued that changes in a market regime, such as interest rates or inflation, can cause machine learning-based strategies to rapidly break down.
“It’s very difficult for machine learning to say exactly when a sample is relevant,” said Nicholas Harper, a portfolio manager at Janus Henderson Investors in London. “It takes some degree of intuition and guidance on top of the underlying algorithms. I don’t think it’s the end solution.”
He added: “I would say machine learning in the wrong hands is probably dangerous.”
Machine learning is a type of artificial intelligence that allows computers to perform tasks such as interrogating datasets, based on algorithms that can evolve through experience. Some banks are already using the technology in their credit underwriting processes and XVA optimisation.
Hedge funds are also known to be using machine learning to develop trading strategies, while dealers are looking to use the technology to help derivatives salespeople pitch trade ideas more accurately.
One common technique for machine learning is cluster analysis, which is used to identify hard-to-see similarities and patterns in complex data. Another is reinforcement learning, which aims to train the machine, through a large number of simulations, to choose the best course of action in a particular environment.
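The clustering idea described above can be sketched in a few lines. This is a hedged illustration, not any panellist's method: the assets, return series and cluster count are all invented, and a production system would use a tested library implementation rather than this hand-rolled k-means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten synthetic "assets" driven by two hidden common factors (five assets each),
# so the true grouping is known in advance.
n_obs = 250
factor_a, factor_b = rng.normal(size=n_obs), rng.normal(size=n_obs)
returns = np.vstack(
    [factor_a + rng.normal(scale=0.3, size=n_obs) for _ in range(5)]
    + [factor_b + rng.normal(scale=0.3, size=n_obs) for _ in range(5)]
)

def kmeans(X, k, iters=50):
    """Plain k-means: alternate nearest-centre assignment and centre update."""
    centres = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Squared Euclidean distance from every row to every centre.
        dists = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(returns, k=2)
print(labels)  # assets sharing a hidden driver land in the same cluster
```

On real data the similarities are far harder to see than in this toy setup, which is precisely why clustering is used: the algorithm recovers the grouping from the return co-movement alone.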
Those speaking for the motion in the debate argued that the increase in available data in today’s markets has opened the way for machine learning techniques to extract more value from information than human analysts could.
“Twenty years ago, quant finance was about using once-a-day data. Now our data comes tick by tick,” said Nick Granger, chief investment officer at MAN AHL in London, arguing in favour of the motion. “We use structure and the dynamics of the order book. We look at text data, voice data, image data. If you can’t use machine learning you can’t ingest this data and you certainly can’t make sense of it, and this would leave you with an enormous information disadvantage compared to your competitors and the rest of the market.”
But the data that powers machine learning could be its Achilles heel: data inputs are by definition backward-looking, which could undermine the ongoing relevance and usefulness of the resultant strategies, Harper argued.
“When you are looking at longer periods, machine learning becomes much more challenging,” he said. “Let’s say you are trying to assess whether something is a dog or not. You can say the dog has hair, has a nose, has ears, has all these characteristics and you can do that for a million dogs; and you can look at the next dog and say, yes that’s a dog. Well, let’s imagine you are in a world where all the dogs suddenly lose their hair. All that work you have done is broadly irrelevant. You are basing it on a sample set that doesn’t relate to the state of how things are now. That is finance. It’s a lot of dogs without hair.”
Granger countered that fundamental shifts in financial markets would have a similarly disruptive effect on any existing methodology, machine learning or not; the challenge for machine learning is to adapt to such step-changes.
“The issue of non-stationarity and the issue of feedback effects … these are a problem, and they have always been a problem, in all financial markets. If you are a traditional quant and you look at value or carry or momentum, if they stop working, the dog’s hair falls off, and your model stops working. If you’re running machine learning algorithms, the key point is learning. The algorithms can learn,” he said.
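The two sides of this exchange can be made concrete with a small simulation. In the sketch below (entirely synthetic data, not either firm's model), a signal's relationship to returns flips sign halfway through the sample — the dog loses its hair. A model fitted once on the old regime keeps its stale coefficient, while a model that keeps learning on a rolling window adapts:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
signal = rng.normal(size=n)
# Regime flip at the midpoint: the signal's true beta goes from +1 to -1.
beta = np.where(np.arange(n) < n // 2, 1.0, -1.0)
rets = beta * signal + rng.normal(scale=0.5, size=n)

# Static model: slope fitted once, on first-regime data only.
static_beta = np.polyfit(signal[: n // 2], rets[: n // 2], 1)[0]

# Adaptive model: slope re-fitted on a rolling 100-observation window.
window = 100
preds_static, preds_roll = [], []
for t in range(n // 2 + window, n):
    preds_static.append(static_beta * signal[t])
    b = np.polyfit(signal[t - window : t], rets[t - window : t], 1)[0]
    preds_roll.append(b * signal[t])

actual = rets[n // 2 + window : n]
mse_static = np.mean((np.array(preds_static) - actual) ** 2)
mse_roll = np.mean((np.array(preds_roll) - actual) ** 2)
print(round(mse_static, 2), round(mse_roll, 2))
```

The rolling refit tracks the new regime and its prediction error collapses, while the static model's error is dominated by the stale coefficient — Granger's point that "the algorithms can learn", and Harper's point that they can only learn from what the window contains.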
David Jessop, global head of equities quantitative research at UBS in London, highlighted an additional problem with machine learning’s dependence on data. Computers learn by creating multiple scenarios based on given data, whereas real financial transactions constitute only a single scenario.
“People talk about reinforcement learning, they talk about computers playing Go [a strategy game]. Well, that’s great – I can get a computer to play itself at Go any number of times. You can’t do that with the market. You have one sample. And it’s not even the universe. It’s a sample,” said Jessop.
But the scope to broaden data inputs beyond traditional areas can help investors identify interactions between factors that had never been studied before, which can in turn improve predictions, said Tony Guida, a senior quantitative portfolio manager at RPMI Railpen.
“If you use a lot of those features that are not in the literature in the risk-premia space you will find that you can find something even better. Machine learning is about prediction and you are paid on predictions, not estimations,” Guida said.
George Lentzas, chief data scientist at Springfield Capital Management in New York, took the argument one step further by suggesting this could lead to a new breed of investors combining systematic quantitative and fundamental analysis in implementing investment strategies.
“Machine learning will allow fundamental analysts, and not necessarily quants, to use these tools that are widely available, easily accessible and understandable. The paradigm of fundamental analysts looking at 10 stocks all the time is likely to go away, and a new paradigm will emerge of the quant fundamental investor,” he said.
But what if the intelligent algorithms spit out predictions that real-life investors would reject? The ability to apply qualitative judgement to investment strategies is beyond the scope of current machine learning. Jessop made this point using the example of factor investing, a popular investment approach that aims to capture the return in underlying factors common to a variety of securities, rather than focusing on individual names.
“A good example of this was a paper by three authors. They created two million factors, of which, once you adjust for multiple testing, about 14 are significant – none of which a fundamental investor would buy. And that’s the problem with machine learning,” Jessop said.
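The data-snooping mechanism behind Jessop's example is easy to reproduce in miniature. The simulation below (invented numbers, not the paper's actual procedure) screens a few thousand pure-noise "factors" against a pure-noise return series: plenty look significant at the naive 5% level, and almost none survive a Bonferroni-style correction for the number of tests:

```python
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_factors = 250, 2000
rets = rng.normal(size=n_obs)                     # noise "returns"
factors = rng.normal(size=(n_factors, n_obs))     # noise "factors"

# Pearson correlation of each factor with returns, then the usual t-statistic.
fc = factors - factors.mean(axis=1, keepdims=True)
rc = rets - rets.mean()
corr = fc @ rc / (np.linalg.norm(fc, axis=1) * np.linalg.norm(rc))
t = corr * np.sqrt(n_obs - 2) / np.sqrt(1 - corr**2)

naive = np.abs(t) > 1.96        # per-test 5% threshold
bonferroni = np.abs(t) > 4.21   # approx two-sided cutoff for 0.05 / 2000 tests
print(naive.sum(), bonferroni.sum())
```

By construction every factor here is worthless, yet roughly 5% of them clear the naive bar — the same effect, at far larger scale, that turns two million candidate factors into a handful of "significant" ones no fundamental investor would buy.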
Another key issue raised by the opposition in the debate was that excessive reliance on algorithms could influence market prices in a potentially dangerous way. Riccardo Rebonato, professor of finance at EDHEC Business School, described a feedback effect called reflexivity: automated machine learning algorithms that learn from price data and implement new trading strategies at high frequency can end up moving the very prices they were meant to analyse and learn from.
“If we are looking at prices, to the extent that my study changes the price you are looking at, I have reflexivity which I don’t think is good at all for the market in general. Prices are precious. Tinkering with the prices can create really serious situations,” said Rebonato.