For as long as there have been markets, there has been market impact – the phenomenon through which a large transaction moves the price – but changes in market structure and trading strategies are reviving interest in the topic.
The effect comes into play when large orders cannot be executed straight away because of a lack of liquidity at the current market price. One solution is to break the trades up into smaller pieces, but if you trade too slowly, the markets could move against you over time. If you trade too quickly, you could move the price significantly – upwards for buying, and downwards for selling.
Today, many market participants deal with this by using so-called optimal execution algorithms, which are a way of monitoring and controlling trading activity to minimise costs.
These algorithms tend to be applied to either aggressive market orders, which trade immediately at prevailing market prices, or passive limit orders, which specify the worst price at which an investor is willing to buy or sell. But algorithms typically can’t be applied to both market and limit orders at the same time. The former can create a significant price impact but can get the order completely filled, while the latter carry the risk of the order never being completed.
Perhaps a more realistic method, then, would be to identify when to be aggressive with market orders and when to be passive with limit orders. But combining the two is enormously complex.
“This is what happens if people try to use just limit orders. By the end of the day, if they really need to get the execution finished – that is, have shares to sell but only five minutes to close – they need to use market orders. It’s just that in the literature, you only see papers that have market orders or limit orders. They don’t interact. When you have all the features, the control problem becomes intractable because of too many state variables, too many stochastic processes and the correlations are unknown,” says Tim Leung, an associate professor at the University of Washington.
In our latest technical, Fast and precautious: order controls for trade execution, Brian Bulthuis, head of quantitative research in electronic trading at KCG Holdings in New York, Julio Concha, a quantitative analyst at the same firm, Brian Ward, a research assistant at Columbia University, and the University of Washington’s Leung attempt to combine both market and limit orders in optimal execution.
Alex Lipton, a connection science and engineering fellow at the Massachusetts Institute of Technology, says this makes the exercise more realistic and useful for dealing with real-world problems.
The quants do this by introducing three penalties that add to the cost of trading, and hence must be minimised by the algorithm. The first penalty is the so-called speed limiter, which penalises fast trading rates. The other two, the non-liquidation terminal penalty and the trade director, push trades towards complete liquidation and ensure market and limit orders, respectively, are in the same direction.
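The three penalties can be illustrated with a simple discretised cost function. This is a minimal sketch, not the authors' actual formulation: the function name, parameters and the quadratic forms of the penalties are all assumptions made for illustration.

```python
import numpy as np

def execution_cost(mkt_rates, lim_fills, inventory0, dt=1.0,
                   impact=0.01, speed_pen=0.05, term_pen=10.0, director_pen=1.0):
    """Illustrative penalised cost of a liquidation schedule.

    mkt_rates: per-step market-order trading rates (positive = selling)
    lim_fills: per-step expected limit-order fills (same sign convention)
    inventory0: shares to liquidate over the horizon
    """
    mkt_rates = np.asarray(mkt_rates, dtype=float)
    lim_fills = np.asarray(lim_fills, dtype=float)
    # Temporary price impact of aggressive market orders (quadratic in rate)
    impact_cost = impact * np.sum(mkt_rates**2) * dt
    # Speed limiter: an extra penalty on fast trading rates
    speed_cost = speed_pen * np.sum(mkt_rates**2) * dt
    # Trade director: penalise market and limit orders in opposite directions
    director_cost = director_pen * np.sum(
        np.abs(mkt_rates * lim_fills) * (np.sign(mkt_rates) != np.sign(lim_fills))
    )
    # Non-liquidation terminal penalty on inventory left at the horizon
    remaining = inventory0 - np.sum(mkt_rates * dt) - np.sum(lim_fills)
    terminal_cost = term_pen * remaining**2
    return impact_cost + speed_cost + director_cost + terminal_cost
```

Under this toy cost, a schedule that liquidates fully with both order types selling is cheaper than one that leaves inventory behind or trades the two order types against each other.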
The idea is to generate a trading schedule that keeps traders from deviating from the end goal: a full execution at minimal cost.
The trade director element, for instance, prevents traders from executing trades for their own profit. With two order types available, a trader could place a large limit order in one direction and a small market order in the reverse direction for profit, which means the client order may not be completed in time. The trade director ensures that market and limit orders trade in the same direction.
“As soon as you can find two types of order, you can do that … the net could be still in the right direction but then someone might be trading for profit. The idea of trade director is that sometimes we don’t exclude the possibility that a trade can be traded for profit,” says Leung.
Optimal execution has always been a tough beast to tackle, mainly because of time constraints. Generally, the inclusion of price limits alone within optimal execution algorithms entails solving complicated partial differential equations (PDEs), and traders usually lack the time to solve these.
Some traders avoid doing so by using common methods such as the Almgren-Chriss model, which sets a time limit by which the trade must be executed but does not use a price limit – a simple solution, but one that can expose them to significant downside risk.
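The Almgren-Chriss approach does admit a well-known closed-form inventory path in its continuous-time, linear-impact version: a risk-averse trader's remaining position decays like a hyperbolic sine of time to the horizon. The sketch below uses that standard formula; the function name and parameter choices are illustrative, and real desks calibrate the impact and risk-aversion inputs.

```python
import numpy as np

def almgren_chriss_schedule(X, T, sigma, eta, lam, n_steps=10):
    """Closed-form Almgren-Chriss inventory path for a seller.

    Returns shares remaining at each of n_steps+1 time points.
    X: initial shares, T: horizon, sigma: price volatility,
    eta: temporary impact coefficient, lam: risk aversion.
    """
    # Urgency parameter: higher risk aversion or volatility -> faster selling
    kappa = np.sqrt(lam * sigma**2 / eta)
    t = np.linspace(0.0, T, n_steps + 1)
    # Inventory decays as sinh(kappa * (T - t)) / sinh(kappa * T)
    return X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)
```

The path starts at the full position, ends at zero by the deadline, and front-loads trading more aggressively as risk aversion rises, which is exactly the downside-risk trade-off the model manages, but with no limit on the price paid along the way.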
The quants’ approach, on the other hand, can be solved in seconds, says Leung, because it imposes a structure that allows them to convert a non-linear PDE – notorious for being difficult to solve – into a series of ordinary differential equations.
With problems such as optimal execution, it isn’t a lack of understanding of the factors at play that makes costs hard to manage, but rather the lack of quick, easy-to-solve techniques, which makes traders want to avoid complexity altogether.
This aversion to complexity can be seen in other areas too, such as regulatory requirements. However, much of quantitative finance research in recent years has focused on improving speed and applying new technologies, which means complex problems need not remain complex for long.
For that reason, complexity – in optimal execution and other areas – should be managed, not avoided.
The week on Risk.net, February 10-16, 2018