Algo my way

Algorithmic trading is becoming increasingly important, not just in equities, but in foreign exchange and fixed income. However, in volatile markets with trading times trimmed to milliseconds and volumes mushrooming, how are traders able to monitor market and credit risk exposures? Clive Davidson investigates


Algorithmic trading is changing the way markets operate. Rather than a team of people waiting to execute orders or sitting around a Bloomberg machine in search of opportunities on a handful of exchanges, trades are now conducted at sub-second frequencies, while algorithms are programmed to spot tiny arbitrage opportunities across different trading venues and execute near-simultaneous trades in the blink of an eye. The wide use of algorithms in the cash equity market is thought to have contributed to a sharp rise in trading volumes over the past few years, and algorithmic trading is starting to take off in other asset classes - in particular, foreign exchange and fixed income. But this development has also raised significant challenges for risk managers - primarily, how can they keep up? Just how can a bank monitor trades, ensure the algorithms are performing as they are meant to and calculate market and credit risk exposures in real time, especially when the markets are in turmoil?

At first glance, it sounds a tough task. Equity trading volumes have been growing exponentially and hit record highs during the turbulence of July and August: the New York Stock Exchange (NYSE) handled 5.73 billion shares on August 16, and cash equities trading by that point in the month was up by more than 100% on the previous year. The London Stock Exchange, meanwhile, handled 12.1 million trades, a 96% increase compared with July 2006. The proportion of those trades carried out at least partly by machines rather than humans is now almost 50% in Europe and rising globally, according to research by New York-based consultancy Celent.

This increased reliance on algorithms is partly down to new regulations, such as the US Securities and Exchange Commission's Regulation National Market System (RegNMS) and the European Union's Markets in Financial Instruments Directive (Mifid). These rules, aimed at fostering greater competition within financial markets, have reduced barriers for the launch of new trading venues. Banks, in turn, need to be able to view prices on multiple venues and execute at the best available price for their clients. (The largest portion of algorithms is aimed at achieving such 'best execution'. The algorithms that have got a number of hedge funds into trouble recently are based on statistical arbitrage.)

In the past two years, the US has gone from having a couple of primary exchanges and some alternative trading systems to having around 40 primary and regional exchanges and dark pools (internalised trading venues where prices are not displayed), says Kyle Zasky, president of New York-based agency broker-dealer and algorithm technology provider EdgeTrade. "At any one time, there might be a little bit of liquidity at trading venue number 36, a lot at 27 and none at venues 16-20. We are reaching the point where you can't operate in this environment without smart order execution and algorithms designed to efficiently ferret out liquidity in a fragmented market," he comments.

Mifid is set to cause a similar fragmentation in the European securities markets. In April, New York-based agency broker Instinet launched the Chi-X Europe equities trading platform, and a group of banks, including Citi, Credit Suisse and Deutsche Bank, is planning a similar platform, codenamed Project Turquoise, for early 2008.

"In a post-Mifid environment, there will be no real upside to a human not using an algorithm at some stage of the execution process," says Tom Middleton, director, head of European algorithmic trading at Citi in London.

So is this takeover of the markets by machines leading to new risks, and how do institutions manage the risks of their algorithms?

"When you have a computer trading an order rather than a human, you have to have the right checks and monitoring in place so that an order isn't sent to a server that shouldn't be traded," explains Jarrod Yuster, head of global portfolio and electronic trading at Merrill Lynch in New York. An experienced human trader would know that if 100,000 shares of a particular company usually trade each day, then trying to buy 200,000 shares is likely to move the price. These instinctive mental checks of a human trader must be built into the algorithms, he adds.

As well as checking the order against the average daily volume, Merrill Lynch incorporates a number of other statistical checks into its execution algorithms - for example, the average frequency and size of trades, as well as the average spread, so that an algorithm will not try to slice an order into 100 segments if the stock only trades an average of 10 times a day, or attempt to fill an order if the spread is so wide that the stock is trading by appointment only.
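In code terms, the kind of checks Yuster describes amount to a short list of threshold tests run before an order is released. The sketch below is purely illustrative - the thresholds, field names and structure are assumptions for the example, not Merrill Lynch's actual rules.

```python
# Illustrative pre-trade sanity checks of the kind described above.
# Thresholds and field names are assumptions, not any bank's actual rules.
from dataclasses import dataclass

@dataclass
class StockStats:
    avg_daily_volume: float    # average shares traded per day
    avg_trades_per_day: float  # average number of prints per day
    avg_spread_bps: float      # average bid-ask spread in basis points

def pre_trade_checks(order_qty: int, n_slices: int, stats: StockStats) -> list[str]:
    """Return a list of warnings; an empty list means the order may proceed."""
    warnings = []
    # An order much larger than typical daily volume is likely to move the price.
    if order_qty > 0.25 * stats.avg_daily_volume:
        warnings.append("order exceeds 25% of average daily volume")
    # Don't slice an order into more child orders than the stock normally prints.
    if n_slices > stats.avg_trades_per_day:
        warnings.append("more slices than the stock's average trade count")
    # A very wide average spread suggests the stock trades 'by appointment only'.
    if stats.avg_spread_bps > 200:
        warnings.append("average spread too wide for passive execution")
    return warnings

# Example: 200,000 shares in a name that trades 100,000 a day trips the volume check.
print(pre_trade_checks(200_000, 5, StockStats(100_000, 10, 35)))
```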

"We have built statistical databases for all the markets we trade electronically, going back to 1993 for equities," says Yuster. "We use this data to set up criteria checks to prevent orders going to the market that might result in bad behaviour."

Other institutions maintain comparable databases, which need to be constantly revised in order to be effective. BNP Paribas, for instance, updates the checks and filters in its algorithms every 15 minutes, based on the market data it continuously gathers, says Stephane Balouka, senior trader, global portfolio and algorithmic trading at the French bank in Paris.
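As a rough illustration of such rolling recalibration: the 15-minute interval comes from Balouka's description, while the trailing-window statistics and filter fields below are assumptions made for the sketch.

```python
# Illustrative sketch: refresh algorithm filters from a trailing window of
# market data on a fixed interval. Window contents and filter fields are assumptions.
import statistics
import time

def recompute_filters(recent_trades):
    """recent_trades: list of (size, spread_bps) tuples from the trailing window."""
    sizes = [t[0] for t in recent_trades]
    spreads = [t[1] for t in recent_trades]
    return {
        "max_child_size": 2 * statistics.median(sizes),
        "max_spread_bps": statistics.mean(spreads) + 2 * statistics.pstdev(spreads),
    }

def filter_update_loop(get_recent_trades, apply_filters, interval_s=15 * 60):
    while True:
        filters = recompute_filters(get_recent_trades())
        apply_filters(filters)   # push refreshed limits to the running algorithms
        time.sleep(interval_s)   # e.g. every 15 minutes, as BNP Paribas describes
```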

However, collecting this information will become increasingly challenging. RegNMS and Mifid will result in a 900% increase in market data by 2010, estimates Tom Price, senior analyst, securities and capital markets at Massachusetts-based research and advisory company TowerGroup. "More trades and an even larger number of quotes will ensure that data bandwidth and the capacity to process and store information remain an ongoing challenge for all market participants," he says.

Indeed, the maintenance of databases is seen as a significant risk for banks with algorithmic trading operations. Technological failure - even for short periods - is a pressing concern because of the speed at which algorithms operate.

"It is important to have scalable systems, redundancy and service models," says Yuster. Merrill Lynch's support and technology teams regularly simulate scenarios where a market data feed or exchange connection fails, or a server goes down and has to be recovered. "These are tremendous tests on our systems and processes to ensure that we can handle the scale of trading that comes into our platforms," he adds.

The increase in speed and in trade volumes - largely because the algorithms can slice orders into smaller segments for execution - also creates some difficulties for traders. For instance, a consequence of slicing orders into many small trades is that orders take longer to fill, creating the possibility that a price may move against the order over the period. "It is like a car on a highway - the longer you are on the road, the greater the risk," says Balouka.
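The slicing itself can be pictured as a simple time-weighted schedule that splits a parent order into child orders spread over a trading horizon - which is precisely why the order spends longer 'on the road'. The sketch below uses assumed parameters and is not any bank's production logic.

```python
# Illustrative TWAP-style slicer: split a parent order into equal child orders
# spread evenly over a trading horizon. All parameters are assumptions.
def twap_schedule(parent_qty: int, n_slices: int, horizon_minutes: int):
    """Return (minute_offset, child_qty) pairs covering the parent order."""
    base, remainder = divmod(parent_qty, n_slices)
    step = horizon_minutes / n_slices
    schedule = []
    for i in range(n_slices):
        child_qty = base + (1 if i < remainder else 0)
        schedule.append((round(i * step, 1), child_qty))
    return schedule

# 50,000 shares over two hours in 20 slices: each child order is small enough
# not to move the market, but the order is exposed to price drift for two hours.
print(twap_schedule(50_000, 20, 120))
```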

However, such concerns must be taken in context. The purpose of slice-and-deal algorithms (these types of best-execution algorithms make up the majority of machine trading) is to fill orders without moving the market - often the biggest risk facing an investor. (The other two main types of algorithms are statistically based opportunity-seeking proprietary trading algorithms, and automatic hedging algorithms.)

And institutions argue that because algorithms react so quickly to arbitrage opportunities, they help dampen volatility. Furthermore, they do not overreact to events. "Humans can react aggressively and irrationally when markets are moving fast," says Balouka.

But this still leaves a number of questions. Can an algorithm, even though designed to carry out the best execution of an order or seek an arbitrage opportunity, unwittingly run up losses that threaten the stability of the institution, or spark a run in the market that leads to a crash? Can algorithms be used for nefarious purposes? And what are the regulators doing about the possible threats?

Citi's Middleton says that because algorithms can execute many trades in a short time in reaction to market data or events, they could in theory get caught in feedback loops that might exacerbate losses unless they are designed with appropriate checks and balances. Therefore, considerable effort must go into the design and testing of algorithms. Citi has recently reviewed the way it releases new algorithms to further tighten up its risk management processes.
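One common shape for such checks and balances - sketched here with assumed limits, since the article does not describe Citi's actual controls - is a circuit-breaker that halts an algorithm when its own order rate or running losses breach a threshold, breaking a potential feedback loop.

```python
# Illustrative circuit-breaker: halt an algorithm if its recent order rate or
# running loss breaches a limit. Limits and structure are assumptions.
from collections import deque
import time

class CircuitBreaker:
    def __init__(self, max_orders_per_minute=600, max_loss=250_000.0):
        self.order_times = deque()
        self.max_orders_per_minute = max_orders_per_minute
        self.max_loss = max_loss
        self.realised_loss = 0.0
        self.halted = False

    def record_order(self):
        now = time.time()
        self.order_times.append(now)
        # Keep only the last minute of order timestamps.
        while self.order_times and self.order_times[0] < now - 60:
            self.order_times.popleft()
        if len(self.order_times) > self.max_orders_per_minute:
            self.halted = True  # algorithm is firing far faster than expected

    def record_pnl(self, pnl_change: float):
        self.realised_loss -= min(pnl_change, 0.0)  # accumulate losses only
        if self.realised_loss > self.max_loss:
            self.halted = True  # hand control back to a human trader
```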

The first step is quality assurance processes and regression testing, in which the algorithm is tested against historical data. "Once it passes that test, we will release it to a very small audience of tech-savvy traders who will test it on a small amount of internal (order) flow, or choose orders that it is particularly applicable to," Middleton says. "Then, we gradually roll it out on the floor in a staged way, all the time getting feedback and looking to make sure the algorithm is doing exactly what we expect it to and it is delivering the performance we've been aiming for."
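Such a regression test can be pictured as replaying recorded market data through the algorithm and asserting basic behavioural limits on the orders it emits. The function names, data format and thresholds below are assumptions for illustration only.

```python
# Illustrative regression harness: replay recorded market data through an
# algorithm and check basic behavioural invariants. Names are assumptions.
def replay(algorithm, historical_ticks):
    """Feed recorded ticks to the algorithm and collect the orders it emits."""
    orders = []
    for tick in historical_ticks:
        orders.extend(algorithm.on_tick(tick))
    return orders

def regression_checks(orders, avg_daily_volume: float) -> list[str]:
    failures = []
    total = sum(o["qty"] for o in orders)
    if total > 0.25 * avg_daily_volume:
        failures.append("algorithm traded more than 25% of average daily volume")
    if any(o["qty"] <= 0 for o in orders):
        failures.append("algorithm emitted a non-positive order size")
    return failures
```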

Many institutions build fail-safes into their algorithms that cause the machine to default to a human trader when it encounters territory it is not programmed to deal with. EdgeTrade's Zasky stresses that algorithms are a means of increasing productivity, and that humans can never opt out of the ultimate decision-making process. "Algorithms aren't a replacement for humans, they are tools," he says.
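In code, such a fail-safe is simply a guard that pulls an order out of the automated path when the algorithm meets conditions it was not built for. The condition names and thresholds below are assumptions, not EdgeTrade's or any broker's actual logic.

```python
# Illustrative fail-safe: if the algorithm meets a condition it is not
# programmed to handle, the order is routed to a human trader instead.
# Condition names and thresholds are assumptions for illustration.
def handle_order(order, market_state, algo, route_to_human):
    unfamiliar = (
        market_state["spread_bps"] > 500              # market effectively untradable
        or market_state["halted"]                     # trading halt on the venue
        or order["qty"] > 0.5 * market_state["adv"]   # far outside tested range
    )
    if unfamiliar:
        # Default back to the desk: the algorithm is a tool, not a replacement.
        return route_to_human(order, reason="outside algorithm's parameters")
    return algo.execute(order)
```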

In one way, algorithmic trading has been good for risk management by forcing institutions to explicitly acknowledge the need to monitor and manage risk in real time. The same technology that powers the algorithm - complex event processing - can be used to address risk issues by embedding risk management rules in the algorithm itself, or in the infrastructure in which it operates. "So risk management is moving from the middle office to the front office, which is a good thing," says John Bates, vice-president, Apama products, at New York-based Progress Software. Apama is a complex event-processing technology widely used in algorithmic trading by institutions such as JP Morgan and Deutsche Bank.
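A complex event-processing risk rule of the kind Bates describes is essentially a standing query over the order and fill stream. Apama expresses such rules in its own event-processing language; the sketch below only mimics the idea in plain Python, with assumed event fields and limits.

```python
# Illustrative analogy of an event-stream risk rule: maintain running exposure
# per client from the fill stream and raise an alert in real time when a limit
# is breached. Event fields and limits are assumptions; real CEP engines such
# as Apama express this as standing queries in their own languages.
from collections import defaultdict

class ExposureMonitor:
    def __init__(self, limits: dict):
        self.limits = limits                 # per-client exposure limits
        self.exposure = defaultdict(float)   # running net exposure per client

    def on_fill(self, fill: dict):
        """fill = {'client': str, 'side': 'buy' or 'sell', 'qty': int, 'price': float}"""
        signed = fill["qty"] * fill["price"] * (1 if fill["side"] == "buy" else -1)
        self.exposure[fill["client"]] += signed
        if abs(self.exposure[fill["client"]]) > self.limits.get(fill["client"], float("inf")):
            return f"limit breach: {fill['client']}"   # real-time front-office alert
        return None
```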

Others stress the importance of bringing algorithms into traditional risk control structures. BNP Paribas, for example, has new activity and transaction approval committees that approve new algorithms and trading strategies. Market supervisors take this view too, demanding that institutions have appropriate risk structures and controls around their algorithms, as well as demonstrable resilience and capacity in the technology on which they operate.

Tom Gira is executive vice-president in the market regulation department at the Financial Industry Regulatory Authority (Finra) - a Washington, DC-based non-governmental regulator for securities firms in the US, recently created by the amalgamation of the National Association of Securities Dealers and the regulatory entities of the NYSE. He says the control structure cannot focus only on trading but must stretch across all relevant areas of an institution, including legal, compliance and technology. "The control structure must be such that if there is an elephant, you are not just seeing its tail. There must be a holistic approach to ensure that all the people you need to focus on are included."

Keeping up

Exchanges and regulators have systems of varying degrees of sophistication that aim to monitor and prevent both abuse and systemic crises. However, it is clear they are struggling to keep up. The UK's Financial Services Authority (FSA) recently ordered a new high-tech surveillance system based on the same kind of complex event-processing technology that market participants are using. However, neither the FSA nor the London Stock Exchange would discuss the potential for algorithms to cause systemic risk.

Finra, meanwhile, says it monitors around 400 million quotes, orders and trades a day. Given this sort of volume, spotting algorithmic activities that might be manipulative or threaten the market would appear to be like finding a needle in a haystack. Gira acknowledges the scale of the problem: "We have seen an explosion in trading volumes and we have had to increase our capacity to monitor commensurate with the growth in the market."

However, he doesn't believe algorithms could cause a major disruption to the market undetected. "At the exchange level, we have a wide variety of programs we run on a daily basis where, if there is disproportionate volatility or pricing anomalies, they will generate alerts for us. In most cases, if there is going to be a pattern that disrupts the market, it will hit our radar screen and we will analyse it," he says.

Some people suggest that as the markets get taken over by machines, a new type of supervision needs to be devised: traditional regulation is based on enforcing ethical behaviour, but ethics do not apply to machines. Progress Apama's Bates disagrees: "If two traders collude on an opportunity in breach of regulations using algorithms, they are responsible for the behaviour of the algorithms - they are essentially digitised versions of themselves. They can't say, 'it wasn't me - it was this robot'. It won't wash."

While a number of quantitative hedge funds have got into trouble recently through their algorithm-based trading strategies, those operating best execution algorithms, with their built-in checks and balances and risk management frameworks, have fared better. Middleton at Citi says: "Our modification, monitoring and release processes have proved their value in the current market turmoil. We have also recently upgraded many of our systems to increase capacity and reduce latency, and these efforts have been rewarded as we have had no significant issues dealing with high volatility and market data volumes."

So far, large-scale events that provoke systemic risks have not materialised. For some, this has been a surprise. Cedric Beaurain, head of foreign exchange spot trading at Societe Generale Corporate and Investment Banking in Paris, says he believed two years ago that algorithms would lead to market disruption. "But I was completely wrong. I thought that at least once or twice a year, algorithmic trading would create some kind of liquidity mirage - all the platforms and algorithms linked to each other would result in us chasing our own tail. But it has not been the case. I completely underestimated the ability of algorithms to handle the liquidity smoothly," he says.

But no-one really knows what will happen when the markets are entirely transacted by machines. There are arguments that algorithms will move closer to the ideal of rational efficient markets - and that day is coming, says TowerGroup's Price: "The challenge of regulatory compliance (with RegNMS or Mifid) will pave the way for firms to automate the trading process completely. Once all the players have automated their trading processes, the winner in the hunt for liquidity will be whoever can process the data the fastest." It's an arms race, with everyone hoping that no-one is unwittingly harbouring a weapon of mass destruction.
