Why central banks aren’t worried about FX algos – for now

Disclosure failings feed into FX code; other issues are worrying, but distant, says SNB’s Maechler

Andréa Maechler, Swiss National Bank
Louis Rafael

A positive change, with hidden dangers. That’s the shortest version of the 55-page study published by almost two dozen central banks in November, after months of research into the growth and use of execution algorithms in foreign exchange markets.

In this interview, Andréa Maechler – the Swiss National Bank (SNB) governing board member who chaired the banks’ work – explains what she and her colleagues like about FX algos, and also what concerns them.

In short, algorithms really do save money for both users and dealers, but precisely how they do it is not always clear. Users need “detailed information” in order to pick from among the crowd of competing algos, Maechler says, and their disclosures can be hard to compare. Standardisation could help here, but the central banks approached their work as stakeholders in the market, rather than as regulators. Maechler notes, though, that their findings have been shared with peers that are working on the planned review of the FX Global Code.

“We knew some of the questions would be better suited to being addressed in that work,” she says.

Other worries are more remote. One is that, as a greater share of FX flows is algorithmically filleted, it becomes easier for banks to digest that flow internally: more trades will be matched within a provider’s franchise, rather than sent out to the public markets. Does this mean the reference prices available from those markets will become increasingly unmoored from reality?

Maechler’s study group had to prod their sources to consider this possibility, but there was a general acceptance that it might – one day – be a problem.

Items in the ‘plus’ column will be familiar to anyone who has ever read the dealers’ marketing brochures: by slicing larger orders into smaller ones, the algos reduce market impact and cut trading costs; and by matching those orders against a mix of multilateral venues and bilateral streams, the algos knit together a fragmented market.

It’s striking that central banks are making these claims, but they have the data to back them up. The study group, which was convened by the markets committee at the Bank for International Settlements (BIS), surveyed 70 market participants and interviewed many of them; the finished report found that three broad classes of algorithm frequently outperform risk transfer execution, and did so particularly when volatility spiked in March this year.


Maechler and co did not have the data they needed to answer some of the tougher questions posed by the study, though: when will we know if a reference price has degraded? What capacity do algorithms have to replenish liquidity?

These issues are now being tackled by the SNB, as part of a new BIS initiative to address “frontier topics” that matter to the central banking community. For now, the project at the SNB is simply trying to work out how to answer questions such as those raised by the spread of algorithmic execution – but it could ultimately become the foundation for a shared platform that central banks use to keep tabs on the market, Maechler says.

What do you see as the biggest changes in financial markets over the past decade?

Andréa Maechler: An important change is clearly new technology, the speed with which markets and market participants can digest information and react to it, the frequency of trading – but this trend has been ongoing for decades, it’s not new. What is new is that these changes have increasingly started to affect the fabric of the broader market. Powerful technology and tools are no longer the preserve of a specific group and are migrating to the mainstream – that’s the biggest change. The use of FX execution algorithms is one example of this.

As of today, our report estimates the use of execution algos accounts for 10-20% of FX spot trading. In my view, the question now is how widely they are going to be adopted going forward. On the one hand, they give users more control over what they do. On the other, they also imply new challenges, and users need to be set up to handle such challenges.

What is the rise of execution algos a response to?

AM: Fragmentation. Execution algorithms essentially help users access and aggregate liquidity across the FX market’s various liquidity pools, and that’s one of their major benefits.

They also address end-users’ increasing desire to optimise their trade execution. By creating smaller orders out of the larger parent order, execution algorithms can help lower the market impact of a trade. Beyond this, they also give users the ability to choose the extent to which they trade on lit trading venues, as opposed to relying solely on banks’ internal liquidity pools.
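The slicing Maechler describes can be sketched in a few lines. The snippet below is an illustrative, much-simplified time-slicer in the spirit of a TWAP-style execution algorithm; the names `slice_parent_order` and `ChildOrder` are invented for this example, and real execution algos layer on randomisation, limit logic, in-flight controls and venue selection:

```python
from dataclasses import dataclass

@dataclass
class ChildOrder:
    size: float        # notional of this slice
    offset_secs: int   # seconds after the parent order's start time

def slice_parent_order(parent_size: float, horizon_secs: int,
                       n_slices: int) -> list[ChildOrder]:
    """Split a parent order into equal, evenly spaced child orders.

    A naive TWAP-style schedule: each slice carries an equal share of
    the parent notional, released at regular intervals over the horizon.
    """
    if n_slices < 1 or parent_size <= 0:
        raise ValueError("need a positive parent size and at least one slice")
    child_size = parent_size / n_slices
    interval = horizon_secs / n_slices
    return [ChildOrder(size=child_size, offset_secs=round(i * interval))
            for i in range(n_slices)]

# Example: a $10m parent order worked over 10 minutes in 10 slices
schedule = slice_parent_order(10_000_000, horizon_secs=600, n_slices=10)
```

Even this toy version shows why smaller child orders reduce market impact: no single slice reveals the full size of the parent order to the market.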

You alluded to some trade-offs – or dangers – though, in the form of a change in the risk profile for the buy side, and additional complexity.

AM: I want to be very clear here: we see execution algorithms as a net positive for the FX market microstructure, but there are also challenges.

Execution algorithms change the way in which market participants access the market and in which orders are executed – including the fact that end-users have to manage additional risks – for example market risk. End-users must be aware of these risks and need to have access to information and tools to handle these risks adequately. All actors need to understand what is changing, and need to have the opportunity to respond.

Did you encounter much dissent during your research? Did anyone see the growth of algos as a net negative?

AM: At the outset, it wasn’t clear to us that the outcome of the study would be positive. Clearly, the growing use of algorithms was changing the way users trade and how orders were executed, and it wasn’t obvious what that meant in terms of new sources of risk. There was also a widespread sense that market liquidity seemed to be thinner, but no clear view of what that meant.


One thing was clear. Before the Covid-19 pandemic, there was scepticism around how execution algorithms would perform during a period of elevated volatility. Many observers and market participants expected them to be less useful and thus less widely used during a crisis. It appears that the opposite has happened. Most providers reported more than a doubling of volumes relative to the average. Risk transfer spreads really increased during the crisis, which is what you’d expect given higher volatility, but FX algos – particularly the passive and time-sliced ones – generally seem to have outperformed risk transfer during this period.

That said, this single episode of elevated volatility should not be used to draw general conclusions about the robustness of order book dynamics and algorithm performance under all crisis-like circumstances. In fact, the ultimate test – highly disorderly market conditions – has yet to come, as market volatility in this most recent crisis episode was generally seen as high but not extreme. More extreme market conditions may still reveal deficiencies in execution algos.

If you were an outsider to FX trading, and algos were described to you as a way to help rebuild a fragmented market, you might wonder why they only account for a fifth of the volume. What’s your view?

AM: I don’t have a definitive answer, but my hunch is that it comes down to one of the things we’re trying to bring out in the report: these are good tools, but you really need to understand them. And you need more than technicians who understand the maths. In fact, a recent GFXC [Global Foreign Exchange Committee] survey found that lack of experience and understanding were common reasons why users choose not to make use of algorithms at all. And after all, FX execution algorithms are only one potential means of execution when you want to trade in size. When you trade smaller tickets, other methods of execution, such as risk transfer, remain widespread.


So – everyone knows how to pick up the phone or submit a quote request, but not everyone knows how to use an algo?

AM: For users, the nice thing is that they get more control. But that means making choices, which can add complexity. Transparency of trading activity in the primarily bilateral and over-the-counter FX market is limited. Disclosures related to algos are typically high-level and non-standardised. Given the myriad execution algos on offer, an informed decision about which one to choose requires detailed information on what an algorithm does and how it does it – that is, its characteristics and decision logic.

At one point in our work, we had compiled a list of algorithms – they had really interesting and catchy names – but the names don’t really tell you what the algorithm actually does. Users need to be aware: what am I choosing, and what are the questions I should be asking the provider of the algo? Do they want certainty that the order will be executed in a particular period of time? Do they want to minimise market impact? Or reduce market risk as swiftly as possible?


In that context, the study notes a clear trend among some algo providers to offer a greater degree of in-flight control to users, while others offer much simpler, fire-and-forget algos.

AM: Yes. But that’s typical of an emerging area. Different players are trying to find their niche – which is what the market forces you to do – and your niche can change depending on what will ultimately emerge as the most viable solution.

The study also notes that user agreements and disclosures may not spell out the allocation of risks and responsibilities clearly. I wondered whether some providers are opting to keep it simple in order to manage that issue – a possible source of mis-selling or reputational risk?

AM: Well, there are many possible explanations. Some providers are trying to make these products as simple as possible to use, and to reduce the user-defined parameters. And others are giving more choice, and training to go along with it.

Ultimately, the market wants to make algo users comfortable, but they’re doing it in different ways and we’ll have to see what happens.

That point in the study, about the quality of the information provided to algo users – is that something you were told, or did you look at some agreements and disclosures and draw your own conclusion?

AM: Both. A lack of standardisation is part of the issue here. If you look at many of the available algos, they have very different disclosures. Sometimes they may provide the same information but they will provide it in a different form – and unless you are really well-versed in the topic you may not realise it. So this is why we said greater standardisation would really help.


The differences in disclosures might also explain why some people just end up using a couple of algos or prefer to make use of the simplest ones – once they feel comfortable with a product, they keep using it.

Another interesting topic raised by the study is the idea that internalisation could be a double-edged sword – that as more trades are matched bilaterally, the prices available on public markets might lose their relevance. How concerned are you about this, as a trend?

AM: Internalisation can be beneficial to both customers and dealers. The latter benefit from internalisation by avoiding intermediation costs, and users benefit from potentially reducing information leakage – and hence market impact. However, higher rates of internalisation also mean public venues will see less volume, and the flow they do see will probably be harder to digest.

What does this mean going forward? Well, one possible argument is that the easily digestible flows contain less market-relevant information – and if you believe that, then it might be OK for more of these trades to disappear from the public venues. Even in that case, though, the problem is that the venues are likely to see much less volume.


Should something be done? This is a complex question and too early to answer. Ultimately, it depends how internalisation affects price discovery. In our assessment, the market is currently able to provide adequate price discovery. However, taken to the extreme, there may well be negative side effects or even disruptions. For example, excessive internalisation could force public venues to adapt their business models towards charging for market information rather than flow – a development we already observe today. This could threaten a level playing field for all market participants.

To be clear, we don’t see internalisation as a problem right now, but it’s a very important question for the market.

How will we know if the reference price begins to degrade?

AM: [Laughs] I wish I knew! It is conceivable that the price discovery process will worsen slowly, but it is also possible the equilibrium between liquidity providers and liquidity takers will suddenly break down. This is why the impact of internalisation on price discovery warrants greater analysis and close monitoring over time.  

Even for us – the 22 central banks behind this report – with our monitoring skills, our deep expertise and our participation in these markets, it’s difficult to give a definitive answer.

One big hurdle is data – just understanding what kind of data we need, what kind of architecture is needed to handle the data and how to develop the monitoring indicators we need to understand it. That’s where a lot more work needs to be done.

That leads to another point raised by the study. At one point it floats the idea of a “dedicated, fit-for-purpose” platform that could be built and used by central banks to help monitor markets and answer some of these questions. It sounds exciting, but I don’t know what that means in practice.

AM: Well, it’s in the making. Look, you’ve heard the BIS has created an Innovation Hub, and the hub is made up of several centres. Three are already up and running – in Hong Kong, Singapore and Switzerland – and four more are being set up. In each case, the host central bank works closely with the BIS on the centre’s projects.

This is the work being led by Benoît Cœuré?

AM: Exactly! The BIS Innovation Hub Centre in Switzerland, headed by Morten Bech, is working on setting up an architecture that would help central banks monitor fast-paced markets such as FX.

The result will be a prototype, but the idea is to get a better sense of what’s required to answer questions such as the ones we raised in our report. At what point do the costs of internalisation outweigh the benefits? What indicators might you use to measure liquidity as the execution model changes? How do you understand the resilience of liquidity replenishment?


How old is the architecture project?

AM: The Swiss centre opened in October 2019 and this is one of the important projects we’re running at that centre. Another project looks at how to best integrate the cash leg into a distributed ledger – using a private coin, a wholesale central bank digital coin, or a simple interface with the payment system.

These centres are a new way for central banks to work together on frontier topics relevant to their mandates. Obviously, central banks benefit already from a close-knit community, but the Innovation Hub – under Benoît’s leadership – is something new.

The idea is to choose topics with a concrete output, something that can be used more broadly – a kind of public good for the central bank community. In the case of the work on fast-paced markets, the idea is to explore what a monitoring platform for these markets could look like – what technology it entails, what new skills are needed – and then central banks can decide whether it’s something they want and where more collaboration could be helpful. For example, the data can be very costly: so, understanding what central banks need, what they have access to and what not, what is lacking, is crucial.

The study suggests more uniform disclosures by algo providers might be needed. I think that’s a reference to the point made elsewhere, about a lack of clarity on risks and responsibilities. Is that right?

AM: That’s right. We didn’t go into too much detail, for a couple of reasons. First, we didn’t do this work from a regulatory perspective. And in the end, disclosures are something that is better dealt with from that perspective.

But also because the three-year review of the FX Global Code is under way, and we knew that some of the questions would be better suited to being addressed in that work. From the outset, the idea was to feed relevant findings from our study into the three-year review.

So, you know your study is being considered by the code review group?

AM: The GFXC will form its own, independent view on which topics to pursue further. But it’s a close-knit environment, so many of the people who were involved in our group are also involved in the review – and throughout the work we did, we kept that community informed.

Have you heard of any disputes between an algo provider and a user on the terms of service?

AM: I’ve heard of discussions. I haven’t heard of any specific disputes and it wouldn’t be my place to give any examples.
