This article was paid for by a contributing third party.

Data-driven execution: looking back to see forward

Reviewing favourable outcomes and attempting to replicate them is by no means a new concept in the capital markets. Portfolio managers, execution professionals and risk managers have long used this principle to drive their decisions, but it is only with the advent of advanced data analytics tools that they are now in a position to unlock the potential hidden within their vast data repositories, with a view to monetising it.

The truism that past success guarantees nothing is especially pertinent to the capital markets. However, when firms understand the variables and decisions that contributed to successful outcomes, it is entirely reasonable to expect similar results when those variables recur and the decisions are systematised. From a business user’s perspective, the rationale is this: if this is what we did last time and the result was favourable, then it is reasonable to expect a similar outcome if the variables and inputs are similar or the same. This is one of the principles that drives firms on both sides of the market in their pursuit of performance and of delivering value to their customers and investors.

In recent years, the industry has even witnessed the emergence of tools designed to monitor portfolio managers’ and traders’ behaviour to understand how and why decisions were made, and then to model that behaviour in the interest of optimising performance and consistency.

A simple premise

The premise of systematising execution decisions based on historical trade data is relatively simple. For example, if you’re a portfolio manager looking to buy a specific illiquid bond, you have two options. You can do it the way you’ve always done it, assuming that the banks you’re familiar with will offer the most reliable liquidity in that bond, based on past experience. Or you can inform and systematise your decision by looking back across the entire universe of securities you’ve traded over, say, the past three years (or the past two days, or even two minutes, for more liquid instruments), focusing on the activity in instruments most closely related to the bond you’re looking to trade in terms of sector, issuer and so on.

The idea is to leverage all of that information along with recent axes you might have received from dealers, which, when aggregated, can provide the pre-trade market intelligence that significantly increases your chances of a favourable outcome. In short, your decision regarding how you execute that trade becomes more deliberate, more informed and significantly more repeatable.
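
A minimal sketch of that idea in Python, using entirely assumed column names and an arbitrary scoring rule rather than any vendor’s actual logic: historical fills in instruments related to the target bond are aggregated per dealer and combined with recent axes to rank likely liquidity providers.

# Hypothetical example: rank dealers for an illiquid bond by combining past
# performance in related instruments with recent axes. All field names and
# weights are illustrative assumptions.
import pandas as pd

# Historical fills in instruments related to the target bond (same issuer)
fills = pd.DataFrame({
    "dealer":      ["BankA", "BankB", "BankA", "BankC", "BankB"],
    "issuer":      ["ACME"] * 5,
    "slippage_bp": [1.2, 3.5, 0.8, 2.1, 4.0],  # execution cost vs mid, in basis points
    "fill_ratio":  [1.0, 0.6, 1.0, 0.9, 0.5],  # filled size / requested size
})

# Recent axes received from dealers on the same issuer (1 = axed in our direction)
axes = pd.DataFrame({"dealer": ["BankA", "BankC"], "axed": [1, 1]})

# Aggregate past performance per dealer, then reward dealers showing a live axe
score = (
    fills.groupby("dealer")
         .agg(avg_slippage_bp=("slippage_bp", "mean"),
              avg_fill_ratio=("fill_ratio", "mean"))
         .join(axes.set_index("dealer")["axed"], how="left")
         .fillna({"axed": 0})
)
# Higher fill ratio, lower slippage and a live axe all improve the score
score["score"] = score["avg_fill_ratio"] - 0.1 * score["avg_slippage_bp"] + 0.5 * score["axed"]
print(score.sort_values("score", ascending=False))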

The International Securities Identification Number (ISIN) or issuer is not the only relevant filter: trading early in the morning rather than late in the afternoon, for example, can generate different indications and decisions, typically in terms of execution venues. The flexibility to explore datasets through multiple filters and axes is key to taking all of these specifics into account.
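
As a purely illustrative extension, with equally hypothetical column names, the same history can be bucketed by time of day and venue to show how indications differ depending on when you trade.

# Hypothetical example: average execution cost per (session, venue) bucket,
# showing how morning and afternoon activity can point to different venues.
import pandas as pd

trades = pd.DataFrame({
    "venue":       ["MTF1", "MTF2", "MTF1", "MTF2", "RFQ"],
    "exec_time":   pd.to_datetime(["2024-03-01 08:15", "2024-03-01 09:40",
                                   "2024-03-01 16:05", "2024-03-01 16:30",
                                   "2024-03-01 11:20"]),
    "slippage_bp": [0.9, 1.4, 2.8, 3.1, 1.1],
})

trades["session"] = trades["exec_time"].dt.hour.map(
    lambda h: "morning" if h < 12 else "afternoon")
print(trades.groupby(["session", "venue"])["slippage_bp"].mean().unstack())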

However, executing on this premise is all but impossible without the technology to support it. In other words, accurate, reliable and transparent pre-trade market intelligence is only feasible with the requisite technology underpinning it.

Stéphane Rio, Opensee

“The idea is that firms can leverage the value of their database and all the datasets they have been storing by filtering according to an ISIN or group of ISINs, an issuer or group of issuers, a maturity, a liquidity context or a certain time during the day,” explains Stéphane Rio, chief executive and founder of Opensee, a Paris-based provider of real-time self-service analytics solutions for financial institutions.

“They can look at their datasets how they want in order to generate this [pre-trade] market intelligence. Essentially, firms can decide in real time what information they want to see and how they want to use it, so the outcome of the exercise is real best execution.”
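
A minimal sketch of what such self-service filtering could look like, assuming a single consolidated trade table; the schema and filter parameters below are illustrative assumptions, not Opensee’s actual interface.

# Hypothetical example: slice one consolidated trade table by ISINs, issuer,
# maturity bucket and intraday time window. Column names are assumptions.
import pandas as pd

def filter_trades(trades: pd.DataFrame,
                  isins=None, issuers=None,
                  min_maturity=None, max_maturity=None,
                  start_hour=None, end_hour=None) -> pd.DataFrame:
    """Return the subset of trades matching every filter that was supplied."""
    mask = pd.Series(True, index=trades.index)
    if isins is not None:
        mask &= trades["isin"].isin(isins)
    if issuers is not None:
        mask &= trades["issuer"].isin(issuers)
    if min_maturity is not None:
        mask &= trades["maturity"] >= min_maturity
    if max_maturity is not None:
        mask &= trades["maturity"] <= max_maturity
    if start_hour is not None:
        mask &= trades["exec_time"].dt.hour >= start_hour
    if end_hour is not None:
        mask &= trades["exec_time"].dt.hour < end_hour
    return trades[mask]

# Example (hypothetical columns): all ACME trades maturing after 2030, executed before noon
# morning_acme = filter_trades(trades, issuers=["ACME"],
#                              min_maturity=pd.Timestamp("2030-01-01"), end_hour=12)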

The challenge

There are a number of challenges facing firms that are generating pre-trade market intelligence, although pretty much all pertain to data. Datasets tend to be siloed across the business, and different asset classes invariably have their own trading desks, platforms and conventions, which makes automatically commingling, normalising and storing disparate datasets a complex undertaking. It is often even a challenge within the same asset class – some firms, for example, store the data relating to their fixed income repo and cash transactions separately.

Similarly, axes that dealers send to investors tend not to be stored but, in the event they are, they often sit in different repositories from those holding the trades, while orders and requests for quotes tend to be located in yet another place. This challenge is further exacerbated by the ongoing digitisation and electronification of the industry, which is driving a significant increase in data volumes, sources and formats – all of which need to be ingested, processed and stored. “Storing all this data in a single location means you have access to all that information in a single place, which makes it dramatically easier to leverage the data and cross that information,” Rio says.
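
The sketch below illustrates that consolidation step under made-up schemas: executed trades, dealer axes and requests for quotes are mapped onto one common table so they can be queried together. A production pipeline would need far richer mapping and validation.

# Hypothetical example: commingle three siloed feeds into one normalised table.
# All schemas here are assumptions made for illustration.
import pandas as pd

trades = pd.DataFrame({"isin": ["XS0001"], "dealer": ["BankA"],
                       "size": [5_000_000], "ts": ["2024-03-01 10:02"]})
axes   = pd.DataFrame({"security_id": ["XS0001"], "counterparty": ["BankB"],
                       "axe_size": [2_000_000], "received": ["2024-03-01 08:45"]})
rfqs   = pd.DataFrame({"isin": ["XS0001"], "dealer": ["BankC"],
                       "quantity": [3_000_000], "sent": ["2024-03-01 09:30"]})

# Map each source onto a common schema, tagging its origin
common = pd.concat([
    trades.rename(columns={"ts": "timestamp"}).assign(source="trade"),
    axes.rename(columns={"security_id": "isin", "counterparty": "dealer",
                         "axe_size": "size", "received": "timestamp"}).assign(source="axe"),
    rfqs.rename(columns={"quantity": "size", "sent": "timestamp"}).assign(source="rfq"),
], ignore_index=True)
common["timestamp"] = pd.to_datetime(common["timestamp"])
print(common.sort_values("timestamp"))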

The way forward

Traditionally, generating pre-trade market intelligence in real time or close to real time has been a pipe dream for all but the very largest market participants with the deepest pockets. Rio explains that Opensee’s strategy has two primary components: scalable technology and an intimate understanding of how data needs to be organised and stored so business users can leverage it how and when they choose to.

“First, the solution must be scalable,” he says. “When you start to look at all the trades, all the orders, all the axes, all the information firms receive and store across historical ranges – and there is a lot of value in retaining all of that information in order to identify trends and patterns – that’s a lot of data, which is why scalability is so critical. But scalability does not mean you have to lose granularity or the real-time aspect, as that is where the value resides.”

Scalability is all well and good but, if the data is not managed properly and made easy to analyse, users are not going to be able to fully realise its value. “There are many steps to take before data can become useful to a user,” Rio explains. “First, you need to enrich it, join multiple sources and design the optimal data model, and you need to develop processes to correct static data and automatically identify outliers and errors in live data. Once your data is well organised and cleaned, you then need to look for an easy way to build and iterate the right machine learning algorithm to achieve best execution. And when you are done, that’s usually the time you want to include even more datasets – you are therefore looking for an easy way to onboard any new dataset on the fly without having to rebuild everything.

“One of our strengths in addition to tech is our understanding of what is required, and our product includes all those functionalities, which is where we believe we have a competitive advantage.”
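
The following sketch illustrates two of those preparation steps, enrichment against static reference data and automatic outlier flagging, using assumed column names and an arbitrary threshold; the machine learning step that follows is deliberately left out.

# Hypothetical example: enrich trades with static reference data, then flag
# likely price outliers per instrument. Column names and the threshold are
# assumptions made for the example.
import pandas as pd

trades = pd.DataFrame({
    "isin":  ["XS0001", "XS0001", "XS0001", "XS0002"],
    "price": [101.2, 101.4, 151.0, 98.7],   # the 151.0 print looks suspicious
})
reference = pd.DataFrame({
    "isin":   ["XS0001", "XS0002"],
    "issuer": ["ACME", "Globex"],
    "sector": ["Industrials", "Industrials"],
})

# Step 1: enrich trades by joining corrected static data
enriched = trades.merge(reference, on="isin", how="left")

# Step 2: flag outliers per instrument with a robust z-score on price
def robust_z(prices: pd.Series) -> pd.Series:
    median = prices.median()
    mad = (prices - median).abs().median() or 1e-9  # guard against zero deviation
    return (prices - median).abs() / mad

enriched["outlier"] = enriched.groupby("isin")["price"].transform(robust_z) > 5.0
clean = enriched[~enriched["outlier"]]  # cleaned data, ready for modelling
print(clean)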
