A “Big Data” Study of Microstructural Volatility in Futures Markets

Kesheng Wu, E Wes Bethel, Ming Gu, David Leinweber and Oliver Rübel

Factors such as electronic exchanges, the decimalisation of stock prices and automated order slicing created an explosion in the amount of financial data in the first decade of the 21st century, and the number of trades per day has been increasing dramatically. A large portion of trades happens near the opening or closing of the trading day, producing very high rates of trading activity in short bursts. This high data rate, and the even higher burst rate, make it difficult to understand the market. Many researchers have argued that a better understanding of high-frequency trading, and better regulation, might have prevented events such as the US flash crash of May 6, 2010 (Easley et al 2011b; Menkveld and Yueshen 2013). However, academic researchers and government regulators typically lack the computing resources and software tools needed to work with large volumes of data from high-frequency markets. We believe that existing investments in high-performance computing resources for science could be used effectively to analyse financial market data. In this chapter, we use the concrete task of computing a leading indicator of market volatility to demonstrate that a modest machine can analyse the full volume of high-frequency market data in a reasonable amount of time.
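The excerpt does not spell out which leading indicator is computed, but the cited work of Easley et al suggests the volume-synchronised probability of informed trading (VPIN). The sketch below is a minimal, illustrative implementation under that assumption: the function name `vpin`, its parameters, and the use of bulk volume classification (attributing buy volume via the normal CDF of standardised bucket price changes) are our own choices for the example, not the chapter's code.

```python
import numpy as np
from scipy.stats import norm

def vpin(prices, volumes, bucket_volume, window=50):
    """Illustrative VPIN estimate from a time-ordered trade stream.

    prices, volumes: 1-d numpy arrays of trade prices and sizes.
    bucket_volume:   contracts (or shares) per volume bucket.
    window:          number of buckets in the rolling VPIN average.
    """
    # 1. Expand trades so each row represents one unit of volume,
    #    then cut the stream into equal-volume buckets.
    unit_prices = np.repeat(prices, volumes.astype(int))
    n_buckets = len(unit_prices) // bucket_volume
    buckets = unit_prices[: n_buckets * bucket_volume].reshape(
        n_buckets, bucket_volume
    )

    # 2. Standardised price change over each bucket.
    bucket_close = buckets[:, -1]
    dp = np.diff(bucket_close, prepend=bucket_close[0])
    sigma = dp.std()
    if sigma == 0.0:
        sigma = 1.0  # guard against a flat price series

    # 3. Bulk volume classification: fraction of each bucket's
    #    volume attributed to buyers.
    buy_frac = norm.cdf(dp / sigma)

    # 4. Order-flow imbalance per bucket: |V_buy - V_sell| / V = |2f - 1|.
    imbalance = np.abs(2.0 * buy_frac - 1.0)

    # 5. VPIN is the rolling mean of the imbalance over `window` buckets.
    kernel = np.ones(window) / window
    return np.convolve(imbalance, kernel, mode="valid")
```

On a day's trade records one would call, say, vpin(prices, volumes, bucket_volume=5000): every step is a vectorised array operation, which is what makes scanning years of trades feasible on a single modest machine.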
