What can NASCAR teach NASDAQ about avoiding crashes?

June 28, 2012
by Guest author

“Safety advances have, with a few exceptions, come as the result of tragic consequences”

This month marks the centennial of the birth of mathematician Alan Turing, the “father” of modern computing and artificial intelligence. To celebrate the occasion, we’ll be publishing a series of articles on modelling and economics. Today’s article is from David Leinweber, head of the Lawrence Berkeley National Laboratory’s Center for Innovative Financial Technology and author of “Nerds on Wall Street: Math, Machines and Wired Markets”.

The Flash Crash wiped one trillion dollars off US stocks in 20 minutes on May 6, 2010, with most of the damage being done in only five minutes. But it took the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) nearly five months to produce a report on those five minutes. If it takes so long to reconstruct and analyze an event that has already happened, imagine the difficulties in trying to regulate and prevent such incidents in markets where 1500 trades are made in the time it takes you to blink and where dozens of globally interconnected exchanges and trading facilities have replaced a small number of centralized stock markets.

The Lawrence Berkeley National Laboratory (LBNL) is actually a Department of Energy national laboratory, but we work in a number of data-intensive scientific areas where detecting and predicting particular events is crucial, ranging from cosmology to climate change. In 2010, Horst Simon, then director of LBNL’s Computational Research Division (now deputy director of LBNL) and I co-founded LBNL’s Center for Innovative Financial Technology (CIFT) to help build a bridge between the computational science and financial markets communities.

At present, a basic tool in regulating financial markets is the “circuit breaker” that stops trading, and after the Flash Crash new circuit breakers were instituted that stop the trading of individual stocks if their price variations exceed a prescribed threshold. However, as different markets and venues become more interdependent, sudden halts in one market segment can ripple into others and cause new problems.
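To give a sense of what such a single-stock circuit breaker amounts to, here is a minimal sketch in Python. The 10% band and the idea of checking the latest price against a short reference window are illustrative assumptions of ours, not the exact regulatory rule.

```python
# Minimal sketch of a threshold-based single-stock circuit breaker.
# The 10% band and the reference-window comparison are illustrative
# assumptions, not the precise SEC/CFTC rule parameters.

def should_halt(reference_prices, last_price, threshold=0.10):
    """Halt the stock if last_price deviates from any price observed in the
    recent reference window by more than the given fraction."""
    return any(abs(last_price - p) / p > threshold for p in reference_prices)

# Example: a stock that traded near $40 over the reference window and then
# prints at $30 would trip the breaker.
print(should_halt([40.10, 40.05, 39.95], 30.00))  # True
```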

What’s needed is a system to detect and predict hazardous conditions in real-time to allow the regulatory agencies to slow down rather than stop markets. Energy networks do this with brownouts to prevent blackouts, but we can also seek inspiration in NASCAR racing, where, faced with a growing number of increasingly gruesome crashes as the cars got too fast for the tire technology of the day, officials introduced the yellow flag to slow the races down when things got too dangerous.

Racetrack officials (like air traffic controllers or weather forecasters) can see trouble looming and intervene to prevent disaster. We are exploring the possibility of using supercomputers to survey markets in real time and turn on a “warning light” to advise regulators to slow things down when anomalies start to appear. Anomaly is in fact a rather bland term for some of the weirdness seen during the Flash Crash. For instance, at one point you could have bought Accenture shares for one cent or for more than $30 within the same second.

Based on recommendations from traders, regulators, and academics, we have implemented two sets of indicators that have “early warning” properties when applied to the data for the period preceding the Flash Crash. The Volume Synchronized Probability of Informed Trading (VPIN) measures the balance between buy and sell activity using volume intervals rather than time intervals. A variant of the Herfindahl-Hirschman Index (HHI) of market fragmentation measures how concentrated exchange operations are, since fragmentation is considered a source of market instability.
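To make the two indicators concrete, here is a minimal sketch in Python. It assumes trades have already been classified as buys or sells and grouped into equal-volume buckets (that classification step, a substantial part of the real computation, is omitted); the function names and example figures are ours for illustration, not part of the production system.

```python
# Minimal sketch of the two indicators discussed above, under the assumption
# that trades are already signed (buy/sell) and bucketed by equal volume.

def vpin(buy_volumes, sell_volumes, bucket_volume):
    """Volume-Synchronized Probability of Informed Trading over n volume buckets.

    Each bucket satisfies buy_volumes[i] + sell_volumes[i] == bucket_volume.
    VPIN is the average order-flow imbalance |buys - sells| per bucket,
    normalised by the bucket volume.
    """
    n = len(buy_volumes)
    imbalance = sum(abs(b - s) for b, s in zip(buy_volumes, sell_volumes))
    return imbalance / (n * bucket_volume)

def hhi(venue_volumes):
    """Herfindahl-Hirschman Index of market concentration.

    venue_volumes maps each exchange or trading venue to its traded volume.
    The index is the sum of squared market shares: 1.0 means all trading is
    on one venue; values near 1/number_of_venues indicate heavy fragmentation.
    """
    total = sum(venue_volumes.values())
    return sum((v / total) ** 2 for v in venue_volumes.values())

# Illustrative numbers only: three increasingly sell-imbalanced buckets, and
# trading split across four venues.
print(vpin([100, 50, 0], [100, 150, 200], 200))                      # 0.5
print(hhi({"NYSE": 400, "NASDAQ": 300, "BATS": 200, "ARCA": 100}))   # ~0.3
```

A rising VPIN reading would signal growing one-sided order flow of the kind seen before the Flash Crash, while a falling HHI would signal trading dispersing across more venues.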

Because of the computational demands, computing indicators like VPIN and HHI in real-time will require high performance computing (HPC) resources. It will also need reliable data. For example, we discovered that different sources disagree on how many trades there were of Apple Inc at $100,000 per share on May 6, 2010.

Is real-time high frequency monitoring needed? The SEC and CFTC have announced their intention to direct many billions of dollars from the financial industry to this effort, which some have criticized as unnecessary overkill. We disagree with the critics. It is worth spending money on ways to improve on regulatory approaches based on circuit breakers. Stopping trading is a very blunt instrument that does not allow the market to self-correct and stabilize, and can easily make a bad situation worse.

Our tests show that VPIN, HHI and similar indicators could provide the early warning signals needed to slow markets down gradually, rather than stop them, as a replacement for on/off circuit breakers, and our high-frequency trading and academic collaborators hold this view strongly as well. Furthermore, we believe that the same approach, likely with additional computation, is applicable in the area of financial market cyber-security, which is widely acknowledged as important, but largely ignored in the regulatory debate.

Useful links

For a detailed account of the work summarized above, see Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

OECD work on financial market trends and policies
