This post presents events that shaped the evolution of High Frequency Trading (HFT): the massive, penny-ante arbitrage programs run by high-speed traders.
Like most technologists, I approach history with some disdain, despite an occasional “aha moment” when I am surprised by the fundamental impact of a historical event on today’s technology. But when it comes to explaining the complex and supremely silly structure of the current financial markets, history is the key. Today’s market rules may seem utterly illogical, yet they were made for seemingly good reasons at some point in time, many years ago. In fact, at a high level, our market technology works the way it does mostly “for historical reasons”.
I started this post as a set of notes for my own use, and it grew to the point where I decided to share it. I hope that it will prove helpful to readers who need a quick perspective on HFT history. I intend to update the post periodically, as my time permits. Please suggest changes by email or by leaving a blog reply/comment.
My focus is on the US stock market and the global Forex spot market, two very important but quite different markets that have had to cope with the “issue” of HFT for two decades. I also cover the European stock market because of its leading role in HFT regulation.
Here is a quick and rough side-by-side comparison of the two markets that shows parameters important for HFTs. You can see some stunning differences and similarities. (Concerning the differences: even this simple table required a non-trivial effort to establish meaningful dollar trading volumes, because equities volumes are usually specified only as a number of “shares”. For additional comparisons, see MarketFactory’s post Complexities of FX and Equities are Inverted by James Sinclair.)
| | US Stock Market | Forex Spot Market |
|---|---|---|
| Approximate average total daily volume (2013) | | |
| Number of actively traded symbols | | |
| Busiest symbol (max avg daily dollar volume) | | |
| Average daily dollar volume of the busiest symbol | | |
| Typical unit price (order of magnitude) | | |
| Typical price tick size on major venues | $0.01 (1 bp) | $0.0001 (1 bp) |
| Value of a "round lot" on major venues | | |
| Round lots per day for the busiest symbol | | |
| Typical bid-ask spread for the busiest symbol | $0.01 (1 bp) | $0.0001 (1 bp) |
This post provides only a chronological sequence of important HFT-related events, rather than an annotated framework of causalities (because this would require a long book). Events include regulations, influential books, papers, blog posts, technology developments and glitches.
Events are listed in reverse chronological order. To keep the number of entries manageable, a headline carrying a single date may include a “More…” section with some history leading up to the event, as well as mentions of later developments. There is some dichotomy in the dating of key events: for regulations, I often use the date when a law was adopted or implemented, even though it might have been in the works for many years; for ideas (papers, posts, etc.), I try to find the initial publication that is deemed seminal or influential.
Acknowledgments: For helpful discussions and suggestions I am grateful to numerous industry practitioners: James Sinclair (MarketFactory, Inc.), Chris Sparrow (Sparrow Consulting), Prof. Donald MacKenzie (Univ. of Edinburgh), Prof. Eric Budish (Univ. of Chicago), Dr. Anatoly Schmidt (Kensho Technologies, Inc.).
Apologies: I tried for a compromise between a dry wiki article and a blog piece. Unavoidably, I exercise a subject bias by selecting only specific events. I may also be biased in my views or in the smart-alecky conclusions (made with twenty-twenty hindsight); these are preceded with the heading “Opinion:” and are hidden in the “More…” sections below — you have been warned.
Click here to show or to hide all “More…” sections on this page. (Use “show” if you are searching for a particular term or if you wish to print the whole thing.)
Feb 11, 2015: Physicist Mark Buchanan suggests in a Nature commentary that – because of speed-of-light limitations – within a few years it may become profitable to station a ship or other trading platform near halfway points between pairs of financial centers worldwide. More…
The commentary article Physics in finance: Trading at the speed of light (Mark Buchanan, Nature, Feb 11, 2015) asserts:
“To explain: special relativity says that nothing can travel faster than the speed of light, c. Hence, a trader standing a distance D away from an exchange can find out what happened there, in the best circumstance, at a time T = D/c after it happened. Between major trading centres around the globe, such delays can be from a few to tens of milliseconds. If a trader stands halfway between the two exchanges, he or she will receive information from both after the same interval, T = D/c. Anywhere else, the distance to at least one of the exchanges would be greater and information would take longer to get there.
In other words, within a few years it may become profitable to station a ship or other trading platform near halfway points between pairs of financial centres worldwide (see Fast-trading hotspots).”
The news regarding the optimal placement of HFT computers on mid-ocean barges traveled fast, see stories in MarketWatch, International Business Times, and WatersTechnology. Some pundits accepted the “fact” as self-evident, some expressed doubts. Bill Harts, CEO of Modern Markets Initiative, a group that supports high-frequency trading, dismissing the idea, told MarketWatch: “Even if a trader could receive data from a market faster at such a location, in order to profit from it she would have to transmit orders to a market from the same location and would always be slower than traders stationed at the actual markets.” In a follow-up IBT article, he adds: “If this were possible, data center space in Youngstown, Ohio, (roughly half way between the New York and Chicago exchanges) would be at a premium because all the HFTs would locate there. But guess what? There is no HFT in Youngstown, Ohio.”
Opinion: In fact, the Nature article is completely wrong about trading from a midpoint between financial centers. The article is important only as another illustration of the literati’s ignorance of high-speed trading basics (see also the Opinion section under the Mar 31, 2014, Flash Boys entry).
The Nature article is trivially correct in stating that the midpoint between two exchanges is the point where an arbitrage opportunity can first be discovered. What is less obvious — given that Mark Buchanan, a respected physicist and author, got it wrong — is that the midpoint is a particularly bad place to make trading decisions.
My Feb 25, 2015 post “Colocation beats the speed of light” provides an easy-to-digest formal proof that the optimal configuration for a trading strategy that communicates with a number of distant exchange servers is fully distributed — that is, all of the strategy’s computers are colocated with the servers. (Remarkably, the same logic applies to a financial exchange that provides services to several distant data centers.)
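The intuition behind the proof can be checked with a toy calculation (my illustration, not from the cited post): suppose an event happens at exchange A, and compare when a strategy can act at A and at B under the two configurations. The distance figure below is an assumption, chosen merely for scale.

```python
C_KM_PER_MS = 299_792.458 / 1000.0   # speed of light in vacuum, km per millisecond

def reaction_times(d_km):
    """Event occurs at exchange A; A and B are d_km apart.
    Returns (act_at_A_ms, act_at_B_ms) for a midpoint strategy and for a
    fully colocated strategy (one machine at each exchange)."""
    one_way = d_km / C_KM_PER_MS     # A -> B light time, in ms
    half = one_way / 2.0
    # Midpoint: hears about the event at D/2c; its order reaches either
    # exchange another D/2c later, so it acts at both venues at time D/c.
    midpoint = (half + half, half + half)
    # Colocated: the machine at A reacts instantly; it also signals the
    # machine at B, which can act D/c after the event.
    colocated = (0.0, one_way)
    return midpoint, colocated

# Assumed distance, roughly the scale of a New York <-> Chicago route.
midpoint, colocated = reaction_times(1200.0)
```

The colocated configuration acts at exchange B exactly as fast as the midpoint strategy, and at exchange A strictly faster, so it dominates.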
Jan 1, 2015: Daniel Fricke and Austin Gerig publish an estimate for the optimal frequent batch auction interval for the US equities market. More…
The concept of synchronized markets (aka frequent auctions) has been championed in recent years by Chris Sparrow, Eric Budish et al., and others. This paper (Too Fast or Too Slow? Determining the Optimal Speed of Financial Markets, Daniel Fricke and Austin Gerig, Jan 1, 2015) is the first attempt to establish the optimal length of the auction interval in the US stock market.
The model uses three factors to determine the optimal clearing frequency: the volatility of the security, the intensity of trading in the security, and the correlation of the security’s value with other securities. All else being equal, a security should be traded faster if its volatility is higher, slower if its intensity of trade is lower, and faster if its correlation with the market is higher. Using rough estimates of these values, the authors determine that the optimal time interval of trade for a typical US stock is currently 0.2 to 0.9 seconds. The analysis suggests that speed is important in financial markets and that time delays of even a fraction of a second can harm market quality. On the other hand, the results also suggest that for many securities, milli- and microsecond speeds are unnecessary.
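The three qualitative relations can be captured in a toy function (this is NOT the paper’s model; the functional form below is entirely made up for illustration, and only the monotonic directions match the description above):

```python
def toy_optimal_interval(volatility, intensity, correlation, k=0.5):
    """Illustrative auction interval in seconds: shrinks as volatility,
    trading intensity, or market correlation grows (made-up formula)."""
    return k / (volatility * intensity * (1.0 + correlation))

# Baseline security; raising any of the three factors shortens the interval.
base = toy_optimal_interval(volatility=1.0, intensity=1.0, correlation=0.0)
```

The actual paper calibrates its model with rough empirical estimates to obtain the 0.2–0.9 second range quoted above.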
For a helpful summary, see Why we need a speed limit for financial markets (Mark Buchanan, Bull Market, Jan 25, 2015).
Opinion: The paper takes the bold initial step of establishing the optimal length of the auction interval in the US stock market. But the real issue is that the US equities market — like most other markets — operates in continuous, not discrete, time. In the end, if we ever agree on trading using frequent auctions, the auction interval will very likely be chosen by regulators to be uniform across all securities, just as the price granularity is determined today. The most likely scenario is that the less liquid securities will trade at a multiple of some standard minimum interval (say 10 or 50 milliseconds).
First, we must prove beyond any doubt that discrete-time trading is fundamentally preferable to continuous-time trading. Then, the actual length of the auction interval will be about as important as agreeing to trade in price increments of a penny rather than a nickel or an eighth of a dollar. Given that we halt all trading on weekends (or at night, in equities markets), why is it so controversial to match orders only every 10 or 500 milliseconds when the preponderance of research indicates that this is the most efficient way to trade?
Aug 26, 2014: SEC announces a 12-month pilot program that will widen tick size from $0.01 to $0.05 per share for certain small cap stocks and study the effects. More…
Earlier this year, on Jan 31, 2014, SEC’s own Investor Advisory Committee recommended against the pilot. On Feb 11, 2014, the pilot was overwhelmingly mandated by the U.S. House “Tick Size Bill” (although the Senate never proposed a companion bill). The scope of the SEC pilot is narrower than what was proposed by the House bill; it aims only to assess whether wider ticks would enhance small cap market quality for the benefit of investors.
Opinion: Everyone agrees that the larger tick size would be a boon to the traditional market makers by guaranteeing a minimum spread and preventing the HFT market makers from cheaply jumping in front of the queue. Beyond this, there is a lot of iffy logic making for a fascinating debate. Advocates say that wider tick sizes would compel market makers to provide more liquidity in small stocks, increase the number of IPOs, and ultimately create jobs. Critics say that spreads in small stocks are already considerable, yet market makers are not interested; wider tick sizes would only make trades more expensive for retail investors. The thoughtful Recommendation of the SEC Investor Advisory Committee is definitely worth a read.
It is instructive to compare this anti-decimalization movement in the stock market with the anti-decimalization of the Forex spot market. Where, in the stock market, the large ticks are intended to incentivize market makers to provide liquidity in illiquid small cap stocks, the Sep 17, 2012 change to half pips on the EBS/ICAP venue was done to satisfy manual traders trading larger-than-normal amounts in the most liquid currency pairs, such as EUR/USD.
Jun 30, 2014: Following in the footsteps of EBS/ICAP, Thomson Reuters Matching prepares to test randomization of order processing. More…
Thomson Reuters will pilot the order message delay randomization with a single currency pair, likely to be USD/MXN, in early 2015 — a year and a half after EBS started to deploy its “latency floor” on Aug 19, 2013. Similar to EBS, Thomson Reuters plans to randomize order processing for at least three milliseconds during the pilot phase. But, unlike EBS, the company will apply randomization only to taker (aggressive) order submissions, not to maker (passive) orders or order cancellations.
Opinion: Thomson Reuters’ tip-off that only the taker order submissions will be delayed seems to indicate that the order matching industry finally came to the realization that delaying maker orders and cancellations harms liquidity (see also May 30, 2014 and May 8, 2014).
Somewhat incongruously, Phil Weisberg, global head of FX at Thomson Reuters, said at a 2013 conference that he has always thought of randomization of orders “as a bug, not a feature” (see James Sinclair, Aug 1, 2013).
Larry Harris, whose random order delay scheme inspired these implementations, comments: “If you’re worried about the liquidity impact, you don’t randomize the cancellation orders, but only the taking of orders and possibly the making of them as well.” But he goes on to say that, nevertheless, “Randomising cancellation orders has the biggest impact on HFTs, as the success of one of their favored strategies – quote matching – depends on a trader’s ability to cancel orders faster than others can.”
Opinion: Larry Harris is right on both counts. But the most important points remain unsaid. First, delaying cancellations hurts all market makers equally, whether HFT or not, by making them vulnerable to quote sniping. Second, randomizing taker orders enables everybody, not just HFTs, to be a successful quote sniper; it levels the sniping field. Thus, a random order delay scheme (when applied to all order messages) is a double whammy to liquidity.
May 8, 2014: Thomas Peterffy presents to SEC the Interactive Brokers Group Proposal to Address High Frequency Trading. The proposal would impose random delays of up to 200 ms on taker transactions only on all equity and options trading venues. More…
Peterffy recommends “that all U.S. equity and option trading venues be mandated to hold any order that would remove liquidity for a random period of time lasting between 10 and 200 milliseconds before releasing it to the matching engine.”
Opinion: This recommendation goes further than any implementation of Larry Harris’ random order delay scheme (ParFX, EBS, or Thomson Reuters). The magnitude of the delay (200 milliseconds) is truly enormous, easily noticeable even to a human trader. Although the proposal seems self-serving (coming from a traditional market making firm), it actually follows a reasonable modification of the original Harris proposal that mitigates its anti-liquidity effects by applying delays to taker order submissions only (not to maker order submissions or cancellations).
May 6, 2014: The European Financial Transaction Tax proposal refuses to die. Ten member states issue a Joint Statement on FTT. More…
The statement is unspecific, saying that “It is evident that complex issues have arisen” but pledging that “the first step should be implemented at the latest on 1st January 2016”. Slovenia did not sign the statement, leaving the group of ten Participating Member States (PMS) with only one vote above the required minimum of nine EU member states.
For a balanced summary, see Latest Developments Regarding the Proposed European Union Financial Transaction Tax, May 9, 2014. The United Kingdom’s attempt to derail the proposal at an early stage, via a legal challenge, was dismissed on Apr 30, 2014. The current plan of the Participating Member States is to work on a “progressive” implementation of the FTT. Fixed-income products, for example, would not be taxed initially. There has also been a suggestion that the minimum rate of tax might be ten times lower than originally proposed – namely 0.01% instead of 0.1%.
Apr 15, 2014: European Parliament adopts Markets in Financial Instruments Directive and Regulation (MiFID II/MiFIR). The legislation proposes strict rules on HFT trading, including requirements that automated traders be authorized and regulated, that they reveal and test their algorithms, and that market makers provide liquidity on a consistent basis. It also sets standards meant to prevent small tick sizes and to limit HFT order-to-trade ratios. More…
The HFT provisions of MiFID II are based on ESMA’s Automated Trading Guidelines that went into effect in May 2012. ESMA Discussion Paper is published on May 22, 2014. On the basis of the responses received, a consultation paper will be published in Q1 2015. EU member states are required to implement MiFID II in their national legislations by June 2016. The legislation will be applied across EU by January 2017.
At the same time, European Parliament adopts Market Abuse Directive and Regulation (CSMAD/MAR) which strengthens the existing 2003 Market Abuse Directive by addressing specific HFT strategies (for example, “quote stuffing”) and explicitly banning manipulation of benchmarks (such as LIBOR). The legislation will go into effect by July 2016.
Opinion: The MiFID II legislation is wide-ranging and quite revolutionary as far as market regulations go. Of course, ESMA (European Securities and Markets Authority) had the benefit of having been able to observe the effects of inadequate regulation of the U.S. equities markets and Congress’ and SEC’s inability/unwillingness to fix it.
Some initial proposals were rejected by ESMA, such as a minimum half-second order “resting time” or a ban on customers using their brokers’ exchange membership to trade directly on the market (“sponsored naked access”).
Individual EU member states are free to impose tougher HFT regulations, such as Germany’s High Frequency Trading Act (effective May 15, 2013) that makes HFT a licensed activity, or Italian Financial Transactions Tax (effective Sep 2, 2013) that imposes an HFT tax of 0.02% on modifications and cancellations of HFT orders.
Mar 31, 2014: Michael Lewis’ Flash Boys: A Wall Street Revolt is published. The book describes a stock market plagued by conflicts of interest that benefit high-frequency traders and harm ordinary investors. It gains enormous notoriety by implying that the U.S. stock market has been “rigged” and triggers several congressional hearings on high-speed trading. More…
You have probably read “Flash Boys”; otherwise you would not be reading this post. But if you have not, read at least the thoughtful review by John Lanchester in the London Review of Books — you will save a lot of time. Here is a quote from that review that introduces the main problem, eventually solved by the hero, Brad Katsuyama: “Flash Boys is a number of things, one of the most important being an exposition of exactly what is going on in the stock market; it’s a one-stop shop for an explanation of high-frequency trading (…). The book reads like a thriller, and indeed is organized as one, featuring a hero whose mission is to solve a mystery. The hero is a Canadian banker called Brad Katsuyama, and the mystery is, on the surface of it, a simple one. Katsuyama’s job involved buying and selling stocks. The problem was that when he sat at his computer and tried to buy a stock, its price would change at the very moment he clicked to execute the trade. The apparent market price was not actually available. He raised the issue with the computer people at his bank, who first tried to blame him, and then when he demonstrated the problem – they watched while he clicked ‘Enter’ and the price changed – went quiet.”
There are many positive reviews of the book, along with some harsh criticisms (see, for example, Matthew Philips in Bloomberg Businessweek, Charles Gasparino in New York Post, Larry Tabb in TabbFORUM, or this Amazon review). The Modern Markets Initiative (an HFT advocacy group) site maintains a long list of video clips upholding virtues of HFT, many of them critical of Lewis’s book.
Opinion: My main (additional) criticism is that Michael Lewis, in his attempt to deepen the thrill and mystery, makes his heroes appear dull-witted. To illustrate this, here is just one passage (selected out of many similar ones) which describes the moment of Brad Katsuyama’s key insight: “The increments of time involved were absurdly small: In theory, the shortest travel time, from Brad’s desk to the BATS exchange in Weehawken, was about 2 milliseconds, and the slowest, from Brad’s desk to Carteret, was around 4 milliseconds.” This meant that exchanges in Weehawken and Carteret would receive Brad’s orders 2 milliseconds apart, which (as discovered later by the heroes) allowed the HFTs in Carteret to react to the information they had gained when the initial order was received 2 milliseconds earlier in Weehawken.
Well, the “absurdly small” two milliseconds of human time is actually a whole lot of computer time: two million nanoseconds. Starting in 1986 (when Brad Katsuyama was in primary school), retired Rear Admiral Grace Hopper, a US national icon, toured the country educating young people and businessmen on the value of a nanosecond. At her lectures, she would distribute 30 cm (about one foot) pieces of wire representing the furthest distance that information can travel in one nanosecond. She would also suggest that programmers hang a one-microsecond loop of cable (984 feet) around their necks so that “they know what they throw away when they throw away a microsecond”. One would thus expect that, three decades later, every child knows that two milliseconds is a very, very long time in computer terms. Moreover, every teenager knows that her smartphone processor runs at 1 GHz or better, which means that it executes about one processor instruction — a “blink of the computer’s eye” — per nanosecond. Using the blink metaphor, 2 milliseconds of computer time equal about 55.5 hours of human time (2 million blinks at 0.1 sec per human blink). That’s enough human time to ponder a trading decision, hold ten long conferences, and then send a messenger on foot from Weehawken to Carteret. My point is that people like Brad Katsuyama (the Global Head of Electronic Sales and Trading at RBC Capital Markets) should live by this metaphor and should have been able not only to quickly solve the problem he discovered, but to predict that the latency arbitrage problem must exist in the first place, using only basic logic and the faith that human greed will explore every opportunity to make a buck.
In general, any rational person — for example, an SEC regulator — should know that a market regulation that allows multiple exchanges (Regulation ATS) and continuous-time trading, and that mandates order execution at the “universal” National Best Bid and Offer price (the NBBO, which is at the heart of Regulation NMS), is self-contradictory; see Einstein’s relativity of simultaneity, established in 1905. In 2014, the fastest trading algorithms, implemented on FPGAs, take only 650 nanoseconds (less than a microsecond) to make a trading decision. But it takes at least 100,000 nanoseconds for information to travel, at the speed of light, the 20 miles between Weehawken and Carteret (and much longer in practice). So any concept of the “best price” available to all traders at the same time is utterly silly. It is straightforward to brainstorm and enumerate all the basic ways that a trader could take advantage of this regulatory contradiction, something that the Flash Boys’ heroes eventually do when designing the new IEX venue. (Advice to the US SEC: hire a physicist to review your regulations!)
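The speed-of-light figure above is easy to verify with a two-line calculation (the vacuum speed of light is used; real fiber routes are slower and longer, so the true latency is even larger):

```python
C_VACUUM_M_PER_S = 299_792_458            # speed of light in vacuum, m/s
distance_m = 20 * 1609.344                # 20 miles, in meters

# One-way light travel time between Weehawken and Carteret, in nanoseconds.
t_light_ns = distance_m / C_VACUUM_M_PER_S * 1e9
print(f"one-way light time: {t_light_ns:,.0f} ns")   # roughly 107,000 ns
```

Even this idealized lower bound is more than a hundred times longer than a 650 ns FPGA trading decision.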
At the core, the story of Brad Katsuyama is similar to Haim Bodek’s and covers events that occurred roughly at the same time (culminating around 2009). In each case, a very competent, professional equities trader is greatly surprised by certain fundamental rules of the game that are well known and widely used by the HFTs. But Katsuyama’s solution is much more radical and intrepid: he changes the fundamental rules of the game by creating the IEX exchange. There, he uses the same laws of physics that bit him to protect his customers from the effects of latency arbitrage.
Katsuyama’s solution doesn’t eliminate all the perceived ills of HFT and does not scale up well beyond its use at a single exchange. In my view, the only rational way to achieve the noble goal of having one National Price that’s available to all at the same time — while preventing multi-venue latency arbitrage and assuring a fair distribution of spoils — is the concept of synchronized markets (aka frequent auctions). The concept has been championed enthusiastically in recent years by Chris Sparrow and by Eric Budish et al.
Nov 13, 2013: IntercontinentalExchange (ICE) completes its acquisition of NYSE Euronext, the parent of the New York Stock Exchange. More…
For an interesting early reaction, see The Economist’s article The end of the street (which reads a bit like a NYSE obituary).
Opinion: ICE is an unbelievably successful and entrepreneurial company, see Intercontinental Exchange – Mergers and Acquisitions.
Sep 18, 2013: In a well-publicized and hotly debated incident, information about the Federal Reserve’s decision to maintain bond buying program (the “no-taper announcement”) travels from Fed’s safe “lockup room” in Washington to New York and Chicago in zero milliseconds, thus contradicting Einstein’s special relativity theory. More…
The incident was analyzed in a Dec 4, 2013 paper by Quincy Data researchers who looked at microsecond-granularity timestamps to confirm the irregularities.
Opinion: For the first time ever, U.S. regulators had to seriously consider the speed of light limitations. Although the incident was never adequately explained by the Fed, it seems that the news leaked from the Fed’s media lockup room a few minutes before the scheduled release time; this allowed a news distributor to program its servers in NJ and Illinois to distribute the news within the first millisecond after the announcement’s official release time. On October 25, 2013, the Fed announced it would add an internet kill switch and other measures to prevent future leaks of FOMC news announcements.
Sep 16, 2013: A runaway algo at Black Phoenix Research, one of Rabobank‘s FX Prime Broker customers, accumulates a very large position that “breaks the house” by exhausting some of the global bank’s daily interbank credit limits. More…
Rabobank is a Dutch bank with a global presence and an excellent reputation, rated by Global Finance in Feb 2013 as the 10th safest bank in the world. Until 2014, the bank offered FX prime broking services (which allowed its customers to trade in the global FX spot markets in the bank’s name).
Black Phoenix Research is a relatively small Chicago prop firm, an offshoot of Last Atlantis Capital Management (a firm discredited by its expulsion from the National Futures Association in 2009 and by an unsavory lawsuit in 2012). Black Phoenix uses Rabobank as its FX prime broker and Integral Development as its FX aggregator.
On Sep 16, 2013, the Black Phoenix runaway algo unintentionally exploits a weakness in Rabobank’s risk control procedures. The glitch exhausts some of the bank’s credit limits and causes up to $15M in losses for the bank. (This could not have come at a worse time. A month later, in an unrelated development, Rabobank gets slapped with $643M in fines for false reporting and manipulation of LIBOR over a period of 6 years; Rabobank’s CEO resigns shortly thereafter.) In 2014, Rabobank severs its relationship with the Integral Development aggregator and exits the FXPB business for good.
Opinion: The way prime broking works is that the customer (a hedge fund or a prop shop) posts collateral with its prime broker, who sets the customer’s net trading position limits based on margin ratios for the instruments the customer trades. There are, however, big differences between equities and FX prime broking. First, the margin ratios in the unregulated forex market are huge by comparison to equities (typically 1:50 or 1:100).
Second, where an equities broker handles and routes orders directly on behalf of the customer, and thus is able to control the customer’s total net exposure on a pre-trade basis, things work differently in the FX markets. The FX prime broker is usually a global bank that simply allows the customer to access the FX spot market in its name through a multitude of independent and competing venues — EBS, Thomson Reuters, Hotspot, Currenex, FXall, bank portals, etc. Depending on the venue, the bank may be able to set some pre-trade limits for the customer on each venue, but — because it does not actually handle or route the customer’s orders — it cannot set a pre-trade limit on the customer’s total net exposure.
As a compromise, the FX prime broker bank often assigns each customer huge pre-trade limits on each venue and then monitors the customer’s total net exposure using post-trade reports. Unfortunately, post-trade notifications are always slightly delayed and may take as long as 15 minutes in the worst case. To make matters worse, many FX prime broker banks still use manual control procedures that take additional time before the customer’s trading is finally halted. These half-measures were designed (and sufficient) at a time when most prime customers were smaller banks trading on manual workstations. But this mechanism is completely inadequate in today’s HFT scenarios, especially when faced with runaway strategies.
Rabobank (like most other FXPB banks) did not pay enough attention to the hypothetical risks of a customer’s buggy code accumulating a huge position in a few seconds. And this is exactly what happened on Sep 16, 2013. Although Black Phoenix was not a real HFT player, its runaway trading strategy turned it into one. The details are unclear; my best guess is that the strategy traded furiously until it exhausted Rabobank’s daily credit limits with its market-making counterparties. This would indicate that Black Phoenix ended with a position of around $10B (for a firm with assets of a few million dollars). The runaway activity moved the market, so unwinding the position was costly, aggravated by Rabobank’s exhausted interbank credit limits. The saving grace was that it is much more difficult to move the rates in the $2T/day FX market than it is to move the price of an individual stock. (Compared to Knight Capital’s $440M loss, Rabobank’s $15M seems almost insignificant, in spite of the fact that the total accumulated positions were comparable.)
Solution: One solution to the FX Prime Broker rogue algo risk problem is to use the equities broker model where the prime bank controls submission and execution of all orders submitted by its customer. Unfortunately, this can be very expensive in terms of keeping up with the quickly evolving, varied, and unregulated FX trading venue technology and quickly leads to requirements for best price order routing, etc. An alternative is to use a third-party FX market access point that implements a real-time pre-trade exposure monitoring and requires each customer to use it. This works especially well for so-called prime-of-prime second-tier broker banks that specialize in FX, offer larger margins, and provide direct market access (DMA).
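The real-time pre-trade check described above can be sketched in a few lines (a minimal illustration of the idea, not MarketFactory’s actual product; the class and parameter names are mine): a single market access point tracks the customer’s total net position across all venues and rejects any order that would breach the exposure limit derived from posted collateral and the margin ratio.

```python
class PreTradeLimitMonitor:
    def __init__(self, collateral_usd, margin_ratio):
        # e.g. $1M collateral at a 1:50 margin ratio -> $50M max net exposure
        self.max_exposure = collateral_usd * margin_ratio
        self.net_position = 0.0            # signed total, across all venues

    def check_order(self, signed_notional_usd):
        """Book the order and return True if the projected net exposure
        stays within the limit; reject (return False) otherwise."""
        projected = self.net_position + signed_notional_usd
        if abs(projected) > self.max_exposure:
            return False                   # halted BEFORE the position builds
        self.net_position = projected
        return True

mon = PreTradeLimitMonitor(collateral_usd=1_000_000, margin_ratio=50)
ok_first = mon.check_order(+30_000_000)    # within the $50M exposure limit
ok_second = mon.check_order(+30_000_000)   # would breach $50M: rejected
```

Contrast this with the post-trade reports described earlier: here a runaway algo is stopped at the first order that would breach the limit, not minutes later.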
The pioneer in this space is Saxo Bank, which launched its Saxo Prime business in partnership with MarketFactory as recently as Jan 2013. In Apr 2014, the bank hired Rabobank’s Peter Plester, an FXPB veteran who learned a lot from the Black Phoenix glitch. Saxo is the first FX prime bank that requires its customers to use a single FX market access point with real-time pre-trade exposure limit monitoring. MarketFactory provides the technology (disclaimer: I am a cofounder of the company) through its FX Limit Monitor.
Sep 2, 2013: Italy becomes the first country in the world to levy financial transaction “Tobin tax” that specifically targets HFT. More…
History: On Feb 14, 2013, the European Commission proposed a financial transaction tax limited to a group of eleven member states, including Italy, that is working its way toward a possible implementation in 2016. But on Dec 24, 2012, Italy jumped the gun and approved wide-ranging FTT legislation as part of its Stability Law. In the first phase, on Mar 1, 2013, a general 0.22% FTT was applied to all equity trades in large-cap stocks, causing a significant drop in Italian stock market volumes. The HFT tax constitutes the second phase of the legislation.
According to the HFT tax, order modifications and cancellations are taxed at 0.02% when they occur within a timeframe shorter than 0.5 second, once above a daily threshold of 60% of transmitted orders. Unlike the general FTT tax, the HFT tax applies regardless of where the transaction is executed, or the country of residence of the counterparty.
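For concreteness, here is one plausible reading of the rule as a calculation. The function, the notional-based tax base, and the parameter names are my assumptions for illustration, not the statute’s text:

```python
def italian_hft_tax(total_notional, fast_mod_cancel_notional,
                    rate=0.0002, threshold=0.60):
    """Rough sketch of the HFT tax described above.

    `fast_mod_cancel_notional` is the day's countervalue of order
    modifications/cancellations occurring within 0.5 second; the 0.02%
    rate applies only to the portion exceeding 60% of the day's total
    transmitted orders.  The statute's exact tax base may differ.
    """
    excess = fast_mod_cancel_notional - threshold * total_notional
    return rate * max(0.0, excess)

# Example: EUR 10M transmitted, EUR 8M modified/cancelled sub-0.5s:
# EUR 2M exceeds the 60% threshold and is taxed at 0.02%.
```

Note how a trader who keeps fast modifications/cancellations just under 60% of daily order flow pays nothing at all, which is presumably the behavioral point of the threshold.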
Opinion: Italy has no real “HFT problem”, and no HFT industry that’s worth taxing. This tax appears to be a “sin tax” that is easily imposed and will bring some (meager) revenues to a cash-strapped country. The tax is silly because it penalizes HFT at a time when its harms and benefits are not yet fully understood.
Aug 19, 2013: EBS/ICAP Forex matching platform begins deploying a random order delay scheme (the “latency floor”) designed to mitigate the HFT advantage. More…
The EBS pilot begins with the Australian dollar (AUD/USD). In Sep 2013, the pilot is extended to the Swiss franc (USD/CHF). After a period of analysis and testing, in Feb 2014, the latency floor is applied to all active EBS currency pairs, ending (on Mar 3, 2014) with the EBS crown jewel, the EUR/USD pair.
The EBS latency floor scheme delays all order messages (including submissions and cancellations) by a random delay of up to 3 milliseconds. The scheme uses batching intervals and works as follows: At the start of each batching interval, its length is selected at random between 1 and 3 milliseconds. All order messages received by EBS (from a specific city/region) during the interval are captured (on their way to the matching engine) and grouped in rows by the sender id (with message order preserved within each row). The rows are randomly shuffled. Finally, messages from all rows are submitted to the matching engine in a round robin fashion in a single burst.
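The batching logic can be sketched as follows. The function names and data shapes are mine; the row grouping, shuffle, and round-robin drain are as described above:

```python
import random
from collections import defaultdict

def latency_floor_burst(messages, rng=random.Random()):
    """Sketch of the EBS-style batching described above.  `messages` is the
    list of (sender_id, msg) pairs received during one batching interval, in
    arrival order.  Messages are grouped into per-sender rows (arrival order
    preserved within each row), the rows are shuffled, and the rows are then
    drained round-robin into a single burst for the matching engine."""
    rows = defaultdict(list)
    for sender, msg in messages:
        rows[sender].append((sender, msg))
    shuffled = list(rows.values())
    rng.shuffle(shuffled)                 # randomize inter-sender order only
    burst = []
    i = 0
    while any(shuffled):                  # until every row is drained
        row = shuffled[i % len(shuffled)]
        if row:
            burst.append(row.pop(0))      # round-robin draw from each row
        i += 1
    return burst

def interval_length_ms(rng=random.Random()):
    """Each batching interval's length is drawn at random from 1-3 ms."""
    return rng.uniform(1.0, 3.0)
```

The key property is that no sender’s messages can be reordered relative to each other, while the relative order between different senders inside the interval is randomized away.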
Opinion: This ingenious scheme does not require the complex record keeping required by ParFX’s “green room”. The consequence for an EBS customer is that it does not pay to reduce order latency unless it can be improved by at least 3 milliseconds. The takeaway is “stay/get colocated; beyond that, the latency race is unproductive”. The average delay applied to an order message by this scheme is only 1 millisecond. Thus, although EBS applies delays to order cancellations (as does ParFX), the resulting liquidity reduction is fairly insignificant.
Jul 7, 2013: Eric Budish, Peter Cramton, and John Shim publish an influential study of a novel market type where (very) frequent auctions are used instead of the current continuous matching mechanism. They prove that the continuous market forces a latency arms race whose cost is ultimately borne by investors (via wider spreads and thinner markets) and is largely eliminated in the proposed frequent auctions market model. More…
In the first part of their lengthy article The High-Frequency Trading Arms Race: Frequent Batch Auctions as a Market Design Response (first version published Jul 7, 2013, see here for the latest) the researchers use historical market data to show that, in continuous markets, correlation between similar instruments breaks down completely at small timescales. This creates latency arbitrage opportunities forcing a latency arms race among HFTs. Significantly, the arb opportunities remain constant in the face of constantly improving IT speeds.
Next, a simple abstract model of a continuous limit order book (CLOB) market is defined and several remarkable propositions are formally proved about the Nash equilibrium of the model: (P1) All other things being equal (volatility, etc.), the market bid-ask spread is proportional to the cost of implementing the high-speed trading. (P4) The bid-ask spread and the prize associated with the latency arms race are invariant to both the cost of high speed trading and the magnitude of speed differences between the fast and the slow traders. (P5) HFT market makers are faced with the Prisoner’s Dilemma; they would be equally well-off if they would all commit not to invest in speed technology–but, as each individual market maker has an incentive to invest in speed, this arrangement does not form an equilibrium; this total “unproductive” expenditure on speed is ultimately borne by the investors.
In the last and most significant part, the authors define and analyze the frequent batch auctions as a possible “market design response to HFT”. Frequent batch auctions are uniform-price (each auction clears at a single optimal match price) blind (order activity is not visible during the batch interval) double auctions (multiple buyers and sellers) conducted at frequent but discrete time intervals, for example, every second (although intervals as small as a millisecond would be possible). Orders, aggregated by price, as well as the clearing price, are announced publicly at the conclusion of each batch interval. The authors prove a number of things regarding this market model–in general, market makers compete here on price rather than on speed. In particular, quote sniping is largely eliminated, bid-offer spreads are reduced, depth (liquidity) is increased, and social costs of the speed race are minimized. One material handicap is an increased delay suffered by the liquidity takers.
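For intuition, here is a minimal sketch of uniform-price clearing. It simplifies the Budish et al. design, which adds pro-rata rationing at the margin and precise information-release rules; the midpoint price rule below is just one simple choice:

```python
def uniform_price_clear(bids, asks):
    """Minimal uniform-price double-auction clearing: walk the crossed book,
    match marginal pairs, and clear everything at one price (here, the
    midpoint of the last crossing pair).  `bids`/`asks` are (price, qty)
    lists collected blindly during the batch interval."""
    bids = sorted(bids, key=lambda b: -b[0])   # best (highest) bid first
    asks = sorted(asks, key=lambda a: a[0])    # best (lowest) ask first
    bi = ai = 0
    bp = ap = None
    bq = aq = 0
    price, matched = None, 0
    while True:
        if bq == 0:                            # advance to next bid level
            if bi == len(bids):
                break
            bp, bq = bids[bi]; bi += 1
        if aq == 0:                            # advance to next ask level
            if ai == len(asks):
                break
            ap, aq = asks[ai]; ai += 1
        if bp < ap:                            # book no longer crosses
            break
        q = min(bq, aq)
        matched += q
        bq -= q; aq -= q
        price = (bp + ap) / 2                  # clearing price from marginal pair
    return price, matched
```

Every filled order, buyer and seller alike, trades at the single returned price, which is what removes the incentive to be nanoseconds faster than the competition within a batch interval.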
The key study deals with an abstract, simple, centralized market. But in May 2014, the same group of researchers publishes a sequel article Implementation Details for Frequent Batch Auctions: Slowing Down Markets to the Blink of an Eye, where they propose an implementation of frequent-auction venues that complies with SEC Reg NMS and could coexist with CLOB venues in the US stock market.
The key study is featured in many articles. See, for example, Frequent but inefficient (Economist, Nov 29, 2014).
For an opposing view, see the Nov 18, 2014 post Synchronized Frequent Batch Auctions: A Rebuttal by D. Keith Ross. (Note: It seems that the author did not fully understand the Budish et al. proposal — the paragraph ending with “Same race; different finish line” indicates that he missed the key features of the Budish frequent auctions: they are “blind” and “pro-rata” — not based on time priority.)
Opinion: The key study is ground-breaking in its formal comparison of continuous and discrete time trading. It demonstrates a number of laws that are fundamental enough to invoke esteem not unlike that felt for Einstein’s thought experiments (exaggeration intended): if it’s so damn simple, how come no one thought about it before? Still, human behavior is much more complex than physics. One should be cautious of models with simplified assumptions, which–although necessary to allow a theoretical treatment–may omit some fundamental considerations (see, for example, the concept of taker-as-maker which caused EBS to reverse its price granularity decision on Sep 17, 2012). The simplifications adopted in this study are quite severe. Most ominously, the Nash equilibrium–used to formally prove the main results–does not generally apply in real-life markets, where participants do not usually know equilibrium strategies of the other players.
Ideally, one would like to see a realistic Monte Carlo type simulation of the CLOB market with (at least some of) its complexity, defined by tweakable parameters, where the correctness could be empirically confirmed based on historical data, followed by similar simulation of the frequent auctions market. (In my view, every regulation should be first optimized in this fashion, and only then tried in limited pilot evaluations.) Shameless admission: I am very interested in a keen sponsor of this approach.
Regarding the possible introduction/implementation of frequent auction venues, I think that one should first define the “ideal” discrete high frequency auction market, before attempting to force the concept into the obsolete world of Reg NMS which does not befit even the current CLOB market that it was created for.
May 15, 2013: German High Frequency Trading Act (Hochfrequenzhandelsgesetz) goes into effect, paving the way to similar MiFID II requirements. More…
The German government started working on the HFT Act in July 2012. The law enters into force on May 15, 2013, with an implementation period of six to nine months; it will be fully implemented by Feb 14, 2014.
The HFT Act mandates licensing and supervision of all automated trading (including prop shops), electronic identification of trading algos, risk controls, and management of order-to-trade ratios and tick sizes. Most of these items appear in similar form in drafts of MiFID II/MiFIR or other European legislations. However, those are not expected to be implemented before late 2016.
May 1, 2013: Wall Street Journal unveils a CME “loophole,” where high-speed traders are notified about their own order execution before the rest of the market sees that data. More…
In the page one Wall Street Journal article “High-Speed Traders Exploit Loophole“, the authors state with a straight face: “High-speed traders are using a hidden facet of the Chicago Mercantile Exchange’s computer system to trade on the direction of the futures market before other investors get the same information. Using powerful computers, high-speed traders are trying to profit from their ability to detect when their own orders for certain commodities are executed a fraction of a second before the rest of the market sees that data, traders say. The advantage often is just one to 10 milliseconds (…). All firms that connect directly to CME’s trading computers are able to get information ahead of the market when their trades are executed, firm officials say. But many companies are unaware of the advantage or choose not to use it, traders say, either because they don’t have the technology to take advantage of such tiny edges or employ different investing strategies.”
In a prompt and equally straight-faced response CME Group vows to shrink to “as close to zero as possible” the time advantage some investors gain from seeing trading data before the rest of the market.
Opinion: The WSJ article beats all banality records. The CME “loophole” described in the article is normal, logical, expected, and well established in the trading world — going back to the open outcry, voice-broker, and OTC practices. In simple terms, a trade is first done (which includes direct and private communication between trader submitting the order and the broker or execution venue), and only then the market data about the trade is distributed to other market participants. In all normal cases there is a slight delay between the first and the latter, which may be exploited by the trader to his advantage.
The post “Well known trading secrets become public” by Meanderful explains all of this well, and with only a minimum number of expletives.
The bottom line is that trader’s communications with the exchange are private until they are publicly announced by the exchange as market data. Going beyond the basic methods described in the WSJ article, consider the moment when a market order is sent by a trading algo (well before it is received by the exchange). The algo can use this information to predict the impact of the order on the future state of the market book — here, the trader’s algo actually beats the speed of light limitations. When a group of traders/algos share this private information with each other, they may gain considerable predictive capabilities, especially where the target venue is sufficiently remote (for example, when hedging on CME by NY/NJ firms, or when trading GBP on London’s Thomson Reuters exchange out of NY). The methods to do this right are non-trivial; see MarketFactory‘s 2012 U.S. Patent No. 8,296,217, “Method and apparatus for enhancing market data feed using proprietary order flow“.
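As a toy illustration of the basic idea (not the patented method), a system that has just sent an aggressive buy order can update its local copy of the ask ladder before any acknowledgment arrives:

```python
def predicted_asks_after_buy(asks, qty):
    """Consume `qty` from our local copy of the ask ladder -- a list of
    (price, size) levels, best first -- to predict the visible book after
    our own in-flight marketable buy order executes.  This ignores hidden
    liquidity and competing in-flight orders, which any serious version
    of this technique would have to model."""
    remaining = []
    for price, size in asks:
        if qty >= size:
            qty -= size                            # our order wipes this level out
        else:
            remaining.append((price, size - qty))  # partially consumed level
            qty = 0
    return remaining
```

The prediction is available the instant the order leaves the trader’s own machine, long before the exchange’s market data feed reflects the trade, which is the sense in which the algo “beats the speed of light”.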
Apr 18, 2013: Tradition launches ParFX wholesale currency trading platform designed to curb the advantage of high-frequency traders. More…
ParFX is founded by ten global banks as a counterweight to the EBS/ICAP and Thomson Reuters Forex matching platforms that banks consider overly HFT-permissive. The platform was renamed several times: from Pure FX (2010) to traFXpure (2012) to ParFX (2013). ParFX implements the random order delay scheme proposed by Larry Harris. A randomized pause of between 20 and 80 milliseconds is applied to all order messages, including cancellations. On Jan 23, 2014, ParFX opens its doors to hedge fund participation.
Opinion: The magnitude of the delay applied by ParFX is relatively huge. Where Harris’ 0-10 millisecond scheme aims only to eliminate the winner-take-all race among the HFTs, ParFX attempts to nearly eliminate any advantage of HFTs over banks. Incidentally, the large delay also eliminates any advantage of colocation. Most importantly, delaying cancellations makes ParFX market makers defenseless against quote sniping (even by slow players) and thus decreases liquidity. The random delay generation, handled by a “Green Room” module, is fairly complex because, although each order message is randomly delayed, the relative order of messages from an order source must be preserved (which means that the delays are not independently random).
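One way to satisfy the ordering constraint (my guess at an approach, not ParFX’s actual implementation) is to clamp each message’s release time to be no earlier than the previous release from the same source:

```python
import random

def green_room_release_times(arrivals_by_source, lo=0.020, hi=0.080,
                             rng=random.Random()):
    """Sketch of the per-source ordering constraint discussed above.  Each
    message gets a random 20-80 ms delay, but release times must stay
    monotonic per source, so a cancel can never overtake the submission it
    refers to.  Clamping each release to be no earlier than the previous
    release from the same source achieves this -- at the cost of making
    the effective delays *not* independently random."""
    out = {}
    for source, arrivals in arrivals_by_source.items():
        last = float("-inf")
        releases = []
        for t in arrivals:                        # arrivals in seconds, sorted
            r = max(t + rng.uniform(lo, hi), last)  # preserve per-source order
            releases.append(r)
            last = r
        out[source] = releases
    return out
```

A burst of messages from one source can thus end up released back-to-back with effectively correlated delays, which is exactly the “not independently random” property noted above.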
Mar 5, 2013: Anatoly B. Schmidt, Taming HFT in the Multi-Dealer FX Market, FX Week (also here), proposes modifying trading rules so that certain HFT behaviors are allowed only as privileges that entail defined responsibilities. More…
Opinion: Schmidt’s proposal is made in the context of the Forex market, and, specifically, the EBS/ICAP trading system. Schmidt’s HFT characteristics do not apply in equities markets. But his general idea of dynamic tit-for-tat trading rules deserves consideration: An HFT earns the right to engage in certain “disruptive” behavior only if she provides certain “beneficial” services at (nearly) the same time. (One thing that comes to mind: allow quote sniping in return for providing liquidity–one can imagine several ways of achieving this.)
Feb 14, 2013: European Commission proposes an eleven-country €30 billion “Tobin tax” More…
History: On Sep 28, 2011, the European Commission issued a proposal for a pan-European system of financial transaction tax (the FTT Proposal). By July 2012 it became clear that the proposal would not gain unanimous support within the EU. In Oct 2012 eleven member states (covering two-thirds of EU economic output) formally requested a common system of FTT within this group of states on the basis of the enhanced cooperation procedure (where a group of at least nine EU member states is allowed to establish general rules without the other members being involved). On Jan 22, 2013, the European Council adopted a decision authorizing the group to proceed with the introduction of FTT.
The new proposal levies 0.1 per cent on stock and bond trades and 0.01 per cent on derivatives transactions involving a trader in the tax area, or trading on behalf of a client based in the tax area. In a revision that strengthens the original pan-EU FTT Proposal, the tax will now also be applied to a transaction based on where the financial instrument was issued, regardless of where it takes place. This wide net of taxation causes the US, as well as European countries that rejected the FTT, such as the UK and Luxembourg, to strongly oppose the proposal.
Jan 2013: SEC rolls out the Market Information Data Analytics System (MIDAS). More…
For a recent semi-official view on the project see SEC’s MIDAS program highlights how to do big data. The system was created by the SEC when it became clear that the organization had no capabilities to analyze market anomalies such as the Flash Crash of May 6, 2010. MIDAS was built in record time on a small budget by Tradeworx, an HFT company, which initially hosted the system in the Amazon Web Services cloud. MIDAS collects billions of records per day from all national equity exchanges, each time-stamped to the microsecond. For the first time in history, the SEC has access to data about every displayed order posted on the national exchanges in near real-time.
Dec 27, 2012: Larry Harris publishes an influential Financial Times article Stop the high-frequency trader arms race where he proposes to add a random delay of up to 10 milliseconds to all order messages. More…
Prof. Harris states: “A small and easy to implement change in exchange trading rules can substantially reduce the incentives to acquire the expensive trading technologies now required to compete successfully as an HFT. Regulatory authorities could require that all exchanges delay the processing of every posting, cancelling and taking instruction they receive by a random period of between 0 and 10 milliseconds.”
An expanded version of the article appears as Mar 2013 paper What to Do about High-Frequency Trading in the Financial Analysts Journal.
Opinion: The random order delay idea breaks the sacred price-time priority that was the basis of fair markets for millennia. In my opinion, it is acceptable to remove the time priority (see, for example, Frequent Auctions), but not to randomly reverse it, even by nanoseconds. Besides, the idea is not as “easy to implement” as it may seem: adding a random delay to each order message must still preserve the order of messages, so that, for example, cancellation of an order is not processed before its submission. Finally — and worst of all — delaying order cancellation makes quote sniping easier than ever; this makes the HFT’s cardinal sin cheap and available to all, thus obviously hurting liquidity. Harris eventually recognizes this on May 30, 2014.
In Jan 2013 the collected writings are published in book form as The Problem of HFT. Among some new material, “Chapter 6. Reforming the National Market System” deserves particular attention. It proposes a 10-step plan for strengthening the operation of the US equities marketplace. He says: “I assert that there is a straightforward path to correcting the system. The industry took a wrong turn when Regulation NMS was implemented market-wide in 2007. To address the market structure head on, we need to reassess Regulation NMS in the context of its original purpose and intent — to bind a fragmented marketplace into an effective national market system that serves long term investors.”
Opinion: As I mention later, Haim Bodek’s story is somewhat similar to that of Brad Katsuyama in “Flash Boys”. In each case, a brilliant, professional trader is greatly surprised by the rules of the game that are well known to the more-inner-circle-than-thou traders.
In Haim’s case it’s all about complex and obscure order types devised by the stock exchanges to cater to their most profitable customers, the HFTs. The special order types are not “secret”–they are simply not publicized and not even well documented by the exchanges. (Allegorically, this made me think of high-stakes blackjack tables in casinos, where faster shuffles are used, where joining a game in progress is disallowed, and where concealed restrooms are placed suitably close, so that one can be back in the game before the shuffle is done. You learn these elite rules only by playing high stakes, by observing carefully what is going on, and by asking the pit boss for the password to the private restroom!)
Bodek’s posts and disclosures induce exchanges to become more transparent. Nasdaq elaborates its order types on Nov 30, 2012.
In my view, Bodek’s 10-step plan for reforming the National Market System is very practical, but fairly limited. It represents the best that can be done without radically rearchitecting the NMS.
Sep 17, 2012: EBS/ICAP Forex matching platform reverses its recent price decimalization in all currency pairs. More…
Only 18 months after decimalizing Forex spot prices on Mar 7, 2011, EBS reverts to trading in full and half-pips. All EBS liquid currency pairs–including EUR/USD, USD/JPY, USD/CHF, and nine others–trade now in half-pips; other pairs go back to full pip pricing.
EBS decimalization in 2011 created a confusing scenario where the major banks first unanimously approved the decimalization (vital for bank Forex taker-only portals that competed to quote in tenths of a pip), and then demanded a reversion to larger ticks (vital for big time bank manual traders) presaging top-level management changes at EBS.
Opinion: For the first time in history, a major Forex exchange increases price ticks in world’s most liquid instruments.
The change is billed as anti-HFT and is done under pressure from large banks for the benefit of EBS’ most important customers: the manual bank traders, still responsible for roughly half of EBS dealt volumes.
Now, it is known that increasing tick size in a liquid instrument widens the bid-ask spread and thus benefits the market makers and disadvantages the takers. It is also known that there are no professional manual market makers in ultra-liquid markets. Manual bank Forex spot traders are driven by “natural need”, and never place two-sided passive limit orders; they are natural takers, not makers. So, why would they want larger tick sizes?
To answer this question, it is important to understand the large manual transaction scenario in an ultra-liquid market. What’s special about an ultra-liquid symbol (such as EUR/USD or AAPL) that trades many times per second is that an aggressive market order (a take) may often be replaced with a quickly executed passive limit order (a make). That is, rather than paying the ask (say) price, the would-be-taker joins the current bid and gets done within seconds as a maker, saving money that’s proportional to the bid-ask spread and the amount traded. Let’s call this the taker-as-maker technique. It works wonders for manual Forex spot bank traders trading amounts larger than the nominal 1M of base currency. As natural takers, they are attracted to lowest spread venues: FxAll is great, zero spread would be even better. But on high-volume Forex exchange platforms like EBS and Thomson Reuters that allow customers to make prices, the taker-as-maker technique offers the effect of a negative spread–taker heaven!
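The arithmetic of the saving is simple; the figures below are purely illustrative:

```python
def taker_as_maker_saving(amount_base, spread):
    """The money saved by getting done passively at the bid instead of
    paying the ask: the full bid-ask spread times the amount traded
    (ignoring waiting-time risk and adverse selection, which are the
    price of the technique)."""
    return amount_base * spread

# EUR 50M at a 1-pip (0.0001) EUR/USD spread vs a narrow 0.1-pip
# (0.00001) decimalized spread: the passive execution is worth ten
# times more under the larger tick.
```

This is why tick size matters so much here: the saving scales linearly with the spread, and large ticks force larger spreads.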
So, let’s see how decimalization affects a big time Forex spot manual bank trader using the taker-as-maker technique. To begin with, note that large ticks force a larger bid-ask spread, concentrate liquidity at the best bid-ask price edge, and make prices more stable on a second-by-second timescale. Now imagine that you are a manual trader out to buy (say) a largish amount of an ultra-liquid symbol, and that you decide to trade time for a better price according to the taker-as-maker technique. With large ticks (no decimalization), you join the solidly populated best bid, wait a short time, and (usually) get done at the maker’s price, saving money equal to the (large) spread times the (large) amount. With decimalization, the best that can happen is that you get done equally quickly but you save less money because of the smaller spread. Regrettably, in a typical case, your presence is noted right away in the thinly populated best bid; small guys and HFTs jump the bid queue for a tenth of a pip and block you from being on the best price. You may never get done at your original bid; you will likely give up and move your bid up slightly, reducing your already slim (“decimated”) maker advantage. You will eventually get done at the worst possible time, when the market moves down and catches you–the slow manual guy–before you cancel.
Finally, let’s see what makes the taker-as-maker technique work. It requires a matching venue with sufficiently large dealing volumes to produce attractively short maker waiting times. This, in turn, requires a large number of bona fide aggressive takers (willing to cross the spread in return for instant execution). This requires attractive narrow spreads. Which require fine price granularity. Which causes all the small tick “decimalization” problems described above. You see the vicious circle.
All things being equal (tick size, volatility), there is a delicate balance between the taker-as-maker waiting time and the trader’s willingness to cross the spread as a taker. The longer the waiting queue, the more compelled the trader is to pay the spread, thereby reducing the queue length (see The Difficulty of Trading Ultra-Liquid Stocks, Pragma Securities). The problem unique to Forex is that the tick size is not regulated: decimalized taker-only portals compete with EBS which offers the taker-as-maker mechanism. Impatient taker-as-makers will cross the spread, but may do it on a competing, lower (decimalized) spread venue.
Manual traders’ frustration with Mar 7, 2011 decimalization forced EBS to revert to larger ticks. But to remain attractive to aggressive takers, EBS made the Solomonic decision to restore price granularity to half pips (not full pips which was the norm before decimalization). In my view, even half pip ticks may be too large to compete on spread with decimalized taker-only portals in times of record low Forex volatility. (When volatility increases, portals increase their spreads to avoid being sniped by HFTs, allowing the natural interest “reference venues”–such as EBS and Thomson Reuters–to become competitive again.) Perhaps stock market-style taker-maker rebates would help…?
Aug 1, 2012: Knight Capital Group loses $440M in less than an hour due to a runaway trading software glitch. More…
According to SEC investigation findings, Knight’s smart order router software, modified to work with NYSE’s Retail Liquidity Program, was incompletely deployed, causing a “repurposed” flag to execute code that had been unused for 9 years. The code went into an uncontrolled loop, buying and selling securities in response to routine requests. A mere 212 Knight-originated orders triggered 4 million executions in 154 stocks for more than 397M shares, resulting in positions worth $6.65 billion. The glitch was one of the costliest computer bugs ever.
The incident reveals a number of weaknesses in the software development and computer center operations of one of the largest market maker firms (17% of NYSE and Nasdaq orders, 10% of the total US stock market’s dollar volume). The company receives emergency funding that averts bankruptcy, but ends up being acquired by Getco LLC in Dec 2012.
The incident illustrates vividly the possibility that a single component (here, a smart order router) may cause a loss that ruins a major financial company.
Opinion: Almost all early media reports regarding the cause of the glitch are wrong. To see a deeper picture, read the SEC report that took over a year to prepare, or see the more digestible comments here or there.
From a dev-ops professional point of view, the scenario looks very scary; a true tragicomedy of errors. First, Knight’s software is changed to accommodate new requirements (RLP). A global (“Power Peg”) configuration flag bit, not used for years, is “repurposed” to control the new behavior (this is common when modifying legacy software because it saves the coding effort needed to set and control a new flag). On Aug 1, 2012, the new software is deployed to only seven of Knight’s eight SMARS servers. The one with the old Power Peg code responds dutifully by buying (or selling) shares until the accumulated result reaches the requested amount – except that the accumulated result is no longer being updated in the code that was dormant for 9 years. This means that a single order triggers a virtually unlimited number of “child” orders.
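A minimal sketch of the failure mode; all names, numbers, and logic below are illustrative, not Knight’s actual code:

```python
# Illustrative only -- not Knight's actual code.

FLAG_X_SET = True   # old Power Peg flag bit, "repurposed" to enable new RLP logic

def route_parent_order(parent_qty, deployed_new_code):
    """On the seven updated servers the flag enables the new RLP logic;
    on the one stale server the same flag wakes the dormant Power Peg
    path, whose fill counter is no longer updated by anything."""
    if not FLAG_X_SET:
        return 0
    if deployed_new_code:
        return 1                       # new path: one well-behaved child order
    # Dormant path: loop until cumulative fills reach parent_qty...
    filled = 0                         # ...but nothing updates `filled` anymore
    children = 0
    while filled < parent_qty:
        children += 1                  # fire yet another child order
        if children >= 100_000:        # safety cap for this sketch only;
            break                      # the real code had none -- hence the runaway
    return children
```

The sketch shows why the bug was invisible in testing: both code paths are gated by the same flag, so behavior diverges only on a host running the stale binary.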
The story reveals a number of cardinal deficiencies in Knight’s dev-ops:
- There are no controls to guarantee that any new software is consistently deployed.
- There are no real-time monitoring mechanisms that could trace back the unusual trading activity to a particular server.
- There is no global kill switch.
- The rollback procedures are deficient (the software is rolled back, but the system configuration is not restored — so now all eight SMARS servers go bonkers).
May 18, 2012: The social networking company Facebook (FB) holds its initial public offering, the largest in US history, with initial market capitalization set at $104B. The FB IPO is marred by serious NASDAQ technical glitches. More…
The start of FB trading is delayed from 11:00am until 11:30am due to technical problems with the NASDAQ exchange. Once trading starts, Nasdaq fails to acknowledge trades for another three hours.
Investors, including the Wall Street banks involved in the FB IPO, claim losses over $500M but settle eventually for a mere $62M. In addition, SEC imposes on Nasdaq a $10M fine, the largest ever paid by an exchange. The SEC faults Nasdaq’s poor systems and decision-making that led to costly trading delays.
Opinion: The Facebook IPO problems are generally similar to the BATS IPO glitch, affecting the transition from the initial IPO auction to continuous trading. According to this WSJ article: “The day of Facebook’s IPO, Nasdaq’s computer systems got caught in a loop while lining up orders before the company’s shares started trading. The opening of trading in Facebook shares was stalled by half an hour. For almost three hours after that, Nasdaq failed to send order confirmations to brokers, causing uncertainty about who held what positions.”
Embarrassingly, unlike the BATS IPO fiasco, which was caused by a one-time careless software deployment, the FB IPO glitch is due to a material design flaw in the NASDAQ IPO Cross software that debuted in 2006. The bug causes deadly race conditions preventing the start of continuous trading. After the botched Facebook IPO, Nasdaq modifies the IPO Cross process so it no longer accepts orders after the auction’s final calculation has been made (duh!).
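A toy model of the race and the fix; timings and logic are illustrative only:

```python
def ipo_cross_completion_time(order_arrivals, calc_time, freeze_book):
    """Toy model of the race described above.  The cross calculation takes
    `calc_time` seconds and, unless the book is frozen when the final
    calculation begins (Nasdaq's eventual fix), any order landing
    mid-calculation invalidates the result and restarts the calculation
    from that moment."""
    start = 0.0
    if not freeze_book:
        for t in sorted(order_arrivals):
            if start <= t < start + calc_time:
                start = t              # invalidated: restart the calculation
    return start + calc_time

# With orders arriving faster than the calculation runs, the unfrozen
# cross chases its own tail until the order flow pauses -- on a hot IPO
# morning, it effectively never completes.
```

Freezing the book once the final calculation starts makes the completion time independent of subsequent order flow, which is precisely the post-mortem change described above.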
Mar 23, 2012: BATS IPO fails because of a BATS exchange software glitch. More…
BATS (Better Alternative Trading System) was founded in 2005 by Tradebot‘s founder David Cummings to counter the emerging NYSE and Nasdaq duopoly. The new exchange offered lower fees and openly catered to HFT clients. It earned a good reputation for efficient low-latency executions and excellent reliability (99.94% uptime as of 2012).
BATS decided to handle its own IPO and list its own shares (at that time, all US public companies were listed on venues owned by either NYSE Euronext or Nasdaq OMX). Unfortunately, the BATS’ matching engine handling the IPO became crippled by a software bug. The BATS share market price, unsupported by the disabled BATS server, quickly plummeted from $15.25 to $0.0002 before the trading in BATS symbol was halted. The company’s IPO was cancelled and postponed indefinitely.
Opinion: The software bug was never adequately explained by BATS. The best that we can tell is that it affected the code that controlled the transition between the initial IPO auction and the subsequent continuous trading of the BATS symbol. The bug was caused by “a unique scenario of different option/order types”. Embarrassingly from a design/dev/ops point of view, the bug incapacitated an entire BATS matching engine (one of 12), responsible for matching all symbols in the range A – BF, including AAPL. This happened despite the IPO code being “something we’ve tested more thoroughly than anything else in our history,” according to Joe Ratterman, the company’s CEO. (In my view, the worst mistake was to run the BATS IPO as the very first public company listing on BATS — why would you not try to run an IPO for another company first — just to make sure you can handle it?)
Some additional details about the BATS IPO price sliding down to near zero may be gleaned from Nanex Research’s report and the Zero Hedge article insinuating a conspiracy. (In my view, once the BATS’ matching engine failed, anything that subsequently affected BATS’ share price on other venues should be completely disregarded.)
Dec 5, 2011: Chris Sparrow begins a series of seminal TabbFORUM articles pointing out that all HFT issues are caused by fragmented markets operating in a continuous rather than discrete and synchronized fashion. More…
In The Failure of Continuous Markets (see also Journal of Trading), Sparrow, an authority on market structure, postulates that markets should move from the continuous to a quantized time regime. “Moving to a market in which executions only take place at prescribed, predetermined times is not necessarily simple but is simply necessary to restore fairness and confidence to equity trading.” To summarize his detailed description of the process, the synchronous market would function as a series of frequent auctions taking place once per second (or more frequently). Each auction would include all visible orders from all the venues. The results of each auction would be announced only after the matching was complete, giving every computer system sufficient time to leisurely digest the new market information and to react to it before the next auction. In the proposed scheme, the order submission time would not be considered in matching, but intra-venue matching would be given priority.
Sparrow follows his proposal with further posts: How to Control High Frequency Trading on Sep 12, 2012, and Eliminating ‘Unfairness’: Creating a Protocol for Synchronized Periodic Trading on Apr 21, 2014, which courageously details a possible synchronous matching protocol in a multi-venue setting.
On Jul 7, 2013, Eric Budish, Peter Cramton, and John Shim present a thorough analysis of this idea with a somewhat different proposal.
Opinion: I wholeheartedly embrace the general idea of synchronized high-frequency auctions. Regardless of the devilish details that still need to be worked out, synchronized trading would eliminate virtually all of the HFT behaviors that are generally detested by ordinary traders, market makers, and investors: inter-venue latency arbitrage, order front running, quote sniping, quote stuffing, crossed/locked markets, etc., etc. Instead of investing in the mindless speed race, the competition would focus on price and would involve traders that have the same information about the market price (but perhaps different — and asynchronous — information and views on the real world). An implementation of this idea would also simplify market data distribution, market auditing processes, and provide a true NBBO price.
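To make the mechanics concrete, here is a minimal single-venue sketch of the kind of uniform-price batch auction these proposals describe. All names and numbers below are mine, purely for illustration; the real multi-venue protocol (see Sparrow's 2014 post) is far more involved.

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str     # "buy" or "sell"
    price: float  # limit price
    qty: int

def clearing_price(orders):
    """Single uniform-price auction: find the price that maximizes
    matched volume. All orders collected during the interval are
    matched at once; arrival time within the interval is irrelevant."""
    best_price, best_volume = None, 0
    for p in sorted({o.price for o in orders}):
        demand = sum(o.qty for o in orders if o.side == "buy" and o.price >= p)
        supply = sum(o.qty for o in orders if o.side == "sell" and o.price <= p)
        volume = min(demand, supply)
        if volume > best_volume:
            best_price, best_volume = p, volume
    return best_price, best_volume

book = [Order("buy", 10.02, 300), Order("buy", 10.01, 200),
        Order("sell", 10.00, 250), Order("sell", 10.01, 200)]
print(clearing_price(book))  # (10.01, 450)
```

Because every order that arrives within the interval is treated identically, being a microsecond faster than a competitor buys nothing; only the price offered matters.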
Mar 7, 2011: EBS/ICAP Forex matching platform implements granular “decimalized” pricing in all currency pairs. More…
After a pilot that started in Aug 2010, decimalization is now implemented for all EBS currency pairs, including EUR/USD, USD/JPY, EUR/JPY, USD/CHF and EUR/CHF. (In Forex spot, “decimalization” refers to prices using an extra digit specifying tenths of a pip, for example, 1.23456 in EUR/USD. In the stock market this would correspond to a $100.00 stock trading with a price increment of one tenth of a cent.)
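In code terms, the change looks like this (a toy illustration using Python’s Decimal; the EUR/USD figure is the article’s example, the stock-market comparison is mine):

```python
from decimal import Decimal

# Pre-decimalization EUR/USD tick: one pip (the 4th decimal place).
PIP = Decimal("0.0001")
# Post-decimalization tick: one tenth of a pip.
TENTH_PIP = PIP / 10

price = Decimal("1.23456")   # representable only on the tenth-pip grid
assert price % TENTH_PIP == 0
assert price % PIP != 0

# The relative tick size is what the stock-market analogy captures:
# a tenth of a pip on a ~1.23 rate is close to $0.001 on a $100.00 stock.
print(TENTH_PIP / price)                      # about 8.1e-6
print(Decimal("0.001") / Decimal("100.00"))   # 0.00001
```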
For a rare view of the difficult EBS market at the time, see Ecology of the Modern Institutional Spot FX: the EBS Market in 2011 by Anatoly Schmidt.
Opinion: According to EBS, the main reason for decimalization is that “it’s inevitable”, as most Forex platforms have already implemented it, and because the new breed of customers (aggregators, HFTs, and smaller banks) desires it. The EBS advisory board, consisting of executives from major banks, unanimously approved the change at a meeting in 2010. And yet, only 18 months after the change, on Sep 17, 2012, EBS will reverse itself under pressure from the same major banks. Thomson Reuters, EBS’ main competitor, will not decimalize its Forex spot prices.
Late 2010: Dark pools account for over 14% of U.S. equities market. More…
The total dark market (including dark pools, broker internalization, and dark orders on visible marketplaces) accounts for 40-50% of U.S. equities market. This share will increase very little in the next few years.
Jul 21, 2010: The Dodd-Frank Wall Street Reform and Consumer Protection Act is signed into law. More…
Passed as a response to the Great Recession of 2008, it brought the most significant changes to financial regulation in the United States since the regulatory reform that followed the Great Depression in the 1930s. Its changes affect all federal financial regulatory agencies and almost every part of the nation’s financial services industry.
The Dodd-Frank Act has two provisions that affect high-frequency trading:
- The Volcker Rule, which prohibits banks from proprietary trading.
- Prohibition of spoofing in commodities and futures markets (spoofing in equities is already prohibited under the Securities and Exchange Act of 1934 and other regulation).
CFTC takes three years to issue specific guidelines regarding spoofing (May 20, 2013), defining spoofing as submitting limit orders with malevolent intent (scienter) to cancel the bid or offer before execution. Remarkably, “reckless trading, practices, or conduct” do not violate the rules.
The first case under the new powers is brought by CFTC against Panther Energy Trading and its owner, Michael Coscia on Jul 22, 2013.
Opinion: Ironically, both provisions above benefit the mainstream high-frequency traders:
- The effect of the Volcker Rule is to remove banks from competition against the specialized HFT companies such as Getco and others.
- Spoofing is a strategy used by some HFTs to exploit the common front-running logic used by other HFTs. See Spoofers Keep Markets Honest (John D. Arnold, Jan 23, 2015).
May 6, 2010: Flash Crash. Dow Jones Industrial Average falls over 1,000 points (9%) and then rebounds in about 15 minutes. More…
According to the Sep 30, 2010 CFTC and SEC report, many of the almost 8,000 individual equity securities and exchange traded funds traded that day suffer similar price declines and reversals. Some equities experience much more severe price moves. Thousands of trades in major stocks execute at insane prices of a penny or less or as high as $100,000. By the end of the day, major futures and equities indices recover to close down about 3% from the prior day.
CFTC and SEC spend nearly 5 months investigating what happened before issuing a voluminous report on Sep 30, 2010. (Most of this time is spent organizing and reconciling data from multiple sources, exposing a glaring deficiency in SEC’s ability to act as a watchdog. This eventually leads to Project MIDAS, deployed in Jan 2013.)
The CFTC and SEC report blames a single firm (identified by other sources as Waddell & Reed Financial) as a direct cause for the crash — a fact that is later disproved; otherwise, the report is deemed accurate. Here is a (much abbreviated) summary:
“May 6 started as an unusually turbulent day for the markets. By 2:30 p.m., the S&P 500 volatility index was up 22.5 percent from the opening level and selling pressure had pushed the Dow Jones Industrial Average (“DJIA”) down about 2.5%.
At 2:32 p.m., against this backdrop of unusually high volatility and thinning liquidity, a large fundamental trader (a mutual fund complex) initiated a sell program to sell a total of 75,000 E-Mini contracts (valued at approximately $4.1 billion) as a hedge to an existing equity position. The “Sell Algorithm” used by the large trader was programmed to feed orders into the June 2010 E-Mini market with the target execution rate set to 9% of the trading volume calculated over the previous minute, but without regard to price or time. The Sell Algorithm executed the sell program extremely rapidly in just 20 minutes.
HFTs were the likely buyers of the initial batch of orders submitted by the Sell Algorithm, and, as a result, these buyers built up temporary long positions of a few thousand contracts. Then, between 2:41 p.m. and 2:44 p.m., HFTs aggressively sold about 2,000 E-Mini contracts in order to reduce their temporary long positions. At the same time, HFTs traded nearly 140,000 E-Mini contracts or over 33% of the total trading volume. This is consistent with the HFTs’ typical practice of trading a very large number of contracts, but not accumulating an aggregate inventory beyond a few thousand contracts in either direction.
The Sell Algorithm used by the large trader responded to the increased volume by increasing the rate at which it was feeding the orders into the market. (…) The combined selling pressure from the Sell Algorithm, HFTs and other traders drove the price of the E-Mini down approximately 3% in just four minutes from the beginning of 2:41 p.m. through the end of 2:44 p.m. During this same time cross-market arbitrageurs who did buy the E-Mini, simultaneously sold equivalent amounts in the equities markets, driving the price of SPY also down approximately 3%.
Still lacking sufficient demand from fundamental buyers or cross-market arbitrageurs, HFTs began to quickly buy and then resell contracts to each other – generating a “hot-potato” volume effect as the same positions were rapidly passed back and forth.
(…) At 2:45:28 p.m., trading on the E-Mini was paused for five seconds when the CME Stop Logic Functionality was triggered. In that short period of time, sell-side pressure in the E-Mini was partly alleviated and buy-side interest increased. When trading resumed at 2:45:33 p.m., prices stabilized and shortly thereafter, the E-Mini began to recover, followed by the SPY.”
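The report’s description of the Sell Algorithm corresponds to a standard percentage-of-volume (POV) execution strategy. A minimal sketch of the sizing logic (my illustration, not the actual W&R/Barclays code) shows why higher volume begets faster selling:

```python
def pov_child_qty(remaining, volume_last_minute, participation=0.09):
    """Size the next child order at `participation` (here 9%) of the
    market volume traded over the previous minute, capped by what
    remains to sell. Note the absence of any price or time constraint:
    when trading volume spikes, the algorithm simply sells faster."""
    return min(int(volume_last_minute * participation), remaining)

remaining = 75_000  # total E-Mini contracts to sell
print(pov_child_qty(remaining, volume_last_minute=20_000))   # 1800
print(pov_child_qty(remaining, volume_last_minute=140_000))  # 12600
```

The feedback loop the report identifies follows directly: the HFTs’ “hot-potato” trading inflated the previous-minute volume, which in turn made an algorithm like this one sell even more aggressively.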
The E-Mini crash caused a liquidity crisis in the equities markets for individual stocks. Many automated market-making algos paused, as they were programmed to do in the face of an unknown cataclysmic event.
“(…) As liquidity completely evaporated in a number of individual securities and ETFs, participants instructed to sell (or buy) at the market found no immediately available buy interest (or sell interest) resulting in trades being executed at irrational prices as low as one penny or as high as $100,000. These trades occurred as a result of so-called stub quotes, which are quotes generated by market makers (or the exchanges on their behalf) at levels far away from the current market in order to fulfill continuous two-sided quoting obligations even when a market maker has withdrawn from active trading.
(…) After the market closed, the exchanges and FINRA met and jointly agreed to cancel (or break) all such trades under their respective ‘clearly erroneous’ trade rules.”
Opinion: The report is criticized for several inaccuracies and for whitewashing HFT’s role in the crash. In fact, on Oct 1, 2010, a day after the CFTC and SEC report, CME Group issues a statement that absolves Waddell & Reed Financial of aggressively pushing the E-Mini market down.
This fact is later confirmed by Eric Hunsader’s (Nanex Research) analysis: “Based on interviews and our own independent matching of the 6,438 W&R executions to the 147,577 CME executions during that time, we know for certain that the algorithm used by W&R never took nor required liquidity. It always posted sell orders above the market and waited for a buyer; it never crossed the bid/ask spread. That means that none of the 6,438 trades were executed by hitting a bid.”
In the Mar 26, 2013 post “Flash Crash Mystery Solved“, Nanex states their final conclusions: “(Based on this extremely thorough analysis) we were able to zero in on the ignition point, or starting time of the crash: 14:42:44. That is the moment when one or more large HFT “market makers” hit their limit of long positions in the eMini Futures (ES.M10), and reversed out – “readjusted their position”. Immediately. That aggressive act sucked out a significant amount of liquidity and caused thousands of trading instruments (stocks, options, indexes, futures) to reprice, which severely overloaded all trading systems processing market data (peak message traffic set a record at that time, which was not exceeded for the balance of the day). Overloaded systems caused bad, delayed, and unexpected pricing to appear, which caused other algos and traders to stop trading, removing any remaining liquidity. We know the Waddell & Reed algo practically ceased trading shortly before the 600 point slide in the Dow Jones Industrial Average; selling a mere 1000 contracts in small lots (averaging 6 contracts per trade), all on the offer side. No Virginia, the Barclay’s algo used by Waddell & Reed did not sell indiscriminately without regard to time or price: it didn’t take liquidity either. That was the work of HFT. In short: High Frequency Trading caused the Flash Crash. Of this, we are sure.”
As for the CFTC and SEC report whitewashing the HFT role in the Flash Crash, read the Jun 12, 2014 Nanex post “Reexamining HFT’s Role in The Flash Crash“.
Nanex’s findings are confirmed and analyzed further by many researchers. See the extensive HFT bibliography compiled by Themis Trading. I found the article “The Flash Crash: The Impact of High Frequency Trading on an Electronic Market” by A. Kirilenko et al. illuminating (especially the section “What Do High Frequency Traders Do?”); it explains well the “hot-potato” behavior of the HFT algos.
At the high level, the story is really quite simple. The 2010 market is run by very fast, half-intelligent black boxes. Their strategies work reasonably and predictably under normal market conditions. Things turn ugly when the market goes out of kilter. Market-making algos have some risk-avoidance logic that says “if observed conditions are unusual, stand back”. Trend-following algos follow their logic unconditionally — sell a stock if its price shows a clear downward trend. No HFT boxes have enough intelligence to use longer-term logic and conclude that Apple or P&G stock trading below a certain price represents a buying opportunity. Instead, they are focused exclusively on their very short time horizon (seconds to minutes). That is similar to a day trader who pays no attention to what may happen to the stock in a few weeks.
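The “stand back” logic can be pictured as a simple gate in front of the quoting engine. The thresholds and names below are mine and purely illustrative; real systems use far more inputs, but the shape is the same:

```python
def should_quote(volatility, normal_vol, data_latency_ms, max_latency_ms=50):
    """Risk-avoidance gate of the kind market-making algos employ:
    withdraw quotes whenever observed conditions look abnormal."""
    if volatility > 3 * normal_vol:
        return False  # price action outside the model -- stand back
    if data_latency_ms > max_latency_ms:
        return False  # market data is stale -- quoting would be unsafe
    return True

print(should_quote(volatility=0.02, normal_vol=0.01, data_latency_ms=5))  # True
print(should_quote(volatility=0.10, normal_vol=0.01, data_latency_ms=5))  # False
```

During the Flash Crash, thousands of gates like this one flipped to False nearly simultaneously, and liquidity evaporated.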
At a detail level, one learns (five years after the event) that the Flash Crash was triggered by a single London-based day trader using relatively primitive tools to spoof the market. The trader (Navinder Singh Sarao) placed some “spoofing” limit orders to sell well before the HFT avalanche began. This brings to mind the butterfly effect in chaos theory, where a butterfly flapping its wings causes a tornado. Extending the analogy, passing anti-flapping (or anti-spoofing) laws will not prevent hurricanes (or flash-crash events).
In biological terms, the market run by the machines lacks a reliable homeostasis mechanism. However, being an incorrigible optimist (and a firm believer in human greed), I am sure that strategies ready to take advantage of a major flash crash will be developed; such algos would have prevented the extreme price gyrations during the Flash Crash.
Lessons learned: On Feb 18, 2011, the Joint CFTC-SEC Advisory Committee publishes Recommendations Regarding Regulatory Responses To The Market Events Of May 6, 2010.
Feb 26, 2010: SEC enacts Amendments to Regulation SHO that reintroduce a modified short sale uptick rule as a circuit breaker for distressed securities. More…
Regulation SHO Rule 201 is passed in the wake of the 2008 financial crisis (and, specifically, the Lehman Brothers failure). The rule forbids the execution or display of a short sale order of a security at a price that is less than or equal to the current national best bid if the price of that security decreases by 10% or more from the previous day’s closing price.
Opinion: The 1938 uptick rule ceased to make any sense in the fragmented U.S. stock market, where thousands of deals per second on fifty different venues may take place in a single security. Consequently, Rule 201 gives up on the idea of price change (since back-and-forth asynchronous price changes occur with high frequency in the decimalized market) and refers instead only to the price value itself (without referring to the “previous” price). The fact that it refers to the “current national best bid” makes it more conservative, as the NBBO is notoriously delayed and shows the best bid that is many milliseconds old (and, therefore, higher than the true current best bid in a downward market). The SHO Rule 201 uptick definition is styled after the 1994 NASD Rule 3350.
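As a sketch, the Rule 201 test reduces to two conditions. This is a simplified illustration (names are mine; a real implementation must also carry the restriction into the following trading day and handle the display requirement):

```python
def short_sale_allowed(order_price, national_best_bid, prev_close, day_low):
    """Simplified sketch of the Reg SHO Rule 201 circuit breaker.

    The breaker trips once the security trades 10% or more below the
    previous day's closing price; while tripped, a short sale may not
    execute at or below the current national best bid."""
    tripped = day_low <= prev_close * 0.90
    if not tripped:
        return True
    return order_price > national_best_bid

# Stock closed at $50.00 yesterday and has traded down to $44.00 today:
print(short_sale_allowed(44.01, 44.00, 50.00, 44.00))  # True: above the NBB
print(short_sale_allowed(44.00, 44.00, 50.00, 44.00))  # False: at the NBB
```

Note that nothing in the tripped-state test refers to the direction of the last price change — only to the level of the current national best bid, exactly as described above.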
Jan 14, 2010: SEC Concept Release on Equity Market Structure.
Nov 23, 2009: SEC Regulation of Non-Public Trading Interest proposed. More…
The proposed dark pool regulation; not adopted as of 2014.
Nov 18, 2009: Ciamac C. Moallemi, Mehmet Saglam, The Cost of Latency in High-Frequency Trading is published. More…
The paper discusses and quantifies the cost of stale information in trading. It presents a closed-form expression for the cost of market information latency in terms of parameters of the underlying asset.
2008: Dark order types evolve on visible exchanges (full iceberg, pegged, flash orders).
Oct 1, 2008: NYSE Euronext acquires the American Stock Exchange (Amex). More…
On Jan 17, 2008, NYSE Euronext announced it would acquire the AMEX for $260 million in stock; the acquisition is completed on Oct 1. Before the closing, NYSE Euronext announces that the AMEX would be integrated with the Alternext European small-cap exchange and renamed the NYSE Alternext US. In Mar 2009, NYSE Alternext US will be renamed to NYSE Amex Equities. On May 10, 2012, it will be renamed again to NYSE MKT LLC.
The transaction extends NYSE Euronext’s leadership in U.S. options, cash equities, and exchange-traded funds (ETFs), making it the third largest U.S. equity options marketplace based on number of contracts traded. After the merger, Amex listings trade directly on NYSE.
Apr 1, 2008: The Value Of A Millisecond.
Early 2008: Dark pools account for about 5% of U.S. equities market. More…
The total dark market (including dark pools, broker internalization, and dark orders on visible marketplaces) accounts for about 10% of U.S. equities market. This percentage will nearly triple in the next three years.
Oct 8, 2007: SEC Regulation NMS implementation complete. More…
Industry compliance with Rules 610 and 611 of NMS.
Jul 9, 2007: SEC Regulation NMS implementation begins. More…
Start of NMS Rules 610 and 611 implementation (Pilot Stocks phase).
Jun 13, 2007: SEC Regulation SHO and Rule 10a-1 enacted. The regulation completely eliminates and outlaws the short sale price test (Securities Exchange Act, Rule 10a-1).
2007: NYSE Group merges with Euronext, creating NYSE Euronext, the first trans-Atlantic stock exchange group.
2006: NYSE and Archipelago merge, creating NYSE Arca and forming the publicly owned, for-profit NYSE Group.
2005: SEC Regulation NMS adopted.
Late 2004: Block crossing networks account for 2% of US market.
Late 2004: Smart Order Router technology introduced.
Late 2004: Broker dealer internalization engine technology introduced.
Jul 28, 2004: SEC Regulation SHO enacted. The regulation introduces rules to restrict naked short selling. More…
The regulation is followed by numerous amendments enacted over the following years. The regulation adopted only a subset of Regulation SHO (proposed on Oct 28, 2003) and deferred the rest. In particular, Rule 202T sets up a pilot to investigate the need for a short sale price test (such as the “uptick rule”).
2002: First continuous match Dark Pool, Posit Now, created.
Apr 9, 2001: Nasdaq decimalization. More…
Completes the decimalization of the US equities market.
Jan 29, 2001: NYSE decimalization.
2000: Intercontinental Exchange is formed to develop a transparent marketplace for OTC energy markets.
2000: Euronext is formed from Amsterdam, Brussels, Paris and Portugal stock exchanges.
Dec 8, 1998: SEC Regulation ATS adopted.
Fall 1986: First dark pools appear as daily auctions.
Late 1971: James Tobin proposes a transaction tax designed to dampen short-term speculation in the currency markets. This is later considered as a means to eliminate HFT activity in financial markets. More…
James Tobin, the 1981 Nobel Prize in Economics laureate, suggested what later became known as the “Tobin tax” in his 1971 Janeway Lectures at Princeton, after the Bretton Woods system collapse. He elaborated the idea in the paper “A Proposal for International Monetary Reform” (Eastern Economic Journal, July-October 1978) and later clarified it in the FT article An Idea that Gained Currency but Lost Clarity (Financial Times, Sep 11, 2001).
The Tobin tax was originally defined as a tax on all Forex spot transactions, intended to discourage short-term currency speculation. A similar general financial transaction tax was later proposed as a means to reduce HFT activity in financial markets. The first HFT-specific Tobin tax is implemented by Italy on Sep 2, 2013.
Jan 24, 1938: “Exchange Act Release No. 1548, Rule 10a–1” restricts short selling in a declining market.
Jun 6, 1934: “Securities Exchange Act of 1934” is enacted by U.S. Congress. More…
The act and related statutes form the basis of regulation of the financial markets in the U.S. The act also established the Securities and Exchange Commission.
Sep 1871: NYSE starts continuous trading. More…
NYSE adopts a system of continuous trading, replacing the twice-daily call auctions used since 1817. As part of the new system, brokers dealing in a particular stock remain at one location on the trading floor, giving rise to the specialist.
Continuous trading is a fundamental prerequisite for high frequency trading.
May 17, 1792: The Buttonwood Agreement, a precursor of the New York Stock Exchange, is signed. More…
The agreement to trade securities is signed by twenty-four brokers outside of 68 Wall Street under a buttonwood tree. It has two provisions: 1) the brokers are to deal only with each other, thereby eliminating the auctioneers, and 2) the commissions are set to 0.25%. In 1817 the organization drafts its constitution, rents rooms at 40 Wall Street, and names itself the “New York Stock & Exchange Board”. In 1863 this name is shortened to “New York Stock Exchange”.