The heart of the New York Stock Exchange is a bank of computers located in New Jersey. When someone issues a buy or sell order, it is transmitted to a computer that looks for an opposing order that matches it. This functionality is referred to as a “matching engine.” The computer thus performs the role played so famously by traders amid the tumult of the trading floor. It can work more quickly and efficiently, resulting in a narrower spread between the price a seller asks and the price a buyer is willing to pay.
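To make the concept concrete, here is a minimal sketch of what order matching can look like. This is purely illustrative Python; the class, method names, and price levels are invented for the example and have nothing to do with the exchange’s actual software.

```python
# Toy illustration of a matching engine: resting buy and sell orders sit in an
# order book, and an incoming order is paired with the best price on the
# opposite side.
import heapq

class ToyMatchingEngine:
    def __init__(self):
        self.asks = []  # min-heap of (price, quantity) resting sell orders
        self.bids = []  # max-heap of (-price, quantity) resting buy orders

    def submit_buy(self, price, qty):
        # Match against the cheapest resting sell orders first.
        while qty > 0 and self.asks and self.asks[0][0] <= price:
            ask_price, ask_qty = heapq.heappop(self.asks)
            traded = min(qty, ask_qty)
            print(f"trade: {traded} shares at {ask_price:.2f}")
            qty -= traded
            if ask_qty > traded:
                heapq.heappush(self.asks, (ask_price, ask_qty - traded))
        if qty > 0:  # anything unmatched rests in the book
            heapq.heappush(self.bids, (-price, qty))

    def submit_sell(self, price, qty):
        # Match against the highest resting buy orders first.
        while qty > 0 and self.bids and -self.bids[0][0] >= price:
            neg_bid, bid_qty = heapq.heappop(self.bids)
            traded = min(qty, bid_qty)
            print(f"trade: {traded} shares at {-neg_bid:.2f}")
            qty -= traded
            if bid_qty > traded:
                heapq.heappush(self.bids, (-neg_bid, bid_qty - traded))
        if qty > 0:
            heapq.heappush(self.asks, (price, qty))

engine = ToyMatchingEngine()
engine.submit_sell(10.01, 100)  # a seller asks 10.01 for 100 shares
engine.submit_buy(10.02, 60)    # a buyer willing to pay 10.02 is matched at 10.01
```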
A lot of traffic on the exchanges consists of humans issuing buy or sell orders for transmission to the matching engine, but more than half of it comes from what is referred to as algorithmic trading: computers programmed to buy and sell according to predetermined algorithms now generate the majority of trading activity. MacKenzie’s article describes the main classes of algorithms in use and discusses the wisdom of allowing computer-driven trading to dominate the exchanges.
There are valid reasons for resorting to algorithmic trading. The most obvious example involves a large institution, such as a mutual fund or pension fund, that wishes to change its portfolio. Because of its size, any attempt to buy or sell a significant number of shares will itself move the market: buying drives the share price higher, while selling drives it lower. This is referred to as “slippage,” and it costs the organization making the transaction money. If other participants recognize that a large block of shares is being moved, they gain leverage to hold out for a better price. For this reason these institutions have developed what are referred to as “execution algorithms,” whose sole purpose is to parcel out buy or sell orders in a fashion that minimizes slippage. This usually means choosing the size and timing of each order so that it keeps a low profile and disturbs the expected price as little as possible. A computer does this best, and in doing so it merely mimics what human traders have always done.
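As a rough illustration, the sketch below shows the simplest kind of execution algorithm: a time-sliced schedule that parcels a large parent order into small child orders, with a little randomness added so the pattern is harder to spot. The function name, slice sizes, and jitter are invented for the example, not taken from any real trading system.

```python
# Illustrative only: slice a large parent order into small, roughly even child
# orders so that no single order reveals the institution's full intent.
import random

def twap_schedule(total_shares, minutes, jitter=0.2):
    """Return a list of (minute, child_order_size) pairs.

    A small random jitter is applied to each slice so the sequence of orders
    is less regular and harder for other algorithms to detect.
    """
    base = total_shares // minutes
    schedule = []
    remaining = total_shares
    for minute in range(minutes):
        if minute == minutes - 1:
            size = remaining  # send whatever is left in the final slice
        else:
            size = max(1, int(base * (1 + random.uniform(-jitter, jitter))))
            size = min(size, remaining)
        schedule.append((minute, size))
        remaining -= size
    return schedule

# Sell 1,000,000 shares over the 390 minutes of a trading day.
for minute, size in twap_schedule(1_000_000, 390)[:5]:
    print(f"minute {minute}: sell {size} shares")
```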
The next class of algorithms is what MacKenzie refers to as “statistical arbitrage” algorithms. These look for small, temporary drifts of a stock’s price away from its expected running average. If the price is lower than expected, the algorithm can send out a buy order on the assumption that a profit will be made when the price returns to its expected average value. Such algorithms can be extended to cover multiple stocks whose prices are expected to have a well-defined correlation.
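The mean-reversion idea behind the single-stock case can be sketched in a few lines. The window and threshold below are invented, illustrative values, not anyone’s actual trading rule.

```python
# Illustrative statistical-arbitrage signal: compare the latest price with a
# trailing running average and trade on the assumption that the gap will close.
from collections import deque

def mean_reversion_signal(prices, window=20, threshold=0.01):
    """Yield (index, 'buy'/'sell') whenever the price drifts more than
    `threshold` (as a fraction) away from its trailing running average."""
    recent = deque(maxlen=window)
    for i, price in enumerate(prices):
        if len(recent) == window:
            avg = sum(recent) / window
            if price < avg * (1 - threshold):
                yield i, "buy"    # price looks temporarily cheap
            elif price > avg * (1 + threshold):
                yield i, "sell"   # price looks temporarily rich
        recent.append(price)

prices = [100.0, 100.1, 99.9] * 7 + [98.5, 101.6]
for i, action in mean_reversion_signal(prices):
    print(i, action)
```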
There are also algorithms that prey on other algorithms, a practice MacKenzie refers to as “algo-sniffing.” The example provided involves someone making a significant purchase of a given stock through an execution algorithm. Another trader detects the pattern and buys the stock ahead of the execution algorithm, driving the price up and profiting by selling later at the higher price.
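Very loosely, an algo-sniffer watches the stream of trades for patterns that betray an execution algorithm at work, such as a run of similarly sized orders on the same side. The detection rule below is invented purely for illustration.

```python
# Illustrative only: flag a run of similarly sized buy trades, which might
# betray an execution algorithm quietly working a much larger parent order.
def looks_like_execution_algo(trades, run_length=5, size_tolerance=0.15):
    """`trades` is a list of (side, size). Return True if the last
    `run_length` trades are all on the same side with sizes within
    `size_tolerance` of their mean."""
    if len(trades) < run_length:
        return False
    recent = trades[-run_length:]
    if len({side for side, _ in recent}) != 1:
        return False
    sizes = [size for _, size in recent]
    mean = sum(sizes) / len(sizes)
    return all(abs(s - mean) <= size_tolerance * mean for s in sizes)

tape = [("buy", 2500), ("buy", 2600), ("buy", 2450), ("buy", 2550), ("buy", 2500)]
print(looks_like_execution_algo(tape))  # True: someone may be working a big buy
```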
The above methods are clearly legal. There is one approach, called “spoofing,” that perhaps crosses the line.
“A spoofer might, for instance, buy a block of shares and then issue a large number of buy orders for the same shares at prices just fractions below the current market price. Other algorithms and human traders would then see far more orders to buy the shares in question than orders to sell them, and be likely to conclude that their price was going to rise. They might then buy the shares themselves, causing the price to rise. When it did so, the spoofer would cancel its buy orders and sell the shares it held at a profit. It’s very hard to determine just how much of this kind of thing goes on, but it certainly happens.”
What all of the above algorithms require is instant access to market data and the ability to immediately respond. The time unit of interest in these trading mechanisms has become the millisecond. In fact, traders are trying to move transactions into the microsecond range on the assumption that he who is fastest has the financial advantage. Clearly, these are speeds that can only be handled by a computer—and not just any computer. Since transmission times to remote cities are too long to be effective, the computer must be co-located with the servers providing the market data—rendering human intervention even more irrelevant.
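The arithmetic behind co-location is simple. Even at the speed of light in optical fibre, roughly 200,000 km per second, a signal from a distant city needs several milliseconds each way, which is an eternity when rivals in the same data centre respond in microseconds. The distances used below are rough, illustrative figures, not measured values.

```python
# Back-of-the-envelope latency: why distance alone rules out remote trading at
# millisecond timescales. Speeds and distances are rough, illustrative values.
FIBRE_SPEED_KM_PER_S = 200_000  # light travels at roughly 2/3 of c in fibre

def one_way_delay_ms(distance_km):
    return distance_km / FIBRE_SPEED_KM_PER_S * 1000

for place, km in [("co-located server", 0.1),
                  ("Chicago",           1_150),
                  ("London",            5_600)]:
    delay = one_way_delay_ms(km)
    print(f"{place:>18}: ~{delay:.3f} ms one way, ~{2 * delay:.3f} ms round trip")
```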
The obvious question to ask at this point is “What good is all this high frequency trading?”
“Tales of computers out of control are a well-worn fictional theme, so it’s important to emphasise that it is not at all clear that automated trading is any more dangerous than the human trading it is replacing. If the danger had increased, one way it would manifest itself is in higher volatility of the prices of shares traded algorithmically. The evidence on that is not conclusive – like-for-like comparison is obviously hard, and the academic literature on automated trading is still small – but data we do have suggest, if anything, that automated trading reduces volatility. For example, statistical arbitrage algorithms that buy when prices fall and sell when they rise can normally be expected to dampen volatility.”
“The bulk of the research also suggests that automated trading makes the buying and selling of shares cheaper and usually easier. Renting rack space in a data centre may be expensive, but not nearly as expensive as employing dozens of well-paid human traders. Twenty years ago the ‘spread’ between the price at which a human market maker would buy and sell a share was sometimes as much as 25 cents; the fact that it is now often as little as one cent means substantial savings for mutual funds, pension funds and other large institutions, almost certainly outweighing by far their losses to algo-sniffers. When assessed on criteria such as the cost of trading, the effects of automation are probably beneficial nearly all of the time.”
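A back-of-the-envelope calculation suggests what that narrowing of the spread is worth to a large institution. The share count below is invented for illustration; the 25-cent and 1-cent spreads are the figures quoted above.

```python
# Rough arithmetic: the cost of crossing the spread on a 1,000,000-share order,
# using the 25-cent (pre-automation) and 1-cent (current) spreads quoted above.
shares = 1_000_000
for spread in (0.25, 0.01):
    # Crossing the spread costs roughly half the spread per share relative to
    # the mid-price.
    cost = shares * spread / 2
    print(f"spread ${spread:.2f}: ~${cost:,.0f} paid away to market makers")
```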
This makes for an encouraging story, but, as MacKenzie points out, things can go wrong. On May 6, 2010, a company issued an exceptionally large sell order using a rather common execution algorithm. What happened next is complicated, and not all observers agree with the conclusions of those who studied the event. MacKenzie discusses it in detail. It will suffice here to say that the net effect of all the millisecond-timescale decisions made by all the algorithms in play was to destabilize the market. Strange things occurred, but the exchanges had anticipated such an event and had safety measures that could counter it. A complete meltdown was avoided and stability was regained.
What is both interesting and troubling about the author’s discussion is the apparent conclusion that the trading system, with its algorithms active on multiple, interconnected trading sites, may be too complicated to be rendered completely safe.
“As Steve Wunsch, one of the pioneers of electronic exchanges, put it....US share trading ‘is now so complex as a system that no one can predict what will happen when something new is added to it, no matter how much vetting is done.’ If Wunsch is correct, there is a risk that attempts to make the system safer – by trying to find mechanisms that would prevent a repetition of last May’s events, for example – may have unforeseen and unintended consequences.”
MacKenzie doesn’t buy that logic and is worried by the fact that little has been done to prevent a similar situation from developing again.
“There has been no full-blown stock-market crisis since October 1987: last May’s events were not on that scale. But as yet we have done little to ensure that there won’t be another.”
If one steps way back and ponders the content of this article, one is led to the conclusion that financial manipulations that benefit only a few, and that contribute little or nothing to the economy, are again putting us at risk of a financial disaster. Regulation seems necessary. If the system that has evolved is too complicated to regulate, then the system needs to be simplified. Perhaps that is all the regulation required.