Saturday, May 8, 2021

Trading Faster: "The War Among Algorithms"

From Berfrois, April 23:

Twenty years ago, a financial trader was still usually a human being, either sharing a trading pit along with dozens or hundreds of other sweaty human bodies, or sitting at a computer terminal, talking into a telephone and buying and selling with keyboard and mouse. A decade later, digital algorithms had made decisive inroads into trading, but those algorithms still mostly ran on conventional computer systems. Nowadays, a trader is very often a specialised silicon chip known as an FPGA, or field-programmable gate array, such as the large, square chip at the centre of this photograph, coated with white paste that had held a cover in place.

The FPGAs that do so much of today’s trading are mainly to be found in about two dozen anonymous, warehouse-like buildings in and around Chicago, New York, London, Frankfurt and other major global financial centres. To walk through one of these computer datacentres is to listen to the hum of tens of thousands of computer servers in row upon endless row of metal cages and to glimpse the incomprehensible spaghetti of cables that interconnect the machines packed into those cages. When I first did so, in October 2014, I was still struggling to find a way of understanding the complex new world of ultrafast trading algorithms that was evolving.

I’ve gradually come to realise that one way of making sense of it is to focus on two of the different species of trading algorithm that are run on FPGAs and other forms of ultrafast computer hardware. One species is ‘market-making’ algorithms. Their chief characteristic is that they continuously bid to buy the stock or other financial instrument being traded and also continuously offer to sell it, at a marginally higher price. Consider, for example, the trading of a US stock. At almost any point in time, there will be an array of bids to buy it at, for instance, $31.48, $31.49 and $31.50, and a corresponding array of offers to sell at $31.51, $31.52, and so on. Many, perhaps most, of those bids and offers will have been placed by market-making algorithms. An example of a successful operation by such an algorithm would be to buy a hundred shares (a standard size of trade in US stocks) at $31.50 and to sell them at $31.51. With a profit per share traded of as little as a single cent, a trading firm that specialises in market-making needs to buy and sell on a huge scale to earn any substantial amount of money. The largest such firms trade hundreds of millions or even billions of shares every day, which is one reason the activity is often called ‘high-frequency trading’ or HFT.

The second species of algorithm would be referred to by market participants as ‘taking’, ‘removing’ or ‘aggressive’. Like their market-making cousins, these algorithms constantly process data on price movements, but unlike them they aren’t always trying to trade. They wait until their calculations identify a probable profit opportunity and then they pounce. If, for example, that opportunity involves buying shares, a ‘taking’ algorithm will typically do so by snapping up as many as it can of the offers from market-making algorithms. Should the taking algorithm’s calculations be correct, prices will rise, and – perhaps after a few seconds, or maybe even a few minutes, which is a long time in today’s frantic trading – it will then sell the shares it has just bought. The market-making algorithms from which the taking algorithm has bought those shares have been left nursing a loss. They don’t, however, succumb to the dangerous human temptation to hold on to a trading position in the hope that the loss can be recovered. Unemotionally, they will seek to close that position by buying shares even at the new higher price and even from the ‘taking’ algorithm that has, in market parlance, just ‘picked them off’ or ‘run them over’.

The divide between market-making and taking is not absolute – for example, a fast market-making algorithm can ‘pick off’ its slower siblings.....

....MUCH MORE