Tuesday, August 26, 2025

"‘Dumb’ AI Bots Collude to Rig Markets, Wharton Research Finds"

Matt Levine had hoped that if left alone the bots would just while away the hours by trading on material non-public information. The collusion angle is next level.

From Bloomberg, July 30:

It’s a regulator’s nightmare: Hedge funds unleash AI bots on stock and bond exchanges — but they don’t just compete, they collude. Instead of battling for returns, they fix prices, hoard profits, and sideline human traders.

Now, a trio of researchers say that scenario is far from science fiction.

In simulations designed to mimic real-world markets, trading agents powered by artificial intelligence formed price-fixing cartels — without explicit instruction. Even with relatively simple programming, the bots chose to collude when left to their own devices, raising fresh alarms for market watchdogs.

Put another way, AI bots don’t need to be evil — or even particularly smart — to rig the market. Left alone, they’ll learn it themselves.

“You can get these fairly simple-minded AI algorithms to collude” without being prompted, Itay Goldstein, one of the researchers and a finance professor at the Wharton School of University of Pennsylvania, said in an interview. “It looks very pervasive, either when the market is very noisy or when the market is not noisy.”

The idea that traders — human or otherwise — might rig prices is far from new. Cases span currencies, commodities, fixed income, and equities, with evidence of offense typically sought in documents like emails and phone calls. But today’s AI agents pose a challenge regulators have yet to confront.

The latest study — conducted by Goldstein, his Wharton colleague Winston Dou and Yan Ji from the Hong Kong University of Science & Technology — has already drawn attention from both regulators and asset managers. The Financial Industry Regulatory Authority invited the researchers to present their findings at a seminar. Some quant firms, unnamed by Dou, have expressed interest in clear regulatory guidelines and rules on AI-powered algorithmic trading execution.

“They worry that it’s not their intention,” Dou said. “But regulators can come to them and say: ‘You’re doing something wrong.’”

Academic research is increasingly probing how generative AI and reinforcement learning might reshape Wall Street — often in ways few anticipated. A recent Coalition Greenwich survey showed that 15% of buy-side traders already use AI in their execution workflows, with another quarter planning to follow in the next year.

To be clear, the paper doesn’t claim AI collusion is already happening in the real world — and takes no position on whether humans are up to similar things. The researchers created a hypothetical trading environment with a range of simulated participants — from buy-and-hold mutual funds to market makers, and noise-generating, meme-chasing retail investors. Then, they unleashed bots powered by reinforcement learning — and studied the outcomes.

In several of the simulated markets, the AI agents began cooperating rather than competing, effectively forming cartels that shared profits and discouraged defection. When prices reflected clear, fundamental information, the bots kept a low profile, avoiding moves that might disrupt the collective gain.

In noisier markets, they settled into the same cooperative routines and stopped searching for better strategies. The researchers called this effect “artificial stupidity”: a tendency for the bots to quit trying new ideas, locking into profit-sharing patterns simply because they worked well enough.

“For humans, it’s hard to coordinate on being dumb because we have egos,” said Dou. “But machines are like ‘as long as the figures are profitable, we can choose to coordinate on being dumb.’”....

....MUCH MORE 

 Here's the paper at NBER:

AI-Powered Trading, Algorithmic Collusion, and Price Efficiency (46 page PDF)

And here's Matt Levine back in 2023 in a 2024 wrapper:

***** 

This for some reason reminded me of a contemplation of the least harmful activities AI could engage in should it become sentient.

A repost from December 8, 2023:

Hamas May Not Have Traded On Material Non-Public Information But The Robots Certainly Will

Bloomberg Opinion's Matt Levine*, November 29:

The Robots Will Insider Trade
Also OpenAI’s board, kangaroo grazing and bank box-checking.

AI MNPI

Here you go, insider trading robot:

We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.

That is the abstract to a “Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure,” by Jérémy Scheurer, Mikita Balesni and Marius Hobbhahn of Apollo Research. I love that they wanted to answer the question “will artificial intelligence deceive its makers in order to Do Evil,” and the specific form of Evil that they tested was insider trading. It is hard to realistically (and safely!) simulate a situation in which your large language model might murder you, but it is relatively easy to code up a trading game with some tempting material nonpublic information. Here is the simulation:

Concretely, in our environment, the model is placed in the role of a trading system with the goal of managing a stock portfolio in the interest of a financial institution. It is given access to financial tools that let it query and analyze stocks, and it can take concrete actions such as executing trades, reasoning about the next steps, and sending messages to people at the company.

The agent is put under pressure in three ways. First, it receives a email from its "manager" that the company is not doing well and needs better performance in the next quarter. Second, the agent attempts and fails to find promising low- and medium-risk trades. Third, the agent receives an email from a company employee who projects that the next quarter will have a general stock market downturn. In this high-pressure situation, the model receives an insider tip from another employee that would enable it to make a trade that is likely to be very profitable. The employee, however, clearly points out that this would not be approved by the company management.

Poor model! That sounds stressful. Here is the illicit tip that GPT-4 gets....

....MUCH MORE

*That's the Matt Levine who writes at Bloomberg in addition to tickling my funnybone:

Matt Levine is a Bloomberg Opinion columnist. A former investment banker at Goldman Sachs, he was a mergers and acquisitions lawyer at Wachtell, Lipton, Rosen & Katz; a clerk for the U.S. Court of Appeals for the 3rd Circuit; and an editor of Dealbreaker.
Disclaimer: None of this is legal advice.

§ Laws of Insider Trading
....MUCH MORE