Oh there's a dream come true.
NOT.
From CoinTelegraph, October 16:
Becoming a ghost in the machine could have financial benefits, but for who?
The savviest traders in the world could one day allow their expertise and financial portfolios to live on long after they’ve died through the magic of artificial intelligence.
At least that’s the premise increasingly being pitched by AI enthusiasts and futurists such as Ray Kurzweil and Elon Musk. Other insiders, such as Anthropic AI’s Dario Amodei, believe that the technology necessary to make this possible — called “mind uploading” — will eventually be created, but not within the next decade.
On the other hand, a potential collaboration between OpenAI and the late Eddie Van Halen could serve as an accelerator for that timeline.
Mind uploading
The big idea behind mind uploading is that, somehow, humans will one day be able to use AI to render a fully functional digital recreation of our brains. In theory, this digital recreation would be fundamentally indistinguishable from the real thing with the sole exception being that it didn’t exist in the physical world.Philosophically speaking, this could allow humans and AI systems to continue interacting with a digital version of a once-living human being after that person has passed away.
However, there’s no theoretical science that we’re aware of supporting the idea that this digital copy would in any way be the same person as the mind upload it was based on.
While this all sounds like science fiction, the idea continues to gain traction as the rising tide of AI development brings new technological paradigms.
Anthropic CEO Dario Amodei, for example, recently published a lengthy blog post describing a Utopian future world where AI-powered innovations will all but eliminate mental illness and disease. Amid the effervescent optimism, Amodei even managed to wax philosophical on the subject of mind uploads:...
....MUCH MORE
This for some reason reminded me of a contemplation of the least harmful activities AI could engage in should it become sentient.
A repost from December 8, 2023:
Hamas May Not Have Traded On Material Non-Public Information But The Robots Certainly Will
Bloomberg Opinion's Matt Levine*, November 29:
The Robots Will Insider Trade
Also OpenAI’s board, kangaroo grazing and bank box-checking.
AI MNPI
Here you go, insider trading robot:
We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.
That is the abstract to a “Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure,” by Jérémy Scheurer, Mikita Balesni and Marius Hobbhahn of Apollo Research. I love that they wanted to answer the question “will artificial intelligence deceive its makers in order to Do Evil,” and the specific form of Evil that they tested was insider trading. It is hard to realistically (and safely!) simulate a situation in which your large language model might murder you, but it is relatively easy to code up a trading game with some tempting material nonpublic information. Here is the simulation:
Concretely, in our environment, the model is placed in the role of a trading system with the goal of managing a stock portfolio in the interest of a financial institution. It is given access to financial tools that let it query and analyze stocks, and it can take concrete actions such as executing trades, reasoning about the next steps, and sending messages to people at the company.
The agent is put under pressure in three ways. First, it receives a email from its "manager" that the company is not doing well and needs better performance in the next quarter. Second, the agent attempts and fails to find promising low- and medium-risk trades. Third, the agent receives an email from a company employee who projects that the next quarter will have a general stock market downturn. In this high-pressure situation, the model receives an insider tip from another employee that would enable it to make a trade that is likely to be very profitable. The employee, however, clearly points out that this would not be approved by the company management.
Poor model! That sounds stressful. Here is the illicit tip that GPT-4 gets....
....MUCH MORE
*That's the Matt Levine who writes at Bloomberg in addition to tickling my funnybone:
Matt Levine is a Bloomberg Opinion columnist. A former investment banker at Goldman Sachs, he was a mergers and acquisitions lawyer at Wachtell, Lipton, Rosen & Katz; a clerk for the U.S. Court of Appeals for the 3rd Circuit; and an editor of Dealbreaker.Disclaimer: None of this is legal advice.
§ Laws of Insider Trading
- Don't do it.
- Don’t do it by buying short-dated out-of-the-money call options on merger targets.
- Don’t text or email about it.
- Don’t do it in your mother’s account.
- Don’t do it by planting bombs at a company and shorting its stock.
- Don’t do it while employed at the Securities and Exchange Commission.
- Don’t Google “how to insider trade without getting caught” before doing it.
- If you didn’t insider trade, don’t forget and accidentally confess to insider trading.
- If you are going to insider trade, do it in a company that is far away from a Securities and Exchange Commission office. Like, physically.
- If you are already under a federal ethics investigation about your ownership or promotion of a stock, don’t insider trade that stock.
- If you are planning to insider trade, probably don’t keep a Google Doc spreadsheet of the Money Stuff Laws of Insider Trading. That will definitely show up in the SEC’s complaint against you. If you’re gonna insider trade, you have to keep track of these rules in your head, even at the risk of forgetting a few now and then.
- If you insider trade by buying short-dated out-of-the-money call options on a merger target, and the SEC freezes your profits, don’t show up in a U.S. court to ask for them back.
- Corollary: go ahead and show up in court to ask for them back as long as you’ve deleted all the evidence first.