From The New Atlantis, Spring 2025 edition:
How to see clearly what lies ahead
Part 1 of “Will AI Be Alive?”
“Burner of ashes” — there’s a job description one rarely sees these days. Yet going back to the Bronze Age, the ash burner had an essential if dirty job. Soaking wood ashes in water in a pot, then filtering the liquid and boiling off the water, yields potash, useful in making dyes, soap, glass, and fertilizer. From burnt wood to fertilizer: out of the ashes comes new life.
Just as Smith and Baker are names derived from occupations, so is Ashburner in English and Aschenbrenner in German. The person today who perhaps best lives up to the family name is Leopold Aschenbrenner. A child prodigy who started at Columbia University at the age of fifteen and graduated in 2021 as a nineteen-year-old valedictorian, Aschenbrenner found his way to a job at OpenAI. He was fired in April 2024 — in his telling, for seeking feedback on a safety research document from outside experts, which OpenAI saw as leaking sensitive information.
Aschenbrenner responded to his firing by founding an investment fund for artificial general intelligence (AGI), launched alongside the 165-page “Situational Awareness: The Decade Ahead.” The term “situational awareness” is used in military and business contexts to describe the kind of rich understanding of an environment that allows one to plan effectively and make decisions. Aschenbrenner’s essay series shares his insider situational awareness of where AI is headed, with AGI “strikingly plausible” by 2027 and artificial superintelligence — automated development of AI by AGI — coming hot on its heels.
Silicon sand is turning into new life. Aschenbrenner writes convincingly and with grave concern that the world is not ready for the forces soon to be unleashed.
It is quite possible that he is wrong. Past performance is no guarantee of future results, and what can’t go on forever, won’t. But equally relevant here are the maxims “forewarned is forearmed” and “better safe than sorry.” An unlikely but dangerous outcome may still demand our attention. This is especially the case because Aschenbrenner’s argument that AGI will likely be here soon is so straightforward.
It goes like this. The difference between the scant capabilities of OpenAI’s GPT-2 in 2019 and the astonishing capabilities of GPT-4 in 2023 is five orders of magnitude of effective compute — a measure of a system’s actual computational power, including not only the raw power of its hardware but the efficiency of its software, the level of resource overhead, and so on. On this basis, there are two things we have good reason to expect:
- Another increase of five orders of magnitude in effective compute is possible on a similar timescale — that is, by 2027. Recall that an order of magnitude is shorthand for a tenfold increase or decrease in a given property. Even small numbers here make for large differences: If a T-Rex is twelve feet tall and a rooster is two, then the tyrant king is not even a single order of magnitude taller than the barnyard cock.
- This increase will lead to a comparable increase in the capabilities of the AI. If GPT-2 could do as well on computational tasks as a preschooler, and GPT-4 can beat most high schoolers on standardized tests, then an equivalent jump to GPT-X will likely “take us to models that can outperform PhDs” and experts in many fields, Aschenbrenner explains. Wharton professor Ethan Mollick claims that OpenAI’s o1 model, released last fall, already “can solve some PhD-level problems and has clear applications in science, finance & other high value fields.”
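The order-of-magnitude reasoning above is easy to check for oneself. A minimal sketch in Python (the five-orders-of-magnitude figure is Aschenbrenner's estimate of effective compute, not a measurement; the variable names are ours):

```python
import math

def orders_of_magnitude(ratio):
    """Number of tenfold increases contained in a ratio."""
    return math.log10(ratio)

# GPT-2 (2019) to GPT-4 (2023): five orders of magnitude of effective
# compute over roughly four years, per Aschenbrenner's estimate.
gpt2_to_gpt4 = 10 ** 5            # a 100,000-fold increase
per_year = gpt2_to_gpt4 ** (1 / 4)  # implied annual growth factor, ~17.8x

# The T-Rex vs. rooster comparison: 12 ft / 2 ft is only a 6x ratio,
# less than a single order of magnitude.
trex_vs_rooster = orders_of_magnitude(12 / 2)
```

The point of the barnyard example survives the arithmetic: a sixfold difference in height, dramatic as it looks, is smaller than even one of the five tenfold jumps separating GPT-2 from GPT-4.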
For the effective compute of AI models to increase by another five orders of magnitude in a few years, two big hurdles would have to be cleared: securing vastly more physical computing power and developing more efficient algorithms. Let’s consider each in turn.
Continuing the dramatic increase in computational ability of the last few years would require a preposterously large number of dedicated computer chips and enormous amounts of electricity. In 2023, Microsoft committed to buy electricity from Helion Energy, whose nuclear fusion power plant is still in the works and for which OpenAI’s Sam Altman had previously helped raise $500 million; in March 2024, Amazon paid $650 million for a data center conveniently located adjacent to a nuclear power station in Pennsylvania.
Those sums together are well over two orders of magnitude smaller than the $500 billion that OpenAI, Oracle, and two other firms have committed over the next four years to the new Stargate project announced by President Trump only days after he took office this year. For ease of comparison, let’s say that’s $100 billion per year. This is an order of magnitude less than the trillion dollars per year that Aschenbrenner expects would be needed to produce enough computing power for AGI, which would use 20 percent or more of the total amount of electricity currently produced in the United States.
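A quick back-of-the-envelope check of the dollar comparisons, using only the figures reported above (the variable names are ours, and the trillion-dollar figure is Aschenbrenner's projection):

```python
import math

helion_raise = 500e6        # raised for Helion with Altman's help
amazon_purchase = 650e6     # Amazon's Pennsylvania data center
earlier_sums = helion_raise + amazon_purchase  # $1.15 billion

stargate_total = 500e9              # committed over four years
stargate_per_year = stargate_total / 4  # $125 billion; the essay rounds to $100 billion

agi_estimate_per_year = 1e12        # Aschenbrenner's roughly trillion-dollar-a-year figure

# "well over two orders of magnitude smaller": ~2.6
gap_to_stargate = math.log10(stargate_total / earlier_sums)

# "an order of magnitude less": exactly 1, using the rounded $100 billion
gap_to_agi = math.log10(agi_estimate_per_year / 100e9)
```

The arithmetic bears the essay out: the 2023–24 purchases are a rounding error next to Stargate, which is itself a tenth of what Aschenbrenner says AGI-scale compute would cost.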
This may seem laughably large, an impossible goal. We are not presently on trend to attain it, that much is clear. In 2024, Goldman Sachs released a report estimating that U.S. power demand will grow only 2.4 percent by the end of the decade, with data centers making up a large part of that demand but AI using only a fifth of the increase. Tech companies are building where the electricity is and the regulators aren’t, planning data centers in Saudi Arabia and the United Arab Emirates. Aschenbrenner would prefer not to see our tech secrets handed over to America’s frenemies, and he both calls for and expects a new Manhattan Project for AGI. Project Stargate, set to be centered in Texas, looks like a step in that direction…