From AI Scenarios, December 3:
Abstract
We model national
strategies and geopolitical outcomes under differing assumptions about
AI development. We put particular focus on scenarios with rapid progress
that enables highly automated AI R&D and provides substantial
military capabilities.
Under non-cooperative assumptions—concretely, if international
coordination mechanisms capable of preventing the development of
dangerous AI capabilities are not established—superpowers are likely to
engage in a race for AI systems offering an overwhelming strategic
advantage over all other actors.
If such systems prove feasible, this dynamic leads to one of three outcomes:
- One superpower achieves unchallengeable global dominance;
- Trailing superpowers facing imminent defeat launch a preventive or preemptive attack, sparking conflict among major powers;
- Loss-of-control of powerful AI systems leads to catastrophic outcomes such as human extinction.
Middle powers, lacking the muscle both to compete in an AI race and to deter AI development through unilateral pressure, find their security
entirely dependent on factors outside their control: a superpower must
prevail in the race without triggering devastating conflict,
successfully navigate loss-of-control risks, and subsequently respect
the middle power's sovereignty despite possessing overwhelming power to
do otherwise.
Executive summary
We model how AI development will shape national strategies and geopolitical outcomes, assuming that dangerous AI development is not prevented through international coordination mechanisms. We put particular focus on scenarios with rapid progress that enables highly automated AI R&D and provides substantial military capabilities.
Race to artificial superintelligence
If the key bottlenecks of AI R&D are automated, a single factor will drive the advancement of all strategically relevant capabilities: the proficiency of an actor's strongest AI at AI R&D. That proficiency can be translated into overwhelming military capabilities.
As a result, if international coordination mechanisms capable of
preventing the development of dangerous AI capabilities are not
established, superpowers are likely to engage in a race to artificial
superintelligence (ASI), attempting to be the first to develop AI
sufficiently advanced to offer them a decisive strategic advantage over
all other actors.
This naturally leads to one of two outcomes: either the "winner" of the AI race achieves permanent global dominance, or it loses control of its AI systems, leading to humanity's extinction or permanent disempowerment.
In this race, lagging actors are unlikely to stand by and watch as
the leader gains a rapidly widening advantage. If AI progress turns out
to be easily predictable, or if the leader in the race fails to thoroughly obfuscate the state of its AI program, at some point it will become clear to the laggards that they are going to lose, and that they have one last chance to prevent the leader from achieving permanent global dominance.
This produces one more likely outcome: one of the laggards in the AI
race launches a preventive or preemptive attack aimed at disrupting the
leader's AI program, sparking a highly destructive major power war.
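A minimal toy sketch of why the leader's advantage widens so quickly once AI R&D is automated (this is our own illustration with assumed growth numbers, not the paper's model): when the rate of progress is proportional to current capability, even a small head start compounds into a decisive absolute gap.

```python
# Toy illustration (assumed numbers, not the paper's model): once AI R&D
# is automated, progress rate is proportional to current capability, so a
# small initial lead compounds into a rapidly widening absolute gap.

def simulate(c0, growth_rate=0.5, months=24):
    """Capability trajectory under automated R&D: C grows by a fixed
    fraction of its current value each month."""
    caps = [c0]
    for _ in range(months):
        caps.append(caps[-1] * (1 + growth_rate))
    return caps

leader = simulate(c0=1.10)   # leader starts 10% ahead (assumed)
laggard = simulate(c0=1.00)

for month in (0, 6, 12, 24):
    gap = leader[month] - laggard[month]
    print(f"month {month:2d}: leader={leader[month]:9.1f} "
          f"laggard={laggard[month]:9.1f} gap={gap:9.1f}")
```

With these assumed numbers, a 10% head start already exceeds the laggard's entire starting capability within about six months, and the absolute gap keeps accelerating from there.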
Middle power strategies
Middle powers generally lack the muscle both to compete in an AI race and to deter AI development through unilateral pressure.
While there are some exceptions, none can robustly deter superpowers
from participating in an AI race. Some actors, like Taiwan, the
Netherlands, and South Korea, possess critical roles in the AI supply
chain; they could delay AI programs by denying them access to the
resources required to perform AI R&D. However, superpowers are likely to develop domestic supply chains within a handful of years.
Some middle powers hold significant nuclear arsenals and could use
them to deter dangerous AI development if they were sufficiently
concerned. However, any nuclear redlines that can be imposed on
uncooperative actors would necessarily be both hazy and terminal (as
opposed to incremental), rendering the resulting deterrence exceedingly
shaky.
Middle powers in this predicament may resort to a strategy we call Vassal's Wager:
allying with one superpower in the hope that it "wins" the ASI race. However, with this strategy, a middle power would surrender most of its agency and wager its national security on factors beyond its control. For this to work out in the middle power's favor, the superpower "patron" must simultaneously be the first to achieve overwhelming AI capabilities, avert loss-of-control risks, and avoid war with its rivals.
Even if all of this were to go right, there would be no guarantee
that the winning superpower would respect the middle power's
sovereignty. In this scenario, the "vassals" would have absolutely no
recourse against any actions taken by an ASI-wielding superpower.
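To see how demanding this chain of conditions is, here is a back-of-the-envelope sketch with purely illustrative probabilities of our own choosing (the paper gives no such estimates), treating the conditions as independent for simplicity:

```python
# Illustrative arithmetic (assumed probabilities, not from the paper):
# the Vassal's Wager pays off only if every condition holds, so the
# joint probability is the product of the individual ones (treating
# them as independent for simplicity).

conditions = {
    "patron wins the ASI race":    0.5,
    "loss of control averted":     0.7,
    "major-power war avoided":     0.6,
    "patron respects sovereignty": 0.8,
}

joint = 1.0
for name, p in conditions.items():
    joint *= p
    print(f"P({name}) = {p:.2f}")

print(f"P(wager pays off) = {joint:.3f}")  # ~0.17 with these numbers
```

Even with individually favorable odds, the joint probability that the wager pays off is modest, because every condition must hold at once.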
Risks from weaker AI
We consider the cases in which AI progress plateaus before reaching capability levels that could determine the course of a conflict between superpowers or escape human control. While we are unable to offer detailed forecasts for this scenario, we point out several risks:
- Weaker AI may enable new disruptive military capabilities (including capabilities that break mutual assured destruction);
- Widespread automation may lead to extreme concentration of power as unemployment reaches unprecedented levels;
- Persuasive AI systems may produce micro-targeted manipulative media at a massive scale.
Being a democracy or a middle power puts an actor at increased risk
from these factors. Democracies are particularly vulnerable to large-scale manipulation by AI systems, as this could undermine public
discourse. Additionally, extreme concentration of power is antithetical
to their values.
Middle powers are also especially vulnerable to automation. The
companies currently driving the frontier of AI progress are based in
superpower jurisdictions. If this trend continues, and large parts of
the economy of middle powers are automated by these companies, middle
powers will lose significant diplomatic leverage....
....MUCH MORE, that's just the summary.
Here's the version at the Social Science Research Network:
Modeling the Geopolitics of AI Development
23 Pages
Posted: 3 Dec 2025