Saturday, May 3, 2025

News You Can Use: "....How AI-enabled coups could allow a tiny group to seize power"

From the 80000 Hours pod:

....Highlights

"No person rules alone" — except now they might

Rob Wiblin: What is it structurally about AI as a technology that allows it to facilitate seizures of power by small groups?

Tom Davidson: The key thing in my mind is that it’s surprisingly plausible that you could get a really tiny group of people — possibly just one person — that has an extreme degree of control over how the technology is built and how it’s used.

Let me say a few things about why that might be the case. Today there are already massive capital costs needed to develop frontier systems: it costs hundreds of millions of dollars just for the computer chips. So already there are only a handful of companies that can afford to get into that game, and there are large barriers to entry.

And I think that is only going to become a bigger factor over time, as these initial training runs get more and more expensive. And even with the move away from pretraining towards more agentic training, as with o1, we still expect that a lot of compute is going to be used to generate the synthetic data needed to train the most capable systems.

There’s also kind of a broad economic feature of AI that it has these massive economies of scale, which means huge upfront development costs and then very small marginal costs of serving an extra customer. And also, AIs produced by different companies are pretty similar to one another. There are small differences between Claude and GPT-4, but not massive.

And economically speaking, those features tend to favour a kind of natural monopoly where there’s just one company that kind of serves the whole market. I’m not saying that these economic features will necessarily push all the way to there being just one frontier AI developer, but I think that there are these broad structural arguments to think that there could be a consolidation of that market — like what we’ve seen, for example, in the semiconductor supply chain over previous decades: now only TSMC is able to produce the smallest node chips.

So those are economic factors. I think there are some political factors that could lead to centralisation of AI development as well. People have raised reasonable national security grounds for centralising AI development: it could allow us to better secure AI model weights against potential foreign adversaries. So I think there’s a chance that argument proves convincing.

People have also thought it might be good for AI safety to have just one centralised project, so you don’t have racing between different projects.

There are also some more AI-specific reasons that you could have a real centralisation in terms of AI development. Recursive improvement, which I talked about last time I was on the podcast, is the idea that at some point — I think maybe very soon — AI will be able to fully replace the technical workers at top AI developers like OpenAI. And when that happens, even if we were previously in a situation where the top AI developer is only a little bit ahead of the laggard, that gap could quickly become very big once you automate AI research — because whoever automates AI research first gets a big speed boost. So even if it seems like there are multiple projects all developing frontier systems, within a year there could really be only one game in town in terms of the very best system.

So this is all to say that we could very easily end up in a world where there’s just one organisation that is developing this completely general-purpose, highly powerful technology.

Now, you might say that’s OK, because within that organisation there’ll be loads of different people, loads of checks and balances. But there’s actually a plausible technological path to that not being the case, which again relates to how AI could potentially replace the technical researchers at that company.

So today, there are hundreds of different people involved in, for example, developing GPT-5. And if someone wanted to mess with the way that technology is built, so that it served the interests of a particular group, it would be quite hard to do, because so many different people are part of the process. They might notice what’s happening; they might report it.

But once we get to a world where it is technologically possible to replace those researchers with AI systems — which could just be fully obedient, instruction-following AI systems — then you could feasibly have a situation where there’s just one person at the top of the organisation who gives a command: “This is how I want the next AI system to be developed. These are the values I want it to have.” And then this army of loyal, obedient AIs will do all of the technical work of building the AI system. There don’t have to be, technologically speaking, any humans in the loop doing that work. So that could remove a lot of the natural inbuilt checks and balances within one of these organisations — potentially the only developer of frontier AI.

So pulling that all together: there is a plausible scenario where there’s just one organisation that’s building superhuman AI systems, and potentially just one person who is actually making the significant decisions about how it’s built. That is what I would consider an extreme degree of control over the system.

And even if there’s the appearance that other employees are overseeing parts of the process, there’s still the risk that someone who has a lot of access rights and is able to make changes to the system without approvals could secretly run a side project that does a lot of technical work. So even if employees are overseeing some parts of the process, that side project could have a significant influence over the shape of the technology without anyone knowing.

Rob Wiblin: So I guess the key thing that distinguishes AI, or AGI in this case, from previous technologies is the cliché about dictators, even people who seemingly have enormous amounts of power: no person rules alone.

Even if you are Vladimir Putin or you’re seemingly controlling an entire country, you can’t go collecting the taxes yourself; you can’t actually hold the guns in the military yourself. You require this enormous number of people to cooperate with you and enforce your power. You have to care about the views of a broader group of people, because they might remove you if they think someone else would serve their interests better.

Tom Davidson: Exactly.

The 3 threat scenarios

Tom Davidson: I distinguish between three broad threat models here, although you can of course get combinations of all three.

A military coup is where there’s a legitimate, established military, but you subvert it by illegitimately seizing control — maybe using a technical backdoor or convincing the military to go along with your power grab. So that’s the first one: military coups.

The second one is something I call “self-built hard power,” which is just what it says on the tin: you’re kind of creating your own armed forces and broad economic might that allows you to overthrow the incumbent regime.

The third one is something that we’ve seen much more recently in more mature democracies, which I’m calling autocratisation. That’s the kind of standard term. The broad story there is that someone is elected to political office and then proceeds to remove the checks and balances on their power, often with a broad mandate from the people who are very discontented with the system as it is.

Rob Wiblin: So on one level, saying AI is going to enable a military coup or something like that sounds a little bit peculiar, a little bit sci-fi-ish. How sceptical should we be coming into this conversation? Are these things very abnormal or very rare? Or should we think of them as more common than perhaps we do on a day-to-day basis?

Tom Davidson: Across the second half of the 20th century, across the globe, military coups were very common: there were more than 200 successful ones. Now, they were predominantly not in the most mature democracies; they tended to be in states that had some elements of democracy, but not full-blown democracy.

But I do think that with AI technology, there will be new vulnerabilities introduced into the military that could enable military coups — so the historical trend that military coups haven’t happened in mature democracies may not continue to apply.

In terms of autocratisation, again, the most extreme cases, the ones really leading to full-blown authoritarian regimes, haven’t started off in mature democracies like the United States. But Venezuela, for example, was a pretty healthy democracy for 40 years before Hugo Chávez came into power with a strong socialist mandate for reform — and then over the next 10 to 20 years pretty much removed all of the checks and balances. And now it’s widely considered to be an authoritarian regime with only the smallest pretence of democracy today....

....MUCH MORE