Friday, December 5, 2025

"Eat Your AI Slop or China Wins"

I prefer "...or you can't have any pudding" but there's no accounting for taste in music.

From The New Atlantis, Summer 2025 edition:

The new cold war means a race with China over AI, biotech, and more. This poses a hard dilemma: win by embracing technologies that make us more like our enemy — or protect ourselves from tech dehumanization but become subject to a totalitarian menace.  

In “Darwin Among the Machines,” a letter to the editor published in 1863, the English novelist Samuel Butler observed with dread how the technology of his time was degrading humanity. “Day by day,” he wrote, “the machines are gaining ground upon us; day by day we are becoming more subservient to them.” For the ironical Butler, the solution was simple: kill the machines. “War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species.”

In his later novel Erewhon, Butler imagined a people who take his advice and smash their machines — the inspiration for the “Butlerian Jihad” in Frank Herbert’s Dune. But to make his central conceit plausible, by the loose rules governing a Victorian satire, Butler had to drop the society of Erewhon in the middle of “nowhere” (an anagram of the name), in a remote valley cut off from the rest of the world. The Erewhonians, Butler recognized, would never have survived centuries of Luddism anywhere else: they would have vanquished the machines only to be vanquished by an antagonist lacking their technological caution. In the real world, Butler suggests, we face a choice: Will you preserve your humanity or your security?

This may be just the choice we face today. From Washington, D.C. to Silicon Valley, champions of new technologies often argue, with good reason, that we must embrace them because, if we don’t, the Chinese will — and then where will we be?

Driven by geopolitical pressures to accelerate technological development, particularly in AI and biotech, we seem to face two mutually exclusive options: channeling innovation toward humane ends or protecting ourselves against competitors abroad.

To appreciate the difficulty of this choice, we should take a page from military theorists who have wrestled with what is known as the “security dilemma.” Even though it is one of the most important concepts in international relations, it has been given little attention by those grappling with the promises and challenges of new technologies. But we should, because when we apply its core insights to technological development, we realize that achieving a prosperous human future will be even more difficult than we tend to think.

The Dilemma 
The security dilemma, as described by the political scientist Robert Jervis in a 1978 paper, is that “many of the means by which a state tries to increase its security decrease the security of others.” Take two nations that don’t know each other’s warfighting capabilities or intentions. With a questionable neighbor next door, one side reasons, it’s only sensible to build up a reliable defense, just in case. But the other nation has the same thought, and similarly proceeds to boost its armaments as a precaution. Each nation sees the other militarizing, which justifies its own defense build-up. Before long, we have a frantic arms race, each nation building up its military to surpass the growing military next door, spiraling toward a conflict neither actually desires.

The dilemma is this: each nation can either militarize, prompting the other to reciprocate and heightening the risk of an ever-more-violent war; or not militarize, endangering itself before a power that is suspiciously expanding its arsenal. The dilemma suggests that each nation can be all but helplessly compelled to militarize but then is no better off than before, for the other nation is doing the very same. In fact, everyone is worse off, as the stakes and lethality of a looming war continue to rise. Perverse geopolitical incentives drive both sides to rationally pursue a course of action that harms them both.

It’s no coincidence that the dilemma was first articulated in the 1950s, amid the Cold War menace of mutual assured destruction. But its underlying logic applies not only to national defense per se but also to technological innovation broadly.

Consider how geopolitical pressures motivate technological advancement. World War II spurred the development of cryptography and early computers, such as America’s ENIAC and Britain’s Colossus. The Cold War rivalry between America and the Soviet Union prompted their race to be the first to put a man on the Moon. And in the 1980s, Japan’s prowess in the semiconductor industry motivated America to launch state projects, like the SEMATECH consortium, to remain competitive.

Just as with the security dilemma, deciding not to act in response to these pressures is a recipe for failure, because it risks making one as helpless against one’s competitors as the armored knight against the musket. Whichever side is technologically superior will gain the upper hand — economically, geopolitically, and, down the road, militarily. So each side, if it hopes to survive, must adopt the more sophisticated technology.

But the risk of this trajectory is not only to other nations, as with militarization itself — it is potentially to one’s own people. As every modern society has come to experience, technological innovation, despite the countless ways it has improved our lives, can bring not just short-term economic instability and job loss, but also long-term social fracture, loss of certain human skills and agency, the undermining of traditions, and the empowerment of the state over its own people.

In what we might call the “technological security dilemma,” each nation faces a choice: either pursue technological advancement to the utmost, forcing your competitors to reciprocate, even if such advancement jeopardizes your own citizens’ wellbeing; or refuse to do so — say, out of a noble concern that it threatens your people’s form of life — and allow yourself to be surpassed by an adversary without the same concern for its people, or for yours.

As Palantir’s Alex Karp and Nicholas Zamiska recently put it in their book The Technological Republic, “Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.” So if a nation won’t accept one horn of the dilemma, allowing its geopolitical standing to falter and putting itself at the mercy of the more advanced nation, then it must choose the other, adopting an aggressive approach to technological development, no matter what wreckage may result.

The question is whether that nation can long enjoy both its tech dominance and its humanity.

China or U.S. — ‘There Is No Third Option’

Today, the technological security dilemma is the very situation America finds itself in with China.

Consider artificial intelligence. Venture capitalist Marc Andreessen writes that “AI will save the world,” but also that it could become “a mechanism for authoritarian population control,” ushering in an unprecedentedly powerful techno-surveillance state. It all depends on who is leading the industry: “The single greatest risk of AI,” he writes, “is that China wins global AI dominance and we — the United States and the West — do not.” In a similar vein, Sam Altman once said on Twitter — when he was the new president of Y Combinator in 2014 — that “AI will be either the best or the worst thing ever.” The difference, he said a decade later, now writing as CEO of OpenAI in the Washington Post, is whether the AI race is won by America with its “democratic vision,” or by China with its “authoritarian” one. Our own good AI maximalism is thus “our only choice” for countering their bad AI maximalism.

But in that case, the AI boosters’ arguments for its benefits and their refutations of popular fears are almost beside the point. As the technological security dilemma suggests, even if all of AI’s speculated downsides were to come about — mass unemployment, retreat into delusional virtuality, learned helplessness among all who can no longer function without ChatGPT, and so forth — we would still need to accelerate AI to stay ahead of China.

A world run by China’s AI-powered digital authoritarianism would indeed be a nightmare for the United States and everyone else, marked by a total disregard for individual privacy, automated predictive policing that renders civil liberties obsolete, and a global social credit system that blacklists noncompliant individuals from applying for credit, accessing their bank accounts, or using the subway. How then can we afford to deliberate about AI’s impact, much less slow down its advancement? Its potential domestic harms, the dilemma suggests, are the necessary price to pay for our national security.

[Image: facial recognition illustration from The New Atlantis]
https://www.thenewatlantis.com/wp-content/uploads/2025/06/Bellafiore-facial-recognition-1920x1276.jpg 

It would therefore be a mistake to dismiss the arguments from Andreessen and Altman as nothing more than self-serving P.R. tactics to lobby for government favors. They are getting at something fundamental: in a technological arms race, the only rational action is to try to win. Once China has entered the race for technological dominance in AI, America, if it wishes to maintain its own political independence and avoid becoming China’s vassal state, has no choice but to enter the race as well, no matter what damage results. As Altman puts it, “there is no third option.”....

....MUCH MORE 

I need some vassal states.