Friday, January 31, 2025

"The Failed Strategy of Artificial Intelligence Doomers"

From Palladium Magazine, January 31:

In recent decades, a growing coalition has emerged to oppose the development of artificial intelligence technology, for fear that the imminent arrival of smarter-than-human machines could doom humanity to extinction. These ideas began as debates among academics and internet denizens, eventually took their now-influential form, especially within the Rationalist and Effective Altruist movements, and grew in intellectual influence over time, collecting along the way legible endorsements from authoritative scientists like Stephen Hawking and Geoffrey Hinton.

Ironically, by spreading the belief that superintelligent AI is achievable and supremely powerful, these “AI Doomers,” as they came to be called, inspired the creation of OpenAI and other leading artificial intelligence labs whose technology they argue will destroy us all. Despite this, they have continued nearly the same advocacy strategy, and are now in the process of persuading Western governments that superintelligent AI is achievable and supremely powerful. To this end, they have created organized and well-funded movements to lobby for regulation, and their members are staffing key positions in the U.S. and British governments.

Their basic argument is that more intelligent beings can outcompete less intelligent beings, just as humans outcompeted mastodons or saber-toothed tigers or Neanderthals. Computers are already ahead of humans in some narrow areas, and we are on track to create a superintelligent artificial general intelligence (AGI) which can think as broadly and creatively in any domain as the smartest humans. “Artificial general intelligence” is not a technical term, and is used differently by different groups to mean everything from “an effectively omniscient computer which can act independently, invent unthinkably powerful new technologies, and outwit the combined brainpower of humanity” to “software which can substitute for most white-collar workers” to “chatbots which usually don’t hallucinate.”

AI Doomers are concerned with the first of these scenarios, in which computer systems outreason, outcompete, and doom humanity to extinction. The AI Doomers are only one of several factions that oppose AI and seek to cripple it via weaponized regulation. There are also factions concerned about “misinformation” and “algorithmic bias,” which in practice means they think chatbots must be censored to prevent them from saying anything politically inconvenient. Hollywood unions oppose generative AI for the same reason that the longshoremen’s union opposes automating American ports and insists on requiring as much inefficient human labor as possible. Many moralists seek to limit “AI slop” for the same reasons that moralists opposed previous new media like video games, television, comic books, and novels—and I can at least empathize with this last group’s motives, as I wasted much of my teenage years reading indistinguishable novels in exactly the way that 19th century moralists warned against. In any case, the AI Doomers vary in their attitudes towards these factions. Some AI Doomers denounce them as Luddites, some favor alliances of convenience, and many stand in between.

Most members of the “AI Doomer” coalition initially called themselves “AI safety” advocates. However, this name was soon co-opted by other factions whose concerns fall short of human extinction. The AI Doomer coalition has far more intellectual authority than AI’s other opponents, with the most sophisticated arguments and endorsements from socially-recognized scientific and intellectual elites, so these other factions continually try to appropriate and wield the intellectual authority the AI Doomer coalition has gathered. Rather than risk being misunderstood, or fighting a public battle over the name, the AI Doomer coalition abandoned the name “AI safety” and rebranded itself to “AI alignment.” Once again, this name was co-opted by outsiders and abandoned by its original membership. Eliezer Yudkowsky coined the term “AI Notkilleveryoneism” in an attempt to establish a name that could not be co-opted, but unsurprisingly it failed to catch on among those it was intended to describe.

Today, the coalition’s members do not agree on any name for themselves. “AI Doomers,” the only widely understood name for them, was coined by their rhetorical opponents and is considered somewhat offensive by many of those it refers to, although some have adopted it themselves for lack of a better alternative. While I regret being rude, this essay will refer to them as “AI Doomers” in the absence of any other clear, short name.

Whatever name they go by, the AI Doomers believe the day computers take over is not far off, perhaps as soon as three to five years from now, and probably not longer than a few decades. When it happens, the superintelligence will achieve whatever goals have been programmed into it. If those goals are aligned exactly with human values, then it can build a flourishing world beyond our most optimistic hopes. But such goal alignment does not happen by default, and will be extremely difficult to achieve, if its creators even bother to try. If the computer’s goals are unaligned, as is far more likely, then it will eliminate humanity in the course of remaking the world as its programming demands. This is a rough sketch, and the argument is described more fully in works like Eliezer Yudkowsky’s essays and Nick Bostrom’s Superintelligence....

....MUCH MORE