Saturday, July 1, 2023

"AI Creators Want Us to Believe AI Is an Existential Threat. Why?"

There is a technique used by practitioners of restraint of trade known as "pulling up the ladder behind you": once you are on board, you don't allow anyone else to join you.

From Undark, June 22:

A public fixation on extinction from AI could empower industry insiders and distract from AI’s more immediate harms.

The warning consisted of a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The pithy statement, published in May by the nonprofit Center for AI Safety, was signed by a number of influential people, including a sitting member of Congress, a former state supreme court justice, and an array of technology industry executives. Among the signatories were many of the very individuals who develop and deploy artificial intelligence today; hundreds of cosigners — from academia, industry, and civil society — identified themselves as “AI Scientists.” 

Should we be concerned that the people who design and deploy AI are now sounding the alarm of existential risk, like a score of modern-day Oppenheimers? Yes — but not for the reasons the signatories imagine.

As a law professor who specializes in AI, I know and respect many of the people who signed the statement. I consider some to be mentors and friends. I think most of them are genuinely concerned that AI poses a risk of extinction on a level with pandemics and nuclear war. But, almost certainly, the warning statement is motivated by more than mere technical concerns — there are deeper social and (yes) market forces at play. It’s not hard to imagine how a public fixation on the risk of extinction from AI would benefit industry insiders while harming contemporary society.

How do the signers think this extinction would happen? Based on prior public remarks, it’s clear that some imagine a scenario wherein AI gains consciousness and intentionally eradicates humankind. Others envision a slightly more plausible path to catastrophe, wherein we grant AI vast control over human infrastructures, defense, and markets, and then a series of black swan events destroys civilization.

The risk of these developments — be they Skynet or Lemony Snicket — is low. There is no obvious path between today’s machine learning models — which mimic human creativity by predicting the next word, sound, or pixel — and an AI that can form a hostile intent or circumvent our every effort to contain it.
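[For readers unfamiliar with what "predicting the next word" means in practice, here is a minimal toy sketch. It uses a simple bigram frequency count rather than the neural networks behind today's models, and the corpus is invented for the example; it only illustrates the basic idea of choosing a likely continuation from observed text.]

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a corpus,
# then predict the most frequent continuation. Modern systems learn these
# conditional probabilities with neural networks over vast corpora, but the
# underlying task -- pick a plausible next token -- is the same.
corpus = "the model predicts the next word and the next word after that".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # -> "next" (seen twice after "the")
print(predict_next("word"))  # -> "and" (ties broken by first occurrence)
```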

Regardless, it is fair to ask why Dr. Frankenstein is holding the pitchfork. Why is it that the people building, deploying, and profiting from AI are the ones leading the call to focus public attention on its existential risk? Well, I can see at least two possible reasons.

The first is that it requires far less sacrifice on their part to call attention to a hypothetical threat than to address the more immediate harms and costs that AI is already imposing on society. Today’s AI is plagued by error and replete with bias. It makes up facts and reproduces discriminatory heuristics. It empowers both government and consumer surveillance. AI is displacing labor and exacerbating income and wealth inequality. It poses an escalating threat to the environment, consuming a vast and growing amount of energy and fueling a race to extract materials from a beleaguered Earth.

These societal costs aren’t easily absorbed....

....MUCH MORE