Sunday, August 27, 2023

"Artificial General Intelligence Is Possible and Deadly"

I should probably have saved this for Halloween. It's a scary story, but I don't know whether it's true or not.

From Palladium Magazine, August 10:

During a round of introductions at a recent dinner party, we were polled for takes on the subject of artificial intelligence. Some attendees were researchers, some were company founders or investors, and others worked at think tanks or as commentators. They were all optimistic.

Answers ranged from enthusiasm about new technology, to excitement at how many people’s ambitions it would enable, to resentment against the people trying to regulate their work. When my turn came, I realized with horror that even my considered opinion was going to be a rather hot take in this company. I tried to soften the blow:

“AI is a very promising technological program that will probably kill us all.”

The interjections and quips around the table confirmed the heat. “Oh no, Wolf, I didn’t know you were a doomer!” exclaimed the jolly executive beside me. I replied that I was not necessarily a doomer, and told him I would explain over dinner.

At one point in the discussion, the question came up of whether a much more advanced artificial intelligence could escape the control of its creators and become an existential challenger to humankind. One attendee claimed that no such thing could ever happen because a machine was just a tool. Humans would always hold the decisive power. 

I asked what I believe to be the crucial question: “What if you get into a fight with your AI tool? If a program is actually intelligent, especially in a way superior to humans, you might have to fight a war against it to shut it down, and you might not be able to win.” 

He looked at me like I had asked about going to war against an ordinary garden rake. “That’s impossible. Only humans can have that kind of agency.”

I found this attitude puzzling, especially from someone who has spent much of his career fighting with software, and who seemed to take the premise of advanced artificial intelligence seriously. To be honest, I was stumped. But his attitude is not unreasonable. To definitively defend or refute any position on the subject is a tangled mess because the whole conversation is so speculative. No one has built a real artificial intelligence superior to humans or demonstrated a robust scientific theory of it, so it is hard to ground one’s predictions in much more than speculation.

For seven decades now, the goal of the artificial intelligence field has been to produce computer programs capable of every cognitive task that humans can do, including open-ended research that is inclusive of AI itself, and creative, high-agency action in the world. The latest developments in deep learning and transformers have been impressive, to say the least. But these results are not enough to prove much about the possibility of the larger goals, about the essential nature of AI, its implications, or what we should be doing about it.

I first got deep into the subject of advanced artificial intelligence back in 2011. Before AlexNet and GPU-based deep learning, AI was a much more niche subject, but the conversation had been going on for decades. The discourse was composed of science fiction fans, transhumanists, and mostly sober algorithms researchers. All were chasing this holy grail of the computer revolution.

Their built-up canon of explicit arguments and less-articulated speculations has had an immense influence on the present discourse. However, the hype, memes, and politics around AI since deep learning have obscured the original kernels of careful thought, making it hard to have a productive conversation.

Through the many discussions I’ve had with friends, acquaintances, and experts on the subject, it has become clear to me that very few of the essential ideas about AI are commonly understood or even well-articulated. It is understandable that the critics and disbelievers of the idea are not familiar with the best arguments of its devotees—or don’t bother to distinguish them from the half-baked science fiction ideology that exists alongside them. You can’t be an expert in every branch of kookery you reject. 

But even among the most fervent “believers” in artificial intelligence, and among its most sophisticated critics, there is little shared consensus on key assumptions, arguments, or implications. They all have their own incompatible paradigms of careful thought, and they can’t all be right....

....MUCH MORE

Or as the philosopher said:

Two men say they're Jesus. One of them must be wrong.
Industrial Disease, Dire Straits, 1982   
And previously on the AGI channel:
‘We Might Need To Regulate Concentrated Computing Power’...

Chomsky: "The False Promise of ChatGPT"

Far Beyond ChatGPT: Artificial General Intelligence

Beyond ChatBots: "GPT-4, AGI, and the Hunt for Superintelligence"

Puny Human, I Scoff At Your AI, Soon You Will Know The Power Of Artificial 'Super' Intelligence. Tremble and Weep
[insert 'Bwa Ha Ha' here, if desired]

"Seven Varieties of Stupidity"
I've tried at least four of these, probably five....

"Here Comes Artificial Intelligence Barbie"
The intelligence curve is so exaggerated it risks giving little girls an unrealistic image of I.Q.'s they'll meet in the real world....