Sunday, April 16, 2023

Will AI Be The Death Of Me? You?

When first I saw that Niall Ferguson had set himself up in business as a consulting historian (shades of Sherlock), I thought, "How do you hustle up business?":

INT. CORRIDORS OF POWER - MORNING
President: So, gentlemen, we are agreed?
General: Ma'am, I'm still not sure. I think we'd better ask an historian.

But the trans-Atlantic scholar seems to be doing quite well for himself. Here he is at Bloomberg Opinion, April 9:

The Aliens Have Landed, and We Created Them
The Cassandras are out in force claiming artificial intelligence will be the end of mankind. They have a very good point.

It is not every day that I read a prediction of doom as arresting as Eliezer Yudkowsky’s in Time magazine last week. “The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances,” he wrote, “is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ … If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” Do I have your attention now?

Yudkowsky is not some random Cassandra. He leads the Machine Intelligence Research Institute, a nonprofit in Berkeley, California, and has already written extensively on the question of artificial intelligence. I still remember vividly, when I was researching my book Doom, his warning that someone might unwittingly create an AI that turns against us — “for example,” I suggested, “because we tell it to halt climate change and it concludes that annihilating Homo sapiens is the optimal solution.” It was Yudkowsky who some years ago proposed a modified Moore’s law: Every 18 months, the minimum IQ necessary to destroy the world drops by one point.

Now Yudkowsky has gone further. He believes we are fast approaching a fatal conjuncture, in which we create an AI more intelligent than us, which “does not do what we want, and does not care for us nor for sentient life in general. … The likely result of humanity facing down an opposed superhuman intelligence is a total loss.”

He is suggesting that such an AI could easily escape from the internet “to build artificial life forms,” in effect waging biological warfare on us. His recommendation is clear. We need a complete, global moratorium on the development of AI.

This goes much further than the open letter signed by Elon Musk, Steve Wozniak (the Apple co-founder) and more than 15,000 other luminaries that calls for a six-month pause in the development of AIs more powerful than the current state of the art. But their motivation is the same as Yudkowsky’s: the belief that developing AI with superhuman capabilities in the absence of any international regulatory framework risks catastrophe. The only real difference is that Yudkowsky doubts that such a framework can be devised inside half a year. He is almost certainly right about that....

....MUCH MORE

The movie vignette first appeared in March 2022's "The U.S. Is Implementing The RAND Corporation Strategy To Cripple Russia."