Friday, July 7, 2023

Puny Human, I Scoff At Your AI, Soon You Will Know The Power Of Artificial 'Super' Intelligence. Tremble and Weep

[insert 'Bwa Ha Ha' here, if desired]

From James Pethokoukis at his Faster, Please! substack:

After AGI: Is artificial 'super' intelligence coming soon?

To reimagine Nobel laureate economist Robert Lucas’ famous quote about economic growth: “Once you start thinking about superintelligence, it's hard to think about anything else.”

And with good reason. First, the notion can make for great storytelling. As a fan of science fiction, I’ve been thinking about the idea of supersmart machines, both good (Lt. Commander Data) and evil (Skynet), for a long time.

While many of those stories are dystopian, they don’t have to be. I was really struck when reading this optimistic passage from the 2014 book Superintelligence: Paths, Dangers, Strategies by technology philosopher Nick Bostrom:

If we classify AI as capital, then with the invention of machine intelligence that can fully substitute for human work, wages would fall to the marginal cost of such machine-substitutes, which—under the assumption that the machines are very efficient—would be very low, far below human subsistence-level income. The income share received by labor would then dwindle to practically nil. But this implies that the factor share of capital would become nearly 100% of total world product. Since world GDP would soar following an intelligence explosion (because of massive amounts of new labor-substituting machines but also because of technological advances achieved by superintelligence, and, later, acquisition of vast amounts of new land through space colonization), it follows that the total income from capital would increase enormously. If humans remain the owners of this capital, the total income received by the human population would grow astronomically, despite the fact that in this scenario humans would no longer receive any wage income. The human species as a whole could thus become rich beyond the dreams of Avarice.

A brief, definitional note: Although the terms “artificial general intelligence” and “superintelligence” (or “artificial super intelligence” or “super machine intelligence”) are sometimes used interchangeably — understandably because they all refer to highly intelligent AI — that conflation should be avoided....

....MUCH MORE 

We've visited Bostrom and the Future of Humanity Institute at Oxford off and on for a decade. Although he sometimes comes off as a bit whack-a-doodle, I did make this comment on one of our posts:

...If interested, we mentioned Bostrom in last week's "The Roubini Cascade: Are we heading for a Greater Depression?". His Future of Humanity Institute at Oxford seems to have moved on from the mundane climate cataclysm or cosmic fireball ending everything, to a very real, very serious examination of whether or not Artificial Intelligence may be the nail in humanity's coffin, so to speak....  

That was in "Risk/Prop Bets: How unlikely is a doomsday catastrophe?"  which also included this bonus advice:

Before we get into the meat of the matter, some things to know upfront. From the introduction to 2018's "Tips And Tricks For Investing In 'End of the World Funds'":

As unauthorized representatives for Long or Short Capital's End of the World Puts this is an area of profound interest from which we have gleaned some insight:

1) Should the world end, collecting on your bet can be a challenge. Know your counterparty!
     And possibly more important, demand collateral!
2) The swings in end of the world product prices can be dramatic.
3) Prognosticators have predicted 100,000 of the last 0 termination events....

And from arXiv (astrophysics) at Cornell:

How unlikely is a doomsday catastrophe?

Max Tegmark (MIT), Nick Bostrom (Oxford)
Numerous Earth-destroying doomsday scenarios have recently been analyzed, including breakdown of a metastable vacuum state and planetary destruction triggered by a "strangelet" or microscopic black hole. We point out that many previous bounds on their frequency give a false sense of security: one cannot infer that such events are rare from the fact that Earth has survived for so long, because observers are by definition in places lucky enough to have avoided destruction. We derive a new upper bound of one per 10^9 years (99.9% c.l.) on the exogenous terminal catastrophe rate that is free of such selection bias, using planetary age distributions and the relatively late formation time of Earth....

All in all, it is good to see he's flipped 180 degrees and gone far beyond even the upbeat message of Keynes' "Economic Possibilities for our Grandchildren" and Churchill's 'broad sunlit uplands'. I believe I shall have this bit tattooed in an appropriately discreet location:

"If humans remain the owners of this capital, the total income received by the human population would grow astronomically, despite the fact that in this scenario humans would no longer receive any wage income. The human species as a whole could thus become rich beyond the dreams of Avarice."