From Frankly Fukuyama at Persuasion, October 8:
AI has many answers—but it can’t by itself build a new society.
I have a longtime friend whom I’ve known since my college days, who made his money as an investor and entrepreneur at the edge of the tech world. One constant about him over the years has been his endless admiration for people he regards as “very smart.” He means this in a very specific way: they are very good at math, and have done well for themselves making money using their brainpower.
He’s not alone in this preoccupation. Silicon Valley is a virtual cathedral for the worship of geniuses—initially people like Steve Jobs, Bill Gates, and Elon Musk—who have built world-beating companies around applications of technology. That technology has now moved on to AI, where Sam Altman, Demis Hassabis, and Yann LeCun have become the new icons of brilliance.
And what this generation is building is, indeed, intelligence. There is a race currently on for artificial general intelligence (AGI), a machine that will have the cognitive capabilities of a human being. Indeed, more than that: cutting-edge machines are “growing” rather than being programmed, and are reportedly capable of modifying themselves to extend their own capabilities. They will not stop at human intelligence, but will become smarter than human beings. This type of “superintelligence” will then lead to huge advances in science, technology, and the economy. There are already achievements along these lines, like Hassabis’ AlphaFold project, which has solved protein-folding problems that seemed beyond the capabilities of earlier technologies. There are serious discussions taking place now about a future, not that far away, in which advanced economies using superintelligent AI will be able to achieve growth rates of 10, 15, or 20 percent per year, compared to the 2-3 percent that’s considered substantial today. Material deprivation will disappear, and those whose livelihoods have been displaced by AGI will be supported by schemes like universal basic income.
There are several problems with these speculations. The first is one I’m not in a position to evaluate: whether AGI or superintelligence is even possible. Writers like Erik Larson have suggested that while LLMs are good at culling enormous stores of existing knowledge, they lack the kind of speculative insight that the philosopher C. S. Peirce labeled “abduction,” which is required for truly innovative discovery.
But let us assume for the moment that AGI will come about, and that machines will become more intelligent in certain respects than human beings. There are powerful reasons to believe that this capability will be transformative in many ways, but may not produce explosive economic growth as the AI cheerleaders expect.
The reason for this skepticism is that the binding constraint on economic growth today is simply not insufficient intelligence or cognitive ability. Even absent smart machines, human beings today collectively have more cognitive ability than at any prior point in human history. The binding constraint has to do with how that intelligence interacts with the material world in myriad ways. Economic growth depends ultimately on the ability to build real objects in the real world. A smart machine may be able to come up with a plan for a better mousetrap, but actually fabricating that mousetrap requires capabilities beyond any machine’s control.
At a macro level, we are already running into the constraint of too many dollars chasing too little stuff. As environmental doomsayers have been arguing for years, there are ultimately material limits to growth. The one most obviously in front of us is global warming, but there are many others. The planet does not have the resources to sustain 8 billion people with an American standard of living; indeed, at 10 percent annual growth, China, America, and Europe would soon run out of agricultural land, water, energy, and almost everything else.
At a micro level, there is a problem translating the work of smart machines into material goods. Product innovation has always depended on a prolonged iterative process whereby a designer tries out ideas, fails, and modifies the design in response. No amount of superintelligence will ever be sufficient to simulate the behavior of material objects under the conditions of the existing material world, as generations of builders and tinkerers know....
....MUCH MORE