Saturday, April 22, 2023

Beyond ChatBots: "GPT-4, AGI, and the Hunt for Superintelligence"

From IEEE Spectrum, April 19:

Neuroscience expert Christof Koch weighs AI progress against its potential threats

For decades, the most exalted goal of artificial intelligence has been the creation of an artificial general intelligence, or AGI, capable of matching or even outperforming human beings on any intellectual task. It’s an ambitious goal long regarded with a mixture of awe and apprehension, because of the likelihood of massive social disruption any such AGI would undoubtedly cause. For years, though, such discussions were theoretical. Specific predictions forecasting AGI’s arrival were hard to come by.

But now, thanks to the latest large language models (LLMs) from the AI research firm OpenAI, the concept of an artificial general intelligence suddenly seems much less speculative. OpenAI’s latest LLMs—GPT-3.5, GPT-4, and the chatbot/interface ChatGPT—have made believers out of many previous skeptics. However, as spectacular tech advances often do, they also seem to have unleashed a torrent of misinformation, wild assertions, and misguided dread. Speculation has erupted recently about the end of the World Wide Web as we know it, end-runs around GPT guardrails, and AI chaos agents doing their worst (the latter of which seems to be little more than clickbait sensationalism). There have been scattered musings that GPT-4 is a step toward machine consciousness, and, more ridiculously, that GPT-4 is itself “slightly conscious.” There have also been assertions that GPT-5, which OpenAI’s CEO Sam Altman said last week is not currently being trained, will itself be an AGI.

“The number of people who argue that we won’t get to AGI is becoming smaller and smaller.”
—Christof Koch, Allen Institute

To provide some clarity, IEEE Spectrum contacted Christof Koch, chief scientist of the Mindscope Program at Seattle’s Allen Institute. Koch has a background in both AI and neuroscience and is the author of three books on consciousness as well as hundreds of articles on the subject, including features for IEEE Spectrum and Scientific American.

Christof Koch on...

What would be the important characteristics of an artificial general intelligence as far as you’re concerned? How would it go beyond what we have now?

Christof Koch: AGI is ill defined because we don’t know how to define intelligence, because we don’t understand it. Intelligence, most broadly defined, is sort of the ability to behave in complex environments that have multitudes of different events occurring at a multitude of different timescales, and successfully learning and thriving in such environments.

I’m more interested in this idea of an artificial general intelligence. And I agree that even if you’re talking about AGI, it’s somewhat nebulous. People have different opinions….

Koch: Well, by one definition, it would be like an intelligent human, but vastly quicker. So you can ask it—like ChatGPT—you can ask it any question, and you immediately get an answer, and the answer is deep. It’s totally researched. It’s articulated and you can ask it to explain why. I mean, this is the remarkable thing now about ChatGPT, right? It can give you its train of thought. In fact, you can ask it to write code, and then you can ask it, please explain it to me. And it can go through the program, line by line, or module by module, and explain what it does. It’s a train-of-thought type of reasoning that’s really quite remarkable....

....MUCH MORE

If interested see also February 5's "Far Beyond ChatGPT: Artificial General Intelligence"