A quick hit from MIT Technology Review's Hype Correction series:
....01: LLMs are not everything
In some ways, it is the hype around large language models, not AI as a whole, that needs correcting. It has become obvious that LLMs are not the doorway to artificial general intelligence, or AGI, a hypothetical technology that some insist will one day be able to do any (cognitive) task a human can.
Even an AGI evangelist like Ilya Sutskever, chief scientist and cofounder at the AI startup Safe Superintelligence and former chief scientist and cofounder at OpenAI, now highlights the limitations of LLMs, a technology he had a huge hand in creating. LLMs are very good at learning how to do a lot of specific tasks, but they do not seem to learn the principles behind those tasks, Sutskever said in an interview with Dwarkesh Patel in November.
It’s the difference between learning how to solve a thousand different algebra problems and learning how to solve any algebra problem. “The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people,” Sutskever said.
It’s easy to imagine that LLMs can do anything because their use of language is so compelling. It is astonishing how well this technology can mimic the way people write and speak. And we are hardwired to see intelligence in things that behave in certain ways—whether it’s there or not. In other words, we have built machines with humanlike behavior and cannot resist seeing a humanlike mind behind them.
That’s understandable. LLMs have been part of mainstream life for only a few years. But in that time, marketers have preyed on our shaky sense of what the technology can really do, pumping up expectations and turbocharging the hype. As we live with this technology and come to understand it better, those expectations should fall back down to earth.
02: AI is not a quick fix to all your problems....
....MUCH MORE
If interested, see also December 20's "AI: 'A brief history of Sam Altman's hype' (MIT Technology Review's Hype Correction series)".
The intro to and outro from December 20, 2024's "What is an AI agent? A computer scientist explains the next wave of artificial intelligence tools":
We've been saying it (sometimes literally*) for quite a while: chatbots are not the be-all and end-all of artificial intelligence....
AI: Chatbots Are Sooo 2023; Here Comes Interactive AI
"ChatBots Are Not The Be-All And End-All Of Artificial Intelligence":
Far from it.
And all the focus on ChatBots and LLMs is more than just a distraction; it is a perverse representation of what AI is doing and will do, and could potentially cost you money or opportunity or both....
ChatBots Are For Children: "What’s Ahead for OpenAI? Project Strawberry, Orion, and GPT Next"
IEEE Spectrum - "What Are AI Agents?"
"First impressions of ChatGPT o1: An AI designed to overthink it"
CoinTelegraph has developed an artisanal, homebrew AI specialty. Here's one of our previous visits:
AI Use Case: Biological Immortality By 2030
This would be a pretty good answer to the question "What is the use case for AI?" But I don't buy it. AI will be like the nanotech revolution that never was; never, that is, in the sense of a nanotech industry. Instead, as with nanotech, AI will be embedded in the processes and protocols of every facet of human existence and we won't even notice it.
"AI agents are the 'next frontier' and will change our working lives forever"
Former Google CEO Schmidt On The Ever-Increasing Tempo Of AI
Also:
Where Is Artificial Intelligence Going From Here: One Of The Gurus Speaks
Related, October 2025:
Google AI says no:
While AI can be a powerful tool for detecting patterns that might indicate a bubble, it cannot definitively determine if a bubble exists. The complexity of human behavior and unpredictable events means AI models are best used as a component of analysis, not a replacement for human judgment...