Saturday, December 20, 2025

AI: "A brief history of Sam Altman’s hype" (MIT Technology Review's Hype Correction series)

From MIT Technology Review:

Why it’s time to reset our expectations for AI

Truly, I feel kind of stupid even asking the question, like a spoiled brat who has too many toys at Christmas. AI is mind-blowing. It’s one of the most important technologies to have emerged in decades (despite all its many, many drawbacks and flaws and, well, issues).

At the same time I can't help feeling a little bit: Is that it?

If you feel the same way, there’s good reason for it: The hype we have been sold for the past few years has been overwhelming. We were told that AI would solve climate change. That it would reach human-level intelligence. That it would mean we no longer had to work!

Instead we got AI slop, chatbot psychosis, and tools that urgently prompt you to write better email newsletters. Maybe we got what we deserved. Or maybe we need to reevaluate what AI is for.

That’s the reality at the heart of a new series of stories, published today, called Hype Correction. We accept that AI is still the hottest ticket in town, but it’s time to reset our expectations.

As my colleague Will Douglas Heaven puts it in the package’s intro essay, “You can’t help but wonder: When the wow factor is gone, what’s left? How will we view this technology a year or five from now? Will we think it was worth the colossal costs, both financial and environmental?” 

Elsewhere in the package, James O’Donnell looks at Sam Altman, the ultimate AI hype man, through the medium of his own words. And Alex Heath explains the AI bubble, laying out for us what it all means and what we should look out for.

Michelle Kim analyzes one of the biggest claims in the AI hype cycle: that AI would completely eliminate the need for certain classes of jobs. If ChatGPT can pass the bar, surely that means it will replace lawyers? Well, not yet, and maybe not ever. 

Similarly, Edd Gent tackles the big question around AI coding. Is it as good as it sounds? Turns out the jury is still out. And elsewhere David Rotman looks at the real-world work that needs to be done before AI materials discovery has its breakthrough ChatGPT moment.

Meanwhile, Garrison Lovely spends time with some of the biggest names in the AI safety world and asks: Are the doomers still okay? I mean, now that people are feeling a bit less scared about their impending demise at the hands of superintelligent AI? And Margaret Mitchell reminds us that hype around generative AI can blind us to the AI breakthroughs we should really celebrate.

Let’s remember: AI was here before ChatGPT and it will be here after. This hype cycle has been wild, and we don’t know what its lasting impact will be. But AI isn’t going anywhere. We shouldn't be so surprised that those dreams we were sold haven’t come true—yet.

The more likely story is that the real winners, the killer apps, are still to come. And a lot of money is being bet on that prospect. So yes: The hype could never sustain itself over the short term. Where we’re at now is maybe the start of a post-hype phase. In an ideal world, this hype correction will reset expectations. 

Let’s all catch our breath, shall we?

This story first appeared in The Algorithm, our weekly free newsletter all about AI.

Again, the intro and the rest of the issue:

https://www.technologyreview.com/supertopic/hype-correction/ 

And December 15:

Here’s how pinning a utopian vision for AI on LLMs kicked off the hype cycle that’s causing fears of a bubble today. 

Each time you’ve heard a borderline outlandish idea of what AI will be capable of, it often turns out that Sam Altman was, if not the first to articulate it, at least the most persuasive and influential voice behind it. 

For more than a decade he has been known in Silicon Valley as a world-class fundraiser and persuader. OpenAI’s early releases around 2020 set the stage for a mania around large language models, and the launch of ChatGPT in November 2022 granted Altman a world stage on which to present his new thesis: that these models mirror human intelligence and could swing the doors open to a healthier and wealthier techno-utopia.


Throughout, Altman’s words have set the agenda. He has framed a prospective superintelligent AI as either humanistic or catastrophic, depending on what effect he was hoping to create, what he was raising money for, or which tech giant seemed like his most formidable competitor at the moment. 

Examining Altman’s statements over the years reveals just how much his outlook has powered today’s AI boom. Even among Silicon Valley’s many hypesters, he’s been especially willing to speak about open questions—whether large language models contain the ingredients of human thought, whether language can also produce intelligence—as if they were already answered. 

What he says about AI is rarely provable when he says it, but it persuades us of one thing: This road we’re on with AI can go somewhere either great or terrifying, and OpenAI will need epic sums to steer it toward the right destination. In this sense, he is the ultimate hype man.

To understand how his voice has shaped our understanding of what AI can do, we read almost everything he’s ever said about the technology (we requested an interview with Altman, but he was not made available). 

His own words trace how we arrived here....

....MUCH MORE  

The intro to and outro from December 20, 2024's "What is an AI agent? A computer scientist explains the next wave of artificial intelligence tools":

We've been saying it (sometimes literally*) for quite a while: chatbots are not the be-all and end-all of artificial intelligence.... 

***** 

*Most recently [a/o exactly one year ago today]:

AI: Chatbots Are Sooo 2023; Here Comes Interactive AI

"ChatBots Are Not The Be-All And End-All Of Artificial Intelligence":

Far from it.
And all the focus on ChatBots and LLMs is more than just a distraction; it is a perverse representation of what AI is doing and will do, and could potentially cost you money or opportunity or both....

ChatBots Are For Children: "What’s Ahead for OpenAI? Project Strawberry, Orion, and GPT Next"

IEEE Spectrum - "What Are AI Agents?" 

"First impressions of ChatGPT o1: An AI designed to overthink it"

CoinTelegraph has developed an artisanal, homebrew AI specialty. Here's one of our previous visits:

AI Use Case: Biological Immortality By 2030
This would be a pretty good answer to the question "What is the use case for AI?"

But I don't buy it. AI will be like the nanotech revolution that never was; never, that is, in the sense of a nanotech industry. Instead, as with nanotech, AI will be embedded in the processes and protocols of every facet of human existence and we won't even notice it.

 "AI agents are the 'next frontier' and will change our working lives forever"

 Former Google CEO Schmidt On The Ever-Increasing Tempo Of AI

Also:

Where Is Artificial Intelligence Going From Here: One Of The Gurus Speaks

Related, October 2025:

Can AI Identify An AI Bubble?

Google AI says no:

While AI can be a powerful tool for detecting patterns that might indicate a bubble, it cannot definitively determine if a bubble exists. The complexity of human behavior and unpredictable events means AI models are best used as a component of analysis, not a replacement for human judgment...