Via Threads, November 13:
yannlecun
6d
I don't wanna say "I told you so", but I told you so.
Quote: "Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training - the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures - have plateaued." ...reuters.com/techn…
***
6d
So just to be clear...
@yannlecun you're saying your boss just bet the house on something you already knew was gonna fail?
Normally I would dismiss your predictions, because you're pretty much the Reverse-Cramer of AI (despite your contributions).
But you claim @ilyasu2 said this (who, contrary to yourself, actually is right a lot)
Sure wonder how @zuck and @cleoabram feel about the biggest bet for @meta now...
The Mr. Metaverse poster, Aragorn Meulendijks, seems a bit of an arrogant dijk.
LeCun responded:
No. You misunderstand.
The following things are simultaneously true:
1. LLMs are very useful.
2. Training them will require increasing computing resources over the next few years.
3. But LLMs will *not* reach human-level intelligence.
4. New architectures are needed.
5. I've been saying this for many years, long before LLMs captured everyone's attention.
6. Meta-FAIR has been focusing on these new architectures for a while now.
7. Watch the Columbia Lecture if you want to know more.
Here's the Reuters story LeCun linked to:
OpenAI and others seek new path to smarter AI as current methods hit limitations