From Sequoia, January 14:
Saddle up: Your dreams for 2030 just became possible for 2026.
Years ago, some leading researchers told us that their objective was AGI. Eager to hear a coherent definition, we naively asked, “How do you define AGI?” They paused, looked at each other tentatively, and then offered up what has since become something of a mantra in the field of AI: “Well, we each kind of have our own definitions, but we’ll know it when we see it.”
This vignette typifies our quest for a concrete definition of AGI. It has proven elusive.
While the definition is elusive, the reality is not. AGI is here, now.
Coding agents are the first example. There are more on the way.
Long-horizon agents are functionally AGI, and 2026 will be their year.
Blissfully Unencumbered by the Details
Before we go any further, it’s worth acknowledging that we do not have the moral authority to propose a technical definition of AGI. We are investors. We study markets, founders, and the collision thereof: businesses.
Given that, ours is a functional definition, not a technical one. New technical capabilities raise the Don Valentine question: so what?
The answer resides in real-world impact.
A Functional Definition of AGI
AGI is the ability to figure things out. That’s it. We appreciate that such an imprecise definition will not settle any philosophical debates. Pragmatically speaking, what do you want if you’re trying to get something done? An AI that can just figure stuff out. How it happens is of less concern than the fact that it happens.
A human who can figure things out has some baseline knowledge, the ability to reason over that knowledge, and the ability to iterate their way to the answer.
An AI that can figure things out has some baseline knowledge (pre-training), the ability to reason over that knowledge (inference-time compute), and the ability to iterate its way to the answer (long-horizon agents).
The first ingredient (knowledge / pre-training) is what fueled the original ChatGPT moment in 2022. The second (reasoning / inference-time compute) came with the release of o1 in late 2024. The third (iteration / long-horizon agents) came in the last few weeks with Claude Code and other coding agents crossing a capability threshold.
Generally intelligent people can work autonomously for hours at a time, making and fixing their mistakes and figuring out what to do next without being told. Generally intelligent agents can do the same thing. This is new.
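To make the third ingredient concrete, here is a minimal sketch of a long-horizon agent loop in Python. Everything here is an illustrative assumption, not any product’s actual API: `call_model` stands in for pre-trained knowledge plus inference-time reasoning, `run_checks` stands in for whatever verifier the task allows (tests, a compiler, a reviewer), and the loop itself is the new part, iteration.

```python
# A schematic long-horizon agent loop: attempt a step, verify the result,
# and iterate on failures without being told what to do next.
# `call_model` and `run_checks` are hypothetical stand-ins, not a real API.

from dataclasses import dataclass


@dataclass
class Attempt:
    plan: str
    result: str
    passed: bool


def call_model(prompt: str) -> str:
    """Hypothetical LLM call (baseline knowledge + inference-time reasoning)."""
    raise NotImplementedError("wire this to a model provider of your choice")


def run_checks(result: str) -> bool:
    """Hypothetical verifier: tests, a compiler, a human review, etc."""
    raise NotImplementedError("wire this to whatever verifies the task")


def figure_it_out(task: str, max_steps: int = 20) -> str | None:
    """Iterate toward an answer: propose, execute, check, and fold
    every failure back into the next attempt."""
    history: list[Attempt] = []
    for _ in range(max_steps):
        # Reason over the task plus everything tried so far (iteration).
        context = task + "\n\nPrior attempts:\n" + "\n".join(
            f"- {a.plan} -> {'ok' if a.passed else 'failed: ' + a.result}"
            for a in history
        )
        plan = call_model("Propose the next step.\n" + context)
        result = call_model("Execute this step and report the outcome.\n" + plan)
        passed = run_checks(result)
        history.append(Attempt(plan, result, passed))
        if passed:
            return result  # The agent figured it out.
    return None  # Out of budget; a longer-horizon agent simply gets more steps.
```

The loop is trivial to write. What changed recently is that the model inside it can sustain the loop for hours rather than minutes, which is exactly the autonomy described above.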
What Does It Mean to Figure Things Out?
…MUCH MORE