Not there yet but some very smart people think it's close.
From IEEE Spectrum, May 7:
Recursive self-improvement is emerging, but humans are still in the loop
The field of artificial intelligence was built on the premise that machines might someday improve themselves. In 1965, the British mathematician I. J. Good wrote that “an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” AI researchers have long seen recursive self-improvement, or RSI, as something to both desire and fear. Today, advances in AI are raising the question of whether parts of that process are already underway.
RSI means many things to many people. Some use the idea as a bogeyman to scare up regulation, while others brandish it in marketing. For some, it means a fully autonomous loop, while for others it’s nearly any use of tech to build tech.
Safest to say it’s a spectrum. At its strictest, researchers use the term to describe systems that can improve not just their outputs but the process by which they improve—generating ideas, evaluating results, and modifying their own methods with zero human direction. By that standard, many of today’s systems fall short. They can help build better AI, but they still rely on humans to set goals, define success, and decide which changes to keep. The question is not whether self-improvement exists in some form today, but how much of the loop has actually been closed.
Stepping-Stones to Self-Improvement
Researchers have spent decades putting in place the elements of RSI. Machine learning (ML) algorithms automatically tune the parameters of programs that can play games or even create new programs. ML methods called evolutionary algorithms diversify and iterate on design solutions, including other algorithms. Over the last decade, “AutoML” has automated aspects of the pipeline in which ML models such as neural networks are structured, trained, and evaluated.

Today, large language models (LLMs) such as GPT, Gemini, Claude, and Grok extend this trend. One of their biggest use cases is to write code, including the code to produce future versions of themselves. In February, OpenAI reported that GPT‑5.3‑Codex was instrumental in creating itself, helping to debug training, manage deployment, and analyze evaluation results. Anthropic claims that the majority of its code is now written by Claude Code. These systems still rely on humans to direct and verify the work.
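For readers unfamiliar with the "diversify and iterate" loop the article mentions, here is a minimal, illustrative sketch of an evolutionary algorithm. Everything here is a toy assumption for illustration: the candidates are just lists of numbers, and the fitness function rewards closeness to a hidden target rather than, say, scoring a neural-network architecture.

```python
import random

random.seed(0)  # for reproducibility of this toy run

# Toy setup (illustrative only): each candidate is a list of numeric
# parameters, and fitness rewards candidates near a hidden target.
TARGET = [3.0, -1.5, 2.0]

def fitness(candidate):
    # Higher is better: negative squared distance to the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, scale=0.5):
    # Diversify: perturb each parameter with small Gaussian noise.
    return [c + random.gauss(0, scale) for c in candidate]

def evolve(pop_size=20, generations=100):
    # Start from a random population of candidate solutions.
    population = [[random.uniform(-5, 5) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select: evaluate everyone and keep the fittest half.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Iterate: refill the population with mutated survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

The loop is the whole idea: evaluate, select, mutate, repeat. Systems like the ones described above differ mainly in what a "candidate" is (an algorithm, an architecture, a chip layout) and in who, or what, proposes the mutations.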
Last year, Google DeepMind announced a system called AlphaEvolve, “a coding agent for scientific and algorithmic discovery.” It uses LLMs to guide the evolution of solutions, such as optimizing neural-network architectures, data-center scheduling, and chip design. It’s not a fully recursive loop, as people still need to decide what problems AlphaEvolve should solve and how to evaluate its performance. But each breakthrough enhances scientists’ ability to make further AI breakthroughs.
“It’s also a very collaborative process” between humans and machines, says Matej Balog, a computer scientist at Google DeepMind who worked on AlphaEvolve. “Often you look at what the system discovers, and you actually learn from that discovery.” The system has already surprised the team. “Our mission is to use AI to discover new algorithms that have evaded human intuition,” Balog says. “I think we have the first demonstrations that this is not a wild dream.”
Meanwhile, the co-leads of Google DeepMind’s earlier chip-design system, AlphaChip, have launched a startup called Ricursive Intelligence to use AI to design AI chips. “We expect that we can dramatically reduce the design cycle from one or two years to days,” says cofounder Azalia Mirhoseini. Phase 1 is to help human designers. Phase 2 is to automate the process for companies without in-house designers. In Phase 3, the company will recursively use AI to design better chips to train better AI—though still under human supervision, says cofounder Anna Goldie....
....MUCH MORE
Recently:
January 28 - So it Begins: "Silicon Valley Wants to Build A.I. That Can Improve A.I. on Its Own"
May 8 - AI: "Are we just 18 months away from everything changing?"
And related:
December 2025 - Introducing Unified Model Collapse
Possibly also of interest:
May 2025 - News You Can Use: "....How AI-enabled coups could allow a tiny group to seize power"