As GTC San Jose 2026 wraps up, some commentary.
First up, Yahoo Finance, March 19:
Nvidia's changing its strategic approach to AI, going all in on inferencing and agents
Jensen Huang took the stage at Nvidia’s (NVDA) GTC event in San Jose, Calif., on Monday, clad in his usual leather jacket, to provide the world with an update about what the world’s most valuable company has been cooking up over the last few months.
Huang was as indefatigable as ever as he ran through his roughly two-and-a-half-hour keynote in front of some 30,000 attendees. But what’s come to be known as the Super Bowl of AI featured a noticeable shift in Nvidia’s overall AI strategy — a deeper focus on inferencing, or powering AI models, and agents.
Nvidia’s chips are traditionally known for their general-purpose use. They can train and run AI models, power robots, and serve as the backbone of self-driving cars.
And while Nvidia’s offerings are still the industry standard, upstart chip companies like Cerebras and Groq have begun designing and rolling out processors geared specifically toward running AI models, creating a potential threat to Nvidia’s formidable AI moat.
Huang and company answered that at GTC with a slew of announcements meant to prove Nvidia is the inferencing leader to beat, including the debut of its Groq 3 chip and rack system.
Nvidia didn’t just go further with its inferencing capabilities, though. The company also showed off its own addition to the much-hyped world of high-powered OpenClaw AI agents.
OpenClaw, which debuted as Clawd in November 2025 before being renamed Moltbot and finally OpenClaw in January, has taken off thanks to its ability to run AI agents powered by different AI models on users’ machines via apps like WhatsApp, Discord, Slack, and others.
Now, Nvidia is getting in on the buzz with its NemoClaw platform designed to improve the security and privacy of the agents.
“They are evolving in a big way, not only in inference, agentic, too,” TECHnalysis Research founder and chief analyst Bob O’Donnell told Yahoo Finance.
“The switch to OpenClaw, and now NemoClaw, to me, is even more indicative of this. It just shows how quickly they are reacting to the market.”
Nvidia moves further into inferencing
Nvidia’s decision to include Groq 3 as one of the seven chip platforms that make up Vera Rubin is part of its effort to stay ahead of the broader industry. Nvidia signed a $20 billion deal with Groq in December, hiring founder Jonathan Ross, president Sunny Madra, and other members of the Groq team and giving Nvidia access to Groq’s intellectual property.
The results of the deal are Nvidia’s new Groq 3 language processing unit (LPU) and Groq 3 LPX server rack. That’s right, Nvidia now has graphics processing units (GPUs), LPUs, and central processing units (CPUs). It’s a lot of units....
....MUCH MORE
And at Barron's March 19:
Nvidia Is Giving Apple Vibes. Why That Spells Big Things for the Stock.
The artificial-intelligence revolution has entered a new phase, one in which running AI models, known as inference, is taking over as the main source of demand for AI computing. Nvidia was the winner of round one when training the AI models drove chip sales. But things change quickly in tech, and the company still has to convince the market and customers that it remains indispensable.
CEO Jensen Huang devoted his keynote address at Nvidia’s GTC conference this past week to making the case. He reminded everyone that Nvidia had spent two decades building an ecosystem of hardware and software that makes its platform the least costly for AI. By the end of his speech, Huang had delivered a vision of Nvidia that reminded me of just one other company: Apple.
For years, Wall Street didn’t appreciate that Apple was more than just a hardware firm. Apple’s version of consumer technology provides a carefully thought-out bundle. The hardware is expensive, but it comes with a lot of free software and services that bring everything together seamlessly. In the end, the platform is sticky and full of value.
This is sometimes called Apple’s “walled garden.” iPhones, Macs, and Watches work like one because Apple controls the entire technology stack: the chips, the devices, the operating systems, the applications, and the cloud services. It’s all developed together, so it all just works together.
You’re free to leave the garden through a well-hidden gate, but the flowers are nice and the sun is shining, so why would you?
Nvidia is employing that Apple model of full control in an entirely different market: AI computing. More and more, Nvidia is moving toward being a full platform with an ecosystem of hardware, software, and partnerships that could be sticky like Apple’s, notwithstanding growing competition in the AI chip market.
It begins with Nvidia controlling as many layers of data center infrastructure as it can, what CEO Jensen Huang calls “extreme codesign.” A lot of attention is paid to Nvidia GPU chips, the workhorses of AI data centers, but there are five other Nvidia chips inside its coming Vera Rubin AI server, each with a crucial role in making a product that can’t be matched. The chips work better because they are designed together to work together.
Nvidia also makes data center network switches that alleviate a key computing bottleneck. In the last quarter, networking sales were responsible for 16% of Nvidia revenue, up from 8% the year before. It’s now the fastest-growing unit in Nvidia’s reporting.
This year, Nvidia will integrate a new server design built around AI inference chips from start-up Groq. Vera Rubin will work in concert with Groq on demanding inference tasks. Creating a data center with mixed servers that collaborate with each other is a thorny problem that Nvidia solved with software called Dynamo. Nvidia’s hardware still leads the industry, but the deepest part of the company’s moat is all the software it’s created to run on its hardware.
Huang began his GTC keynote by talking about the 20th anniversary of Nvidia’s most important software, known as CUDA, or Compute Unified Device Architecture. In 2004, Nvidia hired Ian Buck, an engineer fresh out of Stanford University, to create a way for programmers to use Nvidia GPUs for a lot more than just computer graphics and gaming. Two years later, CUDA was born.
Nvidia kept developing the software, and by 2012, AI researchers had made Nvidia’s platform their preferred kit. A whole generation of researchers grew up on it. When ChatGPT triggered the generative AI craze in 2022, no one was more prepared for it than Nvidia.
Buck remains an Nvidia employee.
Nvidia has continued to build the ecosystem on top of the GPU-CUDA combination. The company’s online code portfolio has 700 repositories, including specialized software for engineering, physics, weather, and medical science, along with tools for AI training, inference, and agents. These are active projects with new versions rolling out all the time. Over a third of the repositories have received updates in the past month.
Nvidia is also the world’s largest contributor to open-source AI models with 715 of them available for download....
....MUCH MORE
Also at Barron's, March 18:
Sure, Nvidia Stock Is Stuck. But Don’t Ignore Its Huge Cash Returns
The stock is down $2.45 (-1.36%) at $177.95.
This week:
- As Nvidia's Developer Confab Kicks Off, Some Hard-Won Insight (NVDA)
- Highlights Of Jensen Huang's GTC Keynote Speech, March 16, 2026 (NVDA)
- Transcript: Analyst Q&A, Nvidia GTC, March 17, 2026 (NVDA)
Sadly, I don't think we'll see anything as insightful as 2024's "Nvidia CEO Jensen Huang debuts new $8,990 lizard-embossed leather jacket, also says something about AI GPUs" (NVDA).
Although....looking back to 2016's "Huh, This NVIDIA Company May Be On To Something (NVDA)," it's possible I'll come up with something.
After a series of all-time highs last week, the stock looks set to open up a couple of pennies at $44.35.
From the Wall Street Journal:
New Chips Propel Machine Learning
Divide by 40 to account for the stock splits and we see $1.11 on the old stock.
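For readers who want to check the arithmetic, here's a minimal sketch of that split adjustment. The factor of 40 is stated above; breaking it into 4 × 10 (Nvidia's 4-for-1 split in 2021 and 10-for-1 split in 2024) is our assumption, and the helper name is ours:

```python
# Convert the 2016 share price into post-split terms.
# Cumulative factor of 40 = 4-for-1 split (2021) x 10-for-1 split (2024).
SPLIT_FACTOR = 4 * 10

def split_adjusted(old_price: float, factor: int = SPLIT_FACTOR) -> float:
    """Express a pre-split share price in today's post-split shares."""
    return round(old_price / factor, 2)

print(split_adjusted(44.35))  # 44.35 / 40 -> 1.11
```

So the 2016 open of $44.35 works out to about $1.11 per share in today's terms, as noted.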