The transcriber bot got tripped up on the fiscal year; this is fiscal 2026.
In pre-market trading the stock is down $2.98 (-1.64%), about half of yesterday's after-hours worst, at $178.62.
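The quoted move is easy to sanity-check. A minimal sketch, editorial only, using nothing but the prices quoted above:

```python
# Sanity check on the pre-market move (prices as quoted above;
# this is an editorial arithmetic check, not from the transcript).
price = 178.62   # quoted pre-market price, $
drop = 2.98      # quoted pre-market decline, $

prior_close = price + drop              # implied prior close
pct_change = -drop / prior_close * 100  # percent change vs. that close
print(f"Implied prior close: ${prior_close:.2f}")
print(f"Change: {pct_change:.2f}%")  # about -1.64%, matching the quote
```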
From Investing.com, skipping past the introductory and background material (well worth the reader's time):
....Colette Kress, Executive Vice President and Chief Financial Officer, NVIDIA: Thank you, Toshiya. We delivered another record quarter while navigating what continues to be a dynamic external environment. Total revenue was $46.7 billion, exceeding our outlook as we grew sequentially across all market platforms. Data center revenue grew 56% year over year. Data center revenue also grew sequentially despite the $4.0 billion decline in H20 revenue.
NVIDIA’s Blackwell platform reached record levels, growing sequentially by 17%. We began production shipments of GB300 in Q2. Our full-stack AI solutions for cloud service providers, neoclouds, enterprises, and sovereigns are all contributing to our growth. We are at the beginning of an industrial revolution that will transform every industry. We see $3 trillion to $4 trillion in AI infrastructure spend by the end of the decade.
The scale and scope of these build-outs present significant long-term growth opportunities for NVIDIA. The GB200 NVL system is seeing widespread adoption with deployments at CSPs and consumer Internet companies. Lighthouse model builders, including OpenAI, Meta, and Mistral, are using the GB200 NVL72 at data center scale for both training next-generation models and serving inference in production. The new Blackwell Ultra platform also had a strong quarter, generating tens of billions in revenue. The transition to GB300 has been seamless for major cloud service providers because it shares architecture, software, and physical footprint with GB200, enabling them to build and deploy GB300 racks with ease. The transition to the new GB300 rack-based architecture has been seamless.
Factory builds in late July and early August were successfully converted to support the GB300 ramp, and today full production is underway. The current run rate is back at full speed, producing approximately 1,000 racks per week. This output is expected to accelerate further throughout the third quarter as additional capacity comes online. We expect widespread market availability in the second half of the year. CoreWeave is preparing to bring its GB300 instance to market and is already seeing 10x more inference performance on reasoning models compared to H100.
Compared to the previous Hopper generation, GB300 NVL72 AI factories promise a 10x improvement in tokens-per-watt energy efficiency, which translates directly to revenue, as data centers are power-limited. The chips of the Rubin platform are in fab: the Vera CPU, Rubin GPU, CX-9 SuperNIC, NVLink 144 scale-up switch, Spectrum-X scale-out and scale-across switch, and the silicon photonics processor. Rubin remains on schedule for volume production next year. Rubin will be our third-generation NVLink rack-scale AI supercomputer with a mature and full-scale supply chain.
This keeps us on track with our annual product cadence and continuous innovation across compute, networking, systems, and software. In late July, the US government began reviewing licenses for sales of H20 to China customers. While a select number of our China-based customers have received licenses over the past few weeks, we have not shipped any H20 based on those licenses. USG officials have expressed an expectation that the USG will receive 15% of the revenue generated from licensed H20 sales, but to date the USG has not published a regulation codifying such a requirement.
We have not included H20 in our Q3 outlook as we continue to work through geopolitical issues. If geopolitical issues recede, we should ship $2 billion to $5 billion in H20 revenue in Q3, and if we receive more orders, we can bill more. We continue to advocate for the US government to approve Blackwell for China. Our products are designed and sold for beneficial commercial use, and every licensed sale we make will benefit the US economy and US leadership.
In highly competitive markets, we want to win the support of every developer. America’s AI technology stack can be the world’s standard if we race and compete globally. Notable in the quarter was an increase in Hopper H100 and H200 shipments. We also sold approximately $650 million of H20 in Q2 to an unrestricted customer outside of China. The sequential increase in Hopper demand indicates the breadth of data center workloads that run on accelerated computing and the power of CUDA libraries and full-stack optimizations, which continuously enhance the performance and economic value of the platform.
As we continue to deliver both Hopper and Blackwell GPUs, we are focused on meeting soaring global demand. This growth is fueled by capital expenditures from the clouds to enterprises, which are on track to invest $600 billion in data center infrastructure and compute this calendar year alone, nearly doubling in two years. We expect annual AI infrastructure investments to continue growing, driven by several factors: reasoning and agentic AI requiring orders of magnitude more training and inference compute, global build-outs for sovereign AI, enterprise AI adoption, and the arrival of physical AI and robotics. Blackwell has set the benchmark as the new standard for AI inference performance. The market for AI inference is expanding rapidly with reasoning and agentic AI gaining traction across industries.
Blackwell’s rack-scale NVLink and CUDA full-stack architecture addresses this by redefining the economics of inference. The new NVFP4 4-bit precision and NVLink 72 on the GB300 platform deliver a 50x increase in energy efficiency per token compared to Hopper, enabling companies to monetize their compute at unprecedented scale. For instance, a $3 million investment in GB200 infrastructure can generate $30 million in token revenue, a 10x return. NVIDIA software innovation, combined with the strength of our developer ecosystem, has already improved Blackwell’s performance by more than 2x since its launch. Advances in CUDA, TensorRT-LLM, and Dynamo are unlocking maximum efficiency.
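The return math Kress cites is straightforward to verify. A minimal editorial sketch using only the figures stated on the call:

```python
# Token-economics claim from the call: a $3M GB200 infrastructure
# investment generating $30M in token revenue is a 10x return.
capex = 3_000_000            # stated GB200 infrastructure investment, $
token_revenue = 30_000_000   # stated token revenue, $

roi_multiple = token_revenue / capex
print(f"Return multiple: {roi_multiple:.0f}x")  # 10x, as stated
```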
CUDA library contributions from the open-source community, along with NVIDIA’s open libraries and frameworks, are now integrated into millions of workflows. This powerful flywheel of collaborative innovation between NVIDIA and the global community strengthens NVIDIA’s performance leadership. NVIDIA is a top contributor to open AI models, data, and software. Blackwell has introduced a groundbreaking numerical approach to large language model pretraining: using NVFP4, computations on the GB300 can now achieve 7x faster training than the H100 using FP8. This innovation delivers the accuracy of 16-bit precision with the speed and efficiency of 4-bit, setting a new standard for AI factory efficiency and scalability.
The AI industry is quickly adopting this revolutionary technology, with major players such as AWS, Google Cloud, Microsoft Azure, and OpenAI, as well as Cohere, Mistral, Kimi AI, Perplexity, Reflection, and Runway, already embracing it. NVIDIA’s performance leadership was further validated in the latest MLPerf Training benchmarks, where the GB200 delivered a clean sweep. Be on the lookout for the upcoming MLPerf Inference results in September, which will include benchmarks based on Blackwell Ultra. NVIDIA RTX Pro servers are in full production with the world’s system makers. These are air-cooled, PCIe-based systems that integrate seamlessly into standard IT environments and run traditional enterprise IT applications as well as the most advanced agentic and physical AI applications.
Nearly 90 companies, including many global leaders, are already adopting RTX Pro servers. Hitachi uses them for real-time simulation and digital twins, Lilly for drug discovery, Hyundai for factory design and AV validation, and Disney for immersive storytelling. As enterprises modernize data centers, RTX Pro servers are poised to become a multibillion-dollar product line. Sovereign AI is on the rise, as nations’ ability to develop their own AI using domestic infrastructure, data, and talent presents a significant opportunity for NVIDIA. NVIDIA is at the forefront of landmark initiatives across the UK and Europe.
The European Union plans to invest €20 billion to establish 20 AI factories across France, Germany, Italy, and Spain, including five gigafactories, to increase its AI compute infrastructure tenfold. In the UK, the Isambard-AI supercomputer powered by NVIDIA was unveiled as the country’s most powerful AI system, delivering 21 exaflops of AI performance to accelerate breakthroughs in fields such as drug discovery and climate modeling. We are on track to achieve over $20 billion in sovereign AI revenue this year, more than double that of last year. Networking delivered record revenue of $7.3 billion, as the escalating demands of AI compute clusters necessitate high-efficiency, low-latency networking. This represents a 46% sequential and 98% year-on-year increase, with strong demand across Spectrum-X Ethernet, InfiniBand, and NVLink.
Our Spectrum-X enhanced Ethernet solutions provide the highest-throughput, lowest-latency network for Ethernet AI workloads. Spectrum-X Ethernet delivered double-digit sequential and year-over-year growth, with annualized revenue exceeding $10 billion. At Hot Chips, we introduced Spectrum-XGS Ethernet, a technology designed to unify disparate data centers into giga-scale AI super-factories. CoreWeave is an initial adopter of the solution, which is projected to double GPU-to-GPU communication speed. InfiniBand revenue nearly doubled sequentially, fueled by the adoption of XDR technology, which provides double the bandwidth of its predecessor, especially valuable for model builders.
The world’s fastest switch, NVLink, with 14x the bandwidth of PCIe Gen 5, delivered strong growth as customers deployed Grace Blackwell NVLink rack-scale systems. The positive reception to NVLink Fusion, which allows semi-custom AI infrastructure, has been widespread. Japan’s upcoming FugakuNEXT will integrate Fujitsu’s CPUs with our architecture via NVLink Fusion. It will run a range of workloads, including AI, supercomputing, and quantum computing. FugakuNEXT joins a rapidly expanding list of leading quantum supercomputing and research centers running on NVIDIA’s CUDA-Q quantum platform, including Jülich, AIST, NNF, and NERSC, supported by over 300 ecosystem partners, including AWS, Google Quantum AI, Quantinuum, QuEra, and PsiQuantum.
Jetson Thor, our new robotics computing platform, is now available. Thor delivers an order of magnitude greater AI performance and energy efficiency than NVIDIA AGX Orin. It runs the latest generative and reasoning AI models at the edge in real time, enabling state-of-the-art robotics. Adoption of NVIDIA’s robotics full-stack platform is growing at a rapid rate, with over 2 million developers and 1,000-plus hardware, software application, and sensor partners taking our platform to market.
Leading enterprises across industries have adopted Thor, including Agility Robotics, Amazon Robotics, Boston Dynamics, Caterpillar, Figure, Hexagon, Medtronic, and Meta. Robotic applications require exponentially more compute on the device and in infrastructure, representing a significant long-term demand driver for our data center platform. NVIDIA Omniverse with Cosmos is our data center physical AI digital twin platform built for the development of robots and robotic systems. This quarter, we announced a major expansion of our partnership with Siemens to enable AI-automated factories. Leading European robotics companies, including Agile Robots, Neura Robotics, and Universal Robots, are building their latest innovations with the Omniverse platform. Transitioning to a quick summary of our revenue by geography.
China declined on a sequential basis to a low-single-digit percentage of data center revenue. Note, our Q3 outlook does not include H20 shipments to China customers. Singapore represented 22% of second quarter billed revenue, as customers have centralized their invoicing in Singapore. Over 99% of data center compute revenue billed to Singapore was for US-based customers. Gaming revenue was a record $4.3 billion, a 14% sequential increase and a 49% jump year on year.
This was driven by the ramp of Blackwell GeForce GPUs, with strong sales continuing as we increased supply availability. This quarter, we shipped the GeForce RTX 5060 desktop GPU. It brings double the performance along with advanced ray tracing, neural rendering, and AI-powered DLSS 4 gameplay to millions of gamers worldwide. Blackwell is coming to GeForce NOW in September. This is GeForce NOW’s most significant upgrade, offering RTX 5080-class performance, minimal latency, and 5K resolution at 120 frames per second.
We are also doubling the GeForce NOW catalog to over 4,500 titles, the largest library of any cloud gaming service. For AI enthusiasts, on-device AI performs best on RTX GPUs. We partnered with OpenAI to optimize their open-source GPT models for high-quality, fast, and efficient inference on millions of RTX-enabled Windows devices. With the RTX platform stack, Windows developers can create AI applications designed to run on the world’s largest AI PC user base. Professional visualization revenue reached $601 million, a 32% year-on-year increase.
Growth was driven by adoption of high-end RTX workstation GPUs and AI-powered workloads like design, simulation, and prototyping. Key customers are leveraging our solutions to accelerate their operations. Activision Blizzard uses RTX workstations to enhance creative workflows, while robotics innovator Figure AI powers its humanoid robots with RTX embedded GPUs. Automotive revenue, which includes only in-car compute revenue, was $586 million, up 69% year on year, primarily driven by self-driving solutions. We have begun shipments of the NVIDIA Thor SoC, the successor to Orin.
Thor’s arrival coincides with the industry’s accelerating shift to vision-language-model architectures, generative AI, and higher levels of autonomy. Thor is the most successful robotics and AV computer we’ve ever created. Thor will power our full-stack DRIVE AV software platform, now in production, opening up billions in new revenue opportunities for NVIDIA while improving vehicle safety and autonomy. Now moving to the rest of our P&L.
GAAP gross margin was 72.4%, and non-GAAP gross margin was 72.7%. These figures include a $180 million, or 40 basis point, benefit from releasing previously reserved H20 inventory. Excluding this benefit, non-GAAP gross margin would have been 72.3%, still exceeding our outlook. GAAP operating expenses rose 8% sequentially, and 6% on a non-GAAP basis. The increase was driven by higher compute and infrastructure costs as well as higher compensation and benefits costs.
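The ex-benefit margin is simple subtraction, since 1 basis point is 0.01 percentage points. An editorial sketch checking the stated figures:

```python
# Back out the one-time H20 inventory reserve release from the
# non-GAAP gross margin, using only the figures stated on the call.
reported_gm_pct = 72.7   # reported non-GAAP gross margin, %
benefit_bp = 40          # benefit from the reserve release, basis points

ex_benefit_gm_pct = reported_gm_pct - benefit_bp / 100  # 1 bp = 0.01 pct pt
print(f"Non-GAAP gross margin ex-benefit: {ex_benefit_gm_pct:.1f}%")  # 72.3%
```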
To support the ramp of Blackwell and Blackwell Ultra, inventory increased sequentially from $11 billion to $15 billion in Q2. While we prioritize funding our growth and strategic initiatives, in Q2 we returned $10 billion to shareholders through share repurchases and cash dividends. Our board of directors recently approved a $60 billion share repurchase authorization, adding to the $14.7 billion of authorization remaining at the end of Q2. Okay, let me turn to the outlook for the third quarter.
Total revenue is expected to be $54 billion, plus or minus 2%. This represents over $7 billion in sequential growth. Again, we do not assume any H20 shipments to China customers in our outlook. GAAP and non-GAAP gross margins are expected to be 73.3% and 73.5%, respectively, plus or minus 50 basis points. We continue to expect to exit the year with non-GAAP gross margins in the mid-seventies.
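The guidance arithmetic is easy to lay out. An editorial sketch using the stated Q3 midpoint against the Q2 revenue reported earlier in the call:

```python
# Q3 outlook arithmetic from the figures stated on the call:
# $54B midpoint, plus or minus 2%, versus $46.7B reported in Q2.
q2_revenue_b = 46.7    # reported Q2 total revenue, $ billions
q3_midpoint_b = 54.0   # Q3 outlook midpoint, $ billions
tolerance = 0.02       # plus or minus 2%

low_b = q3_midpoint_b * (1 - tolerance)
high_b = q3_midpoint_b * (1 + tolerance)
sequential_growth_b = q3_midpoint_b - q2_revenue_b
print(f"Outlook range: ${low_b:.1f}B to ${high_b:.1f}B")
print(f"Sequential growth at midpoint: ${sequential_growth_b:.1f}B")
```

At the midpoint this is $7.3 billion of sequential growth, consistent with the "over $7 billion" characterization.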
GAAP and non-GAAP operating expenses are expected to be approximately $5.9 billion and $4.2 billion, respectively. For the full year, we expect operating expenses to grow in the high-30s percent range year over year, up from our prior expectation of the mid-30s. We are accelerating investments in the business to address the magnitude of growth opportunities that lie ahead. GAAP and non-GAAP other income and expense is expected to be income of approximately $500 million, excluding gains and losses from non-marketable and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 16.5%, plus or minus 1%, excluding any discrete items.
Further financial data are included in the CFO commentary and other information available on our website. In closing, let me highlight upcoming events for the financial community. We will be at the Goldman Sachs Technology Conference on September 8 in San Francisco. Our annual NDR will commence in October. GTC DC begins on October 27, with Jensen’s keynote scheduled for the 28th.
We look forward to seeing you at these events. Our earnings call to discuss the results of our third quarter of fiscal 2026 is scheduled for November 19. We will now open the call for questions. Operator, would you please poll for questions?
Sarah, Conference Operator: Thank you. Your first question comes from C. J. Muse with Cantor Fitzgerald. Your line is open.
C.J. Muse, Analyst, Cantor Fitzgerald: I guess, with wafer-in to rack-out lead times of twelve months, you confirmed on the call today that Rubin is on track to ramp in the second half. And obviously, many of these investments are multiyear projects contingent upon power, cooling, etcetera. I was hoping you could take a high-level view and speak to your vision for growth into 2026. And as part of that, if you could comment on networking versus data center, that would be very helpful. Thank you.
Jensen Huang, President and Chief Executive Officer, NVIDIA: Yeah. Thanks, CJ. At the highest level, the growth driver would be the evolution, the introduction, if you will, of reasoning agentic AI. Where chatbots used to be one-shot, you give it a prompt and it would generate the answer, now the AI does research.
It thinks, makes a plan, and it might use tools. It’s called long thinking, and the longer it thinks, oftentimes the better the answer it produces. The amount of computation necessary for reasoning agentic AI models versus one-shot could be 100 times, a thousand times, and potentially even more, given the amount of research, and basically reading and comprehension, that the model goes off to do. And so the amount of computation that agentic AI requires has grown tremendously. And, of course, the effectiveness has also grown tremendously.
Because of agentic AI, the amount of hallucination has dropped significantly. It can now use tools and perform tasks. Enterprise use cases have been opened up. As a result of agentic AI and vision-language models, we are now seeing a breakthrough in physical AI, in robotics, autonomous systems. So in the last year, AI has made tremendous progress, and agentic systems, reasoning systems, are completely revolutionary.
Now, we built the Blackwell NVLink 72 system, a rack-scale computing system, for this moment. We’ve been working on it for several years. This last year, we transitioned from NVLink 8, which is node-scale computing, where each node is a computer, to NVLink 72, where each rack is a computer. That disaggregation of NVLink 72 into a rack-scale system was extremely hard to do, but the results are extraordinary. We’re seeing orders-of-magnitude speedups, and therefore energy efficiency, and therefore cost-effectiveness of token generation, because of NVLink 72.
And so, over the next couple of years... well, you asked about longer term. Over the next five years, we're going to scale with Blackwell, with Rubin, and follow-ons into effectively a $3 trillion to $4 trillion AI infrastructure opportunity. Over the last couple of years, you have seen that CapEx at just the top four CSPs has doubled and grown to about $600 billion. So we're at the beginning of this build-out, and AI technology advances have really enabled AI to be adopted and to solve problems across many different industries....
....MUCH MORE, so much more.
Here's the release from the company:
NVIDIA Announces Financial Results for Second Quarter Fiscal 2026