Wednesday, November 19, 2025

Nvidia Q3 2026 Earnings Call Transcript, November 19, 2025 (NVDA)

From Motley Fool Transcription, Nov. 19:

Call participants

    President and Chief Executive Officer — Jensen Huang
    Executive Vice President and Chief Financial Officer — Colette Kress
    Vice President, Investor Relations — Toshiya Hari 

Risks

  • China revenue constraints -- Colette Kress said, "Sizable purchase orders never materialized in the quarter due to geopolitical issues and the increasingly competitive market in China," directly impacting data center compute product shipments to China.
  • Rising input costs -- Colette Kress stated, "input costs are on the rise but we are working to hold gross margins in the mid-seventies," highlighting margin pressure into fiscal 2027.
  • Inventory grew 32% quarter over quarter while supply commitments increased 63% sequentially.

Takeaways

  • Total revenue -- $57 billion, up 62% year over year, with sequential growth of $10 billion or 22%.
  • Data center revenue -- Record $51 billion, representing a 66% year-over-year increase, driven mainly by strong AI infrastructure demand.
  • Networking revenue -- $8.2 billion, up 162% year over year, with growth in NVLink, InfiniBand, and Spectrum X Ethernet.
  • Compute segment -- Grew 56% year over year in Q3 (ended Oct. 26, 2025), driven principally by the ramp of GB 300 GPUs, while Hopper platform delivered approximately $2 billion in segment revenue in Q3.
  • Gaming revenue -- $4.3 billion, up 30% year over year, attributed to heightened gamer demand.
  • Professional visualization revenue -- $760 million, up 56% from the prior year, with DGX Spark cited as a key contributor.
  • Automotive revenue -- $592 million, up 32% year over year, primarily driven by self-driving solutions.
  • GAAP gross margin -- 73.4%, while non-GAAP gross margin reached 73.6%, both exceeding previous outlook due to data center mix, improved cycle time, and cost structure.
  • Operating expenses -- GAAP operating expenses rose 8% sequentially; non-GAAP operating expenses increased 11% sequentially, mainly due to infrastructure compute, compensation, and engineering/development costs.
  • Inventory and supply commitments -- Inventory increased 32% and supply commitments rose 63% quarter over quarter, as the company prepared for growth.
  • Q4 revenue outlook -- Expected total revenue of $65 billion plus or minus 2%, implying roughly 14% sequential growth at the midpoint (the arithmetic is sketched just after this list), driven by continued Blackwell architecture momentum.
  • Gross margin guidance -- Expected mid-seventies gross margins for both GAAP and non-GAAP, despite rising input costs.
  • AI platform demand -- Management highlighted a fully utilized GPU installed base and said "the clouds are sold out," indicating a persistent supply-demand imbalance.
  • Blackwell platform -- GB 300 made up roughly two-thirds of total Blackwell revenue and is now leading the product transition, with broad customer adoption.
  • China data center assumptions -- Management stated, "we are not assuming any data center compute revenue from China" in the Q4 outlook.
  • Strategic partnerships and investments -- Announced deals with partners including AWS, HUMAIN, Suzuki, Intel (NASDAQ: INTC), Arm (NASDAQ: ARM), and Anthropic; up to 5 million GPUs associated with new AI factory projects.
  • Rubin platform -- On schedule for 2026 ramp, with first silicon delivered and a focus on backward compatibility and rapid ecosystem adoption.
  • Performance leadership -- Management cited 5x faster time to train versus Hopper using Blackwell Ultra, plus 10x higher performance per watt and 10x lower cost per token versus H200 on DeepSeek R1 benchmarks.
  • Strategic investments -- Continued investment in AI model builders such as OpenAI and Anthropic to deepen ecosystem reach and performance optimization.
  • Supply chain expansion -- First Blackwell wafer produced on U.S. soil in partnership with TSMC (NYSE: TSM); ongoing efforts to broaden supply redundancy and resilience.
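
For readers who want to check the growth math in the bullets above, here is a quick, illustrative Python sketch using only the rounded headline figures; the year-ago revenue it derives is implied from the stated 62% growth rate, not a number quoted on the call.

    # Illustrative sanity check of the growth figures in the bullets above,
    # using rounded headline numbers (all dollar amounts in billions).
    q3_revenue = 57.0      # reported Q3 FY2026 total revenue
    q4_midpoint = 65.0     # Q4 guidance of $65 billion plus or minus 2%

    sequential_growth = (q4_midpoint - q3_revenue) / q3_revenue
    print(f"Implied Q4 sequential growth at the midpoint: {sequential_growth:.1%}")  # ~14.0%

    # Working backward from the stated 62% year-over-year growth gives an
    # implied year-ago quarter of roughly $35 billion (derived, not quoted).
    implied_year_ago = q3_revenue / 1.62
    print(f"Implied Q3 FY2025 revenue: ${implied_year_ago:.1f}B")  # ~$35.2B

The midpoint calculation reproduces the 14% sequential growth figure cited in the guidance bullet.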

Summary

Nvidia (NVDA) reported revenue of $57 billion with significant growth across all business segments, particularly in data center operations. Management highlighted visibility to half a trillion dollars in Blackwell and Rubin revenue from the start of this year through the end of calendar year 2026. The GPU installed base was described as fully utilized and "the clouds are sold out," reinforcing exceptionally strong demand conditions. Networking revenue rose 162% year over year, and GB 300 shipments have overtaken prior Blackwell products, indicating rapid customer adoption of next-generation architectures. Despite geopolitical constraints limiting shipments to China, the company expects gross margin stability in the mid-seventies for the coming year, even as input costs rise and inventory levels increase. Anticipated Q4 revenue of $65 billion, up 14% sequentially at the midpoint, reflects ongoing momentum in AI infrastructure buildout fueled by landmark deals and broad-based enterprise adoption.

  • Colette Kress said, "we shipped $50 billion this quarter," positioning the company toward a $500 billion opportunity for Blackwell and Rubin platforms through 2026, with aggregate future demand likely to increase.
  • Jensen Huang described three fundamental platform shifts—CPU to GPU acceleration, generative AI mainstreaming, and agentic AI emergence—as core drivers of multi-year infrastructure investment.
  • Collaboration highlights included a strategic partnership with Anthropic, which is adopting Nvidia architecture for the first time, initially targeting one gigawatt of compute capacity with room to scale.
  • Customers such as AWS and HUMAIN announced plans to deploy up to 150,000 AI accelerators, with xAI and HUMAIN co-developing a flagship 500 megawatt data center, illustrating tangible high-scale deployments.
  • Management cited broad industry engagement, with new platform launches and deeper integrations across hyperscalers, enterprise software providers, and robotics innovators propelling the CUDA ecosystem’s expansion.

Industry glossary

  • CUDA: Nvidia’s parallel computing platform and programming model enabling GPUs to accelerate specialized applications for artificial intelligence and scientific computing (a short code sketch follows this glossary).
  • Blackwell: Nvidia’s current-generation AI GPU architecture, including products such as GB 200 and GB 300, optimized for advanced AI workloads.
  • Rubin: The company’s forthcoming data center AI platform, positioned as the next major architecture after Blackwell with backward compatibility and enhanced performance.
  • NVLink: Nvidia’s proprietary high-speed interconnect enabling fast data transfer across GPUs within a computing cluster.
  • Spectrum X Ethernet: High-performance Ethernet switching family from Nvidia, tailored for AI and accelerated data center networking.
  • Agentic AI: Artificial intelligence systems capable of autonomous reasoning, planning, and tool use, as described by management as a new wave in computing.
  • MLPerf: Industry-standard AI benchmarking suite cited for performance measurement of training and inference in machine learning models.
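
To make the CUDA entry above slightly more concrete, here is a minimal, hypothetical sketch of the programming model from Python via Numba's CUDA JIT. It is our own illustration rather than anything shown on the call; it assumes a CUDA-capable GPU with the numba and numpy packages installed, and the saxpy kernel is just a placeholder example.

    # Minimal illustration of the CUDA programming model via Numba's CUDA JIT.
    # Each GPU thread handles one element of the arrays in parallel.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy(a, x, y, out):
        i = cuda.grid(1)        # this thread's global index
        if i < x.size:          # guard threads past the end of the array
            out[i] = a * x[i] + y[i]

    n = 1 << 20
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(x)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    saxpy[blocks, threads_per_block](2.0, x, y, out)  # arrays are copied to/from the GPU automatically

    assert np.allclose(out, 2.0 * x + y)

Production AI workloads lean on Nvidia's optimized libraries rather than hand-written kernels like this, but the many-threads-over-one-array pattern is the core idea the glossary entry describes.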

Full Conference Call Transcript

Toshiya Hari: Good afternoon, everyone, and welcome to NVIDIA Corporation's conference call for the third quarter of fiscal 2026. With me today from NVIDIA Corporation are Jensen Huang, president and chief executive officer, and Colette Kress, executive vice president and chief financial officer. I'd like to remind you that our call is being webcast live on NVIDIA Corporation's Investor Relations website, where a replay will be available until the call covering results for the fourth quarter of fiscal 2026. The content of today's call is NVIDIA Corporation's property. It cannot be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially.

For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, 11/19/2025, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.

Colette Kress: Thank you, Toshiya. We delivered another outstanding quarter with revenue of $57 billion, up 62% year over year and a record sequential revenue growth of $10 billion or 22%. Our customers continue to lean into three platform shifts fueling exponential growth for accelerated computing, powerful AI models, and agentic applications. Yet we are still in the early innings of these transitions that will impact our work across every industry. Currently, we have visibility to a half a trillion dollars in Blackwell and Rubin revenue from the start of this year through the end of calendar year 2026.

By executing our annual product cadence and extending our performance leadership through full stack design, we believe NVIDIA Corporation will be the superior choice for the $3 to $4 trillion in annual AI infrastructure build we estimate by the end of the decade. Demand for AI infrastructure continues to exceed our expectations. The clouds are sold out, and our GPU installed base, both new and previous generations, including Blackwell, Hopper, and Ampere, is fully utilized. Record Q3 data center revenue of $51 billion increased 66% year over year, a significant feat at our scale.

Compute grew 56% year over year driven primarily by the GB 300 ramp while networking more than doubled given the onset of NVLink scale up and robust double-digit growth across Spectrum X Ethernet and Quantum X InfiniBand. The world's hyperscalers, a trillion-dollar industry, are transforming search, recommendations, and content understanding from classical machine learning to generative AI. NVIDIA CUDA excels at both and is the ideal platform for this transition, driving infrastructure investment measured in hundreds of billions of dollars. At Meta, AI recommendation systems are delivering higher quality and more relevant content, leading to more time spent on apps such as Facebook and Threads.

Analyst expectations for the top CSPs' and hyperscalers' aggregate 2026 CapEx have continued to increase and now sit at roughly $600 billion, more than $200 billion higher than at the start of the year. We see the transition to accelerated computing and generative AI across current hyperscale workloads contributing toward roughly half of our long-term opportunity. Another growth pillar is the ongoing increase in compute spend driven by foundation model builders such as Anthropic, Mistral, OpenAI, Reflection, Safe Superintelligence, Thinking Machines Lab, and xAI, all scaling compute aggressively to scale intelligence. The three scaling laws, pretraining, post-training, and inference, remain intact.....

....MUCH MORE 

As we've noted elsewhere, the analyst questions seem a cut above the run-of-the-mill conference queries (take "elsewhere" to mean "who knows where"; this is the 40th or 42nd of these things we've sat through).