Wednesday, August 28, 2024

Nvidia Q2 2025 Earnings Call Transcript, August 28, 2024 (NVDA)

From Motley Fool Transcribing, August 28:

NVDA earnings call for the period ending July 28, 2024.

Contents:

    Prepared Remarks
    Questions and Answers
    Call Participants

Prepared Remarks:....

*****

....During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. Let me highlight an upcoming event for the financial community. We will be attending the Goldman Sachs Communacopia and Technology Conference on September 11 in San Francisco, where Jensen will participate in a keynote fireside chat.

Our earnings call to discuss the results of our third quarter of fiscal 2025 is scheduled for Wednesday, November 20, 2024. With that, let me turn the call over to Colette.

Colette M. Kress -- Chief Financial Officer, Executive Vice President

Thanks, Stewart. Q2 was another record quarter. Revenue of $30 billion was up 15% sequentially and up 122% year on year and well above our outlook of $28 billion. Starting with Data Center.

Data Center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year on year, driven by strong demand for NVIDIA Hopper, GPU computing, and our networking platforms. Compute revenue grew more than 2.5x, and networking revenue grew more than 2x from last year. Cloud service providers represented roughly 45% of our Data Center revenue, and more than 50% stemmed from consumer Internet and enterprise companies.

Customers continue to accelerate their Hopper architecture purchases while gearing up to adopt Blackwell. Key workloads driving our Data Center growth include generative AI model training and inferencing; video, image, and text data pre- and post-processing with CUDA and AI workloads; synthetic data generation; AI-powered recommender systems; and SQL and vector database processing. Next-generation models will require 10 to 20 times more compute to train with significantly more data. The trend is expected to continue.

Over the trailing four quarters, we estimate that inference drove more than 40% of our Data Center revenue. CSPs, consumer Internet companies, and enterprises benefit from the incredible throughput and efficiency of NVIDIA's inference platform. Demand for NVIDIA is coming from frontier model makers, consumer Internet services, and tens of thousands of companies and start-ups building generative AI applications for consumers, advertising, education, enterprise and healthcare, and robotics. Developers desire NVIDIA's rich ecosystem and availability in every cloud.

CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity given the high demand. The NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer Internet, and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100. Our Data Center revenue in China grew sequentially in Q2 and is a significant contributor to our Data Center revenue.

As a percentage of total Data Center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward. The latest round of MLPerf inference benchmarks highlighted NVIDIA's inference leadership, with both the NVIDIA Hopper and Blackwell platforms combining to win gold medals on all tasks. At Computex, NVIDIA, together with the top computer manufacturers, unveiled an array of Blackwell architecture-powered systems and NVIDIA networking for building AI factories and data centers.

With the NVIDIA MGX modular reference architecture, our OEM and ODM partners are building more than 100 Blackwell-based system designs quickly and cost-effectively. The NVIDIA Blackwell platform brings together multiple GPUs, CPU, DPU, NVLink, NVLink Switch, and the networking chips, systems, and NVIDIA CUDA software to power the next generation of AI across use cases, industries, and countries. The NVIDIA GB200 NVL72 system with fifth-generation NVLink enables all 72 GPUs to act as a single GPU, delivering up to 30x faster inference for LLM workloads and unlocking the ability to run trillion-parameter models in real time. Hopper demand is strong, and Blackwell is widely sampling.

We executed a change to the Blackwell GPU mask to improve production yields. The Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year '26. In Q4, we expect to ship several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025.

Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year. Networking revenue increased 16% sequentially. Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially with hundreds of customers adopting our Ethernet offerings.

Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers, and enterprises, including xAI to connect the largest GPU compute cluster in the world. Spectrum-X supercharges Ethernet for AI processing and delivers 1.6x the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to become a multibillion-dollar product line within a year.

Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year. The enterprise AI wave has started.

Enterprises also drove sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. A range of applications are fueling our growth, including AI-powered chatbots, generative AI copilots, and agents to build new monetizable business applications and enhance employee productivity. Amdocs is using NVIDIA generative AI for their smart agent, transforming the customer experience and reducing customer service costs by 30%.

ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company's history. SAP is using NVIDIA to build Joule copilots. Cohesity is using NVIDIA to build their generative AI agent and lower generative AI development costs. Snowflake, which serves over 3 billion queries a day for over 10,000 enterprise customers, is working with NVIDIA to build copilots.

And lastly, Wistron is using NVIDIA AI and Omniverse to reduce end-to-end cycle times for their factories by 50%. Automotive was a key growth driver for the quarter, as every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multibillion dollars in revenue across on-prem and cloud consumption and will grow as next-generation AV models require significantly more compute. Health care is also on its way to being a multibillion-dollar business as AI revolutionizes medical imaging, surgical robots, patient care, electronic health record processing, and drug discovery.

During the quarter, we announced a new NVIDIA AI foundry service to supercharge generative AI for the world's enterprises with Meta's Llama 3.1 collection of models. This marks a watershed moment for enterprise AI. Companies for the first time can leverage the capabilities of an open-source frontier-level model to develop customized AI applications to encode their institutional knowledge into an AI flywheel to automate and accelerate their business. Accenture is the first to adopt the new service to build custom Llama 3.1 models for both its own use and to assist clients seeking to deploy generative AI applications.

NVIDIA NIMs accelerate and simplify model deployment. Companies across healthcare, energy, financial services, retail, transportation, and telecommunications are adopting NIMs, including Aramco, Lowe's, and Uber. AT&T realized 70% cost savings and an eight times latency reduction after moving to NIMs for generative AI, call transcription, and classification. Over 150 partners are embedding NIMs across every layer of the AI ecosystem.

We announced NIM Agent Blueprint, a catalog of customizable reference applications that include a full suite of software for building and deploying enterprise generative AI applications. With NIM Agent Blueprint, enterprises can refine their AI applications over time, creating a data-driven AI flywheel. The first NIM Agent Blueprints include workloads for customer service, computer-aided drug discovery, and enterprise retrieval augmented generation. Our system integrators, technology solution providers, and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises.

NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which has great momentum. We expect our software, SaaS, and support revenue to approach a $2 billion annual run rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth. Moving to gaming and AI PC. Gaming revenue of $2.88 billion increased 9% sequentially and 16% year on year.

We saw sequential growth in console, notebook, and desktop revenue; demand is strong and growing, and channel inventory remains healthy. Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS, and there are now over 200 RTX AI laptop designs from leading PC manufacturers. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with generative AI.

NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model, Nemotron 4B, optimized for on-device inference. The NVIDIA gaming ecosystem continues to grow. Recently added RTX and DLSS titles include Indiana Jones and the Great Circle, Dune: Awakening, and Dragon Age: The Veilguard.

The GeForce NOW library continues to expand with a total catalog size of over 2,000 titles, the most content of any cloud gaming service. Moving to pro visualization. Revenue of $454 million was up 6% sequentially and 20% year on year. Demand is being driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads.

Automotive and manufacturing were among the key industry verticals driving growth this quarter. Companies are racing to digitalize workflows to drive efficiency across their operations. The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems. And several large global enterprises, including Mercedes-Benz, signed multiyear contracts for NVIDIA Omniverse Cloud to build industrial digital twins of factories.

We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate generative AI copilots and agents into USD workloads, accelerating our ability to build highly accurate virtual worlds. WPP is implementing the USD NIM microservices in its generative AI-enabled content creation pipeline for customers such as The Coca-Cola Company. Moving to automotive and robotics. Revenue was $346 million, up 5% sequentially and up 37% year on year.

Year-on-year growth was driven by the new customer ramp in self-driving platforms and increased demand for AI cockpit solutions. At the Computer Vision and Pattern Recognition Conference, NVIDIA won the Autonomous Grand Challenge in the end-to-end driving at scale category, outperforming more than 400 entries worldwide. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robot arms, humanoids, and mobile robots. Now, moving to the rest of the P&L.

GAAP gross margins were 75.1% and non-GAAP gross margins were 75.7%, down sequentially due to a higher mix of new products within Data Center and inventory provisions for low-yielding Blackwell material. Sequentially, GAAP and non-GAAP operating expenses were up 12%, primarily reflecting higher compensation-related costs. Cash flow from operations was $14.5 billion. In Q2, we utilized cash of $7.4 billion toward shareholder returns in the form of share repurchases and cash dividends, reflecting the increase in the dividend per share.

Our board of directors recently approved a $50 billion share repurchase authorization to add to our remaining $7.5 billion of authorization at the end of Q2. Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third-quarter revenue outlook incorporates continued growth of our Hopper architecture and sampling of our Blackwell products.

We expect Blackwell production ramp in Q4. GAAP and non-GAAP gross margins are expected to be 74.4% and 75%, respectively, plus or minus 50 basis points. As our Data Center mix continues to shift to new products, we expect this trend to continue into the fourth quarter of fiscal 2025. For the full year, we expect gross margins to be in the mid-70% range.

GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively. Full-year operating expenses are expected to grow in the mid- to upper 40% range as we work on developing our next generation of products. GAAP and non-GAAP other income and expenses are expected to be about $350 million, including gains and losses from nonaffiliated investments and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items.

Further financial details are included in the CFO commentary and other information available on our IR website. We are now going to open the call for questions. Operator, would you please help us poll for questions?

Questions & Answers:....

....MUCH MORE

The stock ended the after-hours session down another $8.66 (-6.89%) at $116.95, bringing the day's cumulative loss to $11.35 (-8.85%).

More to come tomorrow.
Earlier:
Nvidia Beats On Top And Bottom Lines, Shares Drop (NVDA)