Wednesday, May 1, 2024

Nvidia-Backed CoreWeave Raises $1.1 Billion At A $19 Billion Valuation

As has been pointed out by people smarter than I am, sometimes the AI biz does seem a bit incestuous.

From cloud mavens, SiliconAngle, May 1:

CoreWeave Inc., the operator of a cloud platform optimized for graphics card workloads, today announced that it has closed a $1.1 billion funding round.

The Series C raise reportedly values the company at $19 billion. That’s up from the $7 billion it was worth following a $642 million secondary sale in December. Fidelity Management, which led that deal, also joined in the funding round CoreWeave announced today along with Coatue, Lykos Global Management, Altimeter Capital and Magnetar.

CoreWeave operates a public cloud that provides access to about a dozen different Nvidia Corp. graphics processing units. It targets two main use cases: artificial intelligence and graphics rendering. CoreWeave claims that its platform allows customers to run such workloads more cost-efficiently than established public clouds and with better performance.

Some of the GPUs the company offers, such as the H100, are built from the ground up for AI workloads. Its cloud also features other Nvidia chips such as the A40, which is mainly geared towards computer graphics professionals. 

Unlike their AI-optimized counterparts, the A40 and the other rendering-optimized GPUs that CoreWeave provides include RT Cores. Those are circuits optimized for ray tracing, a rendering technique used to simulate lighting effects such as shadows and motion blur. The method involves shining virtual light rays on an object and studying how those rays bounce back to find the most realistic-looking pixel settings....
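The ray-tracing idea the article describes can be sketched in a few lines: shoot a virtual ray toward an object, find where it hits, and check how directly the light reaches that point to decide how bright the pixel should be. This is a minimal toy illustration, not CoreWeave's or Nvidia's actual pipeline; all scene values (sphere position, light position) are hypothetical.

```python
import math

def sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c  # direction is assumed unit length, so a == 1
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def shade(hit_point, normal, light_pos):
    """Lambertian brightness: how directly the light shines on the surface."""
    to_light = [l - p for l, p in zip(light_pos, hit_point)]
    norm = math.sqrt(sum(v * v for v in to_light))
    to_light = [v / norm for v in to_light]
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

# Hypothetical scene: camera at the origin looking down -z at a unit sphere,
# with a light source up and to the right.
ray_origin, ray_dir = (0, 0, 0), (0, 0, -1)
center, radius = (0, 0, -3), 1.0
t = sphere_hit(ray_origin, ray_dir, center, radius)
point = [o + t * d for o, d in zip(ray_origin, ray_dir)]
normal = [(p - c) / radius for p, c in zip(point, center)]
brightness = shade(point, normal, (2, 2, 0))
```

A real renderer repeats this per pixel, with many bounces and many rays; RT Cores accelerate exactly the ray-object intersection step that `sphere_hit` does here in software.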

....MUCH MORE

Previously:
June 2023
Chips: "Nvidia Leads, Habana Challenges on MLPerf GPT-3 Benchmark" (NVDA; INTC)

From EE Times, June 26:

The latest round of MLPerf training benchmarks includes GPT-3, the model ChatGPT is based on, for the first time. The GPT-3 training crown was claimed by cloud provider CoreWeave using more than 3,000 Nvidia H100 GPUs. What’s more surprising is that there were no entries from previous training submitters Google, Graphcore and others, or other competitors like AMD. It was left to Intel’s Habana Labs to be the only challenger to Nvidia on GPT-3 with its Gaudi2 accelerator.

CoreWeave used 3,584 Nvidia HGX-H100s to train a representative portion of GPT-3 in 10.94 minutes (this is the biggest number of GPUs the cloud provider could make available at one time, and is not the full size of its cluster). A portion of GPT-3 is used for the benchmark since it would be impractical to insist submitters train the entirety of GPT-3, which could take months and cost millions of dollars. Submitters instead train an already partially-trained GPT-3 from a particular checkpoint until it converges to a certain accuracy. The portion used is about 0.4% of the total training workload for GPT-3; based on CoreWeave’s 10.94 minutes score, 3,584 GPUs would take almost two days to train the whole thing.....

December 2023
Cloud: GPUs as a Service Gets Big Backers (GaaS)
March 2024
"Nvidia CEO Becomes Kingmaker by Name-Dropping Stocks" (NVDA+++++++)
March 2024
Google Cloud Is Losing Top Executives
March 26, 2024
AI In The Cloud: "CoreWeave Is in Talks for Funding at $16 Billion Valuation"