We've mentioned CoreWeave a few times, some links after the jump.
From Bloomberg, March 22:
CoreWeave, a cloud computing provider that’s among the hottest startups in the artificial intelligence race, is in talks to raise equity capital in a transaction that would more than double its valuation to $16 billion, according to people with knowledge of the matter.
The Roseland, New Jersey-based company, led by CEO Michael Intrator, is discussing selling both new and existing shares, and employees may tender some of their holdings, said the people, who asked not to be identified discussing confidential information. Terms haven’t been finalized and could still change, one of the people cautioned.
A CoreWeave representative declined to comment.
A $16 billion post-money valuation would eclipse CoreWeave’s $7 billion mark in a transaction last year. The company said in December that it had closed a $642 million minority stake sale to an investor group comprising Fidelity Management & Research Co., Investment Management Corporation of Ontario, Jane Street and JPMorgan Asset Management, among others....
....MORE
Previously:
June 2023
Chips: "Nvidia Leads, Habana Challenges on MLPerf GPT-3 Benchmark" (NVDA; INTC)
From EE Times, June 26:
The latest round of MLPerf training benchmarks includes GPT-3, the model ChatGPT is based on, for the first time. The GPT-3 training crown was claimed by cloud provider CoreWeave using more than 3,000 Nvidia H100 GPUs. What’s more surprising is that there were no entries from previous training submitters Google, Graphcore and others, or other competitors like AMD. It was left to Intel’s Habana Labs to be the only challenger to Nvidia on GPT-3 with its Gaudi2 accelerator.
CoreWeave used 3,584 Nvidia HGX-H100s to train a representative portion of GPT-3 in 10.94 minutes (this is the biggest number of GPUs the cloud provider could make available at one time, and is not the full size of its cluster). A portion of GPT-3 is used for the benchmark since it would be impractical to insist submitters train the entirety of GPT-3, which could take months and cost millions of dollars. Submitters instead train an already partially-trained GPT-3 from a particular checkpoint until it converges to a certain accuracy. The portion used is about 0.4% of the total training workload for GPT-3; based on CoreWeave’s 10.94 minutes score, 3,584 GPUs would take almost two days to train the whole thing.....
December 2023
Cloud: GPUs as a Service Gets Big Backers (GaaS)
March 2024
"Nvidia CEO Becomes Kingmaker by Name-Dropping Stocks" (NVDA+++++++)
March 2024
Google Cloud Is Losing Top Executives
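
For the curious, the EE Times extrapolation above is easy to check: if the benchmark covers roughly 0.4% of the full GPT-3 training workload and CoreWeave's 3,584 GPUs finished it in 10.94 minutes, a full run at that scale works out to just under two days. A minimal back-of-the-envelope sketch; the 0.4% fraction and the 10.94-minute score come from the article, the rest is simple arithmetic:

```python
# Back-of-the-envelope extrapolation from the MLPerf GPT-3 result quoted above.
# Both input figures are from the EE Times piece; everything else is arithmetic.

BENCHMARK_MINUTES = 10.94    # CoreWeave's score on 3,584 Nvidia H100s
BENCHMARK_FRACTION = 0.004   # benchmark covers ~0.4% of the full GPT-3 workload

full_run_minutes = BENCHMARK_MINUTES / BENCHMARK_FRACTION
full_run_hours = full_run_minutes / 60
full_run_days = full_run_hours / 24

print(f"Full training run: ~{full_run_minutes:,.0f} minutes "
      f"(~{full_run_hours:.1f} hours, ~{full_run_days:.1f} days)")
# Full training run: ~2,735 minutes (~45.6 hours, ~1.9 days)
```

Which squares with the article's "almost two days" figure, assuming the benchmark portion scales linearly to the full workload.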