Thursday, February 22, 2024

For Those Who Can't Afford An Nvidia H100 Chip: "Lambda Snags $320 Million To Grow Its Rent-A-GPU Cloud"

A single chip will set you back over $40,000. For everyone else, there's cloud GPU-as-a-service.

From TheNextPlatform, February 16:

Riding high on the AI hype cycle, Lambda – formerly known as Lambda Labs and well known to readers of The Next Platform – has received a $320 million cash infusion to expand its GPU cloud to support training clusters spanning thousands of Nvidia’s top specced accelerators.

This will include deploying “tens of thousands” of Nvidia GPUs, including the existing H100 accelerators and the impending H200 heavyweight GPU accelerators, as well as the hybrid GH200 superchips. The H200s are important because they push HBM3e capacity up to 141 GB and aggregate bandwidth per device up to 4.8 TB/sec.

Alongside the accelerators, the funding will also support the deployment of faster Quantum-2 InfiniBand networking. The latest of these switches is capable of driving up to 400 Gb/sec of bandwidth to each port.

Lambda hasn’t said exactly how many accelerators the Series C funding round will afford them, or how much money it already has in the bank to invest in GPUs. However, we expect Lambda should be able to acquire somewhere around 10,000 accelerators within that $320 million budget, with a little left over for networking and other supporting infrastructure. With more aggressive discounting from Nvidia on the GPUs, that would leave more money for networking and storage to support the GPU systems....
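The ~10,000-accelerator estimate above is back-of-envelope arithmetic, and it's easy to check. A minimal sketch, assuming a hypothetical discounted per-GPU price of ~$30,000 (below the ~$40,000 list price mentioned at the top) and a hypothetical 5% of the round set aside for networking and storage:

```python
# Back-of-envelope: how many GPUs could $320M buy?
# The per-GPU price and infrastructure share below are assumptions
# for illustration, not figures from the article.
BUDGET = 320_000_000     # Series C round, USD
PRICE_PER_GPU = 30_000   # assumed discounted street price, USD
INFRA_SHARE = 0.05       # assumed fraction reserved for networking/storage

gpu_budget = BUDGET * (1 - INFRA_SHARE)
gpus = int(gpu_budget // PRICE_PER_GPU)
print(gpus)  # ≈ 10,000 accelerators, consistent with the article's estimate
```

Nudging the assumed price toward the $40,000 list figure drops the count toward ~7,500, which is why the article notes that deeper Nvidia discounts free up money for the supporting infrastructure.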

....MUCH MORE