Wednesday, November 16, 2016

NVIDIA Builds Its Very Own Supercomputer, Enters The Top500 List At #28 (NVDA)

Now they're just showing off.

The computer isn't going to be a product line or anything that generates immediate revenue, but it puts the company in a very exclusive club and may lead to some in-house breakthroughs in chip design going forward.
The stock is up $4.97 (+5.77%) at $91.16.

To be clear, this isn't a case of someone else using NVDA's graphics processors to speed up their supercomputer, as the Swiss did with the machine they let CERN use, which is currently the eighth fastest in the world, or as Oak Ridge National Laboratory is doing with the computer being built right now that is planned to be the fastest in the world (though it may not make it; China's Sunway TaihuLight is very, very fast).

And this isn't the DIY supercomputer we highlighted back in May 2015:
...Among the fastest processors in the business are the ones originally developed for video games and known as Graphics Processing Units, or GPUs. Since Nvidia released their Tesla hardware in 2008, hobbyists (and others) have used GPUs to build personal supercomputers.
Here's Nvidia's Build Your Own page.
Or have your tech guy build one for you....
Nor is it the $130,000 supercomputer NVIDIA came up with for companies to get started in Deep Learning/AI.

No, this is NVIDIA's very own supercomputer.

Here's the brand new list (they come out every six months):
Top500 List - November 2016

And here is NVIDIA's 'puter, right behind one of the U.S. Army's machines and just ahead of Italian energy giant ENI's machine:

Rank: 28
Site: NVIDIA Corporation, United States
System: DGX SATURNV - NVIDIA DGX-1, Xeon E5-2698v4 20C 2.2GHz, Infiniband EDR, NVIDIA Tesla P100
Manufacturer: Nvidia

Some of our prior posts on the Top500.

Here's NVDA's story, from ExtremeTech:

Nvidia builds its own supercomputer, claims top efficiency spot in the TOP500


Every six months, the TOP500 team releases a list of the 500 most powerful supercomputers in the world based on their results in the long-running Linpack benchmark. These machines are typically owned by governments or are built as public-private partnerships between various government and industry partners who share costs and computer time. This year, Nvidia has made its own entry — and no, I don’t mean that Nvidia is powering someone else’s system, or that the company collaborated with a different firm. Nvidia built its own frickin’ supercomputer.
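
For readers who haven't dug into how the ranking works: Linpack (the HPL benchmark) times the solution of a large dense linear system Ax = b and converts that time into a FLOP/s figure using a standard operation count. Here's a toy sketch of the idea in Python; it is a stand-in for illustration only, not the actual distributed HPL code the TOP500 sites run:

```python
# Toy illustration of what the Linpack (HPL) benchmark measures:
# time a dense solve of Ax = b and convert to FLOP/s using HPL's
# nominal operation count of (2/3)*n^3 + 2*n^2.
# Real TOP500 runs use the distributed HPL code on problem sizes
# in the millions of equations; this is just the flavor of it.
import time
import numpy as np

n = 4000                          # tiny compared with a real HPL run
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)         # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
print(f"{flops / elapsed / 1e9:.1f} GFLOP/s on this machine")
```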

The new DGX SaturnV contains 60,512 CPU cores (the machine relies on Intel’s Xeon E5-2698v4 for its CPUs) and 63,488 GB of RAM. The machine is actually a cluster of 125 DGX-1 systems — that’s the AI processing “supercomputer in a box” that Nvidia unveiled last year, and the first machine to feature the company’s Pascal GPUs (the full GP100 configuration). According to Nvidia, the new machine is 2.3x more energy efficient than the closest Xeon Phi system of equivalent performance, and it delivers 9.46 GFLOPS/watt, a 42% improvement over the most efficient system unveiled last June. That’s a huge improvement in a relatively short period of time, though I do want to note an important caveat to these kinds of figures. One thing we’ve covered before in our previous discussions of exascale computing is how ramping compute clusters upwards creates very different constraints than we typically consider when talking about desktops or even servers. Factors like interconnect power consumption, total memory loadout, and memory architecture all play a significant part in how these metrics play out.
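
To put those efficiency numbers in context, here's a quick back-of-the-envelope check. The Rmax (~3,307 TFLOP/s) and power (~349.5 kW) figures are our assumptions pulled from the public November 2016 TOP500 entry, and the ~6.67 GFLOPS/W figure for the previous efficiency leader is likewise approximate, so treat the arithmetic as a sanity check rather than ExtremeTech's own numbers:

```python
# Sanity check of the quoted SaturnV efficiency figures.
# Rmax and power are assumed from the public November 2016 TOP500
# entry (~3,307 TFLOP/s at ~349.5 kW); both are approximations.
NODES = 125                        # DGX-1 systems in the cluster
GPUS = NODES * 8                   # each DGX-1 holds 8 Tesla P100s -> 1,000 GPUs

RMAX_GFLOPS = 3_307_000.0          # assumed Linpack Rmax in GFLOP/s
POWER_WATTS = 349_500.0            # assumed measured power in watts

efficiency = RMAX_GFLOPS / POWER_WATTS
print(f"{efficiency:.2f} GFLOPS/W")                 # ~9.46, matching the article

# The June 2016 efficiency leader ran at roughly 6.67 GFLOPS/W (assumed),
# which is where the ~42% improvement figure comes from.
prev_best = 6.67
print(f"{(efficiency / prev_best - 1) * 100:.0f}% improvement")   # ~42%
print(f"{GPUS} Tesla P100 GPUs in total")
```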

[Image: DGX-1 server]

In other words: Nvidia’s new performance/watt metrics are great for Pascal and a huge achievement, but I don’t think we can read much about the potential power efficiency of Xeon Phi without seeing something more closely akin to an apples-to-apples comparison. It’s also interesting that Nvidia chose to use Intel Xeons rather than OpenPOWER for its own power efficiency push, despite being a fairly vocal supporter of OpenPOWER and NVLink. Given that the new supercomputer relies on Nvidia’s DGX-1, however, it probably made more sense to build its own server clusters around the x86 ecosystem rather than trying to launch a new AI and compute platform around Power at this time....MORE