Monday, June 4, 2018

Nvidia debuts cloud server platform to unify AI and HPC (NVDA)

This is a pretty big deal.
High Performance Computing has been the province of a separate and distinct class of machines since the University of Manchester's Atlas computer went online in 1962.
And now NVIDIA is blurring the line in a few different ways.

First off, existing and planned supercomputers use graphics processing units to accelerate processing speeds at the very top of the heap. Here's January 2018's "China set to launch its new supercomputer":
NVIDIA watches; some commentary after the jump...

.... On the last Top500 list of the world's fastest supercomputers, 87 systems use NVIDIA GPUs as accelerators. Not the top two, however: you have to go down to #3, Switzerland's Piz Daint, which was upgraded with NVIDIA chips in 2016. China is using an entirely different architecture, which we touched on in June 2016's "Milestone: China Builds The (NEW) World's Fastest Supercomputer Using Only Chinese Components (and other news) INTC; NVDA; IBM".
Then there was NVIDIA's own supercomputer, highlighted in November 2016:

Now they're just showing off.

The computer isn't going to be a product line or anything that generates immediate revenues, but it puts the company in a very exclusive club and may lead to some in-house breakthroughs in chip design going forward.
The stock is up $4.97 (+5.77%) at $91.16.

To be clear, this isn't someone using NVDA's graphics processors to speed up their supercomputer, as the Swiss did with the one they let CERN use (currently the eighth-fastest in the world), or the computer being built right now at Oak Ridge National Laboratory that is planned to be the fastest in the world (but may not make it; China's Sunway TaihuLight is very, very fast)....
Before that, they explained how to build your own supercomputer (remember, this was pre-media-rapture, hence the explanatory tone), relayed in May 2015's:
Nvidia Wants to Be the Brains Of Your Autonomous Car (NVDA)
...Among the fastest processors in the business are the ones originally developed for video games and known as Graphics Processing Units, or GPUs. Since Nvidia released their Tesla hardware in 2008, hobbyists (and others) have used GPUs to build personal supercomputers.
Here's Nvidia's Build Your Own page.
Or have your tech guy build one for you....
NVIDIA really started blurring the lines with their mini-supercomputer for training AI in 2016:
Technology Review on NVIDIA's Pint-Sized Supercomputer (NVDA)
$129,000

In April 2018 the company had a new offering that stunned reviewers into caveman-speak:
UPDATED—NVIDIA Wants to Be the Brains Behind the Surveillance State (NVDA)
The company just rolled out a $399,000 two-petaflop supercomputer that every little totalitarian and his brother is going to lust after to run their surveillance-city, smart-city, data-slurping dreams.

The coming municipal data centers will end up matching the NSA in total storage capacity, and NVIDIA wants to be the one sifting through it all. More on this down the road; for now, here's the beast.

From Hot Hardware:
NVIDIA Unveils Beastly 2 Petaflop DGX-2 AI Supercomputer With 32GB Tesla V100 And NVSwitch Tech (Updated)...
Ummm, beast fast.

For reference, the #500 supercomputer in the world is an HPE machine that clocks in at a theoretical maximum speed of 712.9 teraflops at NASA/Goddard Space Flight Center's Climate Simulation Platform, part of the 3.5-petaflop Discover system.

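For the arithmetic-minded, here's that comparison as a quick back-of-the-envelope check (a minimal Python sketch using only the figures quoted above):

```python
# Back-of-the-envelope: DGX-2 vs. the #500 machine on the Top500 list.
# Figures from the post: the DGX-2 peaks at 2 petaflops; the HPE machine
# at NASA/Goddard has a theoretical maximum of 712.9 teraflops.
dgx2_tflops = 2_000        # 2 petaflops, expressed in teraflops
number_500_tflops = 712.9  # theoretical peak of the #500 system

print(f"DGX-2 vs. #500: {dgx2_tflops / number_500_tflops:.1f}x")  # ~2.8x
```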
That means you can buy an off-the-shelf supercomputer that is almost three times faster than the slowest 'puter on the Top500 list. Anyway, that's a much-longer-than-usual introduction; here's the story at SiliconANGLE, May 30:

Nvidia debuts cloud server platform to unify AI and high-performance computing
Hoping to maintain the high ground in artificial intelligence and high-performance computing, Nvidia Corp. late Tuesday debuted a new computing architecture that it claims will unify both fast-growing areas of the industry.
The announcement of the HGX-2 cloud-server platform (pictured), made by Nvidia Chief Executive Jensen Huang at its GPU Technology Conference in Taipei, Taiwan, is aimed at many new applications that combine AI and HPC.

“We believe the future requires a unified platform for AI and high-performance computing,” Paresh Kharya, product marketing manager for Nvidia’s accelerated-computing group, said during a press call Tuesday.

Others agree. “I think that AI will revolutionize HPC,” Karl Freund, a senior analyst at Moor Insights & Strategy, told SiliconANGLE. “I suspect many supercomputing centers will deploy HGX2 as it can add dramatic computational capacity for both HPC and AI.”

More specifically, the new architecture enables applications involving scientific computing and simulations, such as weather forecasting, as well as both training and running of AI models such as deep learning neural networks, for jobs such as image and speech recognition and navigation for self-driving cars. “These models are being updated at an unprecedented pace,” sometimes as often as hourly, Kharya said.

The HGX architecture, powered by Nvidia’s graphics processing units, or GPUs, is a data center design used in Microsoft Corp.’s Project Olympus initiative, Facebook Inc.’s Big Basin systems and Nvidia’s own DGX-1 AI supercomputers as well as services from public cloud computing leader Amazon Web Services Inc. The first version of the architecture, the HGX-1, was announced a year ago.

Essentially, the HGX-2, which consists of 16 of Nvidia’s high-end V100 GPUs, provides a building block for computer makers to create their systems. Using Nvidia’s NVLink chip interconnect system, it makes the 16 GPUs look like one, the company said, delivering 2 petaflops, or 2 quadrillion floating point operations per second, a standard computing speed measure.

“Basically, you can now use HGX as a pool of 16 GPUs as if it were a single very large compute resource,” Freund explained.

Nvidia also said today that its own recently announced DGX-2 AI supercomputer was the first system to use HGX-2. It will sell for $399,000 when it’s available in the third quarter. Huang joked on a livestream of his conference keynote that it’s a “great value,” though he appeared to mean it as well....MUCH MORE
The stock is up $6.99 (+2.71%) at $264.61.
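To put that 16-GPUs-as-one business in concrete terms: the headline 2 petaflops is just sixteen V100s' tensor throughput added together, with NVSwitch letting software treat the pool as one enormous GPU. A minimal sketch (the ~125-teraflops-per-V100 tensor figure is NVIDIA's published spec, not from the article above):

```python
# HGX-2 aggregate throughput, assuming NVIDIA's published spec of
# ~125 teraflops of mixed-precision tensor performance per Tesla V100.
v100_tensor_tflops = 125
gpus = 16  # the HGX-2 pools sixteen V100s over NVSwitch

total_pflops = v100_tensor_tflops * gpus / 1_000
print(f"HGX-2 aggregate: {total_pflops:.0f} petaflops")  # 2 petaflops
```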

See also:
March 2018 
Exascale Computers: Competing With China Is About to Get Serious
...The 200 petaflop/second (quadrillions of calculations per second) Summit supercomputer being built at Oak Ridge National Laboratory will blow past the current world's fastest, the 93 petaflop/s Sunway TaihuLight, but this latest monster is planned to be five times faster still.

At that point, if you were to use it for, saaay, training artificial intelligence, you are going places that the human mind literally can't comprehend, much less forecast or, dream on, guide....
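Since the petaflop numbers pile up quickly, the same back-of-the-envelope treatment applies to the excerpt above (a sketch; the figures are the ones quoted):

```python
# Figures quoted in the excerpt, in petaflops.
summit = 200           # Summit, being built at Oak Ridge
taihulight = 93        # Sunway TaihuLight, the current world's fastest
exascale = 5 * summit  # "five times faster still" = 1,000 PF = 1 exaflop

print(f"Summit vs. TaihuLight: {summit / taihulight:.1f}x")  # ~2.2x
print(f"Exascale target: {exascale} petaflops, i.e. 1 exaflop")
```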