Tuesday, April 3, 2018

UPDATED—NVIDIA Wants to Be the Brains Behind the Surveillance State (NVDA)

Update below.
Original post:

The company just rolled out a $399,000, two-petaflop supercomputer that every little totalitarian and his brother is going to lust after to power their surveillance-city, smart-city, data-slurping dreams.
The coming municipal data centers will end up matching the NSA in total storage capacity, and NVIDIA wants to be the one sifting through it all. More on this down the road; for now, here's the beast.
From Hot Hardware:

NVIDIA Unveils Beastly 2 Petaflop DGX-2 AI Supercomputer With 32GB Tesla V100 And NVSwitch Tech (Updated)
Of the over 28,000 attendees at NVIDIA’s GTC 2018 GPU Technology Conference, many converged on the San Jose Convention Center this week to learn about advancements in AI and Machine Learning that the company would bring to the table for developers, researchers and service providers in the field. Today, NVIDIA CEO Jensen Huang took to the stage to unveil a number of GPU-powered innovations for Machine Learning, including a new AI supercomputer and an updated version of the company’s powerful Tesla V100 GPU that now sports a hefty 32 Gigabytes of on-board HBM2 memory.

A follow-on to last year’s DGX-1 AI supercomputer, the new NVIDIA DGX-2 can be equipped with double the number of Tesla V100 32GB processing modules for double the GPU horsepower and a whopping 4 times the amount of memory space, for processing datasets of dramatically larger batch sizes. Again, each Tesla V100 now sports 32GB of HBM2, where the previous-generation Tesla V100 was limited to 16GB. The additional memory can deliver severalfold improvements in throughput, since the data stays in local memory on the GPU complex rather than having to be fetched from much higher-latency system memory as the GPU crunches data iteratively. In addition, NVIDIA attacked the problem of scalability for its DGX server product by developing a new switch fabric for the DGX-2 platform.....MORE
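The throughput claim is really about data locality: if the working set fits in the GPU's HBM2, you pay the bus transfer once and then iterate entirely on-device; if it doesn't, every pass has to go back out to slower system memory. A rough CUDA sketch of the fits-in-memory case (the toy kernel, array size and iteration count are ours for illustration, not NVIDIA's code):

```cuda
// Rough sketch of why bigger on-GPU (HBM2) capacity helps iterative workloads:
// when the working set fits in device memory, you copy it over the bus once
// and iterate entirely on the GPU instead of re-staging data from
// higher-latency host memory every pass. Kernel and sizes are illustrative.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Toy "processing step": scale every element in place.
__global__ void scale(float *data, size_t n, float factor) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t n = 1 << 26;      // ~64M floats (~256 MB) -- hypothetical working set
    const int iterations = 100;

    float *host = (float *)malloc(n * sizeof(float));
    for (size_t i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));

    // Working set fits in GPU memory: one host-to-device copy, then iterate on-device.
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    for (int it = 0; it < iterations; ++it) {
        scale<<<(unsigned)((n + 255) / 256), 256>>>(dev, n, 1.0001f);
    }
    cudaDeviceSynchronize();

    // If the data did NOT fit, each iteration would pay a transfer like the one
    // above (or stream chunks across the bus) -- the latency/bandwidth cost the
    // larger 32 GB HBM2 footprint is meant to avoid.

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("sample value after %d iterations: %f\n", iterations, host[0]);

    cudaFree(dev);
    free(host);
    return 0;
}
```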
The data sifting is so fast that data storage companies are starting to supercharge their systems with GPUs, using the older DGX-1.
From TechTarget's SearchStorage:

Pure Storage AIRI is AI-ready infrastructure that integrates Pure's all-flash FlashBlade NAND storage blades and four Nvidia DGX-1 artificial intelligence supercomputers.
Pure Storage is elbowing into AI-based storage with FlashBlade, a use case that's a natural progression for the scale-out unstructured array.

The all-flash pioneer this week teamed with high-performance GPU specialist Nvidia to unveil Pure Storage AIRI, a preconfigured stack developed to accelerate data-intensive analytics at scale.

AIRI stands for AI-ready infrastructure. The product integrates a single 15-blade Pure Storage FlashBlade array fed by four Nvidia DGX-1 deep learning supercomputers. Connectivity comes from two 100 Gigabit Ethernet switches from Arista Networks that support remote direct memory access (RDMA).
In this product iteration, Pure uses 15 midrange 17 TB FlashBlade NAND blades. Pure Storage claims a half rack of AIRI compute and storage is equivalent to about 50 standard data center racks....MORE
Finally, from Tiernan Ray at Barron's Tech Trader:

Nvidia: One Analyst Thinks It’s Decimating Rivals in A.I. Chips
Nvidia's CEO Jen-Hsun Huang is taking away the oxygen from competitors in A.I., Rosenblatt analyst Hans Mosesmann tells Barron's, by a combination of chip performance that's hard to match and software technology that others can't offer.
The fastest-growing part of chip maker Nvidia’s (NVDA) business is its “data center” chips product line, driven in part by sales of graphics chips — “GPUs” — that are widely used for artificial intelligence tasks such as machine learning.

That division looks to have a very bright future, according to one analyst who attended Nvidia’s annual “GTC” conference last week.

“What Nvidia did with their announcements last week was to cause everyone, including Intel (INTC), but also startups, to re-examine their roadmaps,” says Hans Mosesmann of Rosenblatt Securities.

I chatted with Mosesmann by phone on Friday. Mosesmann, who has a Buy rating on Nvidia stock and a $300 price target, foresees the company having something of a lock on the A.I. chip market.
"Nvidia has reset the level of performance,” he told me.
Nvidia’s data center business totaled $606 million in revenue, or 21% of its total, and more than doubled from a year earlier. (For more details on Nvidia’s revenue trends, see the company’s presentation on its investor relations Web site.)

Nvidia, in Mosesmann's thinking, keeps upping the ante, not only turning up the performance of its chips but also redefining the battle by making it about software and system-level expertise in A.I., not just about the chip itself:
[Nvidia CEO] Jen-Hsun [Huang] is very clever in that he sets the level of performance that is near impossible for people to keep up with. It’s classic Nvidia — they go to the limits of what they can possibly do in terms of process and systems that integrate memory and clever switch technology and software and they go at a pace that makes it impossible at this stage of the game for anyone to compete.

Everyone has to ask, Where do I need to be in process technology and in performance to be competitive with Nvidia in 2019? And do I have a follow-on product in 2020? That’s tough enough. Add to that the problem of compatibility you will have to have with 10 to 20 frameworks [for machine learning]. The only reason Nvidia has such an advantage is that they made the investment in CUDA [Nvidia’s software tools].

A lot of the announcements at GTC were not about silicon, they were about a platform. It was about things such as taking memory [chips] and putting it on top of Volta [Nvidia’s processor], and adding to that a switch function. They are taking the game to a higher level, and probably hurting some of the system-level guys. Jen-Hsun is making it a bigger game.
An immediate result, Mosesmann believes, is that a lot of A.I. chip startups, companies that include Graphcore and Cerebras, are going to have a very hard time keeping up.
“He’s destroying these companies,” says Mosesmann of the young A.I. hopefuls. “These private companies have to go back and get another $50 million [of funding].”

“He's taking all the oxygen out of the room,” says Mosesmann.
For established competitors such as Intel, Mosesmann sees plenty of A.I. efforts suddenly rendered moot.

Intel bought A.I. chip startup Nervana Systems in 2016 for $400 million. I’ve written a bunch about how Nervana is becoming Intel’s A.I. focus....MUCH MORE
Update: "'Nvidia's Slightly Terrifying Metropolis Platform Paves the Way for Smarter Cities' (NVDA)"