Friday, June 19, 2020

"AI chips in 2020: Nvidia and the challengers"

Not immediately actionable but a nice reference.
Plus there's a link to a piece by Tiernan Ray who, when he was writing at Barron's, impressed both by helming the Tech Trader column after Eric Savitz left and by getting answers out of NVDA's Jensen Huang that no one else (reporters, analysts, whomever) was getting.

A major piece at ZDNet, May 21:
Now that the dust from Nvidia's unveiling of its new Ampere AI chip has settled, let's take a look at the AI chip market behind the scenes and away from the spotlight
Few people, Nvidia's competitors included, would dispute the fact that Nvidia is calling the shots in the AI chip game today. The announcement of the new Ampere AI chip in Nvidia's main event, GTC, stole the spotlight last week.

There's been ample coverage, including here on ZDNet. Tiernan Ray provided an in-depth analysis of the new and noteworthy with regard to the chip architecture itself. Andrew Brust focused on the software side of things, expanding on Nvidia's support for Apache Spark, one of the most successful open-source frameworks for data engineering, analytics, and machine learning.
Let's pick up from where they left off, putting the new architecture into perspective by comparing it against the competition in terms of performance, economics, and software.
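On the Spark point specifically, it may help to see what GPU-accelerated Spark looks like from the user's side. Below is a minimal PySpark sketch; the plugin class and configuration keys follow Nvidia's publicly documented RAPIDS Accelerator for Apache Spark, but treat the exact settings (and the need to have the accelerator jar on the classpath) as illustrative assumptions rather than details from the article.

```python
# Minimal PySpark sketch: enabling GPU acceleration for Spark SQL/DataFrame work
# via the RAPIDS Accelerator plugin. Configuration keys are assumptions based on
# the publicly documented plugin, not taken from the article itself; the
# rapids-4-spark jar must also be on the driver/executor classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-etl-sketch")
    # Load the RAPIDS Accelerator plugin so eligible SQL operators run on the GPU.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    # One GPU per executor is a common starting point; tune for your cluster.
    .config("spark.executor.resource.gpu.amount", "1")
    .getOrCreate()
)

# The DataFrame code itself is unchanged; the plugin decides which parts
# of the physical plan can execute on the GPU.
df = spark.read.parquet("events.parquet")
df.groupBy("user_id").count().show()
```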

Nvidia's double bottom line
The gist of Ray's analysis is that it captures Nvidia's intention with the new generation of chips: to provide one chip family that can serve both for "training" of neural networks, where the neural network's operation is first developed on a set of examples, and for inference, the phase where predictions are made based on new incoming data.
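To make the training/inference split concrete, here is a minimal PyTorch-style sketch (my own illustration, not from Ray's piece): the same model is used in both phases, but training runs backward passes and weight updates while inference is a gradient-free forward pass, which is why the two workloads have tended to land on different chips and systems.

```python
# Minimal PyTorch sketch of the two phases Ray describes: training (learning
# from examples) versus inference (predicting on new data). Illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# --- Training: forward + backward passes, weights get updated ---
model.train()
for _ in range(100):
    x = torch.randn(64, 16)            # batch of example inputs
    y = torch.randint(0, 2, (64,))     # example labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                    # gradient computation: the expensive part
    optimizer.step()

# --- Inference: forward pass only, no gradients, much lighter ---
model.eval()
with torch.no_grad():
    new_data = torch.randn(1, 16)
    prediction = model(new_data).argmax(dim=1)
```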

Ray notes this is a departure from today's situation where different Nvidia chips turn up in different computer systems for either training or inference. He goes on to add that Nvidia is hoping to make an economic argument to AI shops that it's best to buy an Nvidia-based system that can do both tasks.

"You get all of the overhead of additional memory, CPUs, and power supplies of 56 servers ... collapsed into one," said Nvidia CEO Jensen Huang. "The economic value proposition is really off the charts, and that's the thing that is really exciting."

Jonah Alben, Nvidia's senior VP of GPU Engineering, told analysts that Nvidia had already pushed Volta, Nvidia's previous-generation chip, as far as it could without catching fire. It went even further with Ampere, which features 54 billion transistors, and can execute 5 petaflops of performance, or about 20 times more than Volta.

So, Nvidia is after a double bottom line: Better performance and better economics. Let us recall that recently Nvidia also added support for Arm CPUs. Although Arm processors' performance may not be on par with Intel's at this point, their frugal power needs make them an attractive option for the data center, too, according to analysts.

On the software front, besides Apache Spark support, Nvidia also unveiled Jarvis, a new application framework for building conversational AI services. To offer interactive, personalized experiences, Nvidia notes, companies need to train their language-based applications on data that is specific to their own product offerings and customer requirements.

However, building a service from scratch requires deep AI expertise, large amounts of data, and compute resources to train the models, and software to regularly update models with new data. Jarvis aims to address these challenges by offering an end-to-end deep learning pipeline for conversational AI.

Jarvis includes state-of-the-art deep learning models, which can be further fine-tuned using Nvidia NeMo, optimized for inference using TensorRT, and deployed in the cloud and at the edge using Helm charts available on NGC, Nvidia's catalog of GPU-optimized software.
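Nvidia's own stack handles this flow with Jarvis, NeMo, TensorRT, and NGC; as a rough mental model of what such an end-to-end conversational pipeline chains together, here is a schematic Python sketch. The function names (transcribe, understand, respond, synthesize) are hypothetical placeholders for illustration, not Jarvis or NeMo APIs.

```python
# Schematic sketch of the stages an end-to-end conversational AI pipeline chains
# together: speech recognition -> language understanding -> dialog -> speech
# synthesis. All function names are hypothetical placeholders, not Nvidia APIs.
from dataclasses import dataclass

@dataclass
class Turn:
    audio_in: bytes
    transcript: str = ""
    intent: str = ""
    reply_text: str = ""
    audio_out: bytes = b""

def transcribe(audio: bytes) -> str:
    """ASR stage: audio in, text out (a GPU-optimized model in a real system)."""
    return "what is my order status"

def understand(text: str) -> str:
    """NLU stage: map the transcript to an intent, fine-tuned on domain data."""
    return "order_status" if "order" in text else "fallback"

def respond(intent: str) -> str:
    """Dialog stage: pick a reply for the detected intent."""
    return {"order_status": "Your order shipped yesterday."}.get(intent, "Sorry?")

def synthesize(text: str) -> bytes:
    """TTS stage: text in, audio out (stand-in for real synthesized speech)."""
    return text.encode("utf-8")

def handle(audio_in: bytes) -> Turn:
    turn = Turn(audio_in=audio_in)
    turn.transcript = transcribe(turn.audio_in)
    turn.intent = understand(turn.transcript)
    turn.reply_text = respond(turn.intent)
    turn.audio_out = synthesize(turn.reply_text)
    return turn

print(handle(b"...").reply_text)
```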

Intel and GraphCore: high-profile challengers
Working backward, this is something we have noted time and again for Nvidia: Its lead does not just lie in hardware. In fact, Nvidia's software and partner ecosystem may be the hardest part for the competition to match. The competition is making moves too, however. Some competitors may challenge Nvidia on economics, others on performance. Let's see what the challengers are up to.
Intel has been working on its Nervana technology for a while. At the end of 2019, Intel made waves when it acquired startup Habana Labs for $2 billion. As analyst Karl Freund notes, after the acquisition Intel has been working on switching its AI acceleration from Nervana technology to Habana Labs.....
....MUCH MORE