Monday, April 1, 2019

"The next AI explosion will be defined by the chips we build for it" (NVDA)

From MIT's Technology Review, March 26:

Specialized AI chips are the future, and chipmakers are scrambling to figure out which designs will prevail.

Hardware design, rather than algorithms, will help us achieve the next big breakthrough in AI. That’s according to Bill Dally, Nvidia’s chief scientist, who took the stage Tuesday at EmTech Digital, MIT Technology Review’s AI conference. “Our current revolution in deep learning has been enabled by hardware,” he said.

As evidence, he pointed to the history of the field: many of the algorithms we use today have been around since the 1980s, and the breakthrough of using large quantities of labeled data to train neural networks came during the early 2000s. But it wasn’t until the early 2010s—when graphics processing units, or GPUs, entered the picture—that the deep-learning revolution truly took off.

“We have to continue to provide more capable hardware, or progress in AI will really slow down,” Dally said.

Nvidia is now exploring three main paths forward: developing more specialized chips; reducing the computation required during deep learning; and experimenting with analog rather than digital chip architectures.

Nvidia has found that highly specialized chips designed for a specific computational task can outperform GPU chips that are good at handling many different kinds of computation. The difference, Dally said, could be as much as a 20% increase in efficiency for the same level of performance.

Dally also referenced a study that Nvidia did to test the potential of “pruning”—the idea that you can reduce the number of calculations that must be performed during training, without sacrificing a deep-learning model’s accuracy. Researchers at the company found they were able to skip around 90% of those calculations while retaining the same learning accuracy. This means the same learning tasks can take place using much smaller chip architectures...MUCH MORE
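The article doesn't describe Nvidia's pruning study in detail, but the general idea is commonly illustrated with magnitude-based weight pruning: zero out the smallest-magnitude weights so the corresponding multiply-accumulates can be skipped. A minimal NumPy sketch of that idea (the function name, the 90% sparsity target, and the random weights are illustrative, not Nvidia's actual method):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude weights.

    Keeps roughly the top (1 - sparsity) fraction by absolute value;
    zeroed entries correspond to computations that could be skipped.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

# Prune a random weight matrix to ~90% sparsity.
rng = np.random.default_rng(0)
w = rng.standard_normal((100, 100))
pruned = magnitude_prune(w, sparsity=0.9)
kept = np.count_nonzero(pruned) / pruned.size
print(f"fraction of weights kept: {kept:.2f}")
```

In practice pruning is interleaved with retraining so the remaining weights can compensate, and the resulting sparse matrices only pay off on hardware that can actually skip the zeros, which is part of why Dally frames it as a hardware question.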