Monday, March 25, 2019

Chips: The Accelerator Wall—A New Problem for a Post-Moore’s Law World (GPU; ASIC; FPGA)

As grandmother used to say, if it's not one tham ding it's another.
From the brainiacs at IEEE Spectrum:

Specialized chips and circuits may not save the computer industry after all
Accelerators are already everywhere: The world’s Bitcoin is mined by chips designed to speed the cryptocurrency’s key algorithm, nearly every digital something that makes a sound uses hardwired audio decoders, and dozens of startups are chasing speedy silicon that could make deep learning AI omnipresent. This kind of specialization, where common algorithms once run as software on CPUs are made faster by recreating them in hardware, has been thought of as a way to keep computing from stagnating after Moore’s Law peters out in one or two more chip generations.

But it won’t work. At least, it won’t work for very long. That’s the conclusion that Princeton University associate professor of electrical engineering David Wentzlaff and his doctoral student Adi Fuchs come to in research to be presented at the IEEE International Symposium on High-Performance Computer Architecture this month. Chip specialization, they calculate, can’t produce the kinds of gains that Moore’s Law could. Progress on accelerators, in other words, will hit a wall just like shrinking transistors will, and it will happen sooner than expected.

To prove their point, Fuchs and Wentzlaff had to figure out how much of recent performance gains comes from chip specialization and how much comes from Moore’s Law. That meant examining more than 1,000 chip data sheets and teasing out what part of their improvement from generation to generation was due to better algorithms and their clever implementation as circuits. In other words, they were looking to quantify human ingenuity.

So they did what engineers do: They made it into a dimensionless quantity. Chip specialization return, as they called it, answers the question: “How much did a chip’s compute capabilities improve under a fixed physical budget” of transistors?

Using this metric, they evaluated video decoding on an application-specific integrated circuit (ASIC), gaming frame rate on a GPU, convolutional neural networks on an FPGA, and Bitcoin mining on an ASIC. The results were not heartening: Gains in specialized chips are greatly dependent on there continuing to be more and better transistors available per square millimeter of silicon. In other words, without Moore’s Law, chip specialization’s powers are limited....MORE
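
To make the metric concrete, here's a minimal sketch of the "chip specialization return" idea in Python. The function name, the baseline model, and the numbers are illustrative assumptions of ours, not the paper's actual methodology:

```python
# Hedged sketch: "chip specialization return" normalizes a chip's observed
# generation-over-generation speedup by the gain attributable to simply
# having more/better transistors (Moore's Law); whatever is left over is
# credited to specialization. Names, baseline model, and numbers here are
# illustrative assumptions, not the paper's exact formulation.

def specialization_return(observed_speedup: float,
                          transistor_budget_gain: float) -> float:
    """Dimensionless ratio: the speedup remaining after dividing out the
    larger per-mm^2 transistor budget of the newer process node."""
    return observed_speedup / transistor_budget_gain

# Hypothetical example: a new video-decode ASIC runs 6x faster than its
# predecessor, but the process shrink also supplied 4x the transistors
# per square millimeter.
print(specialization_return(6.0, 4.0))  # 1.5 -> only ~1.5x from specialization
```

Read this way, the paper's warning is that the denominator has been doing most of the work: once transistor budgets stop growing, the ratio has very little room left to improve.
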
Previously on specialized chips:
Watch Out NVIDIA: "Amazon, Huawei efforts show move to AI-centric chips continues"
Jan. 24, 2019
Watch Out Nvidia, Xilinx Is Performing (reports, beats, pops) XLNX; NVDA
Xilinx, with their field-programmable gate array approach, versus Nvidia's more generalist chips is an example of the type of competition experts were predicting NVDA would be facing, some more than two years ago; links after the jump...
Dec. 2018
A Dip Into Chips: "AI Chip Architectures Race To The Edge"
Sept. 2018
"Why Alibaba is betting big on AI chips and quantum computing"
Sept. 2018
Hot Chips 2018 Symposium on High Performance Chips
Sept. 2018
Chips: "A Rogues Gallery of Post-Moore’s Law Options"
Aug. 2018
Ahead of NVIDIA Earnings: The Last of the "Easy" Comparisons (NVDA)
Beating that Q3 2017 EPS of 90 cents by double, i.e. $1.80 or more, is doable, but AI and data centers will have to pick up the slack from the Q1 and Q2 cryptocurrency bump, which started declining as Bitmain and other miners switched to ASICs rather than GPUs.

Going forward, the trend toward specialist proprietary chips (see Tesla's development of their own chips, etc.) will leave NVIDIA with a couple of holes in the potential addressable markets they will want to fill.

Additionally, the smaller pups, some still in stealth, are nipping at the big dog's heels, making it more expensive for NVIDIA to maintain their edge in architecture....
Aug. 2018
Artificial Intelligence Chips: Past, Present and Future
May 2018
"Chipmakers Battle To Power Artificial Intelligence In Cloud Data Centers" (AMD; NVDA; XLNX; INTC)
May 2018 
"Xilinx Analyst Day Plays Heavy on AI" (XLNX)
May 2018
"Intel vs. Nvidia: Future of AI Chips Still Evolving" (INTC; NVDA)
Dec. 2017
"'The Ultimate Trading Machine' from Penguin Computing sets Record for Low Latency"
Oct. 2017
"The Natural Evolution of Artificial Intelligence" 

"Nvidia CEO is 'more than happy to help' if Tesla's A.I. chip doesn't pan out" (NVDA; TSLA)

And April 2017
We've said NVIDIA probably has a couple of years' head start, but this bears watching, so to speak....
***
The only reason for Tesla to do this is that NVIDIA's chips are general-purpose, whereas specialized chips are making inroads in stuff like crypto mining (ASICs), Google's Tensor Processing Units (TPUs) for machine learning, and Facebook's hardware efforts.

In May 2018 we met a person who knows about this stuff:

AI VC: "We Are Here To Create"
Sometimes the competition is just plain intimidating/scary/resistance-is-futile, smart.

June 2016 
Machine Learning: JP Morgan Compares Google's New Chip With NVIDIA's (GOOG; NVDA)

And many more; use the "search blog" box, upper left, if interested.