Monday, September 26, 2016

Zombie Moore's Law shows Hardware is Eating Software (INTC; NVDA)

From The Register:

After being pronounced dead this past February - in Nature, no less - Moore’s Law seems to be having a very weird afterlife. Within the space of the last thirty days we've seen:
- Intel announce some next-generation CPUs that aren’t very much faster than the last generation of CPUs;
- Intel delay, again, the release of some of its 10nm process CPUs; and
- Apple’s new A10 chip, powering iPhone 7, hailed as one of the fastest CPUs ever.

Intel hasn’t lost the plot. In fact, most of the problems in Moore’s Law have come from Intel’s slavish devotion to a single storyline: more transistors and smaller transistors are what everyone needs. That push toward ‘general purpose computing’ gave us thirty years of Wintel, but that no longer looks to be the main game. The CPU is all grown up.
Meanwhile, in the five years between iPhone 4S and iPhone 7, Apple has written its obsessive-compulsive desire for complete control into silicon. Every twelve months another A-series System-on-a-Chip makes its way into the Apple product line, and every time performance increases enormously.
You might think that’s to be expected - after all, those kinds of performance improvements are what Moore’s Law guarantees. But the bulk of the speed gains in the A-series (about a factor of twelve over the last five years) don’t come from making more, smaller transistors. Instead, they come from Apple’s focus on using only those transistors needed for its smartphones and tablets.
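For a rough sense of scale, assume performance had merely tracked transistor budgets at a Moore's-Law-style doubling every two years; five years of that compounds to about

\[ 2^{5/2} \approx 5.7\times \]

which is less than half of the roughly twelvefold gain the A-series actually delivered. Under that assumption, the remainder has to come from somewhere other than the process shrink - from designing the chip around the workload.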
Although the new A10 hosts an ARM four-core big.LITTLE CPU, every aspect of Apple’s chip is highly tuned to both workload and iOS kernel-level task management. It’s getting hard to tell where Apple’s silicon ends and its software begins.
And that’s exactly the point.
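One way to see that blending from the software side is the scheduling interface iOS exposes to apps. With Grand Central Dispatch, code never names a core; it only states the quality of service the work needs, and everything below that line - the kernel’s scheduler and the chip’s performance controller - decides whether the work lands on the high-performance or high-efficiency cluster. A minimal Swift sketch (standard GCD calls; how any particular A-series part maps these hints to clusters is Apple’s business and isn’t shown here):

```swift
import Dispatch
import Foundation

// The app describes the work; it never names a core. On an asymmetric
// (big.LITTLE-style) chip, the kernel and the performance controller use
// these quality-of-service hints to steer work toward the performance or
// efficiency cluster. The task bodies are placeholders.

// Latency-critical work the user is waiting on right now:
// a strong hint that it belongs on the fast cores.
DispatchQueue.global(qos: .userInteractive).async {
    print("frame prepared")        // stand-in for real per-frame work
}

// Deferred housekeeping nobody is waiting on:
// a strong hint that the efficient cores, at lower clocks, are fine.
DispatchQueue.global(qos: .background).async {
    print("library reindexed")     // stand-in for real batch work
}

// Keep a standalone script alive long enough for the async blocks to run.
Thread.sleep(forTimeInterval: 1)
```

The division of labour is the point: the QoS constant is all the software says, and the silicon does the rest.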
The cheap and easy gains of the last fifty years of Moore’s Law gave birth to a global technology industry. The next little while - somewhere between twenty and fifty years out - will be dominated by a transition from software into hardware, a confusion of the two so complete it will become effectively impossible to know where the boundary between them lies.
Apple isn’t alone; NVIDIA has been driving its GPUs through the same semiconductor manufacturing process nodes that Intel pioneers, growing more, smaller transistors to draw pretty pictures on displays, while simultaneously adding custom bits to move some of the work previously done in software - such as rendering stereo pairs for virtual reality displays - into the hardware. A process that used to cost 2x the compute for every display frame now comes essentially for free.
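To make that 2x claim concrete, here is a toy cost model - hypothetical types and illustrative numbers, not any real graphics API - of why naive stereo doubles the per-frame work while single-pass multi-projection hands the duplication to dedicated hardware:

```swift
// Toy cost model only: the types below are hypothetical, not a real API.
// It captures the arithmetic, not the rendering.

struct FrameCost {
    var geometryPasses: Int   // times the scene is transformed and submitted
    var shadedViews: Int      // per-eye pixel work, unavoidable either way
}

// Naive stereo: draw the whole scene once per eye, so geometry work doubles.
func naiveStereo() -> FrameCost {
    FrameCost(geometryPasses: 2, shadedViews: 2)
}

// Hardware multi-projection: the scene is submitted and transformed once,
// and the GPU replays that single pass against each eye's projection.
func multiProjectionStereo() -> FrameCost {
    FrameCost(geometryPasses: 1, shadedViews: 2)
}

print(naiveStereo(), multiProjectionStereo())
```

The per-eye pixel shading still has to happen either way; the saving is in the duplicated geometry and submission work, which is what makes the second eye "essentially for free" in the author's framing.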
Longtime watchers of the technology sector will note this migration from software into hardware has been a feature of computing for the last fifty years. But for all that time, the trade-off between the cheap gains of ever-faster CPUs and the hard work of designing and debugging silicon circuitry meant only the most important or time-critical tasks migrated into silicon.
...MORE