Sunday, May 10, 2020

"AI Designs Computer Chips for More Powerful AI" (GOOG)

And the rich get richer and it ain't you. See also yesterday's "Flywheel Effect: Why Positive Feedback Loops are a Meta-Competitive Advantage" if interested.

From the Physics arXiv via Discover, April 30:

Google’s breakthrough could dramatically accelerate the design cycle for intelligent machines. 
One emerging trend in chip design is a move away from bigger, grander designs that double the number of transistors every 18 months, as Moore’s Law stipulates. Instead, there is growing interest in specialized chips for specific tasks such as AI and machine learning, which are advancing rapidly on scales measured in weeks and months.

But chips take much longer than this to design, and that means new microprocessors cannot be designed quickly enough to reflect current thinking. “Today’s chips take years to design, leaving us with the speculative task of optimizing them for the machine learning models of two to five years from now,” lament Azalia Mirhoseini, Anna Goldie and colleagues at Google, who have come up with a novel way to speed up this process.

Their new approach is to use AI itself to speed up the process of chip design. And the results are impressive. Their machine learning algorithm can do in six hours what a human chip designer would take weeks to achieve, even when using modern chip-design software.

And the implications are significant. “We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI with each fueling advances in the other,” say Mirhoseini, Goldie and colleagues.

Microchip design is a complex and lengthy process. It begins with human designers setting out the basic requirements for the chip: its size, its function, how it will be tested, and so on. After that, the team maps out an abstract design for the way data flows through the chip and the logic operations that must be performed on it.

Hugely Complex Networks
The result is an abstract, but hugely complex, network of logic gates and combinations of logic gates with specific known functions, called macros. This network, which may have billions of components, is known as a “netlist.”...
....MORE
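To make the "netlist" idea above concrete: it is essentially a directed graph whose nodes are gates or macros and whose edges are wires. Here is a minimal sketch in Python — the class and names are hypothetical, purely illustrative; real netlists are described in hardware formats such as Verilog, and Google's placement algorithm operates on graphs with millions to billions of such nodes.

```python
class Node:
    """A component in the netlist: a single logic gate, or a macro
    (a pre-designed block of gates with a specific known function)."""
    def __init__(self, name, kind):
        self.name = name      # e.g. "and1", "sram_macro"
        self.kind = kind      # "gate" or "macro"
        self.fanout = []      # downstream nodes this node's output drives

def connect(src, dst):
    """Wire the output of src to an input of dst."""
    src.fanout.append(dst)

# A toy netlist: two gates feeding one macro.
a = Node("and1", "gate")
b = Node("or1", "gate")
m = Node("sram_macro", "macro")
connect(a, m)
connect(b, m)

# Placement -- the task the article describes -- assigns each node
# (x, y) coordinates on the chip die, trying to minimize total
# wirelength and congestion. Coordinates start out unassigned:
placement = {n.name: None for n in (a, b, m)}
```

The point of the sketch is scale: choosing good coordinates for billions of interconnected nodes is the combinatorial problem that takes human teams weeks, and that the learned placer tackles in hours.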