Friday, October 8, 2021

Chips: Neuromorphic Computing

From the journal Science, September 30:

Learning curve
New brain-inspired chips could provide the smarts for autonomous robots and self-driving cars 

Hillsboro, Oregon—Though he catches flak for it, Garrett Kenyon, a physicist at Los Alamos National Laboratory, calls artificial intelligence (AI) “overhyped.” The algorithms that underlie everything from Alexa’s voice recognition to credit card fraud detection typically owe their skills to deep learning, in which the software learns to perform specific tasks by churning through vast databases of examples. These programs, Kenyon points out, don’t organize and process information the way human brains do, and they fall short when it comes to the versatile smarts needed for fully autonomous robots, for example. “We have a lot of fabulous devices out there that are incredibly useful,” Kenyon says. “But I would not call any of that particularly intelligent.”

Kenyon and many others see hope for smarter computers in an upstart technology called neuromorphic computing. In place of standard computing architecture, which processes information linearly, neuromorphic chips emulate the way our brains process information, with myriad digital neurons working in parallel to send electrical impulses, or spikes, to networks of other neurons. Each silicon neuron fires when it receives enough spikes, passing along its excitation to other neurons, and the system learns by reinforcing connections that fire regularly while paring away those that don’t. The approach excels at spotting patterns in large amounts of noisy data, which can speed learning. Because information processing takes place throughout the network of neurons, neuromorphic chips also require far less shuttling of data between memory and processing circuits, boosting speed and energy efficiency.
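To picture the mechanism the article describes, here is a minimal sketch of a spiking network: leaky integrate-and-fire neurons that accumulate incoming spikes, fire past a threshold, and strengthen connections between neurons that fire together while letting idle connections decay. Every parameter and rule here (threshold, leak, the simple Hebbian-style update) is an illustrative assumption for readability; this is not Intel's Loihi programming model or any particular lab's algorithm.

```python
import numpy as np

# Toy spiking network: neurons integrate spikes, fire at a threshold, and
# connections that help a neuron fire get reinforced while unused ones decay.
# All parameters are assumed for illustration only.

rng = np.random.default_rng(0)

N = 100            # number of neurons
THRESHOLD = 1.0    # membrane potential at which a neuron fires
LEAK = 0.9         # fraction of potential retained each step (leaky integration)
LEARN_RATE = 0.01  # reinforcement applied to co-active connections
DECAY = 0.0005     # slow decay of unused connections

weights = rng.uniform(0.0, 0.1, size=(N, N))  # weights[i, j]: strength of neuron i -> j
potential = np.zeros(N)                        # membrane potential of each neuron
last_fired = np.zeros(N)                       # spikes emitted on the previous step

def step(external_spikes: np.ndarray) -> np.ndarray:
    """Advance the network one time step given a 0/1 vector of external input spikes."""
    global potential, weights, last_fired

    # Integrate: leak old potential, add spikes arriving from other neurons plus external input.
    incoming = weights.T @ last_fired
    potential = LEAK * potential + incoming + external_spikes

    # Fire: neurons whose potential crosses the threshold emit a spike and reset.
    fired = (potential >= THRESHOLD).astype(float)
    potential[fired == 1] = 0.0

    # Learn: strengthen connections from neurons that spiked last step to neurons
    # that just fired (Hebbian-style), and let every connection decay slightly,
    # paring away links that rarely contribute.
    weights += LEARN_RATE * np.outer(last_fired, fired)
    weights -= DECAY
    np.clip(weights, 0.0, 1.0, out=weights)

    last_fired = fired
    return fired

# Drive the network with sparse random input spikes for a few hundred steps.
for _ in range(300):
    step((rng.random(N) < 0.05).astype(float))
```

Because the state lives in the weights and potentials distributed across the network, the learning and the "memory" sit in the same place as the computation, which is the property the article credits for the speed and energy-efficiency gains.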

Neuromorphic computing isn’t new. Yet progress has been slow, with chipmakers reluctant to invest in the technology without a proven market, and algorithm developers struggling to write software for an entirely new computer architecture. But the field appears to be maturing as the capabilities of the chips increase, which has attracted a growing community of software developers.

This week, Intel released the second generation of its neuromorphic chip, Loihi. It packs in 1 million artificial neurons, six times more than its predecessor, which connect to one another through 120 million synapses. Other companies, such as BrainChip and SynSense, have also recently rolled out new neuromorphic hardware, with chips that speed tasks such as computer vision and audio processing. Neuromorphic computing “is going to be a rock star,” says Thomas Cleland, a neurobiologist at Cornell University. “It won’t do everything better. But it will completely own a fraction of the field of computing.”

Intel’s venture into neuromorphic architecture edges the chip giant away from its famous general purpose computer chips, known as central processing units (CPUs). In recent years, the pace of advances in CPUs’ silicon technology has begun to slow. That has led to a proliferation of specialty computer chips, such as graphics processing units (GPUs) and dedicated memory chips, each tailored to a specific job. Neuromorphic chips may extend this trend. They excel at processing the vast data sets needed to give computers senses, such as vision and smell, says Mike Davies, who leads Intel’s neuromorphic research. That, along with their energy efficiency, appears to make them ideally suited for mobile devices that have a limited power supply and are untethered from traditional computer networks....

....MUCH MORE