Tuesday, March 4, 2014

"As Machines Get Smarter, Evidence They Learn Like Us"

The post immediately below on Cliff Asness and the nature of his reality reminded me of Renaissance Technologies' Jim Simons and the nature of his reality, part of which is funding the Simons Foundation, which in turn publishes Quanta Magazine, home of an article I'd been meaning to post.

From Quanta:

Studies suggest that computer models called neural networks, which are used in a growing number of applications, may 'learn' to recognize patterns in data using the same algorithms as the human brain.
The brain performs its canonical task — learning — by tweaking its myriad connections according to a secret set of rules. To unlock these secrets, scientists 30 years ago began developing computer models that try to replicate the learning process. Now, a growing number of experiments are revealing that these models behave in ways strikingly similar to actual brains when performing certain tasks. Researchers say the similarities suggest a basic correspondence between the brains’ and computers’ underlying learning algorithms.

The algorithm used by a computer model called the Boltzmann machine, invented by Geoffrey Hinton and Terry Sejnowski in 1983, appears particularly promising as a simple theoretical explanation of a number of brain processes, including development, memory formation, object and sound recognition, and the sleep-wake cycle.
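The article stays at the prose level, but for the code-minded, here is a minimal sketch of the Hinton–Sejnowski learning rule on a toy Boltzmann machine. Everything here is illustrative: the network size, the toy patterns, and the crude single-sample correlation estimates are my own simplifications (the original algorithm calls for averages taken at thermal equilibrium).

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy Boltzmann machine: 4 visible + 2 hidden stochastic binary units
# joined by symmetric weights W, with energy E(s) = -0.5 * s @ W @ s.
n_vis, n_hid = 4, 2
n = n_vis + n_hid
W = np.zeros((n, n))

def gibbs_sweep(s, clamped_vis=None):
    # Update each unit stochastically from its neighbors' weighted input.
    for i in range(n):
        if clamped_vis is not None and i < n_vis:
            s[i] = clamped_vis[i]  # visible units held to the data
        else:
            s[i] = float(rng.random() < sigmoid(W[i] @ s))
    return s

data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)

for epoch in range(300):
    pos = np.zeros((n, n))  # correlations with data clamped ("wake")
    neg = np.zeros((n, n))  # correlations while free-running ("sleep")
    for v in data:
        s = rng.integers(0, 2, n).astype(float)
        for _ in range(10):
            s = gibbs_sweep(s, clamped_vis=v)
        pos += np.outer(s, s)
        s = rng.integers(0, 2, n).astype(float)
        for _ in range(10):
            s = gibbs_sweep(s)
        neg += np.outer(s, s)
    # The learning rule: strengthen connections whose units fire together
    # more often with the data clamped than in the free-running phase.
    W += 0.05 * (pos - neg) / len(data)
    np.fill_diagonal(W, 0.0)  # no self-connections

print(np.round(W, 2))
```

The contrast between the clamped ("wake") phase and the free-running ("sleep") phase is the same two-phase picture the article connects to memory formation and the sleep-wake cycle.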

“It’s the best possibility we really have for understanding the brain at present,” said Sue Becker, a professor of psychology, neuroscience, and behavior at McMaster University in Hamilton, Ontario. “I don’t know of a model that explains a wider range of phenomena in terms of learning and the structure of the brain.”

Hinton, a pioneer in the field of artificial intelligence, has always wanted to understand the rules governing when the brain beefs a connection up and when it whittles one down — in short, the algorithm for how we learn. “It seemed to me if you want to understand something, you need to be able to build one,” he said. Following the reductionist approach of physics, his plan was to construct simple computer models of the brain that employed a variety of learning algorithms and “see which ones work,” said Hinton, who splits his time between the University of Toronto, where he is a professor of computer science, and Google.
Multilayer neural networks consist of layers of artificial neurons with weighted connections between them. Input data fed into the network sends a cascade of signals through the layers, and a learning algorithm dictates whether to increase or decrease the weight of each connection. The result is a network more attuned to the patterns that exist in data.
During the 1980s and 1990s, Hinton — the great-great-grandson of the 19th-century logician George Boole, whose work is the foundation of modern computer science — invented or co-invented a collection of machine learning algorithms. The algorithms, which tell computers how to learn from data, are used in computer models called artificial neural networks — webs of interconnected virtual neurons that transmit signals to their neighbors by switching on and off, or “firing.” When data are fed into the network, setting off a cascade of firing activity, the algorithm determines based on the firing patterns whether to increase or decrease the weight of the connection, or synapse, between each pair of neurons.
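To make those mechanics concrete, here is a minimal runnable sketch of the weight-update loop described above: a tiny two-layer network learning the toy XOR pattern by plain gradient descent (one common learning algorithm; the article doesn't single one out). The layer sizes and names are my own.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Tiny two-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

# Toy data: XOR, a pattern no single-layer network can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

for step in range(10000):
    # Forward pass: the input sends a cascade of signals through the layers.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output
    # Backward pass: for every connection, the learning rule dictates
    # whether to increase or decrease the weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # approaches [0, 1, 1, 0] as training proceeds
```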

For decades, many of Hinton’s computer models languished. But thanks to advances in computing power, scientists’ understanding of the brain, and the algorithms themselves, neural networks are playing an increasingly important role in neuroscience. Sejnowski, head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in La Jolla, Calif., said: “Thirty years ago, we had very crude ideas; now we are beginning to test some of those ideas.”

Brain Machines
Hinton’s early attempts at replicating the brain were limited. Computers could run his learning algorithms on small neural networks, but scaling the models up quickly overwhelmed the processors. In 2005, Hinton discovered that if he sectioned his neural networks into layers and ran the algorithms on them one layer at a time, which approximates the brain’s structure and development, the process became more efficient.
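Here is a hedged sketch of what running the algorithms "one layer at a time" can look like, reading the discovery as greedy layer-wise pretraining: each layer is trained as a small network on the outputs of the layer below. The helper function, the toy data, and the one-step contrastive-divergence shortcut are my own simplifications, not the article's code.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=500, lr=0.1):
    """Train one layer with one-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
    for _ in range(epochs):
        for v0 in data:
            ph0 = sigmoid(v0 @ W)                           # hidden probs from data
            h0 = (rng.random(n_hidden) < ph0).astype(float) # sample hiddens
            v1 = sigmoid(h0 @ W.T)                          # reconstruct visibles
            ph1 = sigmoid(v1 @ W)                           # hidden probs again
            # Strengthen data-driven correlations, weaken reconstructed ones.
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    return W

# Layer-wise training: fit the first layer on the raw data, then fit
# the second layer on the first layer's hidden activations, and so on.
data = rng.integers(0, 2, size=(20, 8)).astype(float)
W1 = train_rbm(data, n_hidden=4)   # layer 1 learns from the data
h1 = sigmoid(data @ W1)            # layer 1's learned representation
W2 = train_rbm(h1, n_hidden=2)     # layer 2 learns from layer 1
print(W1.shape, W2.shape)          # (8, 4) (4, 2)
```

Because each layer only ever sees the layer below it, the per-layer networks stay small, which is why the process scales where training the whole stack at once overwhelmed the processors.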
Although Hinton published his discovery in two top journals, neural networks had fallen out of favor by then, and “he was struggling to get people interested,” said Li Deng, a principal researcher at Microsoft Research in Washington state. Deng, however, knew Hinton and decided to give his “deep learning” method a try in 2009, quickly seeing its potential. In the years since, the theoretical learning algorithms have been put to practical use in a surging number of applications, such as the Google Now personal assistant and the voice search feature on Microsoft Windows phones....MORE
Also from Quanta:
"The Future Fabric of Data Analysis"
Data Driven: The New Big Science, Chapter 4: Biology in the Era of Big Data
Previously in the Data Driven: The New Big Science series:
Chapter 1: Who's Driving?
Imagining Data Without Division
Chapter 2: Digits in the Sky
A Digital Copy of the Universe, Encrypted
Chapter 3: Revolutionary Algorithms
The Mathematical Shape of Things to Come