Wednesday, July 3, 2013

Researcher Dreams Up Machines That Learn Without Humans

Uh oh.
From Wired:
[Image caption: A visual representation of Yoshua Bengio's "learning model," where machines "think" about faces and emotions. Image: Courtesy Yoshua Bengio]

Yoshua Bengio recently had a vision — a vision of how to build computers that learn like people do.
It happened at an academic conference in May, and he was filled with excitement — perhaps more so than he’d ever been during his decades-long career in “deep learning,” an emerging field of computer science that seeks to engineer machines that mimic how the human brain processes information. Or, rather, how we assume the brain processes information.

In his hotel room, Bengio started furiously scribbling mathematical equations that captured his new ideas. Soon he was bouncing these ideas off various colleagues, including deep learning pioneer Yann LeCun of New York University. Judging from their response, Bengio knew he was onto something big.
When he made it back to his laboratory at the University of Montreal — home to one of the biggest concentrations of deep-learning researchers — Bengio and his team went to work turning his equations into functional, intelligent algorithms. About a month later, that hotel-room vision morphed into what he believes is one of the most important breakthroughs of his career, one that could accelerate the quest for artificial intelligence.

In short, Bengio has developed new ways for computers to learn without much input from us humans. Typically, machine learning requires “labeled data” — information that’s been categorized by real people. If you want a computer to learn what a cat looks like, you must first show it what a cat looks like. Bengio seeks to eliminate this step.
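For readers unfamiliar with the jargon, here is a minimal sketch of what "labeled data" means in practice. This is not Bengio's code, just a generic supervised-learning toy: the classifier below can only learn "cat vs. not cat" because a human-supplied label y accompanies every example, and the data, sizes, and labels are all invented for illustration.

# A minimal sketch, not Bengio's method: a toy supervised classifier that only
# works because a human-supplied label y accompanies every example.
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(100, 20))               # 100 "images", 20 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # stand-in for human labeling (1 = cat)

w, b, lr = np.zeros(20), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "cat"
    w -= lr * X.T @ (p - y) / len(y)         # every update depends on the labels y
    b -= lr * np.mean(p - y)

print("training accuracy:", np.mean((p > 0.5) == y))

Take away y and the loop has nothing to push the weights toward; that dependence on human labeling is the step Bengio wants to eliminate.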

“Today’s models can be trained on huge quantities of data, but that’s not enough,” says Bengio, who together with LeCun and Google’s Geoffrey Hinton is one of the original musketeers of deep learning. “We need to discover learning algorithms that can take better advantage of all this unlabeled data that’s sitting out there.”

Currently, the most widely used deep-learning models — so-called artificial neural networks harnessed by the likes of search giants Google and Baidu — use a combination of labeled and unlabeled data to make sense of the world. But unlabeled information far outweighs the amount people have been able to manually label, and if deep learning is to turn the corner, it must tackle areas where labeled data is scarce, including language translation and image recognition...MORE
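One common way to combine labeled and unlabeled data in that era of deep learning is unsupervised pretraining followed by supervised fine-tuning. The sketch below is a generic illustration of that idea, not the models Google, Baidu, or Bengio's lab actually use: a linear autoencoder learns features from plentiful unlabeled data with no labels at all, and a small labeled set then trains a classifier on top of those features. All data and sizes are made up.

# A rough, assumption-laden sketch of semi-supervised learning via unsupervised
# pretraining. Not the article's actual models.
import numpy as np

rng = np.random.default_rng(1)

n_unlabeled, n_labeled, d, h = 2000, 50, 30, 10
X_unlabeled = rng.normal(size=(n_unlabeled, d))      # plentiful unlabeled data
X_labeled = rng.normal(size=(n_labeled, d))          # scarce labeled data
y_labeled = (X_labeled[:, 0] > 0).astype(float)

# 1) Unsupervised pretraining: a linear autoencoder learns to compress and
#    reconstruct the unlabeled data; no human labels are involved.
W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))
lr = 0.01
for _ in range(200):
    H = X_unlabeled @ W_enc                          # hidden code
    err = H @ W_dec - X_unlabeled                    # reconstruction error
    grad_dec = H.T @ err / n_unlabeled
    grad_enc = X_unlabeled.T @ (err @ W_dec.T) / n_unlabeled
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# 2) Supervised fine-tuning: a logistic classifier on top of the learned
#    features, trained with the few labels we do have.
w, b = np.zeros(h), 0.0
H_lab = X_labeled @ W_enc
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H_lab @ w + b)))
    w -= 0.1 * H_lab.T @ (p - y_labeled) / n_labeled
    b -= 0.1 * np.mean(p - y_labeled)

print("accuracy on the small labeled set:", np.mean((p > 0.5) == y_labeled))

The labeled pass touches only 50 examples here; the heavy lifting happens on the unlabeled pile, which is the imbalance the article says future algorithms must exploit far better.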