Wednesday, March 16, 2016

So, What Will Google’s Winning Go Algorithm Do Now That It’s Won the Million Bucks?

Probably not the hookers and blow you see in the less cerebral/more raucous world of high-stakes chess.

From the journal Nature:
AlphaGo’s techniques could have broad uses, but moving beyond games is a challenge.
Following the defeat of one of its finest human players, the ancient game of Go has joined the growing list of tasks at which computers perform better than humans. In a six-day tournament in Seoul, watched by a reported 100 million people around the world, the computer algorithm AlphaGo, created by the Google-owned company DeepMind, beat Go professional Lee Sedol by 4 games to 1. The complexity and intuitive nature of the board game had established Go as one of the greatest challenges in artificial intelligence (AI). Now the big question is what the DeepMind team will turn to next.

AlphaGo’s general-purpose approach — which was mainly learned, with a few elements crafted specifically for the game — could be applied to problems that involve pattern recognition, decision-making and planning. But the approach is also limited. “It’s really impressive, but at the same time, there are still a lot of challenges,” says Yoshua Bengio, a computer scientist at the University of Montreal in Canada.

Lee, who had predicted that he would win the Google tournament in a landslide, was shocked by his loss. In October, AlphaGo beat European champion Fan Hui. But the version of the program that won in Seoul is significantly stronger, says Jonathan Schaeffer, a computer scientist at the University of Alberta in Edmonton, Canada, whose Chinook software mastered draughts in 2007: “I expected them to use more computational resources and do a lot more learning, but I still didn’t expect to see this amazing level of performance.”

The improvement was largely down to the fact that the more AlphaGo plays, the better it gets, says Miles Brundage, a social scientist at Arizona State University in Tempe, who studies trends in AI. AlphaGo uses a brain-inspired architecture known as a neural network, in which connections between layers of simulated neurons strengthen on the basis of experience. It learned by first studying 30 million Go positions from human games and then improving by playing itself over and over again, a technique known as reinforcement learning. DeepMind then combined AlphaGo’s ability to recognize successful board configurations with a ‘look-ahead search’, in which it explores the consequences of playing promising moves and uses the results to choose among them.
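To make that combination concrete, here is a minimal toy sketch in Python. It is emphatically not DeepMind's code: the helpers (`legal_moves`, `play`) are hypothetical, and a pseudo-random score stands in for the trained network. What it does show is the division of labour the article describes: a learned evaluator scores positions, and a look-ahead search explores candidate moves and backs those scores up to the root.

```python
import random

# Toy stand-in for AlphaGo's learned evaluator. In the real system this is a
# deep neural network trained on roughly 30 million human positions and then
# sharpened by self-play (reinforcement learning); here it is just a
# deterministic pseudo-random score, purely for illustration.
def value_estimate(position):
    rng = random.Random(position)   # seed on the position string
    return rng.uniform(-1.0, 1.0)   # +1.0 ~ winning for the player to move

def legal_moves(position):
    # Hypothetical helper: a real Go engine would enumerate board points;
    # three abstract options keep the toy search tree small.
    return ["a", "b", "c"]

def play(position, move):
    # Hypothetical helper: return the successor position after a move.
    return position + move

def lookahead_search(position, depth):
    """Depth-limited look-ahead guided by the learned evaluator.

    AlphaGo proper uses Monte Carlo tree search with policy and value
    networks; plain negamax is used here only to show the principle:
    explore the consequences of candidate moves, score the resulting
    positions with the 'network', and back the scores up to the root.
    """
    if depth == 0:
        return value_estimate(position), None
    best_score, best_move = float("-inf"), None
    for move in legal_moves(position):
        # The opponent's score is the negation of ours (zero-sum game).
        child_score, _ = lookahead_search(play(position, move), depth - 1)
        if -child_score > best_score:
            best_score, best_move = -child_score, move
    return best_score, best_move

if __name__ == "__main__":
    score, move = lookahead_search("start", depth=3)
    print(f"chosen move: {move} (backed-up score {score:+.3f})")
```

The real system swaps the random evaluator for trained policy and value networks and the exhaustive loop for Monte Carlo tree search, but the split is the same: pattern recognition proposes and evaluates moves, and search verifies them.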

Next, DeepMind could tackle more games. Computers now outplay humans at most board games, in which players tend to have access to all information about play. But machines still cannot beat humans at multiplayer poker, say, in which each player sees only their own cards. The DeepMind team has expressed an interest in tackling StarCraft, a science-fiction strategy game, and Schaeffer suggests that DeepMind devise a program that can learn to play different types of game from scratch. Such programs already compete annually at the International General Game Playing Competition, which is geared towards creating a more general type of AI. Schaeffer suspects that DeepMind would excel at the contest. “It’s so obvious that I’m positive they must be looking at it,” he says....MORE
HT: the ubernerds at the Association for Computing Machinery