Saturday, March 12, 2016
"Go, AI and Game Theory"
From Digitopoly:

I haven’t played Go, but I have been reading about it over the last little while, since it turned out that an AI learned to beat the world champion at it. The significance of this is that those who knew the game of Go believed it was out of a machine’s reach to learn to play it better than the best human. This is because, unlike chess, brute computational force was not a competitive advantage in Go. The game is too complex and, thus, there is an art to playing it.
Even though I don’t know much about the game, it is interesting to read accounts of how DeepMind’s AlphaGo was able to defeat Lee Sedol:

At first, Fan Hui thought the move was rather odd. But then he saw its beauty.
“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful. Beautiful. Beautiful.

There were no complaints about unfair computational ability here. Instead, there was an appreciation of something new: in my reading, a new strategy. In other words, an innovation.

How did the innovation arise? For it to be one, no human could have known of it beforehand, including those ‘programming’ (if that is the right word now) AlphaGo. That seems to be the case. Instead, AlphaGo learned to play it by being fed the data from thousands of games, presumably including those played by Lee Sedol. Interestingly, this is a similar approach to how top Go players are trained. But AlphaGo was also learning as it played Fan Hui, the European champion it defeated a few months ago. Fan Hui, interestingly, was learning too, and through their interaction his ranking has improved dramatically from 633 to somewhere in the 300s. In other words, AlphaGo had the corpus of knowledge of past Go games and their players, but it was being trained by someone who was far from the best. It was not as if its sparring partner was the number 2 player in the world or even a top 10 player.

This achievement is monumental, but the game theorist in me is still unsure whether that is it for human Go players. There are things we don’t know. For instance, AlphaGo may have been trained to play one person, Lee Sedol, and may well lose to others. Fan Hui has defeated AlphaGo in unofficial games. In particular, how would AlphaGo go against people who made more mistakes? My point here is that AlphaGo may have been trained to know who it is playing, but what happens when it doesn’t know that? Game theory tells us that your tactics will change depending upon who you play against (just think about the scissors-biased players in Rock-Paper-Scissors; a small numerical sketch of this follows the excerpt). A human Go champion knows what they are up against, but what does AlphaGo know? My assumption here is that it may have known too much. (By the way, this is perhaps the reason why the best chess players in the world combine a team of humans with an AI rather than an AI alone.)

This suggests some other implications....MORE
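To make the Rock-Paper-Scissors aside concrete, here is a minimal sketch (mine, not from the Digitopoly post) of why the best tactics depend on who you are playing. The payoff values and the 50% scissors bias are illustrative assumptions: against a fully unpredictable opponent every pure strategy earns zero in expectation, but against a scissors-biased opponent the best response is simply to always play rock.

```python
import itertools

MOVES = ["rock", "paper", "scissors"]

def payoff(mine, theirs):
    """Payoff to 'mine': +1 for a win, 0 for a tie, -1 for a loss."""
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    if mine == theirs:
        return 0
    return 1 if beats[mine] == theirs else -1

def expected_payoff(my_mix, their_mix):
    """Expected payoff of my mixed strategy against the opponent's mixed strategy."""
    return sum(
        my_mix[m] * their_mix[t] * payoff(m, t)
        for m, t in itertools.product(MOVES, MOVES)
    )

uniform = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}
scissors_biased = {"rock": 1/4, "paper": 1/4, "scissors": 1/2}  # assumed bias, for illustration

# Against an unpredictable (uniform) opponent, no pure strategy has an edge.
for move in MOVES:
    pure = {m: 1.0 if m == move else 0.0 for m in MOVES}
    print(f"vs uniform opponent, always {move}: {expected_payoff(pure, uniform):+.2f}")

# Against the scissors-biased opponent, the pure strategies differ,
# and "always rock" is the best response.
for move in MOVES:
    pure = {m: 1.0 if m == move else 0.0 for m in MOVES}
    print(f"vs scissors-biased opponent, always {move}: {expected_payoff(pure, scissors_biased):+.2f}")
```

Running this prints zero for every strategy against the uniform opponent, but +0.25 for "always rock" (and -0.25 for "always paper") against the scissors-biased one, which is the sense in which the right play changes with the opponent you believe you are facing.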