At first, Fan Hui thought the move was rather odd. But then he saw its beauty.
“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful. Beautiful. Beautiful.
The move in question was the 37th in the second game of the historic Go match between Lee Sedol, one of the world’s top players, and AlphaGo, an artificially intelligent computing system built by researchers at Google. Inside the towering Four Seasons hotel in downtown Seoul, the game was approaching the end of its first hour when AlphaGo instructed its human assistant to place a black stone in a largely open area on the right-hand side of the 19-by-19 grid that defines this ancient game. And just about everyone was shocked.
“That’s a very surprising move,” said one of the match’s English language commentators, who is himself a very talented Go player. Then the other chuckled and said: “I thought it was a mistake.” But perhaps no one was more surprised than Lee Sedol, who stood up and left the match room. “He had to go wash his face or something—just to recover from it,” said the first commentator.
Even after Lee Sedol returned to the table, he didn’t quite know what to do, spending nearly 15 minutes considering his next play. AlphaGo’s move didn’t seem to connect with what had come before. In essence, the machine was abandoning a group of stones on the lower half of the board to make a play in a different area. AlphaGo placed its black stone just beneath a single white stone played earlier by Lee Sedol, and though the move may have made sense in another situation, it was completely unexpected in that particular place at that particular time—a surprise all the more remarkable when you consider that people have been playing Go for more than 2,500 years. The commentators couldn’t even begin to evaluate the merits of the move.
Then, over the next three hours, AlphaGo went on to win the game, taking a two-games-to-none lead in this best-of-five contest. To date, machines have beaten the best humans at chess and checkers and Othello and Jeopardy!. But no machine has beaten the very best at Go, a game whose vastly larger board and branching possibilities make it exponentially more complex than chess. Now, AlphaGo is one win away.
Fan Hui is in a better position to judge this move than anyone. He’s the three-time European Go champion and the first top player to challenge AlphaGo. The two played a five-game match in October, and he lost all five games. But Fan Hui wasn’t bitter. In the months since, he has served as an adviser to the AlphaGo team as it retrained this artificially intelligent system to a significantly higher level. Like the commentators, he initially didn’t know what to make of the move. But after about ten seconds, he says, he saw how the move connected with what came before—how it dovetailed with the 18 other black stones AlphaGo had already played.
The average human will never understand this move. But Fan Hui has the benefit of watching AlphaGo up close for the past several months—playing the machine time and again. And the proof of the move’s value lies in the eventual win for AlphaGo. Over two games, it has beaten the very best by playing in ways that no human would.
Like little else, this path to victory highlights the power and the mystery of the machine learning technologies that underpin Google’s creation—technologies that are already reinventing so many online services inside companies like Google and Facebook, and are poised to remake everything from scientific research to robotics. With these technologies, AlphaGo could learn the game by examining thousands of human Go moves, and then it could master the game by playing itself over and over and over again. The result is a system of unprecedented beauty.
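(For readers who want a concrete, if drastically simplified, sense of that two-stage process—imitate recorded games first, then improve through self-play—here is a toy sketch in Python. It is not AlphaGo’s actual code: the real system used deep neural networks and Monte Carlo tree search over millions of positions, while this stand-in uses a tabular policy and tic-tac-toe, and every name in it is a placeholder invented for illustration.)

```python
# Toy sketch of a two-stage training loop: (1) imitate recorded games,
# (2) refine the same policy through self-play. All names, the game
# (tic-tac-toe), and the "recorded" data are simplified placeholders.
import random
from collections import defaultdict

EMPTY, X, O = ".", "X", "O"

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == EMPTY]

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

class TabularPolicy:
    """Per-position move preferences (a stand-in for a neural network)."""
    def __init__(self):
        self.prefs = defaultdict(lambda: defaultdict(float))

    def choose(self, board, explore=0.1):
        moves = legal_moves(board)
        if random.random() < explore:
            return random.choice(moves)
        key = "".join(board)
        return max(moves, key=lambda m: self.prefs[key][m])

    def reinforce(self, history, reward):
        # Nudge every move made in this game toward (or away from) the outcome.
        for key, move in history:
            self.prefs[key][move] += reward

def play_game(policy_x, policy_o):
    board, player = [EMPTY] * 9, X
    histories = {X: [], O: []}
    while legal_moves(board) and winner(board) is None:
        policy = policy_x if player == X else policy_o
        move = policy.choose(board)
        histories[player].append(("".join(board), move))
        board[move] = player
        player = O if player == X else X
    return winner(board), histories

policy = TabularPolicy()

# Stage 1: imitation. Random games stand in for a library of human games;
# the winner's moves are treated as examples worth copying.
for _ in range(1000):
    result, histories = play_game(TabularPolicy(), TabularPolicy())
    if result:
        policy.reinforce(histories[result], reward=1.0)

# Stage 2: self-play. The policy plays itself; wins are rewarded,
# losses penalized, and its preferences gradually shift.
for _ in range(5000):
    result, histories = play_game(policy, policy)
    if result:
        policy.reinforce(histories[result], reward=1.0)
        loser = O if result == X else X
        policy.reinforce(histories[loser], reward=-1.0)

print("positions with learned preferences:", len(policy.prefs))
```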
But at the same time, AlphaGo’s triumph stirred a certain sadness in so many of the humans who watched yesterday’s match from the press rooms at the Four Seasons and, undoubtedly, in many of the millions of others who followed the contest on YouTube. After looking so powerful just a few days before, one of our own now seemed so weak.
At the Four Seasons, that feeling of sadness only increased during the post-game press conference, when Lee Sedol said that over the course of the four-hour match he never once felt in control. “Yesterday, I was surprised,” he said through an interpreter, referring to Game One. “But today I am speechless. If you look at the way the game was played, I admit, it was a very clear loss on my part. From the very beginning of the game, there was not a moment in time when I felt that I was leading.”
It wasn’t just move 37. AlphaGo made several other surprising moves, one just as the game was under way. In the end, Lee Sedol said he felt that, unlike in Game One, AlphaGo made no real mistakes. Not one. “I really feel that AlphaGo played the near perfect game,” he said. Later, he seemed to concede match victory to the Google machine, saying he didn’t want the event to end without him winning at least one game. Earlier in the week, before the match began, he was completely sure he would triumph.
Fan Hui once sat in much the same place. As we talk after the match, he clearly feels an enormous empathy for Lee Sedol, complaining about the online critics who have lambasted the Korean’s play. “Be gentle with Lee Sedol,” he says. “Be gentle.” But as hard as it was for Fan Hui to lose back in October and have the loss reported across the globe—and as hard as it has been to watch Lee Sedol’s struggles—his primary emotion isn’t sadness.
As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn’t see before. And that makes him happy. “So beautiful,” he says. “So beautiful.”
Source:
http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/