Google software beats human Go champion in first match
A Google computer programme has defeated a human opponent in the first of five matches of a complex Chinese board game called Go.
The victory by AlphaGo, the artificial intelligence programme developed by Google DeepMind, over South Korean Go champion Lee Sedol is significant because the ancient game, one of the most creative and complex ever devised, has long been difficult for computers to master.
The near-infinite number of board positions in Go requires players to rely on intuition in making their moves. AlphaGo was designed to mimic that intuition in tackling complex tasks.
Artificial intelligence experts had forecast it would take another decade for computers to beat professional Go players, until AlphaGo beat a European Go champion last year.
Commentators said the match was close, with both AlphaGo and Mr Lee making some mistakes. The result was unpredictable until near the end.
Mr Lee's loss was a shock to South Koreans and Go fans. The 33-year-old was confident of a sweeping victory two weeks ago, but sounded less optimistic the day before the match.
"I was very surprised because I did not think that I would lose the game. A mistake I made at the very beginning lasted until the very last," said Mr Lee, who has won 18 world championships since becoming a professional Go player at the age of 12.
He said AlphaGo's strategy was "excellent" from the beginning.
Yoo Chang-hyuk, another South Korean Go master who commentated on the game, described the result as a big shock and said Mr Lee appeared to have been shaken at one point.
Hundreds of thousands of people watched the game live on TV and YouTube.
Computers conquered chess in 1997 in a match between IBM's Deep Blue and chess champion Garry Kasparov, leaving Go as "the only game left above chess", said Demis Hassabis, Google DeepMind's chief executive.
Leading human players rely heavily on intuition and feelings to choose from a near-infinite number of board positions in Go, making the game extremely challenging for the artificial intelligence community.
Mr Hassabis said: "We are very excited about this historic moment. We are very pleased about how AlphaGo performed."
The DeepMind team built "reinforcement learning" into AlphaGo, meaning the machine plays against itself and adjusts its own neural networks through trial and error. AlphaGo can also narrow the search for the next best move from a near-infinite number of possibilities to something more manageable. It can also anticipate the long-term results of each move and predict the likely winner.
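The self-play idea can be illustrated with a toy sketch. The code below is not AlphaGo's method: it uses a simple lookup table instead of neural networks, and a trivially small game (single-pile Nim, where players alternately take 1 to 3 stones and whoever takes the last stone wins) instead of Go. All function and parameter names here are illustrative assumptions. What it shares with the article's description is the core loop: the program plays games against itself and adjusts its value estimates based on the trial-and-error outcome of each game.

```python
import random

# Toy self-play reinforcement learning sketch (illustrative only, not
# DeepMind's implementation): tabular value learning on single-pile Nim.
# Players alternately take 1-3 stones; taking the last stone wins.

ACTIONS = (1, 2, 3)

def train(episodes=30000, alpha=0.1, epsilon=0.2, start=10, seed=0):
    """Learn move values by playing the game against itself."""
    rng = random.Random(seed)
    q = {}  # (stones_remaining, action) -> estimated value for the mover
    for _ in range(episodes):
        stones = start
        history = []  # (state, action) pairs for both players, in order
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if rng.random() < epsilon:
                a = rng.choice(legal)  # explore: try a random move
            else:
                a = max(legal, key=lambda x: q.get((stones, x), 0.0))
            history.append((stones, a))
            stones -= a
        # The player who made the final move won. Walk backwards through
        # the game, nudging each move's value towards the outcome its
        # player experienced (the trial-and-error signal).
        reward = 1.0
        for state, action in reversed(history):
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward - old)
            reward = -reward  # the opposing player saw the opposite result

    return q

def best_move(q, stones):
    """Greedy move under the learned values."""
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda x: q.get((stones, x), 0.0))
```

After training, the table recovers the well-known optimal Nim strategy of leaving the opponent a multiple of four stones, purely from self-play. In Go the state space is far too large for a table, which is why AlphaGo substitutes neural networks to generalise across positions and to prune the search, as described above.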