The rise of the machines was on full display Thursday in Seoul when a Google computer again defeated the top-ranked human player of Go, the world’s most complex board game.
The computer’s second consecutive victory over Lee Se-dol, the Go world champion, will be seen as a significant advancement in artificial intelligence.
The South Korean must now win three games in a row against Google DeepMind’s AlphaGo to triumph in the best-of-five series. The computer won its first match against Lee on Wednesday.
If Lee wins the series, he gets $1 million and reasserts his title as global champ; a convincing win by AlphaGo would signal the end of human dominance in the insanely complicated board game. (If AlphaGo wins, Google will donate the prize money to charity.)
Lee, 33, holds the highest possible professional ranking for a Go player and has been called “the Roger Federer of Go.”
Go originated thousands of years ago in China. During play, two opponents take turns placing black and white stones on a square grid of 19 lines by 19 lines. The goal is to control more of the board than your opponent, surrounding territory with your stones and capturing enemy stones by encircling them.
Games can last for hours, and winning requires immense mental stamina, intuition and strategy.
Teaching computers to master Go has been a kind of holy grail for artificial intelligence scientists. There are more possible configurations of the board than atoms in the universe, according to Demis Hassabis, CEO of Google DeepMind, which developed AlphaGo.
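(As a rough, back-of-the-envelope illustration of that claim, not an exact figure: each of the 361 intersections on a 19-by-19 board can be empty, black or white, which already gives far more raw configurations than the commonly cited estimate of roughly 10^80 atoms in the observable universe.)

```python
# Illustrative sketch of the "more configurations than atoms" claim.
# Each of the 361 intersections on a 19x19 board is empty, black, or white,
# giving 3**361 raw configurations (an upper bound; not all are legal positions).
raw_configurations = 3 ** (19 * 19)
atoms_in_universe = 10 ** 80  # commonly cited rough estimate

print(f"Raw board configurations: about 10^{len(str(raw_configurations)) - 1}")
print(raw_configurations > atoms_in_universe)  # True
```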
“Go is the most profound game that mankind has ever devised,” Hassabis said. “Go is a game primarily about intuition and feel rather than brute calculation, which is what makes it so hard for computers to play well.”
Last October, AlphaGo convincingly defeated the European Go champion, Fan Hui, sweeping all five games of their match. The computer’s victory was considered a huge breakthrough, occurring roughly a decade sooner than experts had expected.
Software programs long ago became adept at classic board games like backgammon. Their rapid progress culminated in the historic victory of IBM’s Deep Blue computer over world chess champion Garry Kasparov in 1997.
But it’s taken another two decades for artificial intelligence to get to grips with the mind-boggling complexities of Go. Until recently, software programs could only compete with human amateurs.
Google researchers say they expect AlphaGo’s technology to be put to use in the company’s own apps and in areas such as medicine.
Google acquired DeepMind in 2014 to bolster its portfolio in artificial intelligence and robotics.