Game over! Computer wins series against Go champion

SEOUL – A Google-developed computer programme took an unassailable 3-0 lead in its match-up with a South Korean Go grandmaster on Saturday – marking a major breakthrough for a new style of “intuitive” artificial intelligence (AI).

The programme, AlphaGo, secured victory in the five-match series with its third consecutive win over Lee Se-Dol – one of the ancient game’s greatest modern players with 18 international titles to his name.

Lee, who has topped the world ranking for much of the past decade and confidently predicted an easy victory when accepting the AlphaGo challenge, now finds himself fighting to avoid a whitewash defeat in the two remaining games on Sunday and Tuesday.

“AlphaGo played consistently from beginning to end while Lee, being only human, showed some mental vulnerability,” said one of Lee’s former coaches, Kwon Kap-Yong.

“The machine was increasingly gaining the upper hand as the series progressed,” Kwon said.

For AlphaGo’s creators, Google DeepMind, victory is worth far more than the US$1 million prize on offer in Seoul, proving that AI can go beyond superhuman number-crunching.

The most famous AI victory to date came in 1997, when the IBM-developed supercomputer Deep Blue beat Garry Kasparov, the then-reigning world chess champion, in its second attempt.

But a true mastery of Go, which has more possible move configurations than there are atoms in the universe, had long been considered the exclusive province of humans – until now.

AlphaGo’s creators had described Go as the “Mt Everest” of AI, citing the complexity of the game, which requires a degree of creativity and intuition to prevail over an opponent.

AlphaGo first came to prominence with a 5-0 drubbing of European champion Fan Hui last October, but it had been expected to struggle against 33-year-old Lee.

Creating “general”, or multi-purpose, intelligence – rather than “narrow”, task-specific intelligence – is the ultimate goal in AI: something resembling human reasoning based on a variety of inputs and, crucially, capable of self-learning.

In the case of Go, Google developers realised a more “human-like” approach would win out over brute computing power.

The 3,000-year-old Chinese board game involves two players alternately laying black and white stones on a chequerboard-like grid of 19 lines by 19 lines. The winner is the player who manages to seal off more territory.
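
As a rough illustration of that structure, the board can be modelled as a plain 19-by-19 grid of intersections, each either empty or holding a black or white stone. The sketch below is purely illustrative and uses made-up names; it is not AlphaGo’s code.

```python
# Minimal, illustrative model of a Go board: a 19x19 grid of intersections.
# The names (Board, EMPTY, BLACK, WHITE) are hypothetical, not AlphaGo's.
EMPTY, BLACK, WHITE = 0, 1, 2
SIZE = 19

class Board:
    def __init__(self):
        # 361 intersections, all initially empty
        self.grid = [[EMPTY] * SIZE for _ in range(SIZE)]

    def play(self, colour, row, col):
        """Place a stone of the given colour on an empty intersection."""
        if self.grid[row][col] != EMPTY:
            raise ValueError("intersection already occupied")
        self.grid[row][col] = colour

    def legal_moves(self):
        """All empty intersections (capture and ko rules are ignored here)."""
        return [(r, c) for r in range(SIZE) for c in range(SIZE)
                if self.grid[r][c] == EMPTY]
```

Even this crude model hints at the scale of the problem: the opening position alone offers 361 legal placements, and the number of possible continuations multiplies from there.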

AlphaGo uses two sets of “deep neural networks” that allow it to crunch data in a more human-like fashion – discarding millions of potential moves that human players would instinctively know were pointless.
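
In rough terms, that pruning step can be pictured as a learned scoring function that ranks every legal move and keeps only the most promising few for deeper search. The sketch below is a toy illustration, with policy_score() standing in for a trained network and the board object assumed from the sketch above.

```python
import random

# Toy sketch of move pruning: score every legal move with a (placeholder)
# learned policy and discard everything outside the top few candidates.
def policy_score(board, move):
    """Stand-in for a policy network's output for one candidate move."""
    return random.random()  # a real network would return a learned probability

def promising_moves(board, top_k=10):
    """Rank all legal moves by policy score and keep only the best few."""
    ranked = sorted(board.legal_moves(),
                    key=lambda m: policy_score(board, m), reverse=True)
    return ranked[:top_k]
```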

It also employs algorithms that allow it to learn and improve from matchplay experience.
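
Very loosely, that kind of learning can be pictured as nudging the scores of moves upward when they appear in won games and downward when they appear in lost ones. The toy update below is illustrative only; AlphaGo itself retrains deep networks through self-play rather than keeping a lookup table.

```python
# Toy sketch of learning from match experience: after a finished game,
# adjust the running score of every move the winner played up and every
# move the loser played down.
move_scores = {}        # (position_key, move) -> running score
LEARNING_RATE = 0.01

def update_from_game(history, winning_colour):
    """history: list of (position_key, move, colour) tuples from one game."""
    for position_key, move, colour in history:
        reward = 1.0 if colour == winning_colour else -1.0
        old = move_scores.get((position_key, move), 0.0)
        move_scores[(position_key, move)] = old + LEARNING_RATE * (reward - old)
```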

It is able to predict a winner from each move, thus reducing the search space to manageable levels – something co-creator David Silver has described as “more akin to imagination”.
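
A crude way to picture that evaluation is a function that estimates the winning chance after each candidate move, so the program can choose without playing every line out to the end. In the sketch below, value_estimate() is a placeholder for such a network and the Board class is assumed from the earlier sketch.

```python
import copy

# Sketch of choosing a move by evaluating positions rather than searching
# every continuation: estimate the winning chance one move ahead and keep
# the best candidate. value_estimate() is a stand-in for a value network.
def value_estimate(board):
    """Stand-in for a value network: estimated chance the side to move wins."""
    return 0.5  # a real network would return a learned estimate

def choose_move(board, colour, candidates):
    """Evaluate each candidate one move deep and return the most promising."""
    best_move, best_value = None, -1.0
    for row, col in candidates:
        trial = copy.deepcopy(board)
        trial.play(colour, row, col)
        value = 1.0 - value_estimate(trial)  # the opponent moves next
        if value > best_value:
            best_move, best_value = (row, col), value
    return best_move
```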
