Mastering the game of Go without human knowledge

Oct 19, 2017
6 pages
Published in:
  • Nature 550 (2017) 7676, 354-359
  • Published: Oct 19, 2017

Abstract: (Springer)
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo. Starting from zero knowledge and without human data, AlphaGo Zero was able to teach itself to play Go and to develop novel strategies that provide new insights into the oldest of games.

To beat world champions at the game of Go, the computer program AlphaGo has relied largely on supervised learning from millions of human expert moves. David Silver and colleagues have now produced a system called AlphaGo Zero, which is based purely on reinforcement learning and learns solely from self-play. Starting from random moves, it can reach superhuman level in just a couple of days of training and five million games of self-play, and can now beat all previous versions of AlphaGo. Because the machine independently discovers the same fundamental principles of the game that took humans millennia to conceptualize, the work suggests that such principles have some universal character, beyond human bias.
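The abstract describes a closed training loop: Monte-Carlo tree search, guided by a single policy-and-value estimator, generates self-play games, and the estimator is then trained toward the search's move probabilities and the games' outcomes, which in turn strengthens the next round of search. The sketch below is a deliberately tiny illustration of that loop, not the paper's system: a one-pile Nim variant stands in for Go, lookup tables stand in for the deep network, and every name and constant (PILE, MOVES, SIMS, C_PUCT, LR) is an assumption made for the example.

```python
import math
import random
from collections import defaultdict

# Toy stand-ins (illustrative only): a one-pile Nim game where players remove
# 1 or 2 stones and taking the last stone wins; tables replace the network.
PILE = 7          # starting pile size
MOVES = (1, 2)    # stones a player may remove
SIMS = 50         # MCTS simulations per move
C_PUCT = 1.5      # exploration constant in the PUCT-style selection rule
LR = 0.3          # step size for the table "training" update

priors = defaultdict(lambda: {m: 1.0 / len(MOVES) for m in MOVES})  # policy table
values = defaultdict(float)                                          # value table


def legal(pile):
    return [m for m in MOVES if m <= pile]


def search(pile):
    """Run SIMS simulations from `pile` and return normalised visit counts."""
    N = defaultdict(int)    # visit count per (state, move)
    W = defaultdict(float)  # total backed-up value per (state, move)

    def simulate(p):
        # Value of state p for the player to move (+1 win, -1 loss).
        if p == 0:
            return -1.0  # no move left: the previous player took the last stone
        moves = legal(p)
        total = sum(N[(p, m)] for m in moves)

        def score(m):
            q = W[(p, m)] / N[(p, m)] if N[(p, m)] else 0.0
            u = C_PUCT * priors[p][m] * math.sqrt(total + 1) / (1 + N[(p, m)])
            return q + u  # exploit mean value, explore where priors are high

        m = max(moves, key=score)
        child = p - m
        if N[(p, m)] == 0 and child > 0:
            v = -values[child]    # unvisited leaf: back up the value estimate
        else:
            v = -simulate(child)  # otherwise descend (terminals score exactly)
        N[(p, m)] += 1
        W[(p, m)] += v
        return v

    for _ in range(SIMS):
        simulate(pile)
    visits = sum(N[(pile, m)] for m in legal(pile))
    return {m: N[(pile, m)] / visits for m in legal(pile)}


def self_play():
    """Play one game; yield (state, search probabilities, outcome for mover)."""
    history, pile = [], PILE
    while pile > 0:
        pi = search(pile)
        history.append((pile, pi))
        pile -= random.choices(list(pi), weights=list(pi.values()))[0]
    z = 1.0  # the player who just moved took the last stone and won
    for state, pi in reversed(history):
        yield state, pi, z
        z = -z


for iteration in range(30):
    for state, pi, z in self_play():
        # "Training": nudge the tables toward the search targets, the role the
        # policy/value training step plays for the network in the paper.
        for m, p in pi.items():
            priors[state][m] += LR * (p - priors[state][m])
        values[state] += LR * (z - values[state])

print({p: dict(priors[p]) for p in range(1, PILE + 1)})
```

The update step here is a plain interpolation toward the search targets; the paper instead trains a deep residual network by gradient descent on a combined loss of squared value error, policy cross-entropy against the search probabilities, and L2 regularisation.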
  • Computational science
  • Computer science
  • Reward