The significantly expanded and updated new edition of a widely used text on reinforcement learning, one of the most active research areas in artificial intelligence.
Reinforcement learning is a computational approach to learning in which an agent tries to maximize the total reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and refreshing coverage of others.
Like the first edition, this second edition focuses on core, online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new for the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
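To give a flavor of the tabular algorithms named above, the following is a minimal, illustrative sketch of UCB (upper confidence bound) action selection on a multi-armed bandit. It is not code from the book; the Bernoulli arm probabilities, step count, and exploration constant `c` are hypothetical choices made for this example.

```python
import math
import random

def ucb_bandit(true_means, steps=1000, c=2.0, seed=0):
    """Run a UCB agent on a Bernoulli bandit; return value estimates and pull counts.

    Illustrative sketch only: arm probabilities and parameters are hypothetical.
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k      # N(a): number of times each arm has been pulled
    values = [0.0] * k    # Q(a): incremental estimate of each arm's mean reward
    for t in range(1, steps + 1):
        untried = [a for a in range(k) if counts[a] == 0]
        if untried:
            # Pull every arm once before applying the UCB formula.
            a = untried[0]
        else:
            # UCB rule: A_t = argmax_a [ Q(a) + c * sqrt(ln t / N(a)) ]
            a = max(range(k),
                    key=lambda a: values[a] + c * math.sqrt(math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < true_means[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental sample mean
    return values, counts

values, counts = ucb_bandit([0.2, 0.5, 0.8])
```

Over enough steps, the confidence bonus shrinks for well-sampled arms, so the agent concentrates its pulls on the arm with the highest estimated value while still occasionally revisiting the others.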
“This book is the bible of reinforcement learning, and the new edition is particularly timely given the burgeoning activity in the field. No one with an interest in the problem of learning to act - student, researcher, practitioner, or curious nonspecialist - should be without it.”
—Professor of Computer Science, University of Washington, and author of The Master Algorithm

The goal of building systems that can adapt to their environments and learn from their experience has attracted researchers from many fields, including computer science, engineering, mathematics, physics, neuroscience, and cognitive science. Out of this research has come a wide variety of learning techniques, including methods for learning decision trees, decision rules, neural networks, statistical classifiers, and probabilistic graphical models. Researchers in these areas have also produced several different theoretical frameworks for understanding these methods, such as computational learning theory, Bayesian learning theory, classical statistical theory, minimum description length theory, and statistical-mechanics approaches. These theories provide insight into experimental results and help guide the development of improved learning algorithms. A goal of the series is to promote the unification of the many diverse strands of machine learning research and to foster high-quality research and innovative applications. The series publishes works of the highest quality that advance the understanding and practical application of machine learning and adaptive computation. Research monographs, introductory and advanced-level textbooks, and how-to books for practitioners will all be considered. For information on the submission of proposals and manuscripts, please contact the series editor or the publisher, Marie Lee (marielee@mit.edu). The series editor is Francis Bach.