Thinking Fast and Slow with Deep Learning and Tree Search

2020-02-10

Abstract 

Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans. In this paper, we present Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalisation tasks. Planning new policies is performed by tree search, while a deep neural network generalises those plans. Subsequently, tree search is improved by using the neural network policy to guide search, increasing the strength of new plans. In contrast, standard deep Reinforcement Learning algorithms rely on a neural network not only to generalise plans, but to discover them too. We show that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex 1.0, the most recent Olympiad Champion player to be publicly released.
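The abstract's expert/apprentice decomposition can be sketched as a loop: a slow expert (tree search, biased by the current apprentice) plans moves, and a fast apprentice (the generalising network) is trained to imitate those plans. The sketch below is a minimal illustration of that loop under toy assumptions; the environment, the rollout-based "expert", and the perceptron-style "apprentice" are all stand-ins for the paper's Hex MCTS and deep network, not the actual implementation.

```python
import random

ACTIONS = [0, 1]

def apprentice_policy(weights, state):
    """Fast apprentice: a toy linear scorer standing in for the neural network."""
    return max(ACTIONS, key=lambda a: weights[a] * state)

def expert_search(weights, state, simulations=16):
    """Slow expert: noisy rollouts per action, biased toward the apprentice's
    choice. Stands in for the network-guided tree search in the paper."""
    rewards = {a: 0.0 for a in ACTIONS}
    for a in ACTIONS:
        for _ in range(simulations):
            # Toy reward: action 1 is correct for positive states, 0 otherwise.
            rewards[a] += (1.0 if (a == 1) == (state > 0) else 0.0)
            rewards[a] += random.random() * 0.1  # rollout noise
        if apprentice_policy(weights, state) == a:
            rewards[a] += 0.5  # apprentice guides (biases) the search
    return max(ACTIONS, key=lambda a: rewards[a])

def expert_iteration(rounds=5, states_per_round=32, lr=0.1, seed=0):
    """Alternate planning (expert) and imitation (apprentice) phases."""
    random.seed(seed)
    weights = {0: 0.0, 1: 0.0}
    for _ in range(rounds):
        # 1. Planning: the expert labels sampled states with strong moves.
        dataset = [(s, expert_search(weights, s))
                   for s in (random.uniform(-1, 1)
                             for _ in range(states_per_round))]
        # 2. Generalisation: the apprentice imitates the expert's plans
        #    (perceptron-style update where it currently disagrees).
        for s, a in dataset:
            if apprentice_policy(weights, s) != a:
                weights[a] += lr * s
                weights[1 - a] -= lr * s
    return weights
```

Each iteration strengthens both sides: the apprentice distils the expert's plans, and the stronger apprentice in turn biases the next round of search, mirroring the improvement loop the abstract describes.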


