
DeltaDou: Expert-level Doudizhu AI through Self-play

2019-09-29
Abstract: Artificial intelligence has seen several breakthroughs in two-player perfect-information games. Nevertheless, Doudizhu, a three-player imperfect-information game, remains quite challenging. In this paper, we present a Doudizhu AI built by applying deep reinforcement learning to games of self-play. The algorithm combines an asymmetric MCTS over nodes representing each player's information set, a policy-value network that approximates the policy and value at each decision node, and inference of the other players' unobserved hands under a given policy. Our results show that self-play can significantly improve the performance of our agent in this multi-agent imperfect-information game. Even starting from a weak AI, our agent reaches human expert level after days of self-play and training.
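To make the interplay of the three components in the abstract concrete, the following is a minimal Python sketch (not the authors' code) of such a decision loop: hidden hands are sampled and weighted by an inference step, a small MCTS-style search guided by a policy-value function is run on each sampled world, and the results are averaged. The toy action set, the uniform/random placeholders inside `policy_value` and `infer_hidden_hands`, and all function names are hypothetical and stand in for the learned network and the paper's actual inference and search procedures.

```python
# Hypothetical sketch of the "infer hands -> search with policy-value guidance" loop.
# Placeholders (random priors/weights, toy actions) stand in for the trained network.
import random
from collections import defaultdict

ACTIONS = ["pass", "play_low", "play_high"]            # toy action set

def policy_value(info_set, hidden_hands):
    """Stand-in for the policy-value network: returns (prior over actions, value)."""
    priors = {a: 1.0 / len(ACTIONS) for a in ACTIONS}  # uniform prior as a placeholder
    value = random.uniform(-1, 1)                      # placeholder value estimate
    return priors, value

def infer_hidden_hands(info_set, n_samples=8):
    """Sample opponents' unobserved hands and weight each candidate by how likely
    the given policy would have produced the observed moves (placeholder weights)."""
    candidates = [tuple(random.sample(range(20), 5)) for _ in range(n_samples)]
    weights = [random.random() for _ in candidates]    # placeholder likelihoods
    total = sum(weights)
    return [(c, w / total) for c, w in zip(candidates, weights)]

def mcts_visit_counts(info_set, hidden_hands, n_sims=50):
    """Very small stand-in for asymmetric MCTS on the acting player's information
    set: accumulate visit counts guided by the policy-value priors."""
    counts = defaultdict(int)
    for _ in range(n_sims):
        priors, _ = policy_value(info_set, hidden_hands)
        action = max(ACTIONS, key=lambda a: priors[a] + random.gauss(0, 0.1))
        counts[action] += 1
    return counts

def choose_action(info_set):
    """Average search results over the sampled hidden hands, then pick the
    action with the highest weighted visit count."""
    totals = defaultdict(float)
    for hands, weight in infer_hidden_hands(info_set):
        for action, count in mcts_visit_counts(info_set, hands).items():
            totals[action] += weight * count
    return max(totals, key=totals.get)

if __name__ == "__main__":
    print(choose_action(info_set={"my_hand": [1, 3, 5], "history": []}))
```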

