GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms


Abstract

In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focused on exploration, such as Novelty Search, Quality-Diversity, or Goal Exploration Processes, explore more robustly but are less efficient at fine-tuning policies using gradient descent. In this paper, we present the GEP-PG approach, which takes the best of both worlds by sequentially combining a Goal Exploration Process and two variants of DDPG. We study the learning performance of these components and their combination on a low-dimensional deceptive reward problem and on the larger HalfCheetah benchmark. We show that DDPG fails on the former and that GEP-PG improves over the best DDPG variant in both environments.
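The abstract describes GEP-PG as a sequential combination: an exploratory Goal Exploration Process phase followed by a DDPG exploitation phase. The sketch below illustrates one plausible way that sequencing could be wired up, assuming the GEP phase fills DDPG's replay buffer with the transitions it collects; all helper names (sample_goal, nearest_policy, perturb, rollout) and the ddpg_agent interface are hypothetical placeholders, not the authors' implementation.

```python
from collections import deque

def train_gep_pg(env, ddpg_agent, gep_episodes=500, ddpg_steps=1_000_000):
    """Two-phase training loop sketched from the abstract.

    `env` is any Gym-style environment; `ddpg_agent` is assumed to expose a
    `replay_buffer` and a `train` method (hypothetical interface).
    """
    buffer = deque(maxlen=1_000_000)   # transitions gathered during exploration
    archive = []                       # (policy_params, outcome) pairs found by GEP

    # Phase 1: Goal Exploration Process -- sample a goal in outcome space,
    # reuse and perturb the archived policy whose outcome is closest to it,
    # and keep every transition for later reuse by DDPG.
    for _ in range(gep_episodes):
        goal = sample_goal()                    # hypothetical helper
        params = nearest_policy(archive, goal)  # hypothetical helper
        params = perturb(params)                # random parameter perturbation
        transitions, outcome = rollout(env, params)
        archive.append((params, outcome))
        buffer.extend(transitions)

    # Phase 2: DDPG exploitation, with its replay buffer pre-filled by
    # the diverse transitions collected in Phase 1.
    ddpg_agent.replay_buffer.extend(buffer)
    ddpg_agent.train(env, total_steps=ddpg_steps)
    return ddpg_agent
```

The key design choice conveyed by the abstract is the hand-off itself: exploration and exploitation are decoupled in time, so the gradient-based learner starts from data gathered by a method that explores more robustly than DDPG's own action noise.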

