EXPLORATION IN REINFORCEMENT LEARNING WITH DEEP COVERING OPTIONS

2020-01-02

Abstract

While many option discovery methods have been proposed to accelerate exploration in reinforcement learning, they are often heuristic. Recently, the covering options method was proposed to discover a set of options that provably reduce the upper bound of the environment's cover time, a measure of the difficulty of exploration. However, it is constrained to tabular tasks and is not applicable to tasks with large or continuous state spaces. We introduce deep covering options, an online method that extends covering options to large state spaces, automatically discovering task-agnostic options that encourage exploration. We evaluate our method in several challenging sparse-reward domains, and we show that our approach identifies less-explored regions of the state space and successfully generates options to visit these regions, substantially improving both exploration and the total accumulated reward.
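The abstract only sketches the underlying mechanism, so below is a minimal, hypothetical illustration of the tabular covering-options primitive that deep covering options extends: compute the Fiedler vector (the eigenvector of the state-transition graph Laplacian's second-smallest eigenvalue) and create an option that moves the agent between the states where that vector is most extreme. The toy 4-state chain graph and all variable names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Adjacency matrix of an undirected state-transition graph
# (a toy 4-state chain: 0 - 1 - 2 - 3; an assumed stand-in for a real MDP).
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

degrees = A.sum(axis=1)
L = np.diag(degrees) - A  # unnormalized graph Laplacian

# np.linalg.eigh returns eigenvalues in ascending order for symmetric
# matrices, so column 1 holds the Fiedler vector (the eigenvector of the
# second-smallest eigenvalue).
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# A covering option connects the two states where the Fiedler vector is most
# extreme; adding this shortcut edge is what tightens the cover-time bound.
init_state = int(np.argmax(fiedler))
term_state = int(np.argmin(fiedler))
print(f"covering option: initiate at state {init_state}, "
      f"terminate at state {term_state}")
```

For the chain graph above, the extremes of the Fiedler vector fall on the two endpoints, so the discovered option shortcuts the longest path in the graph. The deep variant the abstract describes targets state spaces too large for this exact eigendecomposition, replacing it with a learned approximation.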


