Autonomously Learning an Action Hierarchy Using a Learned Qualitative State Representation

2019-11-15

Abstract There has been intense interest in hierarchical reinforcement learning as a way to make Markov decision process planning more tractable, but there has been relatively little work on autonomously learning the hierarchy, especially in continuous domains. In this paper we present a method for learning a hierarchy of actions in a continuous environment. Our approach is to learn a qualitative representation of the continuous environment and then to define actions to reach qualitative states. Our method learns one or more options to perform each action. Each option is learned by first learning a dynamic Bayesian network (DBN). We approach this problem from a developmental robotics perspective. The agent receives no extrinsic reward and has no external direction for what to learn. We evaluate our work using a simulation with realistic physics that consists of a robot playing with blocks at a table.
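The core idea of the abstract — discretizing a continuous state into qualitative values and defining an option that terminates when a target qualitative state is reached — can be illustrated with a minimal sketch. This is a toy illustration under assumed details, not the paper's implementation; the landmark-based discretization, the `Option` class, and its fields are all hypothetical names introduced here.

```python
import bisect

# Hypothetical discretization: a qualitative value is the index of the
# interval that a continuous variable falls into, given sorted landmarks.
def qualitative_value(x, landmarks):
    """Map continuous x to an interval index between learned landmark values."""
    return bisect.bisect_left(sorted(landmarks), x)

class Option:
    """Toy option: apply a fixed policy until a target qualitative value is reached.

    This stands in for the paper's learned options; here the policy is given,
    not learned from a DBN.
    """
    def __init__(self, target, landmarks, policy):
        self.target = target          # desired qualitative value (interval index)
        self.landmarks = landmarks    # learned landmark values
        self.policy = policy          # maps continuous state -> action

    def terminated(self, x):
        return qualitative_value(x, self.landmarks) == self.target

    def act(self, x):
        return self.policy(x)

# Usage: drive a 1-D state past the landmark at 1.0 (qualitative value 2).
landmarks = [0.0, 1.0]
opt = Option(target=2, landmarks=landmarks, policy=lambda x: 0.5)
x, steps = 0.2, 0
while not opt.terminated(x):
    x += opt.act(x)
    steps += 1
```

The option's termination condition is defined purely in terms of the qualitative state, which is what lets a higher level plan over qualitative states while each option handles the continuous dynamics underneath.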
