STATE ALIGNMENT-BASED IMITATION LEARNING

2020-01-02

Abstract

Consider an imitation learning problem in which the imitator and the expert have different dynamics models. Most current imitation learning methods fail in this setting because they focus on imitating actions. We propose a novel state-alignment-based imitation learning method that trains the imitator to follow the state sequences in expert demonstrations as closely as possible. The state alignment comes from both local and global perspectives, and we combine the two in a reinforcement learning framework through a regularized policy-update objective. We show the superiority of our method both in standard imitation learning settings and in settings where the expert and the imitator have different dynamics models.
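To make the idea concrete, below is a minimal sketch of how a state-alignment reward and a regularized policy update could be wired together. This is an illustration under assumptions, not the paper's actual algorithm: the squared-distance alignment reward, the entropy term standing in for the paper's policy-update regularization, and all names, network sizes, and coefficients (`GaussianPolicy`, `local_alignment_reward`, `reg_coef`) are hypothetical.

```python
# Hedged sketch: state-alignment-style imitation. Assumes a continuous-control
# setting with paired expert next-states; every design choice here is
# illustrative, not taken from the paper.

import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Diagonal-Gaussian policy over continuous actions (illustrative)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def dist(self, states):
        mean = self.net(states)
        return torch.distributions.Normal(mean, self.log_std.exp())

def local_alignment_reward(next_states, expert_next_states):
    # Local view: reward the imitator for reaching states close to the
    # expert's next states (negative squared distance; a hypothetical choice).
    return -((next_states - expert_next_states) ** 2).sum(dim=-1)

def regularized_policy_loss(policy, states, actions, advantages, reg_coef=0.1):
    # Policy-gradient term weighted by advantages derived from the
    # alignment-based reward, plus a regularizer; a simple entropy bonus
    # stands in here for the paper's (unspecified in the abstract)
    # policy-update regularization.
    d = policy.dist(states)
    log_prob = d.log_prob(actions).sum(dim=-1)
    pg_loss = -(log_prob * advantages).mean()
    entropy_reg = -d.entropy().sum(dim=-1).mean()
    return pg_loss + reg_coef * entropy_reg

# Toy usage on random data.
torch.manual_seed(0)
policy = GaussianPolicy(state_dim=4, action_dim=2)
states = torch.randn(8, 4)
actions = policy.dist(states).sample()
next_states = torch.randn(8, 4)     # states the imitator actually reached
expert_next = torch.randn(8, 4)     # states the expert demonstration reached
rewards = local_alignment_reward(next_states, expert_next)
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
loss = regularized_policy_loss(policy, states, actions, advantages)
loss.backward()
print(float(loss))
```

The design point the abstract emphasizes is that the learning signal comes from state trajectories rather than expert actions; accordingly, the reward above never references the expert's actions, which is what allows the imitator and the expert to have different dynamics models.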
