Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling


Abstract

Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently. However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned policy. We introduce a notion of conservatively-extrapolated value functions, which provably lead to policies with self-correction. We design an algorithm, Value Iteration with Negative Sampling (VINS), that practically learns such value functions with conservative extrapolation. We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks. We also propose using VINS to initialize a reinforcement learning algorithm, which is shown to outperform prior work in sample efficiency.
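A minimal sketch of the negative-sampling idea described in the abstract, under the assumption that "conservative extrapolation" means training the value function to assign lower values to states perturbed away from the demonstrations, so that increasing the value points back toward demonstrated states. The network size, state dimension, noise scale `sigma`, and the helper `conservative_extrapolation_loss` are illustrative choices, not the paper's exact formulation.

```python
# Hypothetical sketch: learn a value function that decreases as states move
# away from the demonstration distribution, so that ascending V provides a
# self-correction signal (names and hyperparameters are assumptions).
import torch
import torch.nn as nn

state_dim = 11  # assumed; depends on the benchmark task
value_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)

def conservative_extrapolation_loss(demo_states, sigma=0.1):
    """Negative sampling: perturb demonstration states with noise and push the
    value of each perturbed state below the value of the original state by
    roughly the size of the perturbation."""
    noise = sigma * torch.randn_like(demo_states)
    negative_states = demo_states + noise
    v_demo = value_net(demo_states).squeeze(-1)
    v_neg = value_net(negative_states).squeeze(-1)
    # Target: V(s + noise) ≈ V(s) - ||noise||, i.e. values fall off away from the data.
    target = v_demo.detach() - noise.norm(dim=-1)
    return ((v_neg - target) ** 2).mean()

# One illustrative update on a batch of demonstration states.
demo_batch = torch.randn(32, state_dim)  # placeholder for real demonstration data
loss = conservative_extrapolation_loss(demo_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At execution time, one plausible use of such a value function, consistent with the self-correction claim in the abstract, is to take the behavioral-cloning action and then apply a small correction step in a direction that increases the learned value.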


