
Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition


Abstract 

The design of a reward function often poses a major practical challenge to real-world applications of reinforcement learning. Approaches such as inverse reinforcement learning attempt to overcome this challenge, but require expert demonstrations, which can be difficult or expensive to obtain in practice. We propose variational inverse control with events (VICE), which generalizes inverse reinforcement learning methods to cases where full demonstrations are not needed, such as when only samples of desired goal states are available. Our method is grounded in an alternative perspective on control and reinforcement learning, where an agent’s goal is to maximize the probability that one or more events will happen at some point in the future, rather than maximizing cumulative rewards. We demonstrate the effectiveness of our methods on continuous control tasks, with a focus on high-dimensional observations like images where rewards are hard or even impossible to specify.
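To make the event-based objective concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of the core idea described in the abstract: a binary classifier is trained to estimate the event probability p(e = 1 | s), using only user-provided samples of desired goal states as positives and states visited by the current policy as negatives, and its log-probability log p(e = 1 | s) then serves as the reward. The network architecture and the `goal_states` / `policy_states` tensors are illustrative assumptions.

```python
# Minimal sketch of a VICE-style event classifier and reward (assumptions
# noted below; this is not the authors' implementation).
import torch
import torch.nn as nn

class EventClassifier(nn.Module):
    """Binary classifier whose output logits approximate p(e = 1 | s)."""
    def __init__(self, obs_dim, hidden=64):  # sizes are illustrative
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s):
        return self.net(s)  # event logits

def classifier_loss(model, goal_states, policy_states):
    # Positives: example states where the desired event occurred
    # (no full demonstrations required).
    # Negatives: states visited by the current policy.
    logits_pos = model(goal_states)
    logits_neg = model(policy_states)
    bce = nn.functional.binary_cross_entropy_with_logits
    return (bce(logits_pos, torch.ones_like(logits_pos)) +
            bce(logits_neg, torch.zeros_like(logits_neg)))

def event_reward(model, s):
    # Reward r(s) = log p(e = 1 | s): the policy is trained to make the
    # event likely, instead of maximizing a hand-designed cumulative reward.
    with torch.no_grad():
        return nn.functional.logsigmoid(model(s))
```

In this sketch the classifier and the policy would be updated in alternation, so the negatives track the current policy's state distribution as training progresses.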
