
Agnostic System Identification for Model-Based Reinforcement Learning

2020-03-02

Abstract

A fundamental problem in control is to learn a model of a system from observations that is useful for controller synthesis. To provide good performance guarantees, existing methods must assume that the real system is in the class of models considered during learning. We present an iterative method with strong guarantees even in the agnostic case where the system is not in the class. In particular, we show that any no-regret online learning algorithm can be used to obtain a near-optimal policy, provided some model achieves low training error and access to a good exploration distribution. Our approach applies to both discrete and continuous domains. We demonstrate its efficacy and scalability on a challenging helicopter domain from the literature.
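The abstract describes an iterative loop: collect data from a mixture of an exploration distribution and the current policy's state distribution, fit the model with a no-regret online learner on all data aggregated so far, and synthesize a controller inside the learned model. Below is a minimal sketch of that loop on a toy discrete MDP. The random MDP, the count-based model class, the 50/50 mixture, and helper names such as `value_iteration` and `sample_policy_state` are illustrative assumptions for this example, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 10, 3, 0.95

# Unknown "real" system: P_true[s, a] is a distribution over next states.
P_true = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.uniform(0.0, 1.0, size=(S, A))        # reward assumed known for this sketch
nu0 = np.ones(S) / S                          # initial-state distribution

def value_iteration(P, R, gamma, iters=200):
    """Controller synthesis inside a (learned) model: greedy policy from value iteration."""
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ V                 # Q[s, a] under model P
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def sample_policy_state(policy):
    """Sample a state from the discounted state distribution of `policy` in the real system."""
    s = rng.choice(S, p=nu0)
    while rng.random() < gamma:
        s = rng.choice(S, p=P_true[s, policy[s]])
    return s

counts = np.full((S, A, S), 1e-3)             # aggregated transition data (smoothed counts)
policy = rng.integers(A, size=S)              # arbitrary initial policy

for iteration in range(50):
    # Collect a batch from the mixture of the exploration distribution
    # and the current policy's state distribution.
    for _ in range(200):
        if rng.random() < 0.5:
            s, a = rng.integers(S), rng.integers(A)   # assumed uniform exploration distribution
        else:
            s = sample_policy_state(policy)
            a = policy[s]
        s_next = rng.choice(S, p=P_true[s, a])        # one query to the real system
        counts[s, a, s_next] += 1

    # No-regret model update: follow-the-leader, i.e. refit on all data aggregated so far.
    P_hat = counts / counts.sum(axis=2, keepdims=True)

    # Plan in the learned model to obtain the next policy.
    policy = value_iteration(P_hat, R, gamma)

print("policy after", iteration + 1, "iterations:", policy)
```

Refitting empirical transition frequencies on the aggregated dataset plays the role of the no-regret online learner in this sketch; any other no-regret procedure over the chosen model class could be substituted.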

Previous: Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting

Next: Monte Carlo Bayesian Reinforcement Learning
