
On the Correctness and Sample Complexity of Inverse Reinforcement Learning

2020-02-26

Abstract

Inverse reinforcement learning (IRL) is the problem of finding a reward function that generates a given optimal policy for a given Markov Decision Process. This paper presents an algorithm-independent geometric analysis of the IRL problem with finite state and action spaces. Motivated by this analysis, an L1-regularized Support Vector Machine formulation of the IRL problem is proposed, with the basic objective of inverse reinforcement learning in mind: to find a reward function that generates a specified optimal policy. The paper further analyzes the proposed formulation for an MDP with n states and k actions, and establishes a sample complexity bound, for transition probability matrices with at most d nonzeros per row, for recovering a reward function that generates a policy satisfying Bellman's optimality condition with respect to the true transition probabilities.
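To make the kind of formulation described above concrete, the following is a minimal sketch of an L1-minimizing, max-margin IRL linear program for a finite MDP. It is not the paper's exact SVM formulation: it assumes state-only rewards, a known optimal policy pi, and a unit margin on the Bellman-optimality gap, and the function name irl_l1_margin and the toy chain MDP are hypothetical illustrations.

```python
# Minimal sketch (assumptions: state-only rewards, known optimal policy, unit margin).
import numpy as np
from scipy.optimize import linprog

def irl_l1_margin(P, pi, gamma=0.9):
    """P: (k, n, n) transition tensor; pi: length-n array of optimal actions.
    Returns a reward vector of minimal L1 norm under which pi beats every other
    action's Bellman backup by a margin of 1 in each state, or None if infeasible."""
    k, n, _ = P.shape
    P_pi = P[pi, np.arange(n), :]                 # transition matrix of the policy pi
    M = np.linalg.inv(np.eye(n) - gamma * P_pi)   # maps rewards R to values V = M @ R

    rows = []
    for s in range(n):
        for a in range(k):
            if a == pi[s] or np.allclose(P[a, s], P_pi[s]):
                continue                          # identical transitions: no margin possible
            rows.append((P_pi[s] - P[a, s]) @ M)  # Bellman gap of pi(s) over a in state s
    A = np.array(rows)

    # Split R = u - v with u, v >= 0 so that ||R||_1 = sum(u + v) is linear.
    # The margin constraints A @ R >= 1 become -A @ u + A @ v <= -1.
    c = np.ones(2 * n)
    A_ub = np.hstack([-A, A])
    b_ub = -np.ones(A.shape[0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:] if res.success else None

# Toy 3-state chain: action 0 moves right (state 2 absorbing), action 1 stays put.
n, k = 3, 2
P = np.zeros((k, n, n))
for s in range(n):
    P[0, s, min(s + 1, n - 1)] = 1.0
    P[1, s, s] = 1.0
R = irl_l1_margin(P, pi=np.zeros(n, dtype=int))
print(R)  # a sparse reward, concentrated on the rightmost state
```

In this sketch, minimizing the L1 norm subject to the margin constraints plays a role analogous to the regularizer in the paper's formulation: it encourages sparse rewards while ensuring the given policy strictly satisfies Bellman optimality under the recovered reward.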
