
Abstraction Selection in Model-Based Reinforcement Learning


Abstract

State abstractions are often used to reduce the complexity of model-based reinforcement learning when only limited quantities of data are available. However, choosing the appropriate level of abstraction is an important problem in practice. Existing approaches have theoretical guarantees only under strong assumptions on the domain or asymptotically large amounts of data, but in this paper we propose a simple algorithm based on statistical hypothesis testing that comes with a finite-sample guarantee under assumptions on candidate abstractions. Our algorithm trades off the low approximation error of finer abstractions against the low estimation error of coarser abstractions, resulting in a loss bound that depends only on the quality of the best available abstraction and is polynomial in the planning horizon.
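To make the approximation/estimation trade-off concrete, here is a minimal Python sketch under simplifying assumptions: tabular candidate abstractions ordered coarse to fine, models fit by empirical counts, and a visit-count confidence width standing in for the paper's actual hypothesis-test statistic. The function names (select_abstraction, build_abstract_model) and the tolerance constant c are illustrative choices, not the authors' implementation.

```python
import numpy as np

def build_abstract_model(dataset, phi, n_x, n_a):
    """Aggregate ground transitions (s, a, r, s2) into an abstract MDP
    with n_x abstract states and n_a actions, using the mapping phi."""
    counts = np.zeros((n_x, n_a, n_x))
    rew_sum = np.zeros((n_x, n_a))
    visits = np.zeros((n_x, n_a))
    for s, a, r, s2 in dataset:
        x, x2 = phi(s), phi(s2)
        counts[x, a, x2] += 1.0
        rew_sum[x, a] += r
        visits[x, a] += 1.0
    safe = np.maximum(visits, 1.0)      # avoid division by zero
    return counts / safe[:, :, None], rew_sum / safe, visits

def value_iteration(P, R, gamma=0.95, n_iter=300):
    """Optimal abstract-state values by standard value iteration."""
    V = np.zeros(P.shape[0])
    for _ in range(n_iter):
        V = np.max(R + gamma * (P @ V), axis=1)
    return V

def select_abstraction(dataset, candidates, sizes, n_a, gamma=0.95, c=1.0):
    """Return the coarsest candidate whose value estimates agree with the
    finest candidate's up to a visit-count confidence width. `candidates`
    is ordered coarse -> fine; the width c / sqrt(visits) is a simple
    stand-in for the paper's test statistic."""
    states = sorted({s for s, _, _, _ in dataset})
    phi_fine, n_fine = candidates[-1], sizes[-1]
    Pf, Rf, _ = build_abstract_model(dataset, phi_fine, n_fine, n_a)
    V_fine = value_iteration(Pf, Rf, gamma)
    for phi, n_x in zip(candidates, sizes):
        P, R, visits = build_abstract_model(dataset, phi, n_x, n_a)
        V = value_iteration(P, R, gamma)
        # Accept phi only if its lifted values match the fine reference
        # within a tolerance that shrinks as data accumulates.
        if all(abs(V[phi(s)] - V_fine[phi_fine(s)])
               <= c / np.sqrt(max(visits[phi(s)].sum(), 1.0))
               for s in states):
            return phi                  # coarsest candidate that passes
    return phi_fine                     # fall back to the finest
```

In this sketch, a coarse abstraction is rejected only when its value estimates differ from the finest candidate's by more than the confidence width, so with little data the coarser (lower estimation error) candidate tends to win, and with more data a genuinely finer distinction (lower approximation error) is retained, mirroring the trade-off described in the abstract.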

