
Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes


Abstract 

While designing the state space of an MDP, it is common to include states that are transient or not reachable by any policy (e.g., in mountain car, the product space of speed and position contains configurations that are not physically reachable). This results in weakly-communicating or multi-chain MDPs. In this paper, we introduce TUCRL, the first algorithm able to perform efficient exploration-exploitation in any finite Markov Decision Process (MDP) without requiring any form of prior knowledge. In particular, for any MDP with $S^{C}$ communicating states, $A$ actions and $\Gamma^{C} \leq S^{C}$ possible communicating next states, we derive a $\widetilde{O}(D^{C}\sqrt{\Gamma^{C} S^{C} A T})$ regret bound, where $D^{C}$ is the diameter (i.e., the length of the longest shortest path between any two states) of the communicating part of the MDP. This is in contrast with existing optimistic algorithms (e.g., UCRL, Optimistic PSRL) that suffer linear regret in weakly-communicating MDPs, as well as posterior sampling or regularised algorithms (e.g., REGAL), which require prior knowledge on the bias span of the optimal policy to achieve sub-linear regret. We also prove that in weakly-communicating MDPs, no algorithm can ever achieve a logarithmic growth of the regret without first suffering a linear regret for a number of steps that is exponential in the parameters of the MDP. Finally, we report numerical simulations supporting our theoretical findings and showing how TUCRL overcomes the limitations of the state-of-the-art.
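The diameter $D^{C}$ in the bound above is only described informally in the abstract; a standard formalisation from the regret-minimisation literature (assumed here as a reference point, not quoted from the paper) is

$$
D^{C} \;=\; \max_{s \neq s' \in S^{C}} \; \min_{\pi} \; \mathbb{E}\!\left[\tau_{\pi}(s \to s')\right],
$$

where $\tau_{\pi}(s \to s')$ is the (random) number of steps policy $\pi$ needs to reach $s'$ from $s$, and the maximum ranges only over the communicating states $S^{C}$. Under this reading, the stated guarantee is a regret of order $\widetilde{O}(D^{C}\sqrt{\Gamma^{C} S^{C} A T})$ after $T$ interaction steps.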
