Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning

2020-03-11

Abstract

We introduce SCAL, an algorithm designed to perform efficient exploration-exploitation in any unknown weakly-communicating Markov decision process (MDP) for which an upper bound c on the span of the optimal bias function is known. For an MDP with S states, A actions and Γ ≤ S possible next states, we prove a regret bound of Õ(c√(ΓSAT)), which significantly improves over existing algorithms (e.g., UCRL and PSRL), whose regret scales linearly with the MDP diameter D. In fact, the optimal bias span is finite and often much smaller than D (e.g., D = ∞ in non-communicating MDPs). A similar result was originally derived by Bartlett and Tewari (2009) for REGAL.C, for which no tractable algorithm is available. In this paper, we relax the optimization problem at the core of REGAL.C, we carefully analyze its properties, and we provide the first computationally efficient algorithm to solve it. Finally, we report numerical simulations supporting our theoretical findings and showing how SCAL significantly outperforms UCRL in MDPs with large diameter and small span.
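To make the symbols in the bound concrete, the following is a brief recap of the two standard average-reward quantities the abstract relies on (the notation here is ours and not quoted from the paper): the span of the optimal bias function h*, which SCAL assumes is bounded by the known constant c, and the regret after T steps, which compares the reward collected by the learner with the optimal gain ρ*.

\[
\mathrm{sp}(h^{*}) \;=\; \max_{s} h^{*}(s) \;-\; \min_{s} h^{*}(s) \;\le\; c,
\qquad
\Delta(T) \;=\; T\rho^{*} \;-\; \sum_{t=1}^{T} r_{t},
\]

where r_t denotes the reward obtained at step t. The diameter D that appears in the UCRL and PSRL bounds always upper-bounds sp(h*), which is why replacing D with c can only tighten the guarantee.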

