Prior-free and prior-dependent regret bounds for Thompson Sampling


2020-01-16

Abstract

We consider the stochastic multi-armed bandit problem with a prior distribution on the reward distributions. We are interested in studying prior-free and prior-dependent regret bounds, very much in the same spirit as the usual distribution-free and distribution-dependent bounds for the non-Bayesian stochastic bandit. We first show that Thompson Sampling attains an optimal prior-free bound in the sense that for any prior distribution its Bayesian regret is bounded from above by $14\sqrt{nK}$. This result is unimprovable in the sense that there exists a prior distribution such that any algorithm has a Bayesian regret bounded from below by $\frac{1}{20}\sqrt{nK}$. We also study the case of priors for the setting of Bubeck et al. [2013] (where the optimal mean is known as well as a lower bound on the smallest gap) and we show that in this case the regret of Thompson Sampling is in fact uniformly bounded over time, thus showing that Thompson Sampling can greatly take advantage of the nice properties of these priors.
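To make the setting concrete, below is a minimal sketch of Thompson Sampling on a Bernoulli bandit with independent Beta(1, 1) priors on the arm means. This illustration is not taken from the paper; the arm means, horizon, and helper function name are assumptions chosen for the example, and the quantity returned is the (pseudo-)regret against the best arm.

```python
import random

def thompson_sampling(means, n_rounds, seed=0):
    """Thompson Sampling on a Bernoulli bandit with Beta(1, 1) priors.

    `means` holds the arm success probabilities (unknown to the algorithm).
    Returns the cumulative pseudo-regret against the best arm.
    """
    rng = random.Random(seed)
    k = len(means)
    alpha = [1] * k  # posterior Beta parameters: successes + 1
    beta = [1] * k   # posterior Beta parameters: failures + 1
    best = max(means)
    regret = 0.0
    for _ in range(n_rounds):
        # Draw one sample per arm from its posterior and play the argmax.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < means[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        regret += best - means[arm]
    return regret
```

In line with the prior-free bound above, the cumulative regret of such a run grows on the order of $\sqrt{nK}$ in the horizon $n$, i.e. sublinearly, since the posterior concentrates on the best arm.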

