Thompson Sampling on Symmetric α-Stable Bandits


2019-10-10
Abstract: Thompson Sampling provides an efficient technique to introduce prior knowledge in the multi-armed bandit problem, along with remarkable empirical performance. In this paper, we revisit the Thompson Sampling algorithm under rewards drawn from symmetric α-stable distributions, a class of heavy-tailed probability distributions used in finance and economics for problems such as modeling stock prices and human behavior. We present an efficient framework for posterior inference, which leads to two algorithms for Thompson Sampling in this setting. We prove finite-time regret bounds for both algorithms, and demonstrate through a series of experiments the strong performance of Thompson Sampling in this setting. With our results, we provide an exposition of symmetric α-stable distributions in sequential decision-making, and enable sequential Bayesian inference in applications from diverse fields in finance and complex systems that operate on heavy-tailed features.
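The abstract assumes familiarity with symmetric α-stable distributions, which generally lack closed-form densities but are easy to sample from. A minimal sketch of drawing such samples via the standard Chambers-Mallows-Stuck transform (the function name and interface here are illustrative, not from the paper's posterior-inference framework):

```python
import math
import random

def sample_symmetric_alpha_stable(alpha, scale=1.0, rng=random):
    """Draw one sample from a symmetric alpha-stable distribution
    (skewness beta = 0) via the Chambers-Mallows-Stuck method.

    alpha in (0, 2]: alpha = 2 recovers a Gaussian (variance 2*scale^2),
    alpha = 1 recovers a Cauchy; alpha < 2 gives heavy (power-law) tails.
    """
    # V uniform on (-pi/2, pi/2), W standard exponential.
    v = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if alpha == 1.0:
        # Cauchy special case: the general formula degenerates here.
        return scale * math.tan(v)
    x = (math.sin(alpha * v) / math.cos(v) ** (1.0 / alpha)
         * (math.cos(v - alpha * v) / w) ** ((1.0 - alpha) / alpha))
    return scale * x

# Example: rewards with alpha = 1.5 have infinite variance, which is
# exactly the regime where sub-Gaussian bandit analyses break down.
rewards = [sample_symmetric_alpha_stable(1.5) for _ in range(1000)]
```

For α < 2 the samples have no finite variance (and for α ≤ 1 no finite mean), which is why the regret analysis in this setting cannot rely on standard sub-Gaussian concentration.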

