
Tracking the Best Expert in Non-stationary Stochastic Environments

2020-02-07

Abstract 

We study the dynamic regret of the multi-armed bandit and experts problem in non-stationary stochastic environments. We introduce a new parameter $\Lambda$, which measures the total statistical variance of the loss distributions over $T$ rounds of the process, and study how this amount affects the regret. We investigate the interaction between $\Lambda$ and $\Gamma$, which counts the number of times the distributions change, as well as between $\Lambda$ and $V$, which measures how far the distributions deviate over time. One striking result we find is that even when $\Gamma$, $V$, and $\Lambda$ are all restricted to constants, the regret lower bound in the bandit setting still grows with $T$. The other highlight is that in the full-information setting, a constant regret becomes achievable with constant $\Gamma$ and $\Lambda$, as it can be made independent of $T$, while with constant $V$ and $\Lambda$, the regret still has a $T^{1/3}$ dependency. We not only propose algorithms with upper bound guarantees, but prove their matching lower bounds as well.
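To make the quantities in the abstract concrete, the following is a sketch of the standard definitions for this setting, with $\mu_t$ denoting the mean loss vector at round $t$ and $\ell_t$ the realized loss vector; the exact notation is assumed to follow the paper rather than quoted from it:

```latex
% Dynamic regret against the per-round best action:
R(T) \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T} \ell_t(a_t)\right]
        \;-\; \sum_{t=1}^{T} \min_{a}\, \mu_t(a)

% Non-stationarity measures referenced in the abstract
% (forms assumed; a sketch, not verbatim definitions):
\Gamma \;=\; 1 + \sum_{t=2}^{T} \mathbb{1}\{\mu_t \neq \mu_{t-1}\}
\quad \text{(number of distribution changes)}

V \;=\; \sum_{t=2}^{T} \max_{a}\, \bigl|\mu_t(a) - \mu_{t-1}(a)\bigr|
\quad \text{(total drift of the means)}

\Lambda \;=\; \sum_{t=1}^{T} \operatorname{Var}\!\bigl(\ell_t\bigr)
\quad \text{(total statistical variance)}
```

Intuitively, $\Gamma$ and $V$ capture how the environment moves, while $\Lambda$ captures how noisy each round's feedback is; the paper's results concern how regret scales when these are held constant while $T$ grows.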

