
On Top-k Selection in Multi-Armed Bandits and Hidden Bipartite Graphs

2020-02-04

Abstract 

This paper discusses how to efficiently choose from n unknown distributions the k ones whose means are the greatest by a certain metric, up to a small relative error. We study the topic under two standard settings—multi-armed bandits and hidden bipartite graphs—which differ in the nature of the input distributions. In the former setting, each distribution can be sampled (in an i.i.d. manner) an arbitrary number of times, whereas in the latter, each distribution is defined on a population of finite size m (and hence is fully revealed after m samples). For both settings, we prove lower bounds on the total number of samples needed, and propose optimal algorithms whose sample complexities match those lower bounds.
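To make the multi-armed-bandit sampling model concrete, the sketch below shows the naive non-adaptive baseline: pull every arm a fixed Hoeffding-style number of times and keep the k arms with the largest empirical means. This is only an illustrative reference point and is not the paper's optimal (adaptive) algorithm; the function name `naive_top_k`, the assumption of rewards bounded in [0, 1], and the specific pull budget are assumptions made for this example.

```python
import math
import random
import heapq


def naive_top_k(arms, k, epsilon, delta):
    """Naive baseline (not the paper's algorithm): sample every arm the same
    number of times and return the k arms with the highest empirical means.

    `arms` is a list of zero-argument callables, each returning an i.i.d.
    sample in [0, 1] from the corresponding unknown distribution.
    The per-arm pull budget follows a standard Hoeffding-style bound of
    O(log(n/delta) / epsilon^2); an adaptive algorithm needs far fewer pulls.
    """
    n = len(arms)
    pulls = math.ceil(2.0 * math.log(2.0 * n / delta) / epsilon ** 2)
    means = []
    for i, arm in enumerate(arms):
        total = sum(arm() for _ in range(pulls))
        means.append((total / pulls, i))
    # Keep the k arms with the largest empirical means.
    return [i for _, i in heapq.nlargest(k, means)]


if __name__ == "__main__":
    # Hypothetical example: 10 Bernoulli arms with distinct means.
    true_means = [0.1 * j for j in range(1, 11)]
    arms = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in true_means]
    print(naive_top_k(arms, k=3, epsilon=0.05, delta=0.05))
```

In the hidden-bipartite-graph setting, each "arm" corresponds to a distribution over a finite population of size m, so the same callable interface applies, except that sampling more than m times per distribution is never useful.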


Popular resources

  • Learning to Predi...

    Much of model-based reinforcement learning invo...

  • Stratified Strate...

    In this paper we introduce Stratified Strategy ...

  • The Variational S...

    Unlike traditional images which do not offer in...

  • A Mathematical Mo...

    Direct democracy, where each voter casts one vo...

  • Rating-Boosted La...

    The performance of a recommendation system reli...