Dueling Bandits with Weak Regret


Abstract

We consider online content recommendation with implicit feedback through pairwise comparisons, formalized as the so-called dueling bandit problem. We study the dueling bandit problem in the Condorcet winner setting, and consider two notions of regret: the better-studied strong regret, which is 0 only when both arms pulled are the Condorcet winner; and the less-studied weak regret, which is 0 if either arm pulled is the Condorcet winner. We propose a new algorithm for this problem, Winner Stays (WS), with variations for each kind of regret: WS for weak regret (WS-W) has expected cumulative weak regret that is O(N^2), and O(N log(N)) if arms have a total order; WS for strong regret (WS-S) has expected cumulative strong regret of O(N^2 + N log(T)), and O(N log(N) + N log(T)) if arms have a total order. WS-W is the first dueling bandit algorithm with weak regret that is constant in time. WS is simple to compute, even for problems with many arms, and we demonstrate through numerical experiments on simulated and real data that WS has significantly smaller regret than existing algorithms in both the weak- and strong-regret settings.
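The abstract does not reproduce the WS pseudocode, so the following is only a minimal Python sketch of a winner-stays-style dueling loop driven by win-minus-loss counters, together with the weak-regret notion the abstract describes. The function name `ws_w_sketch`, the tie-breaking rule, and the 0/1 regret accounting are illustrative assumptions, not the paper's exact WS-W algorithm; consult the paper for the precise procedure and regret definition.

```python
import numpy as np

def ws_w_sketch(P, T, rng=None):
    """Illustrative winner-stays loop for dueling bandits.

    P[i, j] is the probability that arm i beats arm j (P[i, i] = 0.5).
    At each step we duel the two arms with the largest win-minus-loss
    counters, breaking ties in favor of the previous duel's arms, and
    accumulate a 0/1 weak regret: regret is 0 whenever either dueled arm
    is the Condorcet winner. Both the tie-breaking rule and the 0/1
    regret are simplifications made for this sketch.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = P.shape[0]
    C = np.zeros(N)  # win-minus-loss counter per arm
    # The Condorcet winner beats every other arm with probability > 1/2
    # (assumed to exist, as in the paper's Condorcet winner setting).
    winner = next(i for i in range(N)
                  if all(P[i, j] > 0.5 for j in range(N) if j != i))
    prev = (-1, -1)  # arms that played the previous duel
    regret = 0.0
    for _ in range(T):
        # Pick the two arms with the largest counters, ties broken in
        # favor of last round's participants ("winner stays").
        order = sorted(range(N),
                       key=lambda a: (C[a], a in prev), reverse=True)
        i, j = order[0], order[1]
        if winner not in (i, j):  # weak regret: 0 if either arm is best
            regret += 1.0
        if rng.random() < P[i, j]:  # simulate the duel
            C[i] += 1.0; C[j] -= 1.0
        else:
            C[j] += 1.0; C[i] -= 1.0
        prev = (i, j)
    return regret

# Example: 5 arms with a total order; arm 0 is the Condorcet winner.
N = 5
P = np.fromfunction(
    lambda i, j: np.where(i < j, 0.7, np.where(i > j, 0.3, 0.5)), (N, N))
print(ws_w_sketch(P, T=10_000, rng=np.random.default_rng(0)))
```

Under a total order like the one above, the incumbent's counter grows roughly linearly once the Condorcet winner takes over, which is consistent with the abstract's claim that cumulative weak regret stays constant in time.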
