资源论文Combinatorial Bandits with Relative Feedback

Combinatorial Bandits with Relative Feedback

2020-02-23 | |  56 |   52 |   0

Abstract

We consider combinatorial online learning with subset choices when only relative feedback information from subsets is available, instead of bandit or semi-bandit feedback which is absolute. Specifically, we study two regret minimisation problems over subsets of a finite ground set [n], with subset-wise relative preference information feedback according to the Multinomial logit choice model. In the first setting, the learner can play subsets of size bounded by a maximum size and receives top-m rank-ordered feedback, while in the second setting the learner can play subsets of a fixed size k with a full subset ranking observed as feedback. For both settings, we devise instance-dependent and order-optimal regret algorithms n with regret 图片.png and 图片.png respectively. We derive fundamental limits on the regret performance of online learning with subset-wise preferences, proving the tightness of our regret guarantees. Our results also show the value of eliciting more general top-m rank-ordered feedback over single winner feedback (m = 1). Our theoretical results are corroborated with empirical evaluations.

上一篇:Efficient Approximation of Deep ReLU Networks for Functions on Low Dimensional Manifolds

下一篇:Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness

用户评价
全部评价

热门资源

  • Learning to learn...

    The move from hand-designed features to learned...

  • A Mathematical Mo...

    Direct democracy, where each voter casts one vo...

  • Stratified Strate...

    In this paper we introduce Stratified Strategy ...

  • Rating-Boosted La...

    The performance of a recommendation system reli...

  • Hierarchical Task...

    We extend hierarchical task network planning wi...