Online Stochastic Linear Optimization under One-bit Feedback

2020-03-06

Abstract

In this paper, we study a special bandit setting of online stochastic linear optimization, where only one bit of information is revealed to the learner at each round. This problem has many applications, including online advertisement and online recommendation. We assume the binary feedback is a random variable generated from the logit model, and aim to minimize the regret defined by the unknown linear function. Although the existing method for generalized linear bandits can be applied to our problem, its high computational cost makes it impractical for real-world applications. To address this challenge, we develop an efficient online learning algorithm by exploiting particular structures of the observation model. Specifically, we adopt the online Newton step to estimate the unknown parameter and derive a tight confidence region based on the exponential concavity of the logistic loss. Our analysis shows that the proposed algorithm achieves a regret bound of Õ(d√T), which matches the optimal result for stochastic linear bandits.
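The parameter-estimation step described in the abstract — an online Newton step (ONS) on the logistic loss — can be sketched as below. This is a generic ONS update, not the paper's exact algorithm: the hyperparameters `gamma` and `eps` are illustrative choices, and the generalized (A-weighted) projection of full ONS is omitted for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineNewtonStep:
    """Minimal online Newton step for the logistic loss.

    gamma (step scaling) and eps (initial regularizer) are
    illustrative defaults, not values from the paper.
    """

    def __init__(self, dim, gamma=1.0, eps=1.0):
        self.theta = np.zeros(dim)        # current parameter estimate
        self.A = eps * np.eye(dim)        # accumulated curvature surrogate
        self.gamma = gamma

    def predict(self, x):
        # probability of observing y = 1 under the logit model
        return sigmoid(self.theta @ x)

    def update(self, x, y):
        # gradient of the logistic loss at (x, y), with y in {0, 1}
        g = (self.predict(x) - y) * x
        # ONS accumulates gradient outer products (exp-concavity
        # of the logistic loss makes this a valid curvature proxy)
        self.A += np.outer(g, g)
        # Newton-style step; full ONS would also project theta back
        # onto the feasible set in the A-weighted norm
        self.theta -= (1.0 / self.gamma) * np.linalg.solve(self.A, g)
```

Because `A` grows with the observed gradients, the effective step size shrinks over time, which is what yields the logarithmic-regret behavior of ONS on exp-concave losses that the confidence-region analysis builds on.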
