Model selection for contextual bandits

2020-02-20

Abstract

We introduce the problem of model selection for contextual bandits, where a learner must adapt to the complexity of the optimal policy while balancing exploration and exploitation. Our main result is a new model selection guarantee for linear contextual bandits. We work in the stochastic realizable setting with a sequence of nested linear policy classes of dimension d_1 < d_2 < ..., where the m*-th class contains the optimal policy, and we design an algorithm that achieves Õ(T^{2/3} d_{m*}^{1/3}) regret with no prior knowledge of the optimal dimension d_{m*}. The algorithm also achieves regret Õ(T^{3/4} + √(T d_{m*})), which is optimal for d_{m*} ≥ √T. This is the first model selection result for contextual bandits with non-vacuous regret for all values of d_{m*}, and to the best of our knowledge is the first positive result of this type for any online learning setting with partial information. The core of the algorithm is a new estimator for the gap in the best loss achievable by two linear policy classes, which we show admits a convergence rate faster than the rate required to learn the parameters for either class.
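The idea of comparing the best loss achievable by two nested linear classes can be illustrated with a toy sketch. This is not the paper's gap estimator (which achieves faster-than-parameter-estimation rates via a dedicated construction); it is a naive plug-in baseline, assuming full-information squared-loss feedback and synthetic data, that simply fits least squares in each class and compares held-out losses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumption, not from the paper): the true model lives
# in the smaller of two nested linear classes, d_small ⊂ d_big.
n, d_small, d_big = 2000, 3, 10
X = rng.normal(size=(n, d_big))
theta = np.zeros(d_big)
theta[:d_small] = [1.0, -2.0, 0.5]          # only the first d_small features matter
y = X @ theta + 0.1 * rng.normal(size=n)    # observations with Gaussian noise

def holdout_loss(X, y, d):
    """Fit least squares on the first d features; return held-out mean squared error."""
    half = len(y) // 2
    w, *_ = np.linalg.lstsq(X[:half, :d], y[:half], rcond=None)
    resid = y[half:] - X[half:, :d] @ w
    return float(np.mean(resid ** 2))

# Naive plug-in gap estimate: difference in best achievable held-out loss.
# A gap near zero suggests the smaller class already suffices.
gap = holdout_loss(X, y, d_small) - holdout_loss(X, y, d_big)
print(f"estimated loss gap: {gap:.4f}")
```

In the bandit setting this naive approach is not enough: feedback is partial and the plug-in estimate converges no faster than the parameter estimates themselves, which is precisely the obstacle the paper's specialized gap estimator overcomes.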

