
Equipping Experts/Bandits with Long-term Memory

2020-02-19

Abstract

We propose the first reduction-based approach to obtaining long-term memory guarantees for online learning in the sense of Bousquet and Warmuth [8], by reducing the problem to achieving typical switching regret. Specifically, for the classical expert problem with K actions and T rounds, using our framework we develop various algorithms with a regret bound of order O(√(T(S ln T + n ln K))) compared to any sequence of experts with S − 1 switches among n ≤ min{S, K} distinct experts. In addition, by plugging specific adaptive algorithms into our framework, we also achieve the best of both stochastic and adversarial environments simultaneously. This resolves an open problem of Warmuth and Koolen [35]. Furthermore, we extend our results to the sparse multi-armed bandit setting and show both negative and positive results for long-term memory guarantees. As a side result, our lower bound also implies that sparse losses do not help improve the worst-case regret for contextual bandits, a sharp contrast with the non-contextual case.
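To make the switching-regret objective above concrete, here is a minimal sketch of a classical baseline for tracking a sequence of experts with S − 1 switches: exponential weights with a fixed-share mixing step (in the style of Herbster and Warmuth). This is illustrative background, not the paper's reduction or its long-term memory algorithm; the learning rate `eta`, share rate `alpha`, and the piecewise-stationary loss construction are assumptions made for the demo.

```python
import numpy as np

def fixed_share(losses, eta, alpha):
    """Exponential weights with a fixed-share mixing step.

    losses: (T, K) array of per-round expert losses in [0, 1].
    eta:    learning rate for the multiplicative-weights update.
    alpha:  share rate; alpha = 0 recovers plain Hedge.
    Returns the learner's cumulative expected loss.
    """
    T, K = losses.shape
    w = np.full(K, 1.0 / K)               # uniform prior over the K experts
    total = 0.0
    for t in range(T):
        p = w / w.sum()                   # probabilities played this round
        total += float(p @ losses[t])     # expected loss of the learner
        w = w * np.exp(-eta * losses[t])  # multiplicative-weights update
        # Share step: redistribute an alpha fraction of the mass uniformly,
        # so no expert's weight ever collapses; this is what lets the
        # algorithm recover quickly after the best expert switches.
        w = (1.0 - alpha) * w + alpha * w.sum() / K
    return total

# Piecewise-stationary losses: expert 0 is best in the first half,
# expert 1 in the second half (one switch, i.e., S = 2).
T, K = 200, 3
losses = np.ones((T, K))
losses[: T // 2, 0] = 0.0
losses[T // 2 :, 1] = 0.0

tracking = fixed_share(losses, eta=0.5, alpha=0.05)  # recovers after the switch
static = fixed_share(losses, eta=0.5, alpha=0.0)     # plain Hedge, no sharing
```

With `alpha = 0`, the weight of the second-half expert decays exponentially during the first half and takes roughly as many rounds to recover, so plain Hedge pays near-linear loss after the switch; the share step caps that recovery time, which is the tracking behavior that long-term memory bounds refine by paying only for the n distinct experts rather than all S segments.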
