Beyond Online Balanced Descent: An Optimal Algorithm for Smoothed Online Optimization

2020-02-20

Abstract

We study online convex optimization in a setting where the learner seeks to minimize the sum of a per-round hitting cost and a movement cost incurred when changing decisions between rounds. We prove a new lower bound on the competitive ratio of any online algorithm in the setting where the hitting costs are $m$-strongly convex and the movement costs are the squared $\ell_2$ norm. This lower bound shows that no algorithm can achieve a competitive ratio that is $o(m^{-1/2})$ as $m$ tends to zero. No existing algorithms have competitive ratios matching this bound, and we show that the state-of-the-art algorithm, Online Balanced Descent (OBD), has a competitive ratio that is $\Omega(m^{-2/3})$. We additionally propose two new algorithms, Greedy OBD (G-OBD) and Regularized OBD (R-OBD), and prove that both algorithms have an $O(m^{-1/2})$ competitive ratio. The result for G-OBD holds when the hitting costs are quasiconvex and the movement costs are the squared $\ell_2$ norm, while the result for R-OBD holds when the hitting costs are $m$-strongly convex and the movement costs are Bregman divergences. Further, we show that R-OBD simultaneously achieves a constant, dimension-free competitive ratio and sublinear regret when the hitting costs are strongly convex.
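
For intuition, below is a minimal sketch of the regularized per-round update that R-OBD is named for, under the assumption of squared $\ell_2$ movement costs (the Bregman divergence of $x \mapsto \tfrac{1}{2}\|x\|^2$). The helper name `r_obd_step`, the weights `lam1` and `lam2`, and the toy quadratic hitting costs are illustrative choices, not the paper's exact parameterization.

```python
import numpy as np
from scipy.optimize import minimize


def r_obd_step(f_t, x_prev, lam1, lam2):
    """One round of an R-OBD-style update (illustrative sketch).

    Picks x_t = argmin_x  f_t(x) + (lam1/2)*||x - x_prev||^2
                                 + (lam2/2)*||x - v_t||^2,
    where v_t minimizes the round-t hitting cost f_t. The squared
    l2 terms stand in for the Bregman-divergence regularizers;
    lam1 and lam2 are illustrative weights.
    """
    v_t = minimize(f_t, x_prev).x  # minimizer of the hitting cost

    def obj(x):
        move = 0.5 * lam1 * np.sum((x - x_prev) ** 2)  # movement penalty
        anchor = 0.5 * lam2 * np.sum((x - v_t) ** 2)   # pull toward v_t
        return f_t(x) + move + anchor

    return minimize(obj, x_prev).x


# Toy usage: m-strongly convex quadratic hitting costs with shifting minima.
rng = np.random.default_rng(0)
m, x = 0.1, np.zeros(2)
for t in range(5):
    theta = rng.normal(size=2)
    f_t = lambda z, c=theta: 0.5 * m * np.sum((z - c) ** 2)
    x = r_obd_step(f_t, x, lam1=1.0, lam2=0.5)
```

The second regularizer, which pulls the iterate toward the per-round minimizer $v_t$, is the ingredient that distinguishes this update from moving only part of the way toward low hitting cost, as plain OBD's balance condition does.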
