
Efficient online algorithms for fast-rate regret bounds under sparsity

2020-02-14

Abstract 

We consider the problem of online convex optimization in two settings: an arbitrary sequence and an i.i.d. sequence of convex loss functions. In both settings, we provide efficient algorithms whose cumulative excess risks are controlled by fast-rate sparse bounds. First, the excess risk bounds depend on the sparsity of the objective rather than on the dimension of the parameter space. Second, their rates are faster than the slow rate $\sqrt{T}$ under additional convexity assumptions on the loss functions. In the adversarial setting, we develop an algorithm, BOA+, whose cumulative excess risk is controlled by several bounds with different trade-offs between sparsity and rate for strongly convex loss functions. In the i.i.d. setting, under Łojasiewicz's assumption, we establish new risk bounds that are sparse, with a rate adaptive to the convexity of the risk (ranging from a rate $1/\sqrt{T}$ for a general convex risk to $1/T$ for a strongly convex risk). These results generalize previous work on sparse online learning under weak assumptions on the risk.
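As a rough guide to the quantities quoted above, the bounds can be written schematically as follows. The notation ($T$ rounds, losses $f_t$, iterates $\hat\theta_t$, comparator $\theta^*$) is assumed here for illustration and is not taken verbatim from the paper.

```latex
% Cumulative excess risk over T rounds (assumed notation):
% \hat\theta_t = the algorithm's prediction at round t,
% \theta^*     = the best fixed parameter in hindsight.
\[
  R_T \;=\; \sum_{t=1}^{T} f_t(\hat\theta_t) \;-\; \sum_{t=1}^{T} f_t(\theta^*)
\]
% Without curvature assumptions the slow rate is R_T = O(\sqrt{T}).
% The fast-rate sparse bounds in the abstract instead scale with the
% sparsity of \theta^* (rather than the ambient dimension) and give:
\[
  \frac{R_T}{T} \;=\; O\!\Big(\tfrac{1}{\sqrt{T}}\Big)
  \ \text{(general convex risk)},
  \qquad
  \frac{R_T}{T} \;=\; O\!\Big(\tfrac{1}{T}\Big)
  \ \text{(strongly convex risk)}.
\]
```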

Previous: Mixture Matrix Completion

Next: Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization
