Abstract
We consider the problem of online convex optimization in two different settings: arbitrary and i.i.d. sequences of convex loss functions. In both settings, we provide efficient algorithms whose cumulative excess risks are controlled with fast-rate sparse bounds. First, the excess risk bounds depend on the sparsity of the objective rather than on the dimension of the parameter space. Second, their rates are faster than the slow rate $\sqrt{1/T}$ under additional convexity assumptions on the loss functions. In the adversarial setting, we develop an algorithm BOA+ whose cumulative excess risk is controlled by several bounds with different trade-offs between sparsity and rate for strongly convex loss functions. In the i.i.d. setting under Łojasiewicz's assumption, we establish new risk bounds that are sparse, with a rate adaptive to the convexity of the risk (ranging from $\sqrt{1/T}$ for general convex risks to $1/T$ for strongly convex risks). These results generalize previous work on sparse online learning under weak assumptions on the risk.