
Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets

2020-03-04

Abstract

The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth optimization has regained much interest in recent years in the context of large-scale optimization and machine learning. A key advantage of the method is that it avoids projections, the computational bottleneck in many applications, replacing them with a linear optimization step. Despite this advantage, the known convergence rates of the FW method fall behind standard first-order methods for most settings of interest. It is an active line of research to derive faster linear-optimization-based algorithms for various settings of convex optimization. In this paper we consider the special case of optimization over strongly convex sets, for which we prove that the vanilla FW method converges at a rate of $O(1/t^2)$. This gives a quadratic improvement in convergence rate compared to the general case, in which convergence is of the order $O(1/t)$ and known to be tight. We show that various balls induced by $\ell_p$ norms, Schatten norms, and group norms are strongly convex on the one hand, while on the other hand, linear optimization over these sets is straightforward and admits a closed-form solution. We further show how several previous fast-rate results for the FW method follow easily from our analysis.
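To illustrate the projection-free iteration the abstract describes, below is a minimal sketch of the vanilla Frank-Wolfe method over a Euclidean ($\ell_2$) ball, one of the strongly convex sets for which linear optimization admits a closed-form solution. This is not code from the paper; the function name, step-size schedule, and the least-squares test problem are illustrative assumptions.

```python
import numpy as np

def frank_wolfe_l2_ball(grad, x0, radius=1.0, n_iters=100):
    """Vanilla Frank-Wolfe (conditional gradient) over an l2 ball.

    grad: callable returning the gradient of the smooth objective at x.
    x0:   feasible starting point (inside the ball of the given radius).
    The linear minimization step argmin_{||v|| <= r} <g, v> has the
    closed form v = -r * g / ||g||, so no projection is ever needed.
    """
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        # Closed-form linear optimization over the l2 ball.
        v = -radius * g / (np.linalg.norm(g) + 1e-12)
        gamma = 2.0 / (t + 2)  # standard FW step-size schedule
        x = (1 - gamma) * x + gamma * v
    return x

# Example (hypothetical data): minimize 0.5 * ||Ax - b||^2 s.t. ||x|| <= 1.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x_hat = frank_wolfe_l2_ball(lambda x: A.T @ (A @ x - b), np.zeros(5))
```

Each iteration calls only a gradient and a closed-form linear minimization oracle, which is the structural advantage over projection-based first-order methods that the paper exploits.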

