
Lazifying Conditional Gradient Algorithms

Posted: 2020-03-09

Abstract

Conditional gradient algorithms (also often called Frank-Wolfe algorithms) are popular due to their simplicity of only requiring a linear optimization oracle, and more recently they have also gained significant traction for online learning. While simple in principle, in many cases the actual implementation of the linear optimization oracle is costly. We show a general method to lazify various conditional gradient algorithms, which in actual computations leads to several orders of magnitude of speedup in wall-clock time. This is achieved by using a faster separation oracle instead of a linear optimization oracle, relying only on a few linear optimization oracle calls.
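To illustrate the idea behind the abstract, below is a minimal, hypothetical sketch of a lazified Frank-Wolfe step over the probability simplex. It is not the paper's exact algorithm: the weak separation oracle is emulated by first scanning a cache of previously seen vertices for one that improves the objective by at least a scaled gap threshold `phi / K`, and only falling back to the true linear optimization oracle (LMO) when the cache fails; a failed full round halves the gap estimate `phi`. All function names, the choice of feasible region, and the standard `2/(t+2)` step size are illustrative assumptions.

```python
def lmo_simplex(grad):
    """Exact LMO over the probability simplex: argmin_v <grad, v>
    is the standard basis vector at the smallest gradient coordinate."""
    i = min(range(len(grad)), key=lambda j: grad[j])
    v = [0.0] * len(grad)
    v[i] = 1.0
    return tuple(v)

def lazy_frank_wolfe(grad_f, x0, steps=2000, K=2.0, phi0=1.0):
    """Illustrative lazified conditional gradient method (a sketch,
    not the paper's exact procedure).

    grad_f : callable returning the gradient at a point (list of floats)
    x0     : starting vertex of the simplex
    K      : accuracy parameter of the emulated weak separation oracle
    phi0   : initial estimate of the dual gap
    Returns the final iterate and the number of true LMO calls made.
    """
    x = list(x0)
    cache = {tuple(x0)}      # vertices seen so far
    phi = phi0
    lmo_calls = 0

    def improvement(g, c):
        # <g, x - c>: how much vertex c improves on the current iterate
        return sum(gi * (xi - ci) for gi, xi, ci in zip(g, x, c))

    for t in range(steps):
        g = grad_f(x)
        target = phi / K
        v = None
        # Weak separation, part 1: try cached vertices first (cheap).
        for c in cache:
            if improvement(g, c) >= target:
                v = c
                break
        if v is None:
            # Weak separation, part 2: fall back to the expensive LMO.
            lmo_calls += 1
            cand = lmo_simplex(g)
            if improvement(g, cand) >= target:
                cache.add(cand)
                v = cand
        if v is None:
            # No sufficiently improving vertex exists: the dual gap
            # estimate was too optimistic, so halve it and retry.
            phi /= 2.0
            continue
        gamma = 2.0 / (t + 2.0)          # standard diminishing step size
        x = [xi + gamma * (vi - xi) for xi, vi in zip(x, v)]
    return x, lmo_calls
```

On a small quadratic such as `f(x) = 0.5 * ||x - b||^2` with `b` inside the simplex, most rounds are answered from the vertex cache, so the count of true LMO calls stays far below the iteration count — this is the source of the wall-clock speedup the abstract describes when the LMO itself is expensive.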

Previous: Doubly Greedy Primal-Dual Coordinate Descent for Sparse Empirical Risk Minimization

Next: Attentive Recurrent Comparators

