SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives

Abstract

In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.
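The abstract only sketches the method at a high level. The core SAGA iteration is: sample an index j, form the gradient estimate f'_j(x) − φ_j + (1/n) Σ_i φ_i from a table of stored gradients φ, take a proximal step on the regulariser, and then refresh the stored gradient φ_j. Below is a minimal NumPy sketch on an ℓ1-regularised least-squares (lasso) problem to make the update concrete; the problem choice and the helper names (saga_lasso, soft_threshold) are illustrative assumptions, not from the paper, though the 1/(3L) step size matches the paper's suggestion for the non-strongly convex case.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def saga_lasso(A, b, lam, n_iters, seed=0):
    """SAGA on (1/n) sum_i 0.5 * (a_i^T x - b_i)^2 + lam * ||x||_1.

    Keeps a table of the most recent gradient of each f_i and uses it
    to build an unbiased, low-variance gradient estimate.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grads = np.zeros((n, d))          # stored gradient table, one row per f_i
    grad_mean = grads.mean(axis=0)    # running mean of the table
    # Step size 1/(3L), with L = max_i ||a_i||^2 for least-squares losses.
    gamma = 1.0 / (3.0 * np.max(np.sum(A * A, axis=1)))
    for _ in range(n_iters):
        j = rng.integers(n)
        g_new = A[j] * (A[j] @ x - b[j])          # fresh gradient of f_j
        v = g_new - grads[j] + grad_mean          # SAGA gradient estimate
        x = soft_threshold(x - gamma * v, gamma * lam)  # prox step on regulariser
        grad_mean += (g_new - grads[j]) / n       # update mean before the table
        grads[j] = g_new
    return x
```

Note the O(d) per-iteration cost: the mean of the stored gradients is updated incrementally rather than recomputed. For linear models such as this one, the n-by-d gradient table can be compressed to n scalars, since each stored gradient is a scalar multiple of a_i; the sketch keeps the full table for clarity.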

