Breaking the Span Assumption Yields Fast Finite-Sum Minimization

Abstract 

In this paper, we show that SVRG and SARAH can be modified to be fundamentally faster than all of the other standard algorithms that minimize the sum of n smooth functions, such as SAGA, SAG, SDCA, and SDCA without duality. Most finite sum algorithms follow what we call the "span assumption": their updates lie in the span of a sequence of component gradients chosen in a random IID fashion. In the big data regime, where the condition number $\kappa = O(n)$, the span assumption prevents algorithms from converging to an approximate solution of accuracy $\epsilon$ in fewer than $n \ln(1/\epsilon)$ iterations. SVRG and SARAH do not follow the span assumption since they are updated with a hybrid of full-gradient and component-gradient information. We show that because of this, they can be up to $\Omega(1 + (\ln(n/\kappa))_+)$ times faster. In particular, to obtain an accuracy $\epsilon = 1/n^\alpha$ for $\alpha \in (0,1)$ and $\kappa = \Theta(n^\beta)$ for $\beta \in (0,1)$, modified SVRG requires $O(n)$ iterations, whereas algorithms that follow the span assumption require $O(n \ln(n))$ iterations. Moreover, we present lower bound results that show this speedup is optimal, and provide analysis to help explain why it exists. With the understanding that the span assumption is a point of weakness of finite sum algorithms, future work may purposefully exploit it to yield faster algorithms in the big data regime.
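To make the distinction concrete, here is a minimal Python sketch of the standard SVRG loop (the paper analyzes a modified variant; the quadratic objective, step size, and epoch length below are illustrative assumptions, not the paper's settings). Each inner step combines a full gradient computed at a snapshot point with one sampled component gradient, so the iterate leaves the span of the IID-sampled component gradients; a span-assumption method such as SAGA or SAG uses only sampled-component information.

```python
import numpy as np

def svrg(grad_i, n, x0, step=0.02, epochs=30, inner_len=None):
    """Minimal SVRG sketch (standard form, not the paper's modified variant).

    Each inner update uses g = grad_i(x) - grad_i(x_tilde) + mu, where mu is
    the full gradient at the snapshot x_tilde. Because mu aggregates all n
    component gradients, the update is not in the span of the IID-sampled
    component gradients alone -- this is the sense in which SVRG breaks
    the span assumption.
    """
    m = inner_len or n                      # common choice: one pass per epoch
    x_tilde = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        # Full gradient at the snapshot (the "anchor").
        mu = sum(grad_i(i, x_tilde) for i in range(n)) / n
        x = x_tilde.copy()
        for _ in range(m):
            i = np.random.randint(n)        # IID component choice
            g = grad_i(i, x) - grad_i(i, x_tilde) + mu   # variance-reduced gradient
            x = x - step * g
        x_tilde = x                         # take last inner iterate as new snapshot
    return x_tilde

# Illustrative use on a least-squares finite sum:
# f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 100, 10
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
x_hat = svrg(grad_i, n, np.zeros(d))
print(np.linalg.norm(A.T @ (A @ x_hat - b)) / n)  # stationarity residual
```

For intuition on the abstract's claimed speedup: with $\epsilon = 1/n^\alpha$, the span-assumption lower bound $n \ln(1/\epsilon)$ becomes $\alpha\, n \ln(n)$, while the paper's modified SVRG needs only $O(n)$ iterations, a $\ln(n)$-factor gap.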

