Resource Paper: Limitations on Variance-Reduction and Acceleration Schemes for Finite Sum Optimization


2020-02-10

Abstract 

We study the conditions under which one is able to efficiently apply variance-reduction and acceleration schemes on finite sum optimization problems. First, we show that, perhaps surprisingly, the finite sum structure by itself is not sufficient for obtaining a complexity bound of Õ((n + L/µ) ln(1/ε)) for L-smooth and µ-strongly convex individual functions; one must also know which individual function is being referred to by the oracle at each iteration. Next, we show that for a broad class of first-order and coordinate-descent finite sum algorithms (including, e.g., SDCA, SVRG, SAG), it is not possible to get an 'accelerated' complexity bound of Õ((n + √(nL/µ)) ln(1/ε)), unless the strong convexity parameter is given explicitly. Lastly, we show that when this class of algorithms is used for minimizing L-smooth and convex finite sums, the iteration complexity is bounded from below by Ω(n + L/ε), assuming that (on average) the same update rule is used in any iteration, and Ω(n + √(nL/ε)) otherwise.
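To make the first result concrete: variance-reduction methods such as SVRG correct each stochastic gradient using a stored gradient of the *same* individual function f_i, so the algorithm must know the index i returned by the oracle. The sketch below (an illustration of standard SVRG, not code from the paper; the function `grads(i, x)` and all parameter names are hypothetical) shows where that index enters the update.

```python
import numpy as np

def svrg(grads, x0, n, step, epochs, inner_iters):
    """Minimal SVRG sketch. grads(i, x) returns the gradient of the
    i-th individual function f_i at x. The variance-reduced update
    below evaluates grads at the SAME index i twice, which is only
    possible when the oracle reveals which f_i it queried."""
    x = x0.copy()
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        snapshot = x.copy()
        # full gradient of the finite sum at the snapshot point
        full_grad = sum(grads(i, snapshot) for i in range(n)) / n
        for _ in range(inner_iters):
            i = rng.integers(n)  # sampled index, assumed visible
            # variance-reduced stochastic gradient estimate
            g = grads(i, x) - grads(i, snapshot) + full_grad
            x -= step * g
    return x
```

For example, minimizing the average of the quadratics f_i(x) = (x - a_i)²/2 drives x to the mean of the a_i. With an anonymous oracle that hides i, the correction term grads(i, snapshot) cannot be matched to the sampled gradient, which is the obstruction formalized in the first result above.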

