
Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives


Abstract

Many classical algorithms are found, only years after they were conceived, to outlive the confines in which they were born and to remain relevant in unforeseen settings. In this paper, we show that SVRG is one such method: although originally designed for strongly convex objectives, it is also very robust in non-strongly convex or sum-of-non-convex settings. More precisely, we provide a new analysis that improves the state-of-the-art running times in both settings, by applying either SVRG or a novel variant of it. Since non-strongly convex objectives include important examples such as Lasso and logistic regression, and sum-of-non-convex objectives include famous examples such as stochastic PCA and are even believed to be related to training deep neural nets, our results also imply better performance in these applications.
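
To make the setting concrete, below is a minimal sketch of the classical SVRG loop (snapshot point, full gradient at the snapshot, variance-reduced stochastic steps) applied to an l2-regularized logistic regression objective of the kind mentioned in the abstract. This is the standard algorithm of Johnson and Zhang, not the improved variant analyzed in the paper; the step size `eta`, epoch length `m`, and the synthetic data are illustrative assumptions only.

```python
# Minimal SVRG sketch for l2-regularized logistic regression (illustrative only;
# hyperparameters and data are assumptions, not the paper's setup).
import numpy as np

def svrg_logistic(X, y, lam=1e-3, eta=0.1, epochs=20, m=None, seed=0):
    """Minimize (1/n) * sum_i log(1 + exp(-y_i * x_i^T w)) + (lam/2) * ||w||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or 2 * n                       # inner-loop length (common heuristic)
    w = np.zeros(d)

    def grad_i(w, i):                    # gradient of one component function
        margin = y[i] * (X[i] @ w)
        return -y[i] * X[i] / (1.0 + np.exp(margin)) + lam * w

    def full_grad(w):                    # full gradient over all n components
        margins = y * (X @ w)
        return (X.T @ (-y / (1.0 + np.exp(margins)))) / n + lam * w

    for _ in range(epochs):
        w_snap = w.copy()                # snapshot point for this epoch
        mu = full_grad(w_snap)           # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced gradient estimate: unbiased, shrinking variance
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= eta * g
    return w

if __name__ == "__main__":
    # toy usage on synthetic data (assumed, for illustration)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 20))
    w_true = rng.standard_normal(20)
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(500))
    w_hat = svrg_logistic(X, y)
    margins = y * (X @ w_hat)
    full_g = (X.T @ (-y / (1.0 + np.exp(margins)))) / len(y) + 1e-3 * w_hat
    print("full-gradient norm after SVRG:", np.linalg.norm(full_g))
```

The key design point the sketch illustrates is the variance-reduced estimator `grad_i(w, i) - grad_i(w_snap, i) + mu`: it remains unbiased while its variance shrinks as the iterate approaches the snapshot, which is what lets SVRG use a constant step size.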
