Linear Convergence with Condition Number Independent Access of Full Gradients

Abstract

For smooth and strongly convex optimization problems, the optimal iteration complexity of gradient-based algorithms is $O(\sqrt{\kappa}\log\frac{1}{\epsilon})$, where $\kappa$ is the condition number. When the optimization problem is ill-conditioned, we need to evaluate a large number of full gradients, which could be computationally expensive. In this paper, we propose to remove the dependence on the condition number by allowing the algorithm to access stochastic gradients of the objective function. To this end, we present a novel algorithm named Epoch Mixed Gradient Descent (EMGD) that is able to utilize two kinds of gradients. A distinctive step in EMGD is the mixed gradient descent, where we use a combination of the full and stochastic gradients to update the intermediate solution. Theoretical analysis shows that EMGD is able to find an $\epsilon$-optimal solution by computing $O(\log\frac{1}{\epsilon})$ full gradients and $O(\kappa^2\log\frac{1}{\epsilon})$ stochastic gradients.
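The abstract names the mixed gradient step but does not spell it out. Below is a minimal Python sketch of one natural reading, assuming a finite-sum objective $F(x)=\frac{1}{n}\sum_i f_i(x)$: each epoch computes a single full gradient at an anchor point, and the inner loop corrects it with cheap stochastic gradients in a variance-reduced style. The function names (`emgd_sketch`, `full_grad`, `stoch_grad`), the exact form of the mixed gradient, and all parameter defaults are illustrative assumptions, not the paper's specification; the actual EMGD prescribes particular epoch lengths, step sizes, and a projection step that are omitted here.

```python
import numpy as np

def emgd_sketch(full_grad, stoch_grad, n, x0,
                epochs=10, inner_steps=100, eta=0.01, rng=None):
    """Illustrative epoch-based mixed gradient descent (hypothetical sketch).

    full_grad(x)     -- exact gradient of F at x (accessed once per epoch)
    stoch_grad(x, i) -- gradient of the i-th component f_i at x (cheap)
    n                -- number of components in the finite sum
    """
    rng = rng or np.random.default_rng(0)
    x_bar = np.asarray(x0, dtype=float)    # epoch anchor point
    for _ in range(epochs):
        g_bar = full_grad(x_bar)           # the only full-gradient access this epoch
        x = x_bar.copy()
        iterates = [x.copy()]
        for _ in range(inner_steps):
            i = rng.integers(n)
            # Mixed gradient: full gradient at the anchor, corrected by the
            # stochastic-gradient difference between the current point and
            # the anchor (an unbiased estimate of the true gradient at x).
            g_mix = g_bar + stoch_grad(x, i) - stoch_grad(x_bar, i)
            x = x - eta * g_mix
            iterates.append(x.copy())
        x_bar = np.mean(iterates, axis=0)  # next epoch restarts from the average
    return x_bar
```

The point of the construction is that the correction term vanishes as $x$ approaches $\bar{x}$, so the variance of the estimate shrinks within an epoch without re-evaluating the full gradient. Counting one full gradient over $n$ components as $n$ stochastic evaluations, the abstract's bounds give EMGD a total cost of $O((n+\kappa^2)\log\frac{1}{\epsilon})$, versus $O(n\sqrt{\kappa}\log\frac{1}{\epsilon})$ for an optimal full-gradient method, so the trade is favorable roughly when $\kappa \le n^{2/3}$.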
