On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Minimization

2019-10-09
Abstract: Extrapolation is a well-known technique for solving convex optimization and variational inequality problems, and it has recently attracted some attention for non-convex optimization. Several recent works have empirically shown its success in some machine learning tasks. However, it has not been analyzed for non-convex minimization, and a gap remains between theory and practice. In this paper, we analyze gradient descent and stochastic gradient descent methods with extrapolation for finding an approximate first-order stationary point of smooth non-convex optimization problems. Our convergence upper bounds show that the algorithms with extrapolation can be faster than those without extrapolation.
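As an illustration of the kind of method the abstract refers to, below is a minimal NumPy sketch of an extra-gradient-style extrapolation step for smooth non-convex minimization: the gradient is evaluated at a trial (extrapolated) point, and the iterate is then updated from the current point using that gradient. The paper's exact extrapolation scheme, step sizes, and stochastic variant may differ; the test function, step size, and iteration count here are assumptions chosen only for demonstration.

```python
import numpy as np

# Simple smooth non-convex test function (double well) and its gradient.
# f, eta, and the iteration count are illustrative assumptions, not the paper's setup.
def f(x):
    return x[0] ** 4 - 2.0 * x[0] ** 2 + x[1] ** 2

def grad_f(x):
    return np.array([4.0 * x[0] ** 3 - 4.0 * x[0], 2.0 * x[1]])

def gd_with_extrapolation(x0, eta=0.05, iters=200):
    """Extra-gradient-style GD: take a trial (extrapolation) step, then
    update the current iterate using the gradient at the trial point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_trial = x - eta * grad_f(x)      # extrapolation (trial) step
        x = x - eta * grad_f(x_trial)      # update from x using the extrapolated gradient
    return x

x_out = gd_with_extrapolation(np.array([0.5, 1.0]))
print("x =", x_out, "||grad f(x)|| =", np.linalg.norm(grad_f(x_out)))
# A small gradient norm indicates an approximate first-order stationary point.
```

For a stochastic variant, one would replace `grad_f` with an unbiased stochastic gradient estimate at both the trial and update steps; the abstract's claim concerns upper bounds on the number of such (stochastic) gradient evaluations needed to reach a point with small gradient norm.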
