On the Convergence of (Stochastic) Gradient Descent with Extrapolation for
Non-Convex Minimization
Abstract
Extrapolation is a well-known technique for solving convex optimization problems and variational inequalities, and it has recently attracted attention for non-convex optimization. Several recent works have
empirically demonstrated its success on machine
learning tasks. However, it has not been analyzed for non-convex minimization, and a gap
remains between theory and practice. In this paper, we analyze gradient descent and
stochastic gradient descent with extrapolation for finding an approximate first-order stationary point of smooth non-convex optimization problems. Our convergence upper bounds show that the
algorithms with extrapolation can be
faster than their counterparts without extrapolation.
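As a rough illustration of the kind of method the abstract refers to, the following is a minimal sketch of gradient descent with extrapolation, where the gradient is evaluated at an extrapolated point rather than the current iterate. The update rule, the coefficient `beta`, the step size `eta`, and the quadratic test objective are assumptions for illustration, not the paper's exact algorithm or parameters.

```python
import numpy as np

def gd_with_extrapolation(grad, x0, eta=0.1, beta=0.3, iters=200):
    """Sketch of gradient descent with extrapolation (assumed form):
    each step first forms an extrapolated point
        y_t = x_t + beta * (x_t - x_{t-1})
    and then takes a gradient step using the gradient at y_t."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)        # extrapolated point
        x_prev, x = x, x - eta * grad(y)   # gradient step evaluated at y
    return x

# Usage: minimize the smooth function f(x) = ||x||^2 / 2 (gradient is x).
x_star = gd_with_extrapolation(lambda x: x, np.array([3.0, -2.0]))
```

With `beta = 0`, the sketch reduces to plain gradient descent; the extrapolated gradient evaluation is the only change.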