Abstract
Although gradient descent (GD) almost always escapes saddle points asymptotically [Lee et al., 2016], this paper shows that even with fairly natural random initialization schemes and non-pathological functions, GD can be significantly slowed down by saddle points, taking exponential time to escape. On the other hand, gradient descent with perturbations [Ge et al., 2015, Jin et al., 2017] is not slowed down by saddle points; it can find an approximate local minimizer in polynomial time. This result implies that GD is inherently slower than perturbed GD, and justifies the importance of adding perturbations for efficient non-convex optimization. While our focus is theoretical, we also present experiments that illustrate our theoretical findings.
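To make the contrast concrete, the following is a minimal sketch, not the algorithm analyzed in the paper or in Ge et al. [2015] and Jin et al. [2017]: plain GD versus a variant that adds a small random perturbation whenever the gradient is nearly zero. The step size, gradient threshold, perturbation radius, and toy saddle objective are illustrative assumptions.

```python
# Sketch only: plain GD vs. a simple perturbed variant. All hyperparameters
# and the toy objective below are hypothetical illustration choices.
import numpy as np

def gd(grad, x0, eta=0.01, steps=10_000):
    """Plain gradient descent: can converge to (or stall near) a saddle point."""
    x = x0.copy()
    for _ in range(steps):
        x -= eta * grad(x)
    return x

def perturbed_gd(grad, x0, eta=0.01, steps=10_000, g_thresh=1e-3, radius=1e-2):
    """Gradient descent that injects a small random perturbation whenever the
    gradient is nearly zero, helping the iterate leave saddle points."""
    rng = np.random.default_rng(0)
    x = x0.copy()
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < g_thresh:
            # Near a stationary point: jitter uniformly inside a small ball.
            xi = rng.standard_normal(x.shape)
            xi *= radius * rng.random() ** (1.0 / x.size) / np.linalg.norm(xi)
            x = x + xi
            g = grad(x)
        x -= eta * g
    return x

if __name__ == "__main__":
    # Toy objective f(x, y) = x^2/2 + y^4/4 - y^2/2:
    # saddle point at (0, 0), local minima at (0, +1) and (0, -1).
    grad_f = lambda z: np.array([z[0], z[1] ** 3 - z[1]])
    x0 = np.array([1.0, 0.0])  # start exactly on the saddle's stable manifold
    print("GD:          ", gd(grad_f, x0))           # stays at the saddle (0, 0)
    print("Perturbed GD:", perturbed_gd(grad_f, x0)) # ends near a local minimum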