First-order methods almost always avoid saddle points: The case of vanishing step-sizes

2020-02-19

Abstract

In a series of papers [17, 22, 16], it was established that some of the most commonly used first-order methods almost surely (under random initialization) avoid strict saddle points, provided the step-size is small enough and the objective function f is C² with Lipschitz gradient. The key observation was that first-order methods can be studied from a dynamical systems perspective, in which instantiations of the Center-Stable Manifold Theorem allow for a global analysis. The results of the aforementioned papers were limited to the case where the step-size α is constant, i.e., does not depend on time (and is bounded by the inverse of the Lipschitz constant of the gradient of f). It remains an open question whether the results still hold when the step-size is time dependent and vanishes with time. In this paper, we resolve this question in the affirmative for gradient descent, mirror descent, manifold descent and proximal point. The main technical challenge is that the dynamical system induced by each first-order method is time non-homogeneous, and the Stable Manifold Theorem is not applicable in its classic form. By exploiting the dynamical systems structure of the aforementioned first-order methods, we are able to prove a stable manifold theorem that is applicable to time non-homogeneous dynamical systems and generalize the results of [16] to vanishing step-sizes.
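To make the setting concrete, below is a minimal sketch (not the authors' code) of gradient descent with a vanishing step-size α_t = 1/(t+2) on the strict-saddle function f(x, y) = (x² − y²)/2; the choice of function, step-size schedule, and initialization scale are illustrative assumptions. Under a random initialization near the origin, the iterates drift away from the saddle along the y direction, consistent with the avoidance result described above.

```python
import numpy as np

# Illustrative sketch only: gradient descent with a vanishing step-size
# alpha_t = 1 / (t + 2) on f(x, y) = (x^2 - y^2) / 2, whose only critical
# point is the strict saddle at the origin.

def grad_f(z):
    x, y = z
    return np.array([x, -y])  # gradient of f(x, y) = (x^2 - y^2) / 2

rng = np.random.default_rng(0)
z = rng.normal(scale=1e-3, size=2)   # random initialization near the saddle

for t in range(10_000):
    alpha_t = 1.0 / (t + 2)          # vanishing step-size; sum of alpha_t diverges
    z = z - alpha_t * grad_f(z)

# The x-coordinate contracts toward 0 while |y| grows: the iterates escape
# the strict saddle under almost every random initialization.
print(z)
```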

