
On Graduated Optimization for Stochastic Non-Convex Problems


Abstract

The graduated optimization approach, also known as the continuation method, is a popular heuristic for solving non-convex problems that has received renewed interest over the last decade. Despite being popular, very little is known in terms of its theoretical convergence analysis. In this paper we describe a new first-order algorithm based on graduated optimization and analyze its performance. We characterize a family of non-convex functions for which this algorithm provably converges to a global optimum. In particular, we prove that the algorithm converges to an ε-approximate solution within O(1/ε^2) gradient-based steps. We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate. Additionally, we discuss the setting of "zero-order optimization", and devise a variant of our algorithm which converges at a rate of O(d^2/ε^4).
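The following is a minimal sketch of the general graduated-optimization (continuation) heuristic the abstract refers to: repeatedly optimize a smoothed version of the objective and warm-start each stage as the smoothing is reduced. The Gaussian-smoothing gradient estimator, the δ schedule, step size, and sample counts below are illustrative assumptions, not the paper's specific algorithm or constants.

import numpy as np

def smoothed_grad(f, x, delta, rng, n_samples=32):
    # Monte-Carlo estimate of the gradient of the delta-smoothed objective
    # F_delta(x) = E_u[f(x + delta*u)], u ~ N(0, I), using the identity
    # grad F_delta(x) = E_u[(f(x + delta*u) - f(x)) / delta * u].
    d = x.shape[0]
    fx = f(x)
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        g += (f(x + delta * u) - fx) / delta * u
    return g / n_samples

def graduated_optimization(f, x0, deltas=(2.0, 1.0, 0.5, 0.1), steps=200, lr=0.05, seed=0):
    # Coarse-to-fine schedule: optimize heavily smoothed versions of f first,
    # warm-starting each less-smoothed stage from the previous solution.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for delta in deltas:
        for _ in range(steps):
            x -= lr * smoothed_grad(f, x, delta, rng)
    return x

# Toy usage: a 1-D objective with many spurious local minima.
f = lambda x: float(np.sum(x ** 2) + 2.0 * np.sum(np.cos(5.0 * x)))
print(graduated_optimization(f, x0=np.array([3.0])))

Because the estimator only queries function values (not gradients), the same sketch also illustrates the zero-order setting mentioned in the abstract; a first-order variant would replace smoothed_grad with noisy gradients evaluated at perturbed points.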
