
Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence

2020-03-10

Abstract

In this paper, a new theory is developed for first-order stochastic convex optimization, showing that the global convergence rate is sufficiently quantified by a local growth rate of the objective function in a neighborhood of the optimal solutions. In particular, if the objective function F(w) in the ε-sublevel set grows as fast as ||w − w*||^(1/θ), where w* represents the closest optimal solution to w and θ ∈ (0, 1] quantifies the local growth rate, then the iteration complexity of first-order stochastic optimization for achieving an ε-optimal solution can be Õ(1/ε^(2(1−θ))), which is optimal at most up to a logarithmic factor. To achieve the faster global convergence, we develop two different accelerated stochastic subgradient methods by iteratively solving the original problem approximately in a local region around a historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. Besides the theoretical improvements, this work also includes new contributions towards making the proposed algorithms practical: (i) we present practical variants of accelerated stochastic subgradient methods that can run without knowledge of the multiplicative growth constant and even of the growth rate θ; (ii) we consider a broad family of problems in machine learning to demonstrate that the proposed algorithms enjoy faster convergence than the traditional stochastic subgradient method. For example, when applied to ℓ1-regularized empirical polyhedral loss minimization (e.g., hinge loss, absolute loss), the proposed stochastic methods have a logarithmic iteration complexity.
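The restart-and-shrink idea described above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's exact algorithm): a restarted projected stochastic subgradient loop that, after each stage, restarts from the stage average while halving both the radius of the local region around the current center and the step size. The function names, the demo objective F(w) = |w − 1| (a polyhedral loss with sharp growth, θ = 1), and all parameter values are assumptions for illustration.

```python
import numpy as np

def assg(grad_fn, w0, R0, K, T, eta0):
    """Hypothetical sketch of an accelerated stochastic subgradient method:
    run K stages of projected stochastic subgradient descent, each confined
    to a ball of radius R around the previous stage's average, halving the
    radius and step size between stages."""
    w = w0.copy()
    R, eta = R0, eta0
    for _ in range(K):
        center = w.copy()
        avg = np.zeros_like(w)
        for _ in range(T):
            w = w - eta * grad_fn(w)          # stochastic subgradient step
            d = w - center                    # project back onto the ball
            n = np.linalg.norm(d)             # of radius R around center
            if n > R:
                w = center + d * (R / n)
            avg += w
        w = avg / T        # restart from the stage average
        R *= 0.5           # shrink the local region
        eta *= 0.5         # match the step size to the smaller region
    return w

# Demo: minimize F(w) = |w - 1| from noisy subgradients sign(w - 1) + noise.
np.random.seed(0)
grad = lambda w: np.sign(w - 1.0) + 0.1 * np.random.randn(*w.shape)
w_final = assg(grad, w0=np.array([0.0]), R0=2.0, K=8, T=200, eta0=0.5)
```

Because the step size is geometrically decreased across stages, the oscillation of the iterates around the minimizer shrinks geometrically as well, which is the mechanism behind the logarithmic iteration complexity claimed for polyhedral losses.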


