
ℓ1,p-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods


Abstract

In recent years, the ℓ1,p-regularizer has been widely used to induce structured sparsity in the solutions to various optimization problems. Currently, such ℓ1,p-regularized problems are typically solved by first-order methods. Motivated by the desire to analyze the convergence rates of these methods, we show that for a large class of ℓ1,p-regularized problems, an error bound condition is satisfied when p ∈ [1, 2] or p = ∞ but fails to hold for any p ∈ (2, ∞). Based on this result, we show that many first-order methods enjoy an asymptotic linear rate of convergence when applied to ℓ1,p-regularized linear or logistic regression with p ∈ [1, 2] or p = ∞. By contrast, numerical experiments suggest that for the same class of problems with p ∈ (2, ∞), the aforementioned methods may not converge linearly.
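The ℓ1,p-regularizer groups the coordinates of x into blocks and sums the ℓp-norms of the blocks, ‖x‖_{1,p} = Σ_g ‖x_g‖_p, so entire blocks are driven to zero at once. To make the first-order methods discussed above concrete, below is a minimal sketch of proximal gradient descent for ℓ1,2-regularized linear regression (group lasso), the p = 2 case for which the error bound holds; the function names and the fixed step size 1/L are illustrative choices, not the paper's implementation.

```python
import numpy as np

def block_soft_threshold(v, tau):
    """Prox of tau * ||v||_2: shrinks the whole block, zeroing it if its norm <= tau."""
    norm = np.linalg.norm(v)
    if norm <= tau:
        return np.zeros_like(v)
    return (1.0 - tau / norm) * v

def prox_grad_group_lasso(A, b, groups, lam, n_iter=1000):
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2.

    `groups` is a list of index arrays partitioning the coordinates of x.
    (Illustrative sketch; not the paper's implementation.)
    """
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # gradient of the smooth least-squares term
        z = x - step * grad                 # forward (gradient) step
        for g in groups:                    # backward step: blockwise prox, group by group
            z[g] = block_soft_threshold(z[g], step * lam)
        x = z
    return x

# Example: 6 features split into two groups of 3; the second group is inactive.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))
x_true = np.concatenate([rng.standard_normal(3), np.zeros(3)])
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = prox_grad_group_lasso(A, b, [np.arange(0, 3), np.arange(3, 6)], lam=0.5)
print(np.round(x_hat, 3))  # coordinates in the second group should be (near) zero
```

Proximal gradient is one of the first-order methods covered by the analysis, so for p ∈ [1, 2] the paper's error bound result suggests this iteration eventually converges at a linear rate; swapping the prox for the p = ∞ case would require a projection onto an ℓ1-ball instead of block soft-thresholding.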

