Abstract
In recent years, the ℓ1,p-regularizer has been widely used to induce structured sparsity in the solutions to various optimization problems. Currently, such ℓ1,p-regularized problems are typically solved by first-order methods. Motivated by the desire to analyze the convergence rates of these methods, we show that for a large class of ℓ1,p-regularized problems, an error bound condition is satisfied when p ∈ [1, 2] or p = ∞ but fails to hold for any p ∈ (2, ∞). Based on this result, we show that many first-order methods enjoy an asymptotic linear rate of convergence when applied to ℓ1,p-regularized linear or logistic regression with p ∈ [1, 2] or p = ∞. By contrast, numerical experiments suggest that for the same class of problems with p ∈ (2, ∞), the aforementioned methods may not converge linearly.