Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima

2020-01-16

Abstract

We establish theoretical results concerning local optima of regularized M-estimators, where both loss and penalty functions are allowed to be nonconvex. Our results show that as long as the loss satisfies restricted strong convexity and the penalty satisfies suitable regularity conditions, any local optimum of the composite objective lies within statistical precision of the true parameter vector. Our theory covers a broad class of nonconvex objective functions, including corrected versions of the Lasso for errors-in-variables linear models and regression in generalized linear models using nonconvex regularizers such as SCAD and MCP. On the optimization side, we show that a simple adaptation of composite gradient descent may be used to compute a global optimum up to the statistical precision ε_stat in O(log(1/ε_stat)) iterations, the fastest possible rate for any first-order method. We provide simulations to illustrate the sharpness of our theoretical predictions.
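The composite gradient scheme mentioned in the abstract can be sketched concretely. A minimal illustration, assuming a least-squares loss with MCP regularization: the nonconvex penalty is split into an ℓ1 part plus a smooth concave remainder, the remainder is folded into the gradient step, and each iteration ends with soft-thresholding. The function names, the default `gamma`, and the step-size choice below are illustrative assumptions, not the paper's exact pseudocode.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def mcp_concave_grad(beta, lam, gamma):
    # Gradient of the smooth concave part q(t) = lam*|t| - MCP_{lam,gamma}(t),
    # so that MCP(t) = lam*|t| - q(t) with q differentiable everywhere.
    return np.sign(beta) * np.minimum(np.abs(beta) / gamma, lam)

def composite_grad_descent(X, y, lam, gamma=5.0, n_iter=500):
    # Composite gradient descent for (1/(2n))||y - X b||^2 + MCP(b).
    n, p = X.shape
    # Step size 1/L, where L bounds the curvature of the smooth component
    # (least-squares loss plus the concave remainder of the penalty).
    L = np.linalg.norm(X, 2) ** 2 / n + 1.0 / gamma
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n - mcp_concave_grad(beta, lam, gamma)
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```

With restricted strong convexity of the loss dominating the nonconvexity parameter 1/γ of MCP, this iteration contracts geometrically toward the statistical neighborhood of the truth, matching the O(log(1/ε_stat)) rate quoted above.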

