Connecting Optimization and Regularization Paths

2020-02-14

Abstract 

We study the implicit regularization properties of optimization techniques by explicitly connecting their optimization paths to the regularization paths of "corresponding" regularized problems. This surprising connection shows that iterates of optimization techniques such as gradient descent and mirror descent are pointwise close to solutions of appropriately regularized objectives. While such a tight connection between optimization and regularization is of independent intellectual interest, it also has important implications for machine learning: we can port results from regularized estimators to optimization, and vice versa. We investigate one key consequence, which borrows from the well-studied analysis of regularized estimators, to obtain tight excess risk bounds for the iterates generated by optimization techniques.
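As a concrete illustration of the pointwise correspondence the abstract describes, consider least-squares regression: the gradient-descent iterate after t steps with step size η is known to track the ridge solution at regularization level λ ≈ 1/(ηt). The Python sketch below checks this numerically; the synthetic data, the step size `eta`, and the λ = 1/(ηt) matching rule are illustrative choices for this demo, not code or constants taken from the paper.

```python
# Minimal sketch: compare gradient-descent iterates on least squares
# to ridge solutions along the matching path lambda = 1 / (eta * t).
# All problem sizes and constants here are hypothetical demo choices.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Step size below 1/L, where L is the smoothness constant of the loss.
eta = 0.5 / np.linalg.eigvalsh(X.T @ X / n).max()

def ridge_solution(lam):
    # argmin_w (1/2n)||Xw - y||^2 + (lam/2)||w||^2, in closed form.
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

w = np.zeros(d)
for t in range(1, 501):
    w -= eta * (X.T @ (X @ w - y) / n)  # one gradient-descent step
    if t % 100 == 0:
        lam = 1.0 / (eta * t)           # matching point on the regularization path
        gap = np.linalg.norm(w - ridge_solution(lam))
        print(f"t={t:4d}  lambda={lam:.4f}  ||w_t - w_ridge(lambda)|| = {gap:.4f}")
```

As t grows, λ = 1/(ηt) sweeps toward zero, so the optimization path traces the ridge regularization path from heavy to light regularization, which is the sense in which iterates are "pointwise close" to regularized solutions.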

