
Approximate Newton Methods and Their Local Convergence

2020-03-10

Abstract

Many machine learning models are formulated as optimization problems, so solving large-scale optimization problems efficiently is important in big data applications. Recently, stochastic second-order methods have attracted much attention because of their low cost per iteration; they rectify a weakness of the ordinary Newton method, which enjoys a fast convergence rate but suffers a high cost at each iteration. However, the convergence properties of these methods are still not well understood, and several important gaps remain between the current convergence theory and the performance observed in real applications. In this paper, we aim to fill these gaps. We propose a unifying framework for analyzing the local convergence properties of second-order methods, and based on this framework our theoretical analysis matches the performance observed in real applications.
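To make the per-iteration trade-off concrete, below is a minimal sketch of a generic sub-sampled Newton update for L2-regularized logistic regression: the gradient uses all samples while the Hessian is estimated from a random subset, which is cheaper than the ordinary Newton method. The function `subsampled_newton_step`, its parameters, and the synthetic data are illustrative assumptions, not the specific algorithms analyzed in the paper.

```python
# A minimal sketch (not the paper's exact algorithm) of one sub-sampled Newton
# step for L2-regularized logistic regression: full gradient, sub-sampled Hessian.
import numpy as np

def subsampled_newton_step(w, X, y, lam=1e-3, sample_size=256, rng=None):
    """One approximate Newton update w <- w - H_S^{-1} g.

    X: (n, d) features, y: (n,) labels in {0, 1}, w: (d,) parameters.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Full gradient of the regularized logistic loss.
    p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
    g = X.T @ (p - y) / n + lam * w

    # Hessian approximated on a random subset S of the data.
    idx = rng.choice(n, size=min(sample_size, n), replace=False)
    Xs, ps = X[idx], p[idx]
    D = ps * (1.0 - ps)                     # per-sample curvature weights
    H = (Xs * D[:, None]).T @ Xs / len(idx) + lam * np.eye(d)

    return w - np.linalg.solve(H, g)

# Usage: iterate the step on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 20))
y = (X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(5000) > 0).astype(float)
w = np.zeros(20)
for _ in range(10):
    w = subsampled_newton_step(w, X, y, rng=rng)
```

Solving the d-by-d system with the sub-sampled Hessian keeps the per-iteration cost roughly O(s·d² + d³) for a subset of size s, instead of O(n·d² + d³) for the exact Newton step, which is the efficiency gain the abstract refers to.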
