
Proximal Newton-type methods for convex optimization


Abstract

We seek to solve convex optimization problems in composite form: minimize f(x) := g(x) + h(x), where g is convex and continuously differentiable and h is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. We prove such methods are globally convergent and achieve superlinear rates of convergence in the vicinity of an optimal solution. We also demonstrate the performance of these methods using problems of relevance in machine learning and statistics.
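To make the setup concrete, here is a minimal Python sketch of a proximal Newton-type iteration for the lasso instance g(x) = 0.5*||Ax - b||^2 and h(x) = lam*||x||_1. The function names (prox_newton_lasso, soft_threshold), the inner proximal-gradient solver, and the iteration counts are illustrative assumptions, not the paper's implementation; in particular, the unit step size omits the line search the paper uses to guarantee global convergence.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal mapping of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_newton_lasso(A, b, lam, n_outer=20, n_inner=50):
    """Sketch of a proximal Newton method for
        min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Each outer iteration approximately minimizes the local quadratic model
        grad^T d + 0.5 * d^T H d + lam*||x + d||_1
    over the step d, with H = A^T A (the exact Hessian of g here),
    using proximal gradient steps on the model as a simple inner solver.
    """
    n = A.shape[1]
    x = np.zeros(n)
    H = A.T @ A
    # Lipschitz constant of the model's smooth gradient (guard against 0).
    L = max(np.linalg.eigvalsh(H).max(), 1e-12)
    for _ in range(n_outer):
        grad = A.T @ (A @ x - b)
        d = np.zeros(n)
        for _ in range(n_inner):
            model_grad = grad + H @ d          # gradient of the quadratic model
            z = x + d - model_grad / L
            d = soft_threshold(z, lam / L) - x  # prox step in the d variable
        x = x + d  # unit step; the paper adds a line search for global convergence
    return x
```

For example, x = prox_newton_lasso(A, b, lam=0.1) recovers a sparse solution. Because H is the exact Hessian of g here, each outer iteration is a full proximal Newton step; quasi-Newton approximations of H, as considered in the paper, would fit the same template.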


