
Natasha: Faster Non-Convex Stochastic Optimization via Strongly Non-Convex Parameter

2020-03-09

Abstract

Given a non-convex function f(x) that is an average of n smooth functions, we design stochastic first-order methods to find its approximate stationary points. The performance of our new methods depends on the smallest (negative) eigenvalue -σ of the Hessian. This parameter σ captures how strongly non-convex f(x) is, and is analogous to the strong convexity parameter for convex optimization. At least in theory, our methods outperform known results for a range of values of σ, and can also be used to find approximate local minima. Our result implies an interesting dichotomy: there exists a threshold σ₀ so that the (currently) fastest methods for σ > σ₀ and for σ < σ₀ have different behaviors: the former scales with n^{2/3} and the latter scales with n^{3/4}.
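To make the quantities in the abstract concrete, here is a minimal sketch (not the paper's Natasha algorithm) on an assumed toy finite-sum objective f_i(x) = (λ/2)‖x‖² + cos(wᵢᵀx): it computes a global bound on the strongly non-convex parameter σ from the Hessian, then runs plain SGD as a baseline stochastic first-order method until it reaches an ε-approximate stationary point, i.e. ‖∇f(x)‖ ≤ ε. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
lam = 0.5                          # weight of the convex quadratic part
W = rng.standard_normal((n, d))    # one frequency vector w_i per component

# Toy objective (an assumption, not from the paper):
#   f_i(x) = (lam/2)*||x||^2 + cos(w_i . x),  f = (1/n) * sum_i f_i.
# Hessian: H(x) = lam*I - (1/n) * sum_i cos(w_i . x) * w_i w_i^T.
# Since cos(.) <= 1 and w_i w_i^T is PSD, H(x) >= lam*I - (1/n) W^T W,
# so sigma (how strongly non-convex f is) obeys
#   sigma <= max(0, lambda_max((1/n) W^T W) - lam).
G = W.T @ W / n
sigma_bound = max(0.0, np.linalg.eigvalsh(G).max() - lam)
print(f"strongly non-convex parameter: sigma <= {sigma_bound:.3f}")

def stoch_grad(x):
    """Unbiased stochastic gradient: gradient of one random component f_i."""
    i = rng.integers(n)
    return lam * x - np.sin(W[i] @ x) * W[i]

def full_grad(x):
    """Exact gradient of f, used only to test the stopping criterion."""
    return lam * x - (np.sin(W @ x)[:, None] * W).mean(axis=0)

# Plain SGD with a decaying step size; Natasha improves on this rate,
# but the goal is the same: find x with ||grad f(x)|| <= eps.
x, eps = rng.standard_normal(d), 0.1
for t in range(1, 100_001):
    if np.linalg.norm(full_grad(x)) <= eps:
        print(f"eps-approximate stationary point after {t} steps")
        break
    x -= (0.5 / t**0.6) * stoch_grad(x)
else:
    print(f"final ||grad f(x)|| = {np.linalg.norm(full_grad(x)):.3f}")
```

Because f here is bounded below (the quadratic term dominates the cosine), SGD settles near a stationary point; the dichotomy in the abstract concerns how the cost of reaching such a point scales with n on either side of the threshold σ₀.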

