
SDCA without Duality, Regularization, and Individual Convexity

2020-03-06

Abstract

Stochastic Dual Coordinate Ascent (SDCA) is a popular method for solving regularized loss minimization problems with convex losses. We describe variants of SDCA that do not require explicit regularization and do not rely on duality. We prove linear convergence rates even when individual loss functions are non-convex, as long as the expected loss is strongly convex.
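The dual-free variant described in the abstract can be sketched as follows: instead of conjugate (dual) functions, each example keeps a pseudo-dual vector alpha_i, the primal iterate is maintained as w = (1/(lam*n)) * sum_i alpha_i, and both are updated from the raw gradient of a single loss. This is a minimal illustration on ridge regression, not the paper's implementation; the function name, step size `eta`, and epoch count are illustrative choices.

```python
import numpy as np

def dual_free_sdca(grad_phi, n, dim, lam, eta, epochs, seed=0):
    """Dual-free SDCA sketch: minimize (1/n) sum_i phi_i(w) + (lam/2)||w||^2
    using pseudo-dual vectors alpha_i, with no conjugate functions."""
    rng = np.random.default_rng(seed)
    alpha = np.zeros((n, dim))   # pseudo-dual variables, one per example
    w = np.zeros(dim)            # invariant: w = (1/(lam*n)) * alpha.sum(axis=0)
    for _ in range(epochs * n):
        i = rng.integers(n)
        # residual of the optimality condition alpha_i = -grad phi_i(w);
        # its expectation over i equals the full gradient of the objective
        v = grad_phi(i, w) + alpha[i]
        alpha[i] -= eta * lam * n * v   # pseudo-dual step
        w -= eta * v                    # matching primal step keeps the invariant
    return w

# usage: ridge regression with phi_i(w) = 0.5 * (x_i @ w - y_i)**2,
# so each gradient is x_i * (x_i @ w - y_i)
rng = np.random.default_rng(1)
n, dim, lam = 50, 5, 0.1
X = rng.normal(size=(n, dim))
y = X @ rng.normal(size=dim)
grad = lambda i, w: X[i] * (X[i] @ w - y[i])
w = dual_free_sdca(grad, n, dim, lam=lam, eta=0.005, epochs=500)
# closed-form ridge solution for comparison
w_star = np.linalg.solve(X.T @ X / n + lam * np.eye(dim), X.T @ y / n)
```

At a fixed point every residual `v` vanishes, which forces `lam * w = -mean_i(grad phi_i(w))`, i.e. exact stationarity of the regularized objective; because `alpha[i]` tracks `-grad phi_i(w)` the update variance shrinks to zero, which is what allows a constant step size and a linear rate.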
