
Level-Set Methods for Finite-Sum Constrained Convex Optimization

2020-03-19

Abstract

We consider constrained optimization where the objective function and the constraints are defined as summations of finitely many loss functions. This model has applications in machine learning such as Neyman-Pearson classification. We consider two level-set methods to solve this class of problems: an existing inexact Newton method and a feasible level-set method. To update the level parameter towards optimality, both methods require an oracle that generates upper and lower bounds as well as an affine minorant of the level function. To construct the desired oracle, we reformulate the level function as the value of a saddle-point problem using the conjugates and perspectives of the loss functions. A stochastic variance-reduced gradient method with a special Bregman divergence is then proposed as the oracle for solving that saddle-point problem. The special divergence ensures that the proximal mapping in each iteration can be solved in closed form. The total complexity of both level-set methods using the proposed oracle is analyzed.
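To make the level-set idea concrete, here is a minimal sketch (not the paper's algorithm) of the generic scheme: the constrained problem min f0(x) s.t. f1(x) ≤ 0 is reduced to root-finding on the level function L(r) = min_x max(f0(x) − r, f1(x)), which is positive below the optimal value and non-positive at or above it. The toy problem, the grid-based inner minimization, and the bisection update on the level parameter are all illustrative assumptions; the paper instead updates the level parameter via an inexact Newton or feasible scheme driven by an oracle's upper/lower bounds.

```python
# Hedged sketch of a level-set method via bisection on the level function.
# Illustrative problem (not from the paper): min x^2 s.t. 1 - x <= 0,
# whose optimal value is f* = 1 (attained at x = 1).

def level_fn(r, xs):
    # L(r) = min_x max(f0(x) - r, f1(x)), approximated on a fixed grid.
    return min(max(x * x - r, 1.0 - x) for x in xs)

def level_set_bisection(lo=0.0, hi=10.0, tol=1e-6):
    xs = [i * 1e-3 for i in range(-5000, 5001)]  # grid over [-5, 5]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if level_fn(mid, xs) > 0:  # L(r) > 0  =>  r is below the optimal value
            lo = mid
        else:                      # L(r) <= 0 =>  r is at or above it
            hi = mid
    return 0.5 * (lo + hi)

print(round(level_set_bisection(), 3))  # ≈ 1.0, the optimal value
```

In the paper's setting, the expensive part is evaluating L(r); the proposed stochastic oracle replaces the exact inner minimization here with upper and lower bounds plus an affine minorant, which is what the Newton-type level update consumes.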
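As an illustration of why a well-chosen Bregman divergence can make the proximal mapping closed-form (the abstract's claim; the specific divergence below is a standard textbook example, not necessarily the one used in the paper): with the negative-entropy divergence over the probability simplex, the mirror-descent proximal step reduces to a multiplicative, softmax-style update.

```python
import math

def entropy_prox(y, g, eta):
    # Proximal step with the KL (negative-entropy) Bregman divergence:
    #   argmin_{p in simplex} <g, p> + (1/eta) * KL(p || y)
    # has the closed form p_i proportional to y_i * exp(-eta * g_i).
    w = [yi * math.exp(-eta * gi) for yi, gi in zip(y, g)]
    s = sum(w)
    return [wi / s for wi in w]

# The update stays on the simplex and shifts mass away from
# coordinates with large gradient entries.
p = entropy_prox([0.5, 0.5], [1.0, 0.0], 1.0)
```

A Euclidean proximal step over the simplex would instead require a projection with no elementary closed form, which is the kind of per-iteration cost the special divergence in the paper is designed to avoid.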

