UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization

2020-02-20

Abstract

We propose a novel adaptive, accelerated algorithm for the stochastic constrained convex optimization setting. Our method, which is inspired by the Mirror-Prox method, simultaneously achieves the optimal rates for smooth and non-smooth problems with either deterministic or stochastic first-order oracles. This is done without any prior knowledge of the smoothness or the noise properties of the problem. To the best of our knowledge, this is the first adaptive, unified algorithm that achieves the optimal rates in the constrained setting. We demonstrate the practical performance of our framework through extensive numerical experiments.
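The abstract describes a Mirror-Prox-inspired, adaptive extra-gradient scheme for constrained problems. The sketch below illustrates that general recipe only: projected extra-gradient steps, an AdaGrad-style step size built from observed gradient differences, and weighted iterate averaging. The projection set (an L2 ball), the step-size constant `D`, and the function names are illustrative assumptions, not the paper's exact UniXGrad update.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    # Euclidean projection onto an L2 ball; stands in for a generic
    # constraint set (any projection / mirror map could be used here).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def adaptive_extra_gradient(grad, x0, num_iters=1000, radius=1.0, D=2.0):
    """Illustrative adaptive extra-gradient (Mirror-Prox style) loop.

    `grad` returns a (possibly stochastic) gradient estimate. The step
    size adapts to the observed gradient differences, so no smoothness
    or noise parameters are needed up front. This is a simplified
    sketch under the stated assumptions, not the exact UniXGrad method.
    """
    y = x0.copy()                          # anchor iterate
    accum = 0.0                            # running sum for the adaptive step size
    avg, weight_sum = np.zeros_like(x0), 0.0
    for t in range(1, num_iters + 1):
        eta = D / np.sqrt(1.0 + accum)     # AdaGrad-style adaptive step size
        g_y = grad(y)
        x = project_l2_ball(y - eta * g_y, radius)   # extrapolation step
        g_x = grad(x)
        y = project_l2_ball(y - eta * g_x, radius)   # update step
        accum += np.linalg.norm(g_x - g_y) ** 2      # adapt to observed variation
        avg += t * x                                 # weighted averaging of the iterates
        weight_sum += t
    return avg / weight_sum
```

Under these assumptions, the step size shrinks only as fast as the observed gradient variation demands, which is one way a single parameter-free loop can interpolate between the smooth, non-smooth, and noisy regimes the abstract refers to.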
