Complexity of Highly Parallel Non-Smooth Convex Optimization

2020-02-23

Abstract

A landmark result of non-smooth convex optimization is that gradient descent is an optimal algorithm whenever the number of computed gradients is smaller than the dimension d. In this paper we study the extension of this result to the parallel optimization setting. Namely we consider optimization algorithms interacting with a highly parallel gradient oracle, that is one that can answer poly(d) gradient queries in parallel. We show that in this case gradient descent is optimal only up to Õ(√d) rounds of interactions with the oracle. The lower bound improves upon a decades-old construction by Nemirovski, which proves optimality only up to d^{1/3} rounds (as recently observed by Balkanski and Singer), and the suboptimality of gradient descent after √d rounds was already observed by Duchi, Bartlett and Wainwright. In the latter regime we propose a new method with improved complexity, which we conjecture to be optimal. The analysis of this new method is based upon a generalized version of the recent results on optimal acceleration for highly smooth convex optimization.
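
To make the oracle model concrete, here is a minimal sketch (not from the paper; the function names and the toy objective are illustrative) of the round-based interaction: each round, the algorithm submits a batch of up to poly(d) query points to the oracle, and complexity is counted in rounds. Plain subgradient descent uses a batch of size one per round and, for a 1-Lipschitz objective, achieves the O(1/√T) error rate that, per the abstract, no parallel algorithm can beat until roughly √d rounds.

```python
import numpy as np

# Illustrative sketch of the round-based oracle model described in the
# abstract; names and the toy objective are ours, not the paper's.

def parallel_subgradient_oracle(subgrad, queries):
    """Answer a whole batch (up to poly(d) points) in a single round."""
    return [subgrad(x) for x in queries]

def subgradient_descent(subgrad, x0, T):
    """Plain subgradient descent: a batch of size one per round.

    For a 1-Lipschitz convex f with ||x0 - x*|| <= R, the averaged
    iterate has suboptimality O(R / sqrt(T)).
    """
    R = np.linalg.norm(x0)
    eta = R / np.sqrt(T)              # standard fixed step size
    x = x0.copy()
    iterates = [x.copy()]
    for _ in range(T):
        (g,) = parallel_subgradient_oracle(subgrad, [x])  # batch of size 1
        x = x - eta * g
        iterates.append(x.copy())
    return np.mean(iterates, axis=0)  # averaged iterate

# Toy non-smooth objective f(x) = max_i |x_i|: 1-Lipschitz, minimized at 0.
def linf_subgrad(x):
    g = np.zeros_like(x)
    i = int(np.argmax(np.abs(x)))
    g[i] = np.sign(x[i])              # a valid subgradient of the max
    return g

d, T = 100, 400
x_bar = subgradient_descent(linf_subgrad, np.ones(d), T)
print(np.max(np.abs(x_bar)))          # error decays like R / sqrt(T)
```

For intuition on the √d threshold: accelerated randomized smoothing in the style of Duchi, Bartlett and Wainwright converges at a rate of roughly d^{1/4}/T, which overtakes gradient descent's 1/√T precisely when d^{1/4}/T < 1/√T, i.e. when T > √d.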

Previous: Thresholding Bandit with Optimal Aggregate Regret

Next: Optimal Decision Tree with Noisy Outcomes

