
Zeroth-order (Non)-Convex Stochastic Optimization via Conditional Gradient and Gradient Updates

2020-02-17

Abstract 

In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization. Specifically, we propose generalizations of the conditional gradient algorithm achieving rates similar to the standard stochastic gradient algorithm using only zeroth-order information. Furthermore, under a structural sparsity assumption, we first illustrate an implicit regularization phenomenon where the standard stochastic gradient algorithm with zeroth-order information adapts to the sparsity of the problem at hand by just varying the stepsize. Next, we propose a truncated stochastic gradient algorithm with zeroth-order information, whose rate depends only poly-logarithmically on the dimensionality.
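To make the idea of "using only zeroth-order information" concrete, below is a minimal sketch (not taken from the paper) of the classical two-point Gaussian-smoothing gradient estimator that underlies methods of this kind, plugged into a plain stochastic gradient loop. The objective `f`, the smoothing radius `mu`, and the step size `eta` are illustrative assumptions rather than the paper's specific choices.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate via Gaussian smoothing:
    g = (f(x + mu*u) - f(x)) / mu * u, with u ~ N(0, I).
    Only function evaluations of f are used, no derivatives."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def zo_sgd(f, x0, eta=0.01, iters=2000, mu=1e-4, seed=0):
    """Stochastic gradient descent driven entirely by zeroth-order estimates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x -= eta * zo_gradient(f, x, mu=mu, rng=rng)
    return x

if __name__ == "__main__":
    # Toy example: minimize the quadratic f(x) = ||x - 1||^2 in 10 dimensions.
    f = lambda x: np.sum((x - 1.0) ** 2)
    x_final = zo_sgd(f, x0=np.zeros(10))
    print("final iterate:", np.round(x_final, 3))
```

In the same spirit, the conditional gradient (Frank-Wolfe) and truncated variants discussed in the abstract would replace the plain update `x -= eta * g` with a linear-minimization or truncation step while keeping the same function-value-only gradient estimate.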

Previous: The Lingering of Gradients: How to Reuse Gradients over Time

Next: Variational Learning on Aggregate Outputs with Gaussian Processes
