
Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

2020-01-13

Abstract

We consider derivative-free algorithms for stochastic optimization problems that use only noisy function values rather than gradients, analyzing their finite-sample convergence rates. We show that if pairs of function values are available, algorithms that use gradient estimates based on random perturbations suffer a factor of at most √d in convergence rate over traditional stochastic gradient methods, where d is the problem dimension. We complement our algorithmic development with information-theoretic lower bounds on the minimax convergence rate of such problems, which show that our bounds are sharp with respect to all problem-dependent quantities: they cannot be improved by more than constant factors.
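As a rough illustration of the kind of two-point scheme the abstract refers to, the sketch below runs stochastic gradient descent driven by a randomized finite-difference gradient estimate built from pairs of noisy function values. The function names, the 1/√t step-size schedule, and the smoothing parameter delta are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta, rng):
    """Estimate the gradient of f at x from one pair of noisy function
    values, probed along a random direction (illustrative sketch)."""
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)              # random direction on the unit sphere
    # Central difference along u, rescaled by d so the estimate is
    # (approximately) unbiased for the gradient of a smoothed version of f.
    return d * (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

def zero_order_sgd(f, x0, steps=1000, step_size=0.1, delta=1e-3, seed=0):
    """Plain SGD where each step uses only two noisy function evaluations."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for t in range(1, steps + 1):
        g = two_point_gradient_estimate(f, x, delta, rng)
        x -= (step_size / np.sqrt(t)) * g   # diminishing step sizes
    return x

if __name__ == "__main__":
    # Noisy quadratic: only function values are observable, no gradients.
    noise_rng = np.random.default_rng(1)
    target = np.ones(10)
    noisy_f = lambda x: np.sum((x - target) ** 2) + 0.01 * noise_rng.standard_normal()
    print(np.round(zero_order_sgd(noisy_f, x0=np.zeros(10)), 2))
```

Under this kind of scheme, the √d factor in the abstract reflects the extra variance introduced by probing the objective along a single random direction per iteration instead of observing the full gradient.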

