Optimization, Fast and Slow: Optimally Switching between Local and Bayesian Optimization

2020-03-16

Abstract

We develop the first Bayesian Optimization algorithm, BLOSSOM, which selects between multiple alternative acquisition functions and traditional local optimization at each step. This is combined with a novel stopping condition based on expected regret. This pairing allows us to obtain the best characteristics of both local and Bayesian optimization, making efficient use of function evaluations while yielding superior convergence to the global minimum on a selection of optimization problems, and also halting optimization once a principled and intuitive stopping condition has been fulfilled.
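To make the idea in the abstract concrete, the sketch below alternates between a global Bayesian-optimization proposal and a local refinement step, and halts when an expected-regret-style quantity becomes small. This is only an illustration under assumptions: the expected-improvement acquisition, the EI-threshold switching rule, the regret proxy, and all names and constants here are placeholders, not the BLOSSOM algorithm or its acquisition-selection mechanism from the paper.

```python
# Minimal illustrative sketch (not the paper's method): switch between a
# Bayesian-optimization step and a local refinement step, stopping when a
# crude expected-regret proxy falls below a threshold.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(gp, X_cand, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)


def optimize(f, bounds, n_init=5, n_iter=30, regret_tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([f(x) for x in X])

    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                      normalize_y=True).fit(X, y)
        y_best = y.min()

        # Global proposal: score random candidates by expected improvement.
        cand = rng.uniform(lo, hi, size=(2048, len(bounds)))
        ei = expected_improvement(gp, cand, y_best)

        # Stopping condition: if even the best candidate promises almost no
        # improvement, the expected regret of stopping now is small.
        if ei.max() < regret_tol:
            break

        if ei.max() > 10 * regret_tol:
            # "Slow" Bayesian step: evaluate the EI-maximizing candidate.
            x_next = cand[np.argmax(ei)]
        else:
            # "Fast" local step: refine around the incumbent minimum.
            # (Intermediate evaluations are not recorded here, a
            # simplification a real implementation would avoid.)
            res = minimize(f, X[np.argmin(y)], method="Nelder-Mead",
                           options={"maxfev": 20})
            x_next = np.clip(res.x, lo, hi)

        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))

    return X[np.argmin(y)], y.min()


if __name__ == "__main__":
    bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
    x_min, f_min = optimize(lambda x: float(np.sum(x ** 2)), bounds)
    print(x_min, f_min)
```

The EI-magnitude threshold above is a stand-in for the paper's principled choice between acquisition functions and local optimization; it is included only to show where such a switching decision and regret-based stopping test would sit in the optimization loop.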
