
Adversarially Robust Optimization with Gaussian Processes


Abstract 

In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustness requirement: The returned point may be perturbed by an adversary, and we require the function value to remain as high as possible even after this perturbation. This problem is motivated by settings in which the underlying functions during optimization and implementation stages are different, or when one is interested in finding an entire region of good inputs rather than only a single point. We show that standard GP optimization algorithms do not exhibit the desired robustness properties, and provide a novel confidence-bound based algorithm StableOpt for this purpose. We rigorously establish the required number of samples for StableOpt to find a near-optimal point, and we complement this guarantee with an algorithm-independent lower bound. We experimentally demonstrate several potential applications of interest using real-world data sets, and we show that StableOpt consistently succeeds in finding a stable maximizer where several baseline methods fail.
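To make the max-min idea behind the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a StableOpt-style selection rule on a discrete 1-D grid: at each round, choose the candidate whose worst-case upper confidence bound over an adversarial perturbation ball is largest, then query the perturbation within that ball that minimizes the lower confidence bound. The objective `f`, the perturbation radius `eps`, the confidence width `beta`, and the use of scikit-learn's GP regressor as the surrogate are all illustrative assumptions, not details taken from the paper.

```python
# Sketch of a StableOpt-style robust GP optimization loop (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    # Placeholder objective standing in for the unknown function.
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

def worst_case(bound, x0, grid, eps):
    # Minimum of a confidence bound over the eps-ball around x0 (the adversary's move).
    return bound[np.abs(grid - x0) <= eps].min()

grid = np.linspace(0.0, 1.0, 201)   # candidate inputs
eps, beta = 0.05, 2.0               # perturbation radius, confidence-width parameter
X, y = [0.2], [f(0.2)]              # initial observation

for t in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-3)
    gp.fit(np.array(X).reshape(-1, 1), np.array(y))
    mu, sd = gp.predict(grid.reshape(-1, 1), return_std=True)
    ucb, lcb = mu + beta * sd, mu - beta * sd

    # x_t: maximize the worst-case upper confidence bound over the perturbation set.
    x_t = grid[np.argmax([worst_case(ucb, x0, grid, eps) for x0 in grid])]

    # delta_t: adversarial perturbation of x_t minimizing the lower confidence bound.
    mask = np.abs(grid - x_t) <= eps
    x_query = grid[mask][np.argmin(lcb[mask])]

    X.append(x_query)
    y.append(f(x_query) + 0.01 * np.random.randn())

# Report the point with the best worst-case lower confidence bound as the stable maximizer.
best = grid[np.argmax([worst_case(lcb, x0, grid, eps) for x0 in grid])]
print("robust choice:", best)
```

In contrast to a standard UCB rule, which would pick the single point with the highest upper bound, this rule scores each candidate by the worst value its confidence bounds allow within the perturbation ball, which is what drives the method toward broad, stable regions rather than sharp peaks.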

