Bayesian Optimization under Heavy-tailed Payoffs

2020-02-19

Abstract

We consider black-box optimization of an unknown function in the nonparametric Gaussian process setting when the noise in the observed function values can be heavy-tailed. This is in contrast to existing literature, which typically assumes sub-Gaussian noise distributions for queries. Under the assumption that the unknown function belongs to the Reproducing Kernel Hilbert Space (RKHS) induced by a kernel, we first show that an adaptation of the well-known GP-UCB algorithm with reward truncation enjoys sublinear $\tilde{O}(T^{\frac{2+\alpha}{2(1+\alpha)}})$ regret even with only the $(1+\alpha)$-th moments, $\alpha \in (0, 1]$, of the reward distribution being bounded ($\tilde{O}$ hides logarithmic factors). However, for the common squared exponential (SE) and Matérn kernels, this is seen to be significantly larger than a fundamental $\Omega(T^{\frac{1}{1+\alpha}})$ lower bound on regret. We resolve this gap by developing novel Bayesian optimization algorithms, based on kernel approximation techniques, with regret bounds matching the lower bound in order for the SE kernel. We numerically benchmark the algorithms on environments based on both synthetic models and real-world data sets.
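The reward-truncation idea mentioned in the abstract can be illustrated with a toy single round of GP-UCB: past rewards are clipped at a threshold that grows with the round index, so heavy-tailed outliers cannot dominate the posterior, and the standard upper-confidence rule is then applied to the truncated data. This is a minimal sketch under assumed simplifications — the threshold schedule `b_t`, the SE kernel with unit parameters, and the confidence weight `beta` below are illustrative choices, not the paper's actual algorithm or constants.

```python
import numpy as np

def se_kernel(A, B, ls=1.0):
    # Squared-exponential kernel k(x, x') = exp(-(x - x')^2 / (2 * ls^2))
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2 * ls ** 2))

def truncated_gp_ucb_step(X, y, candidates, t, alpha=1.0, lam=1.0, beta=2.0):
    """One round of GP-UCB on truncated rewards (illustrative form).

    X, y       : past query points and observed (possibly heavy-tailed) rewards
    candidates : finite grid of points to choose the next query from
    t          : current round index; alpha in (0, 1] bounds the (1+alpha)-th moment
    """
    # Illustrative truncation level growing with t; outliers beyond it are clipped.
    b_t = t ** (1.0 / (2 * (1 + alpha)))
    y_trunc = np.clip(y, -b_t, b_t)

    # Standard GP posterior on the truncated data (prior variance k(x, x) = 1).
    K = se_kernel(X, X) + lam * np.eye(len(X))
    K_inv = np.linalg.inv(K)
    k_star = se_kernel(candidates, X)                     # shape (m, n)
    mu = k_star @ K_inv @ y_trunc
    var = 1.0 - np.einsum('ij,jk,ik->i', k_star, K_inv, k_star)

    # Upper-confidence rule: pick the candidate maximizing mean + beta * stddev.
    ucb = mu + beta * np.sqrt(np.maximum(var, 0.0))
    return candidates[np.argmax(ucb)], y_trunc
```

A single heavy-tailed outlier (e.g. a reward of 10 among values near 0) is clipped to the threshold before the posterior is formed, which is what keeps the confidence bounds, and hence the regret, under control.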

