
Fast learning rates with heavy-tailed losses


Abstract 

We study fast learning rates when the losses are not necessarily bounded and may have a distribution with heavy tails. To enable such analyses, we introduce two new conditions: (i) the envelope function $\sup_{f \in \mathcal{F}} |f|$, where $f$ is the loss function and $\mathcal{F}$ is the hypothesis class, exists and is $L^r$-integrable, and (ii) the loss satisfies the multi-scale Bernstein's condition on $\mathcal{F}$. Under these assumptions, we prove that a learning rate faster than $O(n^{-1/2})$ can be obtained and, depending on $r$ and the multi-scale Bernstein's powers, can be arbitrarily close to $O(n^{-1})$. We then verify these assumptions and derive fast learning rates for the problem of vector quantization by k-means clustering with heavy-tailed distributions. The analyses enable us to obtain novel learning rates that extend and complement existing results in the literature from both theoretical and practical viewpoints.
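
As an informal sketch of what the two conditions say (the exact scales, constants, and resulting exponents are as defined in the paper; the symbols $f^{*}$, $B$, and $\beta$ below are standard notation used here only for illustration):

(i) the $L^r$-integrable envelope condition requires the pointwise supremum of the losses to have a finite $r$-th moment,
\[
\mathbb{E}\Big[\big(\sup_{f \in \mathcal{F}} |f|\big)^{r}\Big] < \infty ;
\]
(ii) a Bernstein-type condition controls the variance of the excess loss by a power of its mean; in its familiar single-scale form, with risk minimizer $f^{*}$ and constants $B > 0$, $0 < \beta \le 1$,
\[
\mathbb{E}\big[(f - f^{*})^{2}\big] \;\le\; B \,\big(\mathbb{E}[f - f^{*}]\big)^{\beta}
\qquad \text{for all } f \in \mathcal{F}.
\]
Roughly speaking, the multi-scale variant allows the power to differ across scales of the excess risk $\mathbb{E}[f - f^{*}]$; combined with (i), this is what moves the rate from $O(n^{-1/2})$ toward $O(n^{-1})$.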
