
Generalization Bounds for Uniformly Stable Algorithms

2020-02-13

Abstract

Uniform stability of a learning algorithm is a classical notion of algorithmic stability, introduced to derive high-probability bounds on the generalization error (Bousquet and Elisseeff, 2002). Specifically, for a loss function with range bounded in [0, 1], the generalization error of a γ-uniformly stable learning algorithm on n samples is known to be within O((γ + 1/n)·√(n·log(1/δ))) of the empirical error with probability at least 1 − δ. Unfortunately, this bound does not lead to meaningful generalization bounds in many common settings where γ ≥ 1/√n. At the same time, the bound is known to be tight only when γ = O(1/n). We substantially improve generalization bounds for uniformly stable algorithms without making any additional assumptions. First, we show that the bound in this setting is O(√((γ + 1/n)·log(1/δ))) with probability at least 1 − δ. In addition, we prove a tight bound of O(γ² + 1/n) on the second moment of the estimation error. The best previous bound on the second moment is O(γ + 1/n). Our proofs are based on new analysis techniques, and our results imply substantially stronger generalization guarantees for several well-studied algorithms.
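To see why the improvement matters, the two high-probability bounds from the abstract can be compared numerically. The sketch below suppresses universal constants and picks an illustrative regime (γ = 1/√n, exactly where the abstract says the classical bound stops being meaningful); the function names and chosen values of n and δ are my own, not from the paper.

```python
import math

def classical_bound(gamma, n, delta):
    # Classical (Bousquet-Elisseeff style) high-probability bound,
    # constants suppressed: (gamma + 1/n) * sqrt(n * log(1/delta))
    return (gamma + 1.0 / n) * math.sqrt(n * math.log(1.0 / delta))

def improved_bound(gamma, n, delta):
    # Improved bound from this paper, constants suppressed:
    # sqrt((gamma + 1/n) * log(1/delta))
    return math.sqrt((gamma + 1.0 / n) * math.log(1.0 / delta))

n = 10_000
gamma = 1.0 / math.sqrt(n)  # the regime where the classical bound is vacuous
delta = 0.05

print(classical_bound(gamma, n, delta))  # exceeds 1: vacuous for a [0, 1] loss
print(improved_bound(gamma, n, delta))   # well below 1: still informative
```

With these values the classical expression evaluates to roughly 1.75, i.e. larger than the entire range of the loss, while the improved expression is about 0.17 and therefore still carries information.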

