Algorithmic Stability and Hypothesis Complexity

Abstract

We introduce a notion of algorithmic stability of learning algorithms—that we term argument stability—that captures stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. The main result of the paper bounds the generalization error of any learning algorithm in terms of its argument stability. The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent.
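To make the notion concrete, below is a minimal LaTeX sketch of the standard formulation of uniform argument stability and its link to loss stability. The notation (h_S for the output hypothesis, β(n) for the stability rate, L for the Lipschitz constant of the loss) is assumed from the usual setup; the paper's exact statements and constants may differ.

% Assumed notation: S and S' are samples of size n differing in one
% example, and h_S is the hypothesis the algorithm outputs on S.
% Uniform argument stability with rate beta(n):
\[
  \sup_{S \sim S'} \bigl\| h_S - h_{S'} \bigr\| \;\le\; \beta(n).
\]
% If the loss \ell(h, z) is L-Lipschitz in h with respect to this norm,
% argument stability implies uniform stability of the loss values:
\[
  \sup_{z} \bigl| \ell(h_S, z) - \ell(h_{S'}, z) \bigr|
  \;\le\; L \,\bigl\| h_S - h_{S'} \bigr\|
  \;\le\; L\,\beta(n).
\]

Under this reading, generalization bounds of order L·β(n) follow from the loss-stability route, while the paper's contribution is to derive bounds directly from argument stability via martingale inequalities in the Banach space containing the hypotheses.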
