
Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin*

2020-02-20

Abstract

We study the problem of properly learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning d-dimensional halfspaces on the unit ball within misclassification error α·OPT_γ + ε, where OPT_γ is the optimal γ-margin error rate and α ≥ 1 is the approximation ratio. We give learning algorithms and computational hardness results for this problem, for all values of the approximation ratio α ≥ 1, that are nearly matching for a range of parameters. Specifically, for the natural setting that α is any constant bigger than one, we provide an essentially tight complexity characterization. On the positive side, we give an α = 1.01-approximate proper learner that uses O(1/(εγ²)) samples (which is optimal) and runs in time poly(d/ε)·2^{Õ(1/γ²)}. On the negative side, we show that any constant factor approximate proper learner has runtime poly(d/ε)·2^{(1/γ)^{2−o(1)}}, assuming the Exponential Time Hypothesis.
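To make the central quantity in the abstract concrete, here is a minimal sketch (assuming NumPy; the function name and toy data are hypothetical, not from the paper) of the empirical γ-margin error rate that OPT_γ minimizes: a point counts as an error unless it is classified correctly *with margin* at least γ.

```python
import numpy as np

def margin_error_rate(w, X, y, gamma):
    """Empirical gamma-margin error rate of the halfspace sign(<w, x>).

    A labeled point (x, y) is an error unless y * <w, x> >= gamma.
    Assumes ||w|| = 1 and each row of X lies on the unit ball.
    (Illustrative helper, not the paper's learning algorithm.)
    """
    margins = y * (X @ w)
    return float(np.mean(margins < gamma))

# Toy example: points on the unit circle, labeled by a fixed halfspace.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)
w = np.array([1.0, 0.0])
y = np.sign(X @ w)

# With labels generated by w itself, the zero-margin error is 0,
# and demanding a larger margin can only increase the error rate.
assert margin_error_rate(w, X, y, 0.0) == 0.0
assert margin_error_rate(w, X, y, 0.5) >= margin_error_rate(w, X, y, 0.1)
```

Under this definition, OPT_γ is the minimum of `margin_error_rate` over all unit-norm w (in expectation over the distribution), and the learner's output halfspace is compared against α·OPT_γ + ε at margin zero.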

