But How Does It Work in Theory? Linear SVM with Random Features

2020-02-14

Abstract 

We prove that, under low noise assumptions, the support vector machine with N ≪ m random features (RFSVM) can achieve a learning rate faster than O(1/√m) on a training set with m samples when an optimized feature map is used. Our work extends the previous fast-rate analysis of the random features method from the least squares loss to the 0-1 loss. We also show that the reweighted feature selection method, which approximates the optimized feature map, helps improve the performance of RFSVM in experiments on a synthetic data set.
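As a rough illustration of the setting (not the paper's implementation), the sketch below trains a linear SVM on random Fourier features with N ≪ m, using plain full-batch subgradient descent on the L2-regularized hinge loss. The synthetic data set, the bandwidth `gamma`, the feature count `N`, and the optimization hyperparameters are all assumed choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: two classes separated by a circle,
# so a linear SVM on the raw 2-D inputs cannot fit it well.
m = 400
X = rng.uniform(-1.0, 1.0, size=(m, 2))
y = np.where(np.linalg.norm(X, axis=1) < 0.7, 1.0, -1.0)

# Random Fourier features z(x) = sqrt(2/N) * cos(W^T x + b), which
# approximate an RBF kernel; N << m as in the abstract.
N, gamma = 50, 2.0
W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(2, N))
b = rng.uniform(0.0, 2.0 * np.pi, size=N)
Z = np.sqrt(2.0 / N) * np.cos(X @ W + b)

# Linear SVM in the random-feature space: full-batch subgradient
# descent on the L2-regularized hinge loss.
w = np.zeros(N)
lam, lr = 1e-2, 0.5
for _ in range(500):
    viol = y * (Z @ w) < 1.0                  # margin violators
    grad = lam * w - (Z[viol].T @ y[viol]) / m
    w -= lr * grad

acc = np.mean(np.sign(Z @ w) == y)            # training accuracy of RFSVM
```

The reweighted feature selection method mentioned in the abstract would, roughly, resample or reweight the random directions `W` to approximate the optimized feature map; that step is not shown here.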
