
Sparse Features for PCA-Like Linear Regression

2020-01-08

Abstract

Principal Components Analysis (PCA) is often used as a feature extraction procedure. Given a matrix X ∈ ℝ^{n×d} whose rows represent n data points with respect to d features, the top k right singular vectors of X (the so-called eigenfeatures) are arbitrary linear combinations of all available features. The eigenfeatures are very useful in data analysis, including the regularization of linear regression. Enforcing sparsity on the eigenfeatures, i.e., forcing them to be linear combinations of only a small number of actual features (as opposed to all available features), can improve generalization and the interpretability of the eigenfeatures. We present deterministic and randomized algorithms that construct such sparse eigenfeatures while provably achieving in-sample performance comparable to regularized linear regression. Our algorithms are relatively simple and practically efficient, and we demonstrate their performance on several data sets.
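The setup the abstract describes can be illustrated with a minimal NumPy sketch: extract the top-k right singular vectors of X as dense eigenfeatures, then compare against eigenfeatures forced to use only a few actual features. The sparsification here is simple hard thresholding, an illustrative surrogate only — it is not the paper's deterministic or randomized algorithm, and all variable names and sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n = 100 points, d = 20 features (illustrative sizes).
n, d, k = 100, 20, 3
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Dense PCA eigenfeatures: top-k right singular vectors of X.
# Each column of Vk is a linear combination of ALL d features.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Vk = Vt[:k].T                      # d x k
Z_dense = X @ Vk                   # n x k eigenfeature representation

# Least-squares regression on the k dense eigenfeatures.
w_dense, *_ = np.linalg.lstsq(Z_dense, y, rcond=None)
err_dense = np.linalg.norm(Z_dense @ w_dense - y)

# Naive sparse surrogate (NOT the paper's algorithm): keep only the
# r largest-magnitude coordinates of each eigenfeature, so each one
# combines just r << d actual features.
r = 5
Vk_sparse = np.zeros_like(Vk)
for j in range(k):
    top = np.argsort(np.abs(Vk[:, j]))[-r:]
    Vk_sparse[top, j] = Vk[top, j]
Z_sparse = X @ Vk_sparse
w_sparse, *_ = np.linalg.lstsq(Z_sparse, y, rcond=None)
err_sparse = np.linalg.norm(Z_sparse @ w_sparse - y)
```

Hard thresholding carries no approximation guarantee; the point of the paper is precisely to construct sparse eigenfeatures whose in-sample regression error is provably comparable to the regularized dense case.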
