No penalty no tears: Least squares in high-dimensional linear models

2020-03-06

Abstract

Ordinary least squares (OLS) is the default method for fitting linear models, but it is not applicable to problems whose dimensionality exceeds the sample size. For such problems, we advocate the use of a generalized version of OLS motivated by ridge regression, and propose two novel three-step algorithms involving least squares fitting and hard thresholding. The algorithms are methodologically simple to understand intuitively, computationally easy to implement efficiently, and theoretically appealing for choosing models consistently. Numerical exercises comparing our methods with penalization-based approaches in simulations and data analyses illustrate the great potential of the proposed algorithms.
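The abstract only outlines the recipe, but the three steps compose naturally. Below is a minimal NumPy sketch under stated assumptions: that the generalized OLS estimator is the minimum-norm least-squares solution X^T (X X^T)^{-1} y (the limit of the ridge estimator as its penalty shrinks to zero), that hard thresholding retains the k coefficients largest in magnitude, and that the final step refits ordinary OLS on the selected columns. The function names and the choice of k are illustrative, not taken from the paper.

```python
import numpy as np

def generalized_ols(X, y):
    """Minimum-norm least squares X^T (X X^T)^{-1} y, well defined even when p > n.
    Assumed form of the 'generalized OLS' estimator: the ridge estimator's
    limit as the penalty tends to zero."""
    return X.T @ np.linalg.solve(X @ X.T, y)  # solve the n x n system, not p x p

def hard_threshold(beta, k):
    """Zero out all but the k entries of beta that are largest in magnitude."""
    keep = np.argsort(np.abs(beta))[-k:]
    out = np.zeros_like(beta)
    out[keep] = beta[keep]
    return out

def three_step_ls(X, y, k):
    """Hypothetical three-step sketch: generalized OLS fit, hard thresholding,
    then an ordinary OLS refit restricted to the selected columns."""
    beta = generalized_ols(X, y)                                # step 1: fit
    support = np.flatnonzero(hard_threshold(beta, k))           # step 2: select
    refit, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)   # step 3: refit
    beta_hat = np.zeros(X.shape[1])
    beta_hat[support] = refit
    return beta_hat, support

# Toy usage: n = 50 observations, p = 200 predictors, 5 truly nonzero.
rng = np.random.default_rng(0)
n, p, s = 50, 200, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 3.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat, support = three_step_ls(X, y, k=s)
```

Solving against the n-by-n Gram matrix X X^T keeps the linear-algebra cost at roughly O(n^2 p + n^3) rather than O(p^3), which is what makes the first step feasible when p greatly exceeds n.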
