Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions

Abstract

Consider the following class of learning schemes:

$$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \sum_{j=1}^{n} \ell(x_j^\top \beta; y_j) + \lambda R(\beta), \qquad (1)$$

where $x_j \in \mathbb{R}^p$ and $y_j \in \mathbb{R}$ denote the $j$-th feature vector and response variable, respectively. Let $\ell$ and $R$ be the loss function and regularizer, $\beta \in \mathbb{R}^p$ denote the unknown weights, and $\lambda$ be a regularization parameter. Finding the optimal choice of $\lambda$ is a challenging problem in high-dimensional regimes where both $n$ and $p$ are large. We propose two frameworks to obtain a computationally efficient approximation ALO of the leave-one-out cross-validation (LOOCV) risk for nonsmooth losses and regularizers. Our two frameworks are based on the primal and dual formulations of (1). We prove the equivalence of the two approaches under smoothness conditions. This equivalence enables us to justify the accuracy of both methods under such conditions. We use our approaches to obtain a risk estimate for several standard problems, including generalized LASSO, nuclear norm regularization, and support vector machines. We empirically demonstrate the effectiveness of our results for non-differentiable cases.
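The computational point is easiest to see in a smooth special case. For ridge regression, the LOOCV risk does not require n separate refits: a classical leverage-score identity recovers the exact leave-one-out residuals from a single fit. The sketch below is that textbook ridge shortcut in plain NumPy, not the paper's ALO construction; the function name and synthetic data are illustrative. It shows the kind of single-fit computation that ALO extends to nonsmooth losses and regularizers.

```python
# A minimal sketch (not the paper's ALO formulas): for ridge regression
# (squared loss, R(beta) = ||beta||^2 / 2), the leave-one-out residuals
# have an exact closed form via leverage scores, so the LOOCV risk can be
# computed from one fit instead of n refits. ALO generalizes this kind of
# shortcut to nonsmooth losses and regularizers.
import numpy as np

def ridge_loocv_risk(X, y, lam):
    """Exact LOOCV mean-squared error for ridge regression from one fit.

    Solves (1) with ell(u; y) = (y - u)^2 / 2 and R(beta) = ||beta||^2 / 2,
    then uses y_i - x_i' beta_{-i} = (y_i - x_i' beta) / (1 - H_ii),
    where H = X (X'X + lam I)^{-1} X' is the hat matrix.
    """
    n, p = X.shape
    G = X.T @ X + lam * np.eye(p)       # regularized Gram matrix
    beta = np.linalg.solve(G, X.T @ y)  # full-data ridge solution
    # Diagonal of the hat matrix H_ii = x_i' G^{-1} x_i (leverage scores).
    H_diag = np.einsum("ij,ij->i", X @ np.linalg.inv(G), X)
    loo_resid = (y - X @ beta) / (1.0 - H_diag)
    return np.mean(loo_resid ** 2)

# Tune lambda by scanning the exact LOOCV risk curve on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X @ rng.standard_normal(50) + rng.standard_normal(200)
lams = np.logspace(-2, 2, 20)
risks = [ridge_loocv_risk(X, y, lam) for lam in lams]
print("best lambda:", lams[int(np.argmin(risks))])
```

Each evaluation of the risk costs one fit plus the leverage diagonal, so scanning a grid of regularization parameters stays cheap even when n is large; this is the efficiency that motivates approximating LOOCV in the nonsmooth settings treated in the paper.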

