
Sparse Gaussian Process Regression via Penalization

2020-02-26

Abstract

To handle massive data, a variety of sparse Gaussian Process (GP) methods have been proposed to reduce the computational cost. Many of them essentially map the large dataset onto a small set of basis points. A common approach to learning these basis points is evidence maximization. However, evidence maximization may lead to overfitting and remains computationally expensive. In this paper, we propose a novel sparse GP regression approach, GPLasso, that explicitly represents the trade-off between approximation quality and model sparsity. GPLasso minimizes an ℓ1-penalized KL divergence between the exact and sparse GP posterior processes. Optimizing this convex cost function yields sparse GP parameters. Furthermore, we use incomplete Cholesky factorization to obtain low-rank matrix approximations that speed up the optimization procedure. Experimental results on synthetic and real data demonstrate that, compared with several state-of-the-art sparse GP methods and a direct low-rank matrix approximation method, GPLasso achieves a significantly improved trade-off between prediction accuracy and computational cost.
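Schematically, the objective described above can be written as follows (notation ours, not taken from the paper; w denotes the sparse GP parameters and λ the sparsity weight):

    minimize over w:   KL( p_exact ‖ q_sparse(w) ) + λ ‖w‖₁

The ℓ1 term drives many entries of w to exactly zero, which yields the sparse representation, while the KL term keeps the approximation close to the exact GP posterior.

For the low-rank step, the abstract mentions incomplete Cholesky factorization. The sketch below shows a standard pivoted incomplete Cholesky for a positive semi-definite kernel matrix, the usual way such low-rank approximations are built; the function name, arguments, and stopping tolerance are illustrative, not drawn from the paper.

    import numpy as np

    def incomplete_cholesky(K, max_rank, tol=1e-8):
        # Pivoted incomplete Cholesky of a PSD kernel matrix K (n x n).
        # Returns G of shape (n, k), k <= max_rank, with G @ G.T ~= K.
        # (Illustrative sketch; not the paper's code.)
        n = K.shape[0]
        G = np.zeros((n, max_rank))
        d = np.diag(K).astype(float).copy()   # residual diagonal of K - G G^T
        for k in range(max_rank):
            i = int(np.argmax(d))             # pivot on largest residual
            if d[i] <= tol:                   # remaining error is negligible
                return G[:, :k]
            G[:, k] = (K[:, i] - G[:, :k] @ G[i, :k]) / np.sqrt(d[i])
            d -= G[:, k] ** 2                 # update residual diagonal
        return G

With such a factor G in hand, the n x n kernel matrix is replaced by an n x k factor, so downstream linear algebra that would naively cost O(n^3) can be carried out in O(n k^2), which is the kind of speed-up the abstract refers to.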
