
Speeding Up Latent Variable Gaussian Graphical Model Estimation via Nonconvex Optimization

2020-02-10

Abstract 

We study the estimation of the latent variable Gaussian graphical model (LVGGM), where the precision matrix is the superposition of a sparse matrix and a low-rank matrix. In order to speed up the estimation of the sparse plus low-rank components, we propose a sparsity-constrained maximum likelihood estimator based on matrix factorization, and an efficient alternating gradient descent algorithm with hard thresholding to solve it. Our algorithm is orders of magnitude faster than the convex relaxation based methods for LVGGM. In addition, we prove that our algorithm is guaranteed to linearly converge to the unknown sparse and low-rank components up to the optimal statistical precision. Experiments on both synthetic and genomic data demonstrate the superiority of our algorithm over the state-of-the-art algorithms and corroborate our theory.
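The abstract does not include the algorithm itself, but the idea it describes — factor the precision matrix as Θ = S + ZZᵀ, then alternate gradient steps on the sparse part S (followed by hard thresholding) and on the low-rank factor Z — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the step size, initialization, and thresholding rule (keeping the diagonal plus the s largest off-diagonal entries) are assumptions made here for simplicity.

```python
import numpy as np

def hard_threshold(M, s):
    """Zero all but the s largest-magnitude off-diagonal entries; keep the diagonal.
    (Keeping the diagonal is a simplifying choice made here, so Theta stays well-conditioned.)"""
    D = np.diag(np.diag(M))
    off = M - D
    flat = np.abs(off).ravel()
    if 0 < s < np.count_nonzero(flat):
        cutoff = np.partition(flat, -s)[-s]   # s-th largest off-diagonal magnitude
        off = off * (np.abs(off) >= cutoff)
    return off + D

def lvggm_altgd(Sigma_hat, rank, sparsity, eta=0.05, iters=200, seed=0):
    """Alternating gradient descent with hard thresholding for Theta = S + Z Z^T.

    Negative Gaussian log-likelihood (up to constants):
        f(Theta) = tr(Sigma_hat @ Theta) - log det Theta,
    so grad_Theta f = Sigma_hat - inv(Theta).
    """
    rng = np.random.default_rng(seed)
    d = Sigma_hat.shape[0]
    S = np.diag(np.diag(np.linalg.inv(Sigma_hat)))   # diagonal warm start (assumption)
    Z = 0.1 * rng.standard_normal((d, rank))
    losses = []
    for _ in range(iters):
        Theta = S + Z @ Z.T
        _, logdet = np.linalg.slogdet(Theta)
        losses.append(np.trace(Sigma_hat @ Theta) - logdet)
        G = Sigma_hat - np.linalg.inv(Theta)
        S = hard_threshold(S - eta * G, sparsity)    # sparse step + hard thresholding
        G = Sigma_hat - np.linalg.inv(S + Z @ Z.T)   # refresh gradient after S update
        Z = Z - eta * (2.0 * G @ Z)                  # chain rule: d f / d Z = 2 G Z
    return S, Z, losses
```

On a small synthetic problem (true precision = sparse + rank-one, sample covariance taken as its exact inverse), the loss decreases monotonically toward the likelihood of the true model, and the thresholding step keeps the estimated S at the requested sparsity level.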

