High-Dimensional Variance-Reduced Stochastic Gradient Expectation-Maximization Algorithm

2020-03-09

Abstract

We propose a generic stochastic expectation-maximization (EM) algorithm for the estimation of high-dimensional latent variable models. At the core of our algorithm is a novel semi-stochastic variance-reduced gradient designed for the Q-function in the EM algorithm. Under a mild condition on the initialization, our algorithm is guaranteed to attain a linear convergence rate to the unknown parameter of the latent variable model, and to achieve an optimal statistical rate up to a logarithmic factor for parameter estimation. Compared with existing high-dimensional EM algorithms, our algorithm enjoys better computational complexity and is therefore more efficient. We apply our generic algorithm to two illustrative latent variable models: the Gaussian mixture model and mixture of linear regression, and demonstrate the advantages of our algorithm through both theoretical analysis and numerical experiments. We believe that the proposed semi-stochastic gradient is of independent interest for general nonconvex optimization problems with bivariate structures.
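To make the abstract's key idea concrete, here is a minimal sketch (not the authors' code) of a variance-reduced stochastic gradient EM loop, instantiated for a symmetric two-component Gaussian mixture y = z·β* + noise with z ∈ {−1, +1}: a full gradient of the sample Q-function is computed at a snapshot once per epoch, inner iterations use an SVRG-style semi-stochastic gradient of the bivariate Q-function, and hard thresholding enforces high-dimensional sparsity. The function names (`grad_q`, `hard_threshold`, `vr_sgd_em`), the step size `eta`, the sparsity level `s`, and the warm-start initialization are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_q(beta_eval, beta_cond, Y, sigma2=1.0):
    # E-step: posterior weight of the +1 component given beta_cond
    # (argument clipped for numerical stability).
    w = 1.0 / (1.0 + np.exp(np.clip(-2.0 * (Y @ beta_cond) / sigma2, -30, 30)))
    # Gradient of the sample Q-function w.r.t. its first argument:
    # (1/m) * sum_i (2 w_i - 1) y_i  -  beta_eval.
    return ((2.0 * w - 1.0)[:, None] * Y).mean(axis=0) - beta_eval

def hard_threshold(beta, s):
    # Keep the s largest-magnitude coordinates; zero out the rest.
    out = np.zeros_like(beta)
    idx = np.argpartition(np.abs(beta), -s)[-s:]
    out[idx] = beta[idx]
    return out

def vr_sgd_em(Y, beta0, s, eta=0.5, epochs=10, inner=None):
    n, _ = Y.shape
    inner = inner or n
    beta_tilde = hard_threshold(beta0, s)
    for _ in range(epochs):
        # Full gradient at the snapshot, computed once per epoch.
        mu = grad_q(beta_tilde, beta_tilde, Y)
        beta = beta_tilde.copy()
        for _ in range(inner):
            i = rng.integers(n)
            yi = Y[i:i + 1]
            # Semi-stochastic gradient: per-sample term at the current iterate,
            # corrected by the same term at the snapshot plus the full gradient.
            # Its expectation equals the full gradient at the current iterate.
            v = grad_q(beta, beta, yi) - grad_q(beta_tilde, beta_tilde, yi) + mu
            # Gradient ascent on Q, then hard thresholding for sparsity.
            beta = hard_threshold(beta + eta * v, s)
        beta_tilde = beta
    return beta_tilde

if __name__ == "__main__":
    d, n, s_true = 200, 1000, 5
    beta_star = np.zeros(d)
    beta_star[:s_true] = 2.0
    z = rng.choice([-1.0, 1.0], size=n)
    Y = z[:, None] * beta_star + rng.normal(size=(n, d))
    # Warm start standing in for the paper's initialization condition.
    beta0 = beta_star + 0.5 * rng.normal(size=d)
    beta_hat = vr_sgd_em(Y, beta0, s=s_true)
    # The mixture is identifiable only up to the sign of beta.
    err = min(np.linalg.norm(beta_hat - beta_star),
              np.linalg.norm(beta_hat + beta_star))
    print(f"estimation error (up to sign): {err:.3f}")
```

The variance-reduction step mirrors SVRG: because the correction term has the same expectation as the full gradient at the snapshot, the semi-stochastic gradient is unbiased for the full Q-gradient at the current iterate while its variance shrinks as the iterate approaches the snapshot, which is what permits a constant step size and a linear convergence rate.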

