Convergence of Gradient EM on Multi-component Mixture of Gaussians

2020-02-10

Abstract 

In this paper, we study convergence properties of the gradient variant of the Expectation-Maximization (EM) algorithm [11] for Gaussian Mixture Models with an arbitrary number of clusters and arbitrary mixing coefficients. We derive a convergence rate that depends on the mixing coefficients, the minimum and maximum pairwise distances between the true centers, the dimensionality, and the number of components, and we obtain a near-optimal local contraction radius. While some recent notable works derive local convergence rates for EM on mixtures of two symmetric Gaussians, the more general case requires structurally different and non-trivial arguments. We use recent tools from learning theory and empirical processes to achieve our theoretical results.
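To make the setting concrete, the sketch below shows one iteration of gradient EM for a GMM with identity covariances and known mixing weights: the E-step computes responsibilities exactly, and the M-step is replaced by a single gradient ascent step on the Q-function. This is a hypothetical illustration under simplifying assumptions, not the paper's code; the function name and the unit step size are choices made here for clarity.

```python
import numpy as np

def gradient_em_step(X, mus, pis, step=1.0):
    """One gradient-EM update for a GMM with identity covariances.

    Illustrative sketch (not the paper's implementation): the exact
    M-step is replaced by a single gradient ascent step on Q.
    X: (n, d) data, mus: (K, d) current centers, pis: (K,) mixing weights.
    """
    # E-step: responsibilities w[i, k] ∝ pi_k * N(x_i; mu_k, I),
    # computed in log-space for numerical stability.
    sq = ((X[:, None, :] - mus[None, :, :]) ** 2).sum(axis=2)   # (n, K)
    logw = np.log(pis)[None, :] - 0.5 * sq
    logw -= logw.max(axis=1, keepdims=True)
    w = np.exp(logw)
    w /= w.sum(axis=1, keepdims=True)
    # Gradient of Q w.r.t. mu_k: (1/n) * sum_i w[i, k] * (x_i - mu_k)
    grad = (w[:, :, None] * (X[:, None, :] - mus[None, :, :])).mean(axis=0)
    return mus + step * grad
```

Initialized inside the local contraction radius around the true centers, repeated application of this update contracts toward the (sample) centers at a geometric rate, which is the regime the paper's analysis quantifies.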

