
Estimation of Markov Chain via Rank-Constrained Likelihood

2020-03-16

Abstract

This paper studies the estimation of low-rank Markov chains from empirical trajectories. We propose a non-convex estimator based on rank-constrained likelihood maximization. Statistical upper bounds are provided for the Kullback-Leibler divergence and the χ² risk between the estimator and the true transition matrix. The estimator reveals a compressed state space of the Markov chain. We also develop a novel DC (difference of convex functions) programming algorithm to tackle the rank-constrained non-smooth optimization problem. Convergence results are established. Experiments show that the proposed estimator achieves better empirical performance than other popular approaches.
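The abstract does not spell out the estimator, but rank-constrained likelihood maximization for a Markov chain is presumably of the form: maximize the multinomial log-likelihood Σ_{i,j} n_ij · log P_ij over row-stochastic matrices P with rank(P) ≤ r, where n_ij counts observed transitions from state i to state j. The sketch below is only an illustration of that idea: it builds the empirical (unconstrained MLE) transition matrix from a simulated trajectory and applies a simple truncated-SVD surrogate for the rank constraint. It is not the paper's DC programming algorithm, and all function and variable names are hypothetical.

```python
# Minimal illustration (not the paper's DC algorithm): empirical MLE of the
# transition matrix plus a heuristic rank-r surrogate via truncated SVD.
import numpy as np

def empirical_transition_matrix(trajectory, num_states):
    """Row-normalized transition counts: the unconstrained MLE of P."""
    counts = np.zeros((num_states, num_states))
    for s, s_next in zip(trajectory[:-1], trajectory[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # avoid division by zero for unvisited states
    return counts / row_sums

def low_rank_surrogate(P, rank):
    """Heuristic rank-r approximation: truncated SVD, then clip and renormalize rows."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    P_r = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    P_r = np.clip(P_r, 1e-12, None)         # keep entries positive
    return P_r / P_r.sum(axis=1, keepdims=True)

# Example on a synthetic low-rank chain
rng = np.random.default_rng(0)
num_states, rank = 20, 3
U_true = rng.dirichlet(np.ones(rank), size=num_states)   # (num_states, rank), rows sum to 1
V_true = rng.dirichlet(np.ones(num_states), size=rank)   # (rank, num_states), rows sum to 1
P_true = U_true @ V_true                                  # rank-3, row-stochastic

trajectory = [0]
for _ in range(20000):
    trajectory.append(rng.choice(num_states, p=P_true[trajectory[-1]]))

P_emp = empirical_transition_matrix(trajectory, num_states)
P_hat = low_rank_surrogate(P_emp, rank)
print("Frobenius error, empirical MLE   :", np.linalg.norm(P_emp - P_true))
print("Frobenius error, rank-3 surrogate:", np.linalg.norm(P_hat - P_true))
```

On a synthetic low-rank chain such as this, the rank-constrained surrogate typically recovers the transition matrix more accurately than the raw empirical estimate, which is the kind of behavior the paper's statistical bounds formalize.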

Previous: Goodness-of-Fit Testing for Discrete Distributions via Stein Discrepancy

Next: Programmatically Interpretable Reinforcement Learning

