
Fast Bidirectional Probability Estimation in Markov Models

2020-02-04

Abstract 

We develop a new bidirectional algorithm for estimating Markov chain multi-step transition probabilities: given a Markov chain, we want to estimate the probability of hitting a given target state in ℓ steps after starting from a given source distribution. Given the target state t, we use a (reverse) local power iteration to construct an 'expanded target distribution', which has the same mean as the quantity we want to estimate, but a smaller variance – this can then be sampled efficiently by a Monte Carlo algorithm. Our method applies to any Markov chain on a discrete (finite or countable) state space, and can be extended to compute functions of multi-step transition probabilities such as PageRank, graph diffusions, hitting/return times, etc. Our main result is that in 'sparse' Markov chains – wherein the number of transitions between states is comparable to the number of states – the running time of our algorithm for a uniform-random target node is order-wise smaller than that of Monte Carlo and power-iteration-based algorithms; in particular, our method can estimate a probability p using only O(ℓ^{3/2}/√p) running time.
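The bidirectional idea in the abstract can be illustrated with a minimal sketch. Below, the paper's local reverse push is replaced with a full (dense) reverse power iteration for simplicity: the reverse stage builds an 'expanded target distribution' q, where q[v] is the probability of reaching the target from v in the remaining steps, and the forward stage estimates E[q(X_k)] by Monte Carlo walks from the source. The function name and parameters are illustrative, not the paper's API; by the Markov property the estimator is still unbiased for the ℓ-step hitting probability.

```python
import numpy as np

def bidirectional_mstp(P, source_dist, target, ell, n_walks=10000,
                       ell_rev=None, rng=None):
    """Estimate Pr[X_ell = target | X_0 ~ source_dist] for row-stochastic P.

    Simplified sketch of the bidirectional scheme (hypothetical helper, not
    the paper's exact algorithm): `ell_rev` exact reverse power-iteration
    steps from the target, then forward Monte Carlo for the remaining steps.
    """
    rng = np.random.default_rng(rng)
    n = P.shape[0]
    if ell_rev is None:
        ell_rev = ell // 2          # split the horizon between the two stages
    ell_fwd = ell - ell_rev

    # Reverse stage: q = P^{ell_rev} e_target, so q[v] is the probability of
    # hitting the target in ell_rev steps starting from state v.
    q = np.zeros(n)
    q[target] = 1.0
    for _ in range(ell_rev):
        q = P @ q

    # Forward stage: sample walks of length ell_fwd from the source
    # distribution and average q at the endpoint.  By the Markov property,
    # E[q(X_{ell_fwd})] = Pr[X_ell = target], but q has smaller variance
    # than the raw 0/1 hitting indicator.
    states = np.arange(n)
    total = 0.0
    for _ in range(n_walks):
        v = rng.choice(states, p=source_dist)
        for _ in range(ell_fwd):
            v = rng.choice(states, p=P[v])
        total += q[v]
    return total / n_walks
```

Because the forward walks average the smoothed values q[v] ∈ [0, 1] rather than rare 0/1 hits, far fewer samples are needed than for plain Monte Carlo at the same accuracy, which is the source of the paper's speedup.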

