Online Expectation Maximization for Reinforcement Learning in POMDPs

2019-11-11
Abstract: We present online nested expectation maximization for model-free reinforcement learning in a POMDP. The algorithm evaluates the policy only in the current learning episode, discarding the episode after the evaluation and memorizing the sufficient statistic, from which the policy is computed in closed form. As a result, the online algorithm has a time complexity O(n) and a memory complexity O(1), compared to O(n^2) and O(n) for the corresponding batch-mode algorithm, where n is the number of learning episodes. The online algorithm, which has a provable convergence, is demonstrated on five benchmark POMDP problems.
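To make the complexity claim concrete, below is a minimal, generic sketch of the online-EM pattern the abstract describes: for each learning episode, compute that episode's expected sufficient statistics (E-step), blend them into a running statistic with a decaying step size, discard the episode, and recompute the parameters in closed form (M-step). The toy two-state HMM, the `episode_statistics` helper, and the 1/k step size are all assumptions for illustration only; this is not the paper's nested EM algorithm or its policy parameterization, just a demonstration of the O(1)-memory update per episode.

```python
# Hypothetical illustration of online EM with running sufficient statistics.
# NOT the paper's algorithm: a toy discrete HMM stands in for the POMDP model.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_obs = 2, 3

# Running sufficient statistics (expected transition / emission counts).
S_trans = np.ones((n_states, n_states))
S_emit = np.ones((n_states, n_obs))

# Current closed-form parameter estimates (row-stochastic matrices).
A = S_trans / S_trans.sum(axis=1, keepdims=True)
B = S_emit / S_emit.sum(axis=1, keepdims=True)

def episode_statistics(obs, A, B):
    """E-step for one episode: forward-backward expected counts (hypothetical helper)."""
    T = len(obs)
    alpha = np.zeros((T, n_states))
    alpha[0] = B[:, obs[0]] / n_states
    for t in range(1, T):                         # scaled forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    beta = np.ones((T, n_states))
    for t in range(T - 2, -1, -1):                # scaled backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    s_trans = np.zeros_like(A)
    s_emit = np.zeros((n_states, n_obs))
    for t in range(T - 1):                        # expected transition counts
        xi = np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A
        s_trans += xi / xi.sum()
    for t in range(T):                            # expected emission counts
        s_emit[:, obs[t]] += gamma[t]
    return s_trans, s_emit

for k in range(1, 201):                           # learning episodes
    obs = rng.integers(0, n_obs, size=20)         # simulate one episode, then discard it
    s_trans, s_emit = episode_statistics(obs, A, B)
    step = 1.0 / k                                # decaying step size (stochastic approximation)
    S_trans = (1 - step) * S_trans + step * s_trans
    S_emit = (1 - step) * S_emit + step * s_emit
    # Closed-form M-step from the running statistics only: memory is O(1) in k.
    A = S_trans / S_trans.sum(axis=1, keepdims=True)
    B = S_emit / S_emit.sum(axis=1, keepdims=True)

print("estimated transition matrix:\n", A)
```

Only the fixed-size statistics `S_trans` and `S_emit` persist across episodes, which is what gives the online scheme its O(1) memory and O(n) total time over n episodes, versus re-sweeping all stored episodes in batch mode.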
