Abstract
We present online nested expectation maximization for model-free reinforcement learning in a POMDP. The algorithm evaluates the policy only in the current learning episode, discarding the episode after the evaluation and memorizing the sufficient statistic, from which the policy is computed in closed form. As a result, the online algorithm has a time complexity of O(n) and a memory complexity of O(1), compared to O(n²) and O(n) for the corresponding batch-mode algorithm, where n is the number of learning episodes. The online algorithm, which provably converges, is demonstrated on five benchmark POMDP problems.
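As a rough illustration of the complexity claim, the following Python sketch shows a generic online EM-style recursion that evaluates each episode once, folds it into a fixed-size running sufficient statistic with a decaying step size, and then discards the episode, so memory stays O(1) and total time O(n). The helpers `episode_statistic` and `closed_form_policy` and the step-size schedule are illustrative assumptions, not the paper's actual update.

```python
import numpy as np

def episode_statistic(episode):
    """Hypothetical E-step surrogate: map one episode to a fixed-size
    sufficient statistic. A real implementation would compute expected
    counts under the current policy (e.g. via forward-backward smoothing)."""
    observations, actions, rewards = episode
    return np.concatenate([
        np.mean(observations, axis=0),
        np.bincount(actions, weights=rewards, minlength=4),
    ])

def closed_form_policy(stat):
    """Hypothetical M-step: recover policy parameters in closed form from
    the running sufficient statistic (here, a simple normalization)."""
    return stat / (np.sum(np.abs(stat)) + 1e-12)

def online_nested_em(episodes, step_size=lambda t: 1.0 / (t + 1)):
    """Online loop: O(1) memory (one running statistic), O(n) total time."""
    stat = None
    policy = None
    for t, episode in enumerate(episodes):
        s_t = episode_statistic(episode)           # evaluate current episode only
        if stat is None:
            stat = s_t
        else:
            eta = step_size(t)                     # decaying step size
            stat = (1.0 - eta) * stat + eta * s_t  # fold in; episode is then discarded
        policy = closed_form_policy(stat)          # M-step in closed form
    return policy

# Usage on synthetic episodes (observations, actions, rewards):
rng = np.random.default_rng(0)
episodes = ((rng.normal(size=(10, 3)),
             rng.integers(0, 4, size=10),
             rng.normal(size=10)) for _ in range(100))
policy = online_nested_em(episodes)
```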