On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient

2020-01-06

Abstract

Likelihood ratio policy gradient methods have been some of the most successful reinforcement learning algorithms, especially for learning on physical systems. We describe how the likelihood ratio policy gradient can be derived from an importance sampling perspective. This derivation highlights how likelihood ratio methods under-use past experience by (i) using the past experience to estimate only the gradient of the expected return U(θ) at the current policy parameterization θ, rather than to obtain a more complete estimate of U(θ), and (ii) using past experience under the current policy only rather than using all past experience to improve the estimates. We present a new policy search method, which leverages both of these observations as well as generalized baselines, a new technique which generalizes commonly used baseline techniques for policy gradient methods. Our algorithm outperforms standard likelihood ratio policy gradient algorithms on several testbeds.
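To make the importance sampling connection mentioned in the abstract concrete, the following is a minimal sketch of the standard derivation, using notation assumed here rather than taken from this page: θ₀ for the parameters of the policy that generated the data, p_θ(τ) for the trajectory distribution under parameters θ, and R(τ) for the return of trajectory τ.

\[
U(\theta) = \mathbb{E}_{\tau \sim p_\theta}\!\left[R(\tau)\right]
\;\approx\; \hat{U}(\theta) = \frac{1}{m}\sum_{i=1}^{m} \frac{p_\theta(\tau^{(i)})}{p_{\theta_0}(\tau^{(i)})}\, R(\tau^{(i)}),
\qquad \tau^{(i)} \sim p_{\theta_0},
\]
\[
\nabla_\theta \hat{U}(\theta)\Big|_{\theta=\theta_0}
= \frac{1}{m}\sum_{i=1}^{m} \frac{\nabla_\theta p_\theta(\tau^{(i)})\big|_{\theta_0}}{p_{\theta_0}(\tau^{(i)})}\, R(\tau^{(i)})
= \frac{1}{m}\sum_{i=1}^{m} \nabla_\theta \log p_\theta(\tau^{(i)})\Big|_{\theta_0}\, R(\tau^{(i)}).
\]

The second line is the familiar likelihood ratio (REINFORCE) gradient estimate, obtained by differentiating the importance sampling estimate \(\hat{U}(\theta)\) and evaluating at θ = θ₀. This makes point (i) of the abstract visible: the importance-weighted estimate \(\hat{U}(\theta)\) is defined for θ away from θ₀, yet likelihood ratio methods keep only its gradient at θ₀.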

Previous: Learning Efficient Markov Networks

Next: Inductive Regularized Learning of Kernel Functions
