Bias in Natural Actor-Critic Algorithms

2020-03-03

Abstract

We show that several popular discounted reward natural actor-critics, including the NAC-LSTD and eNAC algorithms, do not generate unbiased estimates of the natural policy gradient as claimed. We derive the first unbiased discounted reward natural actor-critics, using batch and iterative approaches to gradient estimation. We argue that the bias makes the existing algorithms more appropriate for the average reward setting. We also show that, when Sarsa(λ) is guaranteed to converge to an optimal policy, the objective function used by natural actor-critics has only global optima, so policy gradient methods are guaranteed to converge to globally optimal policies as well.
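The bias in question has a concrete form: for the discounted objective J(θ) = E[Σ_t γ^t r_t], an unbiased gradient estimate must weight the time-t term by an extra factor of γ^t, and the paper's argument is that the existing estimators drop this weighting on the state distribution. Below is a minimal sketch of the same discrepancy for the vanilla (non-natural) policy gradient, which is easier to state compactly; the `trajectory` format and the `grad_log_pi` callback are hypothetical names chosen only for illustration, not from the paper:

```python
def biased_pg_estimate(trajectory, grad_log_pi, gamma):
    """Common estimator: sum_t grad_log_pi(s_t, a_t) * G_t.
    It omits the gamma^t weighting on the time-t term, so it is
    NOT an unbiased estimate of the discounted-reward gradient."""
    T = len(trajectory)  # trajectory: list of (state, action, reward)
    rewards = [r for (_, _, r) in trajectory]
    grad = 0.0
    for t, (s, a, _) in enumerate(trajectory):
        # Discounted return from time t onward.
        G_t = sum(gamma ** (k - t) * rewards[k] for k in range(t, T))
        grad += grad_log_pi(s, a) * G_t
    return grad


def unbiased_pg_estimate(trajectory, grad_log_pi, gamma):
    """Unbiased estimator of the discounted-reward gradient:
    sum_t gamma^t * grad_log_pi(s_t, a_t) * G_t."""
    T = len(trajectory)
    rewards = [r for (_, _, r) in trajectory]
    grad = 0.0
    for t, (s, a, _) in enumerate(trajectory):
        G_t = sum(gamma ** (k - t) * rewards[k] for k in range(t, T))
        grad += (gamma ** t) * grad_log_pi(s, a) * G_t
    return grad
```

Without the γ^t factor, the estimator weights states by their undiscounted visitation frequency rather than the discounted state distribution, which is why the abstract argues the existing algorithms are better viewed as average reward methods.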

