Resource Paper: A CLOSER LOOK AT DEEP POLICY GRADIENTS

2020-01-02

Abstract

We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development. To this end, we propose a fine-grained analysis of state-of-the-art methods based on key elements of this framework: gradient estimation, value prediction, and optimization landscapes. Our results show that the behavior of deep policy gradient algorithms often deviates from what their motivating framework would predict: the surrogate objective does not match the true reward landscape, learned value estimators fail to fit the true value function, and gradient estimates poorly correlate with the “true” gradient. The mismatch between predicted and empirical behavior we uncover highlights our poor understanding of current methods, and indicates the need to move beyond current benchmark-centric evaluation methods.
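One way to see what "gradient estimates poorly correlate with the true gradient" means in practice is to draw several independent policy gradient estimates and measure their pairwise cosine similarity: if estimates from disjoint sample batches barely agree with each other, they cannot agree with the true gradient either. The sketch below is not the paper's experimental setup (which uses deep RL benchmarks); it is a minimal, hypothetical illustration on a softmax-bandit with a REINFORCE-style estimator, using only numpy.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4                                     # actions in a toy bandit (assumption)
theta = rng.normal(size=K)                # softmax policy parameters
mean_r = np.array([1.0, 0.5, 0.2, 0.0])  # hypothetical mean rewards

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_estimate(n_samples):
    """REINFORCE estimate: average of grad log pi(a) * r over sampled actions."""
    p = softmax(theta)
    g = np.zeros(K)
    for _ in range(n_samples):
        a = rng.choice(K, p=p)
        r = mean_r[a] + rng.normal(scale=0.5)  # noisy reward
        grad_logp = -p.copy()                  # d/dtheta log softmax(theta)[a]
        grad_logp[a] += 1.0
        g += grad_logp * r
    return g / n_samples

def mean_pairwise_cosine(estimates):
    """Average cosine similarity between all pairs of gradient estimates."""
    sims = []
    for i in range(len(estimates)):
        for j in range(i + 1, len(estimates)):
            a, b = estimates[i], estimates[j]
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))

# Few samples per estimate -> estimates disagree; many samples -> they align.
for n in (10, 1000):
    ests = [grad_estimate(n) for _ in range(20)]
    print(f"samples={n:5d}  mean pairwise cosine = {mean_pairwise_cosine(ests):.3f}")
```

The paper's analysis applies this kind of diagnostic to deep policy gradient methods, where the sample budgets used in standard benchmarks sit firmly in the low-agreement regime.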

