NEURAL POLICY GRADIENT METHODS: GLOBAL OPTIMALITY AND RATES OF CONVERGENCE

2020-01-02

Abstract

Policy gradient methods with actor-critic schemes achieve tremendous empirical success, especially when the actors and critics are parameterized by neural networks. However, it remains unclear whether such "neural" policy gradient methods converge to globally optimal policies, or whether they converge at all. We answer both questions affirmatively under the overparameterized two-layer neural-network parameterization. In detail, assuming independent sampling, we prove that neural natural policy gradient converges to a globally optimal policy at a sublinear rate. Also, we show that neural vanilla policy gradient converges sublinearly to a stationary point. Meanwhile, by relating the suboptimality of the stationary points to the representation power of the neural actor and critic classes, we prove the global optimality of all stationary points under mild regularity conditions. In particular, we show that a key to global optimality and convergence is the "compatibility" between the actor and critic, which is ensured by sharing neural architectures and random initializations across the actor and critic. To the best of our knowledge, our analysis establishes the first global optimality and convergence guarantees for neural policy gradient methods.
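The abstract emphasizes that compatibility between actor and critic comes from sharing the neural architecture and random initialization. The following is a minimal, hedged sketch of that setup with an overparameterized two-layer ReLU network; the function names, network width, and the specific scaling are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def init_two_layer(state_dim, width, rng):
    """Random init: first-layer weights W and fixed second-layer signs b.

    Illustrative assumption: Gaussian first-layer weights and random
    +/-1 output weights, a common overparameterized two-layer setup.
    """
    W = rng.normal(0.0, 1.0 / np.sqrt(state_dim), size=(width, state_dim))
    b = rng.choice([-1.0, 1.0], size=width)
    return W, b

def two_layer(x, W, b):
    """f(x) = (1/sqrt(m)) * sum_r b_r * relu(w_r . x), with m = width."""
    m = W.shape[0]
    return float((b * np.maximum(W @ x, 0.0)).sum() / np.sqrt(m))

rng = np.random.default_rng(0)
state_dim, width = 4, 64

# Shared architecture and shared random initialization: the actor and
# critic start from identical copies of the same random network, then
# are trained separately (training itself is omitted in this sketch).
W0, b0 = init_two_layer(state_dim, width, rng)
actor_W, critic_W = W0.copy(), W0.copy()

x = rng.normal(size=state_dim)
# At initialization the actor's logit and the critic's value coincide.
print(two_layer(x, actor_W, b0) == two_layer(x, critic_W, b0))  # True
```

The point of the shared initialization is that both networks induce the same feature map at the start of training, which is what makes the critic's value estimates "compatible" with the actor's policy gradient in the sense the abstract describes.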

