
Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning


Abstract

Reinforcement learning algorithms for real-world robotic applications must be able to handle complex, unknown dynamical systems while maintaining data-efficient learning. These requirements are handled well by model-free and model-based RL approaches, respectively. In this work, we aim to combine the advantages of these approaches. By focusing on time-varying linear-Gaussian policies, we enable a model-based algorithm based on the linear-quadratic regulator that can be integrated into the model-free framework of path integral policy improvement. We can further combine our method with guided policy search to train arbitrary parameterized policies such as deep neural networks. Our simulation and real-world experiments demonstrate that this method can solve challenging manipulation tasks with comparable or better performance than model-free methods while maintaining the sample efficiency of model-based methods.
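
To make the two ingredients named in the abstract concrete, here is a minimal NumPy sketch, not the authors' implementation: it rolls out a time-varying linear-Gaussian controller u_t = K_t x_t + k_t + ε_t on a toy linear system and applies a PI2-style reward-weighted update to the feedforward terms. The toy dynamics, cost, and all names (`rollout`, `pi2_update`) are illustrative assumptions, and the LQR-based model-based update is only indicated by a placeholder comment.

```python
# Minimal sketch, assuming a toy linear system and quadratic cost.
import numpy as np

T, dx, du = 20, 2, 1           # horizon, state dim, action dim (toy sizes)
K = np.zeros((T, du, dx))      # time-varying feedback gains
k = np.zeros((T, du))          # time-varying feedforward terms
sigma = 0.5                    # exploration noise std of the linear-Gaussian policy

def rollout(x0, K, k, noise):
    """Roll out the linear-Gaussian policy on a toy linear system and return total cost."""
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    x, cost = x0.copy(), 0.0
    for t in range(T):
        u = K[t] @ x + k[t] + noise[t]          # time-varying linear-Gaussian policy
        cost += x @ x + 0.01 * float(u @ u)     # quadratic state/action cost
        x = A @ x + B @ u
    return cost

def pi2_update(K, k, x0, n_samples=32, temperature=1.0):
    """Model-free PI2-style update: reward-weighted averaging of sampled exploration noise."""
    noises = sigma * np.random.randn(n_samples, T, du)
    costs = np.array([rollout(x0, K, k, eps) for eps in noises])
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    # Shift the feedforward terms toward the low-cost noise samples.
    k_new = k + np.einsum('n,ntj->tj', w, noises)
    return k_new, costs.mean()

x0 = np.array([1.0, 0.0])
for it in range(10):
    # (A model-based step would go here: fit local linear dynamics from the
    #  sampled rollouts and run an LQR backward pass to also update K and k.)
    k, avg_cost = pi2_update(K, k, x0)
    print(f"iter {it}: average cost {avg_cost:.2f}")
```

In the method described by the paper, the model-based LQR update and the model-free PI2 update are combined within one trajectory-centric optimization; the sketch above only illustrates the policy class and the PI2-style half of that combination.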

