A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment

2020-02-23

Abstract

Empowerment is an information-theoretic method that can be used to intrinsically motivate learning agents. It attempts to maximize an agent's control over the environment by encouraging the agent to visit states with a large number of reachable next states. Empowered learning has been shown to lead to complex behaviors, without requiring an explicit reward signal. In this paper, we investigate the use of empowerment in the presence of an extrinsic reward signal. We hypothesize that empowerment can guide reinforcement learning (RL) agents to find good early behavioral solutions by encouraging highly empowered states. We propose a unified Bellman optimality principle for empowered reward maximization. Our empowered reward maximization approach generalizes both Bellman's optimality principle and recent information-theoretic extensions to it. We prove uniqueness of the empowered values and show convergence to the optimal solution. We then apply this idea to develop off-policy actor-critic RL algorithms which we validate in high-dimensional continuous robotics domains (MuJoCo). Our methods demonstrate improved initial and competitive final performance compared to model-free state-of-the-art techniques.
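For readers who want the shape of the objective, the following is a minimal sketch under the standard variational-empowerment formulation; the notation and the trade-off weight β are ours and may differ from the paper's exact operator. Empowerment is the channel capacity between an action and the resulting next state, and a unified principle of the kind the abstract describes folds a bound on it into the Bellman backup:

```latex
% Empowerment of a state s: channel capacity between the action a and the
% next state s' (standard definition; omega is a source distribution over actions).
\mathcal{E}(s) = \max_{\omega(a \mid s)} I(a ; s' \mid s)

% Sketch of a unified Bellman optimality principle: extrinsic reward plus a
% weighted empowerment term inside the backup. Setting beta = 0 recovers the
% classical Bellman optimality equation.
V^{*}(s) = \max_{\pi(\cdot \mid s)} \mathbb{E}_{a \sim \pi,\; s' \sim p(\cdot \mid s,a)}
  \left[ r(s,a) + \beta \log \frac{p(a \mid s, s')}{\pi(a \mid s)} + \gamma\, V^{*}(s') \right]
```

Setting β = 0 recovers reward-only RL, which is the sense in which such a principle generalizes Bellman's classical equation; in practice the intractable posterior p(a | s, s') is typically replaced by a learned inverse model q(a | s, s'), making the log-ratio a variational lower bound on I(a; s' | s). As a concrete illustration of how such a bonus could enter an off-policy TD target, here is a hypothetical sketch; the function and argument names (`empowered_td_target`, `log_q`, `log_pi`) are ours, not the paper's implementation:

```python
def empowered_td_target(r, log_q, log_pi, v_next, beta=0.1, gamma=0.99):
    """Illustrative TD target for empowered reward maximization (sketch).

    r       -- extrinsic reward r(s, a)
    log_q   -- log q(a | s, s') from a learned inverse-dynamics model,
               giving a variational lower bound on I(a; s' | s)
    log_pi  -- log pi(a | s) under the current policy
    v_next  -- bootstrapped value estimate V(s')
    beta    -- reward/empowerment trade-off (beta = 0 is a plain TD target)
    gamma   -- discount factor
    """
    empowerment_bonus = log_q - log_pi  # per-sample mutual-information bound
    return r + beta * empowerment_bonus + gamma * v_next


# Example: the inverse model identifies the taken action more confidently
# than the policy assigned it, yielding a positive empowerment bonus.
print(empowered_td_target(r=1.0, log_q=-0.5, log_pi=-1.2, v_next=10.0))
```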

