Using Reward Machines for High-Level Task Specification and Decomposition in Reinforcement Learning

2020-03-16

Abstract

In this paper we propose Reward Machines, a type of finite state machine that supports the specification of reward functions while exposing reward function structure to the learner and supporting decomposition. We then present Q-Learning for Reward Machines (QRM), an algorithm which appropriately decomposes the reward machine and uses off-policy Q-learning to simultaneously learn subpolicies for the different components. QRM is guaranteed to converge to an optimal policy in the tabular case, in contrast to Hierarchical Reinforcement Learning methods, which might converge to suboptimal policies. We demonstrate this behavior experimentally in two discrete domains. We also show how function approximation methods like neural networks can be incorporated into QRM, and that doing so can find better policies more quickly than hierarchical methods in a domain with a continuous state space.
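To make the QRM idea in the abstract concrete, the following Python code is a minimal sketch of the tabular case. It assumes a simple environment interface (`env.reset()`, `env.step(a)` returning the next state and a done flag, and an `env.actions` list) and a user-supplied `labeler` that detects high-level events; these names and the `RewardMachine` class are illustrative assumptions, not the paper's exact API. The point it demonstrates is the counterfactual update: each environment transition is reused to update the Q-function of every reward machine state, which is what lets off-policy Q-learning train all subpolicies simultaneously.

```python
import random
from collections import defaultdict

class RewardMachine:
    """A finite state machine over high-level events (labels).

    delta_u[(u, label)] -> next RM state
    delta_r[(u, label)] -> reward emitted on that transition
    (Hypothetical structure for illustration, not the paper's code.)
    """
    def __init__(self, states, delta_u, delta_r, u0, terminal):
        self.states, self.delta_u, self.delta_r = states, delta_u, delta_r
        self.u0, self.terminal = u0, terminal

    def step(self, u, label):
        u2 = self.delta_u.get((u, label), u)   # self-loop if no edge fires
        r = self.delta_r.get((u, label), 0.0)
        return u2, r

def qrm(env, rm, labeler, episodes=1000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular QRM sketch: one Q-function per RM state, updated
    counterfactually for every RM state on each environment step."""
    Q = {u: defaultdict(float) for u in rm.states}

    def greedy(u, s):
        return max(env.actions, key=lambda a: Q[u][(s, a)])

    for _ in range(episodes):
        s, u = env.reset(), rm.u0
        done = False
        while not done:
            a = (random.choice(env.actions) if random.random() < eps
                 else greedy(u, s))
            s2, done = env.step(a)      # rewards come from the RM, not env
            label = labeler(s, a, s2)   # high-level event detector
            # Counterfactual update: reuse (s, a, s2) for EVERY RM state v,
            # not just the current one, since Q-learning is off-policy.
            for v in rm.states:
                v2, r = rm.step(v, label)
                if v2 in rm.terminal:
                    target = r
                else:
                    target = r + gamma * max(Q[v2][(s2, b)]
                                             for b in env.actions)
                Q[v][(s, a)] += alpha * (target - Q[v][(s, a)])
            u, _ = rm.step(u, label)
            s = s2
            if u in rm.terminal:
                done = True
    return Q
```

Because every Q-function is trained on the same shared experience with standard Q-learning updates, each converges (in the tabular setting, under the usual conditions) to the optimal values for its reward machine state, which is the intuition behind QRM's optimality guarantee.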

