Sharing Experience in Multitask Reinforcement Learning

2019-10-09

Abstract: In multitask reinforcement learning, tasks often contain sub-tasks that share the same solution even though the overall tasks are different. If these shared portions could be identified effectively, the learning process could be improved, since all the samples that tasks collect in the shared space could be reused. In this paper, we propose a Sharing Experience Framework (SEF) for training multiple tasks simultaneously. In SEF, a confidence sharing agent uses task-specific rewards from the environment to identify the similar parts that should be shared across tasks, and defines those parts as shared-regions between tasks. The shared-regions are expected to guide the task-policies in sharing their experience during the learning process. Our experiments highlight that the framework improves the performance and stability of learning task-policies, and can help task-policies escape local optima.
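The abstract describes the mechanism only at a high level. As a rough illustration of the idea, here is a minimal, hypothetical sketch: two tabular Q-learning task-policies on a toy chain environment, where a state is treated as a shared-region once both tasks have observed identical rewards there, and transitions collected in shared-regions are replayed across tasks. The environment, the reward-agreement heuristic, and all names are assumptions for illustration, not the paper's actual confidence sharing agent.

```python
import random
from collections import defaultdict

# Hypothetical illustration (not the paper's algorithm): two tabular
# Q-learning task-policies on a toy chain. A state counts as a
# "shared-region" once both tasks have observed the same set of rewards
# there; transitions collected in shared-regions by one task are also
# replayed to update the other task's Q-table.

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
ACTIONS = (0, 1)      # 0 = move left, 1 = move right
N_STATES = 10

def step(state, action, task):
    """Toy chain: the two tasks differ only in their terminal reward."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    done = nxt == N_STATES - 1
    reward = (1.0 if task == 0 else 2.0) if done else -0.01
    return nxt, reward, done

q = [defaultdict(lambda: [0.0, 0.0]) for _ in range(2)]   # one Q-table per task
seen = [defaultdict(set) for _ in range(2)]               # rewards observed per state

def in_shared_region(state):
    """Heuristic stand-in for the confidence sharing agent: a state is
    shared once both tasks have observed exactly the same rewards there."""
    return bool(seen[0][state]) and seen[0][state] == seen[1][state]

def q_update(task, s, a, r, s2, done):
    target = r + (0.0 if done else GAMMA * max(q[task][s2]))
    q[task][s][a] += ALPHA * (target - q[task][s][a])

for episode in range(500):
    for task in (0, 1):
        s, done = 0, False
        while not done:
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[task][s][x])
            s2, r, done = step(s, a, task)
            seen[task][s].add(r)
            q_update(task, s, a, r, s2, done)
            # Experience sharing: replay the same transition for the
            # other task when it occurred inside a shared-region.
            if in_shared_region(s):
                q_update(1 - task, s, a, r, s2, done)
            s = s2
```

In this sketch the pre-terminal state never becomes shared (the tasks observe different terminal rewards from it), while the rest of the chain does, so experience is pooled exactly where the tasks' solutions coincide. The paper's framework learns where to share from task-specific rewards rather than using a hard-coded agreement test like this one.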
