Abstract
Experience reuse is key to sample-efficient reinforcement learning. One of the critical issues is
how the experience is represented and stored. Previously, experience has been stored in the form of
features, individual models, or an average model,
each lying at a different granularity. However,
new tasks may require experience across multiple
granularities. In this paper, we propose the policy residual representation (PRR) network, which
can extract and store multiple levels of experience.
The PRR network is trained on a set of tasks with a
multi-level architecture, where each module in a
level corresponds to a subset of the tasks. The PRR network therefore represents the experience
in a spectrum-like way. When training on a new
task, the PRR network can provide different levels of experience to accelerate learning. We experiment
with the PRR network on a set of grid world navigation tasks, locomotion tasks, and fighting tasks in
a video game. The results show that the PRR network leads to better reuse of experience and thus
outperforms some state-of-the-art approaches.
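
To make the multi-level idea concrete, the following is a minimal PyTorch sketch of a residual policy network whose levels hold progressively more task-specific modules; the names PRRNetwork, modules_per_level, and module_path are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PRRNetwork(nn.Module):
    """Sketch of a multi-level policy residual representation.

    Level 0 holds one module shared by all tasks; each deeper level
    holds more modules, each covering a smaller subset of tasks. A
    task's policy logits are the sum of the outputs of the modules on
    its path through the levels, so coarse levels carry broadly shared
    experience and deeper levels store task-specific residuals.
    """

    def __init__(self, obs_dim, act_dim, modules_per_level=(1, 2, 4), hidden=64):
        super().__init__()
        self.levels = nn.ModuleList()
        for n_modules in modules_per_level:
            level = nn.ModuleList(
                nn.Sequential(
                    nn.Linear(obs_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, act_dim),
                )
                for _ in range(n_modules)
            )
            self.levels.append(level)

    def forward(self, obs, module_path):
        # module_path[i] selects which module of level i this task uses,
        # e.g. (0, 1, 3); the task's logits are the sum of the residuals.
        logits = 0.0
        for level, idx in zip(self.levels, module_path):
            logits = logits + level[idx](obs)
        return logits

# Usage sketch: one task at the finest granularity.
net = PRRNetwork(obs_dim=8, act_dim=4)
logits = net(torch.randn(1, 8), module_path=(0, 1, 3))
```

Under this assumed structure, a new task could reuse only the coarse modules, the full path, or anything in between, which is one way the different granularities of stored experience might be exposed during transfer.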