Abstract
Experience replay enables reinforcement learning
agents to memorize and reuse past experiences,
just as humans replay memories for the situation
at hand. Contemporary off-policy algorithms either
replay past experiences uniformly or utilize a rule-based replay strategy, which may be sub-optimal.
In this work, we consider learning a replay policy
to optimize the cumulative reward. Learning a replay policy is challenging because the replay memory is
noisy and large, and the cumulative reward is unstable. To address these issues, we propose a novel
experience replay optimization (ERO) framework
which alternately updates two policies: the agent
policy, and the replay policy. The agent is updated
to maximize the cumulative reward based on the replayed data, while the replay policy is updated to
provide the agent with the most useful experiences.
Experiments conducted on various continuous
control tasks demonstrate the effectiveness of ERO,
empirically showing the promise of learning an experience replay
policy to improve the performance of off-policy
reinforcement learning algorithms.
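
The alternating update described above can be illustrated with a minimal sketch. This is an assumption-laden outline, not the paper's implementation: it assumes the replay policy scores stored transitions with a small linear model, samples a replay subset via Bernoulli draws, and is updated with a REINFORCE-style gradient whose reward is the change in the agent's evaluation return; the names `ReplayPolicy`, `feature`, `agent_update`, and `evaluate_agent` are illustrative placeholders.

```python
# Sketch of an ERO-style alternating loop (assumptions noted in the text above).
import numpy as np

rng = np.random.default_rng(0)

class ReplayPolicy:
    """Scores each stored transition; higher score -> higher replay probability."""
    def __init__(self, feat_dim, lr=1e-3):
        self.w = np.zeros(feat_dim)   # linear scoring weights (illustrative choice)
        self.lr = lr

    def probs(self, feats):
        # feats: (N, feat_dim) array of per-transition features
        return 1.0 / (1.0 + np.exp(-feats @ self.w))

    def sample_mask(self, feats):
        p = self.probs(feats)
        return rng.random(len(p)) < p, p          # Bernoulli replay mask

    def update(self, feats, mask, p, replay_reward):
        # REINFORCE-style update: grad log P(mask | feats) scaled by the replay reward,
        # where the replay reward is the improvement in the agent's return.
        grad = ((mask.astype(float) - p)[:, None] * feats).mean(axis=0)
        self.w += self.lr * replay_reward * grad

def train(n_iters, buffer, agent, replay_policy, feature, agent_update, evaluate_agent):
    """Alternately update the agent policy and the replay policy."""
    prev_return = evaluate_agent(agent)
    for _ in range(n_iters):
        feats = np.stack([feature(t) for t in buffer])
        mask, p = replay_policy.sample_mask(feats)
        batch = [t for t, m in zip(buffer, mask) if m]
        agent_update(agent, batch)                # agent maximizes return on replayed data
        cur_return = evaluate_agent(agent)
        # replay policy is rewarded for providing experiences that improved the agent
        replay_policy.update(feats, mask, p, cur_return - prev_return)
        prev_return = cur_return
```

In this reading, the agent's learner is treated as a black box (`agent_update`), so the sketch applies to any off-policy algorithm; only the feature map and the form of the replay reward are design choices assumed here.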