Abstract
In Reinforcement Learning (RL), an agent is guided by the rewards it receives from the reward function. Unfortunately, it may take many interactions with the environment to learn from sparse rewards, and it can be challenging to specify reward functions that reflect complex reward-worthy behavior. We propose using reward machines (RMs), which are automata-based representations that expose reward function structure, as a normal form representation for reward functions. We show how specifications of reward in various formal languages, including LTL and other regular languages, can be automatically translated into RMs, easing the burden of complex reward function specification. We then show how the exposed structure of the reward function can be exploited by tailored Q-learning algorithms and automated reward shaping techniques to improve the sample efficiency of reinforcement learning methods. Experiments show that these RM-tailored techniques significantly outperform state-of-the-art (deep) RL algorithms, solving problems that otherwise cannot reasonably be solved by existing approaches.
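To give a concrete sense of the automaton structure an RM exposes, the following is a minimal sketch, not the paper's implementation: the `RewardMachine` class, its event-set transition labels (actual RMs label transitions with propositional formulas over events), and the two-step "get coffee, then deliver it to the office" task are all illustrative assumptions.

```python
from typing import Dict, FrozenSet, Tuple

class RewardMachine:
    """Minimal reward machine sketch: a finite-state machine whose
    transitions are triggered by observed events and emit rewards."""

    def __init__(self, u0: int,
                 delta: Dict[Tuple[int, FrozenSet[str]], Tuple[int, float]]):
        self.u0 = u0        # initial RM state
        self.delta = delta  # (state, event set) -> (next state, reward)
        self.u = u0         # current RM state

    def step(self, events: FrozenSet[str]) -> float:
        """Advance on the events seen this environment step and return
        the reward of the transition taken (self-loop with 0 otherwise)."""
        self.u, reward = self.delta.get((self.u, events), (self.u, 0.0))
        return reward

# Hypothetical task: "get coffee, then deliver it to the office".
# State 0: no coffee yet; state 1: holding coffee; state 2: done.
rm = RewardMachine(
    u0=0,
    delta={
        (0, frozenset({"coffee"})): (1, 0.0),  # picked up coffee
        (1, frozenset({"office"})): (2, 1.0),  # delivered: reward 1
    },
)

print(rm.step(frozenset({"coffee"})))  # 0.0 -> now in state 1
print(rm.step(frozenset({"office"})))  # 1.0 -> task complete
```

Because the reward is emitted only on the final transition, the task reward is sparse from the environment's point of view; the RM's intermediate states are exactly the structure that tailored learning and reward shaping methods can exploit.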