Abstract
To be successful in real-world tasks, Reinforcement Learning (RL) needs to exploit the compositional, relational, and hierarchical structure of the world, and learn to transfer it to the task at hand. Recent advances in representation learning for language make it possible to build models that acquire world knowledge from text corpora and integrate this knowledge into downstream decision-making problems. We thus argue that the time is right to investigate a tight integration of natural language understanding into RL in particular. We survey the state of the field, including work on instruction following, text games, and learning from textual domain knowledge. Finally, we call for the development of new environments as well as further investigation into the potential uses of recent Natural Language Processing (NLP) techniques for such tasks.