Learning to Look Around:
Intelligently Exploring Unseen Environments for Unknown Tasks
Abstract
It is common to implicitly assume access to intelligently
captured inputs (e.g., photos from a human photographer),
yet autonomously capturing good observations is itself a
major challenge. We address the problem of learning to
look around: if an agent can voluntarily acquire new views of its environment, how can it learn
efficient exploratory behaviors that yield informative visual observations? We propose a reinforcement learning
solution, where the agent is rewarded for actions that reduce its uncertainty about the unobserved portions of its
environment. Based on this principle, we develop a recurrent neural network-based approach to perform active completion of panoramic natural scenes and 3D object shapes.
Crucially, the learned policies are tied neither to any specific recognition task nor to the particular semantic content seen during
training. As a result, 1) the learned “look around” behavior is relevant even for new tasks in unseen environments,
and 2) training data acquisition involves no manual labeling. Through tests in diverse settings, we demonstrate that
our approach learns useful generic policies that transfer to
new unseen tasks and environments.
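The uncertainty-reduction principle described above can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a hypothetical `completion_error` proxy (mean squared error of a completion model's predictions on the unobserved views) and rewards an action by how much that error drops after the new observation is incorporated.

```python
import numpy as np

def completion_error(predicted, actual):
    """Mean squared error between predicted and true unobserved views
    (a stand-in for the agent's uncertainty about what it has not seen)."""
    return float(np.mean((predicted - actual) ** 2))

def uncertainty_reduction_reward(err_before, err_after):
    """Reward an action by how much it reduced completion error on the
    unobserved portion of the scene -- a simplified proxy for the
    paper's uncertainty-reduction reward."""
    return err_before - err_after

# Toy example: a "scene" of 4 views; views 1-3 are unobserved.
rng = np.random.default_rng(0)
scene = rng.random((4, 8, 8))                # 4 views, 8x8 pixels each
pred_before = np.zeros_like(scene)           # naive completion before acting
# After acquiring an informative view, the completion improves:
pred_after = scene + 0.1 * rng.standard_normal(scene.shape)

err_b = completion_error(pred_before[1:], scene[1:])
err_a = completion_error(pred_after[1:], scene[1:])
reward = uncertainty_reduction_reward(err_b, err_a)
```

In a full RL setup this scalar would be the per-step reward driving the policy toward views that most improve the agent's completion of the scene.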