Abstract
We study the problem of holistic scene understanding. We
would like to obtain a compact, expressive, and interpretable
representation of scenes that encodes information such as
the number of objects and their categories, poses, positions,
etc. Such a representation would allow us to reason about
and even reconstruct or manipulate elements of the scene.
Prior work has used encoder-decoder neural
architectures to learn image representations; however, representations obtained in this way are typically uninterpretable,
or explain only a single object in the scene.
In this work, we propose a new approach to learn an
interpretable distributed representation of scenes. Our approach employs a deterministic rendering function as the
decoder, mapping a naturally structured and disentangled
scene description, which we name scene XML, to an image.
By doing so, the encoder is forced to perform the inverse of
the rendering operation (a.k.a. de-rendering) to transform
an input image to the structured scene XML that the decoder
used to produce the image. We use an object-proposal-based
encoder that is trained by minimizing both the supervised
prediction and the unsupervised reconstruction errors. Experiments demonstrate that our approach works well on
scene de-rendering with two different graphics engines, and
our learned representation can be easily adapted for a wide
range of applications such as image editing, inpainting, visual
analogy-making, and image captioning.
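The training objective described above can be illustrated with a minimal sketch. This is not the paper's implementation: the plain dictionary stands in for the structured scene XML, the toy square renderer stands in for the graphics engine, and the names `render` and `combined_loss` are hypothetical. It shows only the shape of the objective: a supervised error on predicted scene attributes plus an unsupervised reconstruction error computed by re-rendering the predicted scene through the fixed, deterministic decoder.

```python
import numpy as np

def render(scene, size=16):
    """Toy deterministic renderer (stand-in for the graphics engine):
    draws each object as a filled square at its (x, y) position."""
    canvas = np.zeros((size, size))
    for obj in scene["objects"]:
        x, y, s = obj["x"], obj["y"], obj["size"]
        canvas[y:y + s, x:x + s] = obj["intensity"]
    return canvas

def combined_loss(pred_scene, true_scene, image):
    """Supervised prediction error on the structured scene attributes,
    plus unsupervised reconstruction error through the fixed renderer."""
    supervised = 0.0
    for p, t in zip(pred_scene["objects"], true_scene["objects"]):
        supervised += sum((p[k] - t[k]) ** 2
                          for k in ("x", "y", "size", "intensity"))
    reconstruction = float(((render(pred_scene) - image) ** 2).mean())
    return supervised + reconstruction
```

A perfect prediction incurs zero loss, while perturbing any attribute of the predicted scene raises both terms, which is what forces the encoder to recover the exact scene description the renderer consumed.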