Abstract
Spatial relationships between objects provide important
information for text-based image retrieval. Because users are
more likely to describe a scene from a real-world perspective, using 3D spatial relationships rather than 2D relationships that assume a particular viewing direction, one of the
main challenges is to infer the 3D structure that bridges
images with users’ text descriptions. However, direct inference of 3D structure from images requires learning from
large-scale annotated data. Since interactions between objects can be reduced to a limited set of atomic spatial relations in 3D, we study the possibility of inferring 3D structure from a text description rather than from an image, applying
physical relation models to synthesize holistic 3D abstract
object layouts satisfying the spatial constraints present in
a textual description. We present a generic framework for
retrieving images from a textual description of a scene by
matching images with these generated abstract object layouts. Images are ranked by matching object detection outputs (bounding boxes) to 2D layout candidates (also represented by bounding boxes) which are obtained by projecting
the 3D scenes with sampled camera directions. We validate
our approach using public indoor scene datasets and show
that our method outperforms baselines built upon object occurrence histograms and learned 2D pairwise relations.
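To make the ranking step concrete, the following is a minimal illustrative sketch, not the paper's implementation, of scoring images by matching detected bounding boxes against projected 2D layout candidates. The greedy per-category IoU matching and all function names are assumptions introduced here for illustration.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two 2D boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def layout_match_score(detections, layout_boxes):
    """Greedy one-to-one matching between detected boxes and projected
    layout boxes of the same object category; returns the mean IoU of
    matched pairs (a simple stand-in for a learned matching cost)."""
    score, used = 0.0, set()
    for cat, det_box in detections:
        best, best_j = 0.0, None
        for j, (lay_cat, lay_box) in enumerate(layout_boxes):
            if lay_cat == cat and j not in used:
                v = iou(det_box, lay_box)
                if v > best:
                    best, best_j = v, j
        if best_j is not None:
            used.add(best_j)
            score += best
    return score / max(len(detections), 1)

def rank_images(image_detections, layout_candidates):
    """Rank images by their best score over all 2D layout candidates,
    where each candidate is obtained by projecting the synthesized 3D
    scene with one sampled camera direction."""
    scores = [max(layout_match_score(dets, cand) for cand in layout_candidates)
              for dets in image_detections]
    return np.argsort(scores)[::-1]  # indices of best-matching images first
```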