Abstract
Visual relations, such as “person ride bike” and “bike
next to car”, offer a comprehensive scene understanding
of an image, and have already shown their great utility in
connecting computer vision and natural language. However, due to the challenging combinatorial complexity of
modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional
object detection networks, we propose a Visual Translation
Embedding network (VTransE) for visual relation detection.
VTransE places objects in a low-dimensional relation space
where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object.
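Concretely, writing x_s and x_o for the subject and object features, the translation constraint can be sketched as

    W_s x_s + t_p ≈ W_o x_o,

where W_s and W_o project the features into the low-dimensional relation space and t_p is the translation vector learned for predicate p (this notation is illustrative; the abstract itself fixes only the form subject + predicate ≈ object).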
We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports
training and inference in a single forward/backward pass.
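The translation view above admits a minimal, self-contained sketch of predicate scoring: a predicate fits a subject-object pair when its translation vector t_p matches the displacement between the projected features. The distance-based score, the dimensions, and names such as W_s, W_o, and T below are our illustrative assumptions, not the paper's exact interface.

    import numpy as np

    rng = np.random.default_rng(0)
    D_feat, D_rel, N_pred = 4096, 500, 70   # feature dim, relation-space dim, #predicates
    W_s = rng.standard_normal((D_rel, D_feat)) * 0.01   # subject projection
    W_o = rng.standard_normal((D_rel, D_feat)) * 0.01   # object projection
    T   = rng.standard_normal((N_pred, D_rel)) * 0.01   # one translation vector per predicate

    def predicate_scores(x_s, x_o):
        """Score each predicate p by how well W_s x_s + t_p matches W_o x_o."""
        diff = W_o @ x_o - W_s @ x_s         # displacement the pair induces in relation space
        # Negative squared distance: higher = better fit of subject + predicate ≈ object.
        return -np.sum((T - diff) ** 2, axis=1)

    x_s, x_o = rng.standard_normal(D_feat), rng.standard_normal(D_feat)
    print(int(np.argmax(predicate_scores(x_s, x_o))))   # index of best-fitting predicate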
To the best of our knowledge, VTransE is the first end-to-end relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on
two large-scale datasets: Visual Relationship and Visual
Genome. Note that even though VTransE is a purely visual
model, it is still competitive with Lu's multi-modal model
with language priors.