Abstract
We propose to compose dynamic tree structures that
place the objects in an image into a visual context, helping visual reasoning tasks such as scene graph generation
and visual Q&A. Our visual context tree model, dubbed
VCTREE, has two key advantages over existing structured
object representations including chains and fully-connected
graphs: 1) the efficient and expressive binary tree encodes
the inherent parallel/hierarchical relationships among objects, e.g., “clothes” and “pants” usually co-occur and
belong to “person”; 2) the dynamic structure varies from
image to image and task to task, allowing more content-
/task-specific message passing among objects. To construct
a VCTREE, we design a score function that calculates the
task-dependent validity between each object pair, and the
tree is the binary version of the maximum spanning tree
from the score matrix. Then, visual contexts are encoded by
bidirectional TreeLSTM and decoded by task-specific models. We develop a hybrid learning procedure that integrates end-task supervised learning with tree-structure
reinforcement learning, where the former’s evaluation result serves as a self-critic for the latter’s structure exploration. Experimental results on two benchmarks that require reasoning over contexts, Visual Genome for scene
graph generation and VQA2.0 for visual Q&A, show that
VCTREE outperforms state-of-the-art methods while discovering interpretable visual context structures.
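
To make the construction step concrete, below is a minimal sketch, assuming a symmetric task-dependent score matrix over detected objects; the Prim-style maximum-spanning-tree routine, the left-child/right-sibling binarization, and all names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: score matrix -> maximum spanning tree -> binary VCTREE-style tree.
# Hypothetical code for illustration; not the authors' released implementation.
import numpy as np


class BiNode:
    """Node of the binarized context tree."""
    def __init__(self, obj_idx):
        self.obj_idx = obj_idx
        self.left = None   # first child of this node in the multi-branch tree
        self.right = None  # next sibling of this node in the multi-branch tree


def maximum_spanning_tree(scores, root=0):
    """Prim-style maximum spanning tree over a symmetric pairwise score matrix.

    Returns children[i] = list of object indices attached under node i.
    """
    n = scores.shape[0]
    in_tree = {root}
    children = {i: [] for i in range(n)}
    while len(in_tree) < n:
        best, best_edge = -np.inf, None
        for u in in_tree:
            for v in range(n):
                if v not in in_tree and scores[u, v] > best:
                    best, best_edge = scores[u, v], (u, v)
        u, v = best_edge
        children[u].append(v)
        in_tree.add(v)
    return children


def binarize(children, root=0):
    """Left-child / right-sibling conversion of the multi-branch tree."""
    node = BiNode(root)
    kids = [binarize(children, c) for c in children[root]]
    for first, second in zip(kids, kids[1:]):
        first.right = second        # siblings chained along right branches
    if kids:
        node.left = kids[0]         # first child hangs on the left branch
    return node


if __name__ == "__main__":
    # Toy pairwise "validity" scores for 4 objects, e.g. person/clothes/pants/hat.
    scores = np.array([[0.0, 0.9, 0.8, 0.3],
                       [0.9, 0.0, 0.2, 0.1],
                       [0.8, 0.2, 0.0, 0.4],
                       [0.3, 0.1, 0.4, 0.0]])
    tree = binarize(maximum_spanning_tree(scores))
    print(tree.obj_idx, tree.left.obj_idx)  # root object and its first child
```

In the full model, the resulting binary tree would then be traversed by a bidirectional TreeLSTM to encode context, and the score matrix itself is learned, with structure exploration driven by the self-critic reinforcement signal described above.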