Abstract
This paper addresses a fundamental problem of scene understanding: how to parse a scene image into a structured configuration (i.e., a semantic object hierarchy with object interaction relations) that finely accords with human perception. We propose a deep architecture consisting of two networks: i) a convolutional neural network (CNN) extracting the image representation for pixelwise object labeling and ii) a recursive neural network (RNN) discovering the hierarchical object structure and the inter-object relations. Rather than relying on elaborate user annotations (e.g., manually labeled semantic maps and relations), we train our deep model in a weakly-supervised manner by leveraging the descriptive sentences of the training images. Specifically, we decompose each sentence into a semantic tree consisting of nouns and verb phrases, and use these trees to discover the configurations of the training images. Once these scene configurations are determined, the parameters of both the CNN and the RNN are updated accordingly by back propagation. The entire model training is accomplished through an Expectation-Maximization method. Extensive experiments suggest that our model is capable of producing meaningful and structured scene configurations and achieving more favorable scene labeling performance on PASCAL VOC 2012 than other state-of-the-art weakly-supervised methods.