Mapping Images to Scene Graphs with Permutation-Invariant Structured Prediction

2020-02-18

Abstract 

Machine understanding of complex images is a key goal of artificial intelligence. One challenge underlying this task is that visual scenes contain multiple interrelated objects, and that global context plays an important role in interpreting the scene. A natural modeling framework for capturing such effects is structured prediction, which optimizes over complex labels, while modeling within-label interactions. However, it is unclear what principles should guide the design of a structured prediction model that utilizes the power of deep learning components. Here we propose a design principle for such architectures that follows from a natural requirement of permutation invariance. We prove a necessary and sufficient characterization for architectures that follow this invariance, and discuss its implication on model design. Finally, we show that the resulting model achieves new state-of-the-art results on the Visual Genome scene-graph labeling benchmark, outperforming all recent approaches.
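To make the permutation-invariance design principle concrete, below is a minimal sketch (not the authors' released code) of a layer with the structure the paper characterizes: per-edge features are summed into per-node messages, per-node terms are summed into a graph-level summary, and a final network maps each node together with that summary to label scores, so that relabeling the nodes only permutes the output rows. The module names `phi`, `alpha`, `rho` and all dimensions are illustrative assumptions.

```python
# Minimal sketch of a graph-permutation-invariant layer of the form
# rho(z_i, sum_j alpha(z_j, sum_k phi(z_j, e_jk, z_k))).
# Module names and dimensions are illustrative, not the authors' implementation.
import torch
import torch.nn as nn


class GPILayer(nn.Module):
    def __init__(self, node_dim, edge_dim, hidden_dim, out_dim):
        super().__init__()
        # phi: aggregates information along each directed edge (z_j, e_jk, z_k)
        self.phi = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden_dim), nn.ReLU())
        # alpha: combines a node with the sum of its incoming edge messages
        self.alpha = nn.Sequential(
            nn.Linear(node_dim + hidden_dim, hidden_dim), nn.ReLU())
        # rho: maps each node plus the graph-level summary to label scores
        self.rho = nn.Linear(node_dim + hidden_dim, out_dim)

    def forward(self, z, e):
        # z: (n, node_dim) node features; e: (n, n, edge_dim) edge features
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)   # sender node features
        zk = z.unsqueeze(0).expand(n, n, -1)   # receiver node features
        # sum over neighbors -> per-node messages, invariant to neighbor order
        msgs = self.phi(torch.cat([zi, e, zk], dim=-1)).sum(dim=1)
        # sum over nodes -> graph-level summary, invariant to node order
        graph = self.alpha(torch.cat([z, msgs], dim=-1)).sum(dim=0)
        graph = graph.unsqueeze(0).expand(n, -1)
        return self.rho(torch.cat([z, graph], dim=-1))  # (n, out_dim) scores


# Usage sketch: permuting the rows of z (and e accordingly) only permutes the output rows.
layer = GPILayer(node_dim=16, edge_dim=8, hidden_dim=32, out_dim=5)
scores = layer(torch.randn(4, 16), torch.randn(4, 4, 8))  # shape (4, 5)
```

Because all interactions enter only through sums, the per-node outputs are unchanged (up to the same reordering) under any relabeling of the nodes, which is the graph-permutation invariance requirement discussed in the abstract.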

