Abstract
Neural image/video captioning models can generate accurate descriptions, but their internal process of mapping
regions to words is a black box and therefore difficult to
explain. Top-down neural saliency methods can find important regions given a high-level semantic task such as object
classification, but cannot use a natural language sentence
as the top-down input for the task. In this paper, we propose Caption-Guided Visual Saliency to expose the region-to-word mapping in modern encoder-decoder networks and
demonstrate that it is learned implicitly from caption training data, without any pixel-level annotations. Our approach
can produce spatial or spatiotemporal heatmaps for both
predicted captions and arbitrary query sentences. It
recovers saliency without the overhead of introducing explicit attention layers, and can be used to analyze a variety of existing model architectures and improve their design. Evaluation on large-scale video and image datasets
demonstrates that our approach achieves captioning performance comparable to existing methods while providing
more accurate saliency heatmaps. Our code is available at
visionlearninggroup.github.io/caption-guided-saliency/.