Abstract. Visual attention has proven useful in image captioning, where the goal is to enable a captioning model to selectively focus on regions of interest. Existing models typically rely on top-down language
information and learn attention implicitly by optimizing the captioning
objectives. While somewhat effective, the learned top-down attention can
fail to focus on correct regions of interest without direct supervision of
attention. Inspired by the human visual system, which is driven not only by task-specific top-down signals but also by visual stimuli, in this work we propose to use both types of attention for image captioning.
In particular, we highlight the complementary nature of the two types
of attention and develop a model (Boosted Attention) to integrate them
for image captioning. We validate the proposed approach with state-of-the-art performance across various evaluation metrics.