Abstract. Recently, much progress has been made in image captioning, and an encoder-decoder framework has been adopted by all the state-of-the-art models. Under this framework, an input image is encoded by
a convolutional neural network (CNN) and then translated into natural
language with a recurrent neural network (RNN). Existing models built on this framework employ only one kind of CNN, e.g., ResNet or Inception-X, which describes the image content from only one specific viewpoint. Consequently, the semantic meaning of the input image cannot be comprehensively understood, which limits further performance improvement.
In this paper, to exploit the complementary information from multiple encoders, we propose a novel recurrent fusion network (RFNet) for the image captioning task. The fusion process in our model exploits the interactions among the outputs of the image encoders and generates new, compact, and informative representations for the decoder. Experiments
on the MSCOCO dataset demonstrate the effectiveness of our proposed
RFNet, which sets a new state of the art for image captioning.
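As a rough illustration of the core idea, the sketch below fuses global features from multiple pretrained CNN encoders into a single compact vector for an RNN decoder. It is a minimal sketch under assumed names and dimensions (the MultiEncoderFusion class, the GRU-cell-based fusion step, and the 2048/1536-d feature sizes are illustrative), not the authors' RFNet implementation.

```python
# Minimal sketch (not the authors' RFNet): fuse features from several
# CNN encoders into one compact representation for an RNN decoder.
import torch
import torch.nn as nn

class MultiEncoderFusion(nn.Module):
    def __init__(self, encoder_dims, fused_dim):
        super().__init__()
        # Project each encoder's features to a common dimension.
        self.projections = nn.ModuleList(
            nn.Linear(d, fused_dim) for d in encoder_dims
        )
        # A recurrent cell lets the fused state interact with each
        # encoder's view in turn, loosely mirroring recurrent fusion.
        self.fuser = nn.GRUCell(fused_dim, fused_dim)

    def forward(self, encoder_feats):
        # encoder_feats: list of (batch, dim_i) global feature vectors,
        # one per CNN encoder (e.g., ResNet, Inception-X).
        batch = encoder_feats[0].size(0)
        h = encoder_feats[0].new_zeros(batch, self.fuser.hidden_size)
        for proj, feats in zip(self.projections, encoder_feats):
            h = self.fuser(proj(feats), h)
        return h  # compact fused representation for the decoder

# Usage: fuse ResNet-like (2048-d) and Inception-like (1536-d) features.
fusion = MultiEncoderFusion([2048, 1536], fused_dim=512)
feats = [torch.randn(4, 2048), torch.randn(4, 1536)]
fused = fusion(feats)  # (4, 512), fed to the RNN caption decoder
```

Driving the fusion with a recurrent cell, rather than simple concatenation, lets the fused state revisit each encoder's output sequentially, so the interactions among the different views can shape the final representation.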