Abstract
Image captioning is an important task, applicable to
virtual assistants, editing tools, image indexing, and support of the disabled. In recent years, significant progress
has been made in image captioning, using Recurrent Neural Networks powered by long short-term memory (LSTM)
units. Despite mitigating the vanishing gradient problem,
and despite their compelling ability to memorize dependencies, LSTM units are complex and inherently sequential
across time. To address this issue, recent work has shown
benefits of convolutional networks for machine translation
and conditional image generation [9, 34, 35]. Inspired by
their success, in this paper, we develop a convolutional image captioning technique. We demonstrate its efficacy on
the challenging MSCOCO dataset and show performance on par with the LSTM baseline [16], while requiring
less training time per parameter. We also
perform a detailed analysis, providing compelling reasons
in favor of convolutional language generation approaches.