Abstract
Automatically describing a video with natural language
is regarded as a fundamental challenge in computer vision.
The problem is nevertheless nontrivial, especially when a
video contains multiple events worthy of mention,
as often happens in real videos. A natural question is
how to temporally localize and then describe these events,
a task known as “dense video captioning.” In this paper, we
present a novel framework for dense video captioning that
unifies the localization of temporal event proposals with the generation of a sentence for each proposal by jointly training both
in an end-to-end manner. To combine these two worlds, we
integrate a new design, namely descriptiveness regression,
into a single-shot detection structure to infer the descriptive
complexity of each detected proposal via sentence generation; this complexity in turn adjusts the temporal location of each
event proposal. Our model differs from existing dense video
captioning methods in that we propose a joint, global optimization of detection and captioning, and the framework
uniquely capitalizes on an attribute-augmented video captioning architecture. Extensive experiments are conducted
on the ActivityNet Captions dataset, and our framework shows
clear improvements over state-of-the-art
techniques. More remarkably, we obtain a new record: a METEOR score of 12.96% on the official ActivityNet Captions test set.