CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action
Localization in Untrimmed Videos
Abstract
Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background
content, we need not only to recognize the action categories, but also to localize the start time and end time of
each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of predetermined boundaries. However, a desirable model should
move beyond segment-level and make dense predictions at
a fine granularity in time to determine precise temporal
boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters
on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at
the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network efficiently in an end-to-end manner. Our model not only achieves superior performance in detecting actions in every frame, but
also significantly boosts the precision of localizing temporal
boundaries. Finally, the CDC network is highly efficient, processing 500 frames per
second on a single GPU server. Source code and trained
models are available online at https://bitbucket.org/columbiadvmm/cdc.
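To make the core idea concrete, the following is a minimal shape-level sketch (not the authors' implementation) of what a single CDC filter computes: for each temporal step of the 3D ConvNet's reduced feature map, it collapses the spatial grid to a scalar (spatial downsampling) while emitting two output steps (temporal upsampling by a factor of 2). The input size, channel count, and upsampling factor here are illustrative assumptions.

```python
import numpy as np

# Input feature map for one channel coming out of a 3D ConvNet:
# L_steps temporally-reduced steps over an illustrative 4x4 spatial grid.
L_steps, H, W = 8, 4, 4
rng = np.random.default_rng(0)
x = rng.random((L_steps, H, W))

# A CDC filter holds two independent spatial kernels. Each one collapses
# the HxW grid to a scalar, and together they emit two output steps per
# input step: spatial downsampling and temporal upsampling at once.
k_even = rng.random((H, W))  # produces output step 2t
k_odd = rng.random((H, W))   # produces output step 2t + 1

y = np.empty(2 * L_steps)
for t in range(L_steps):
    y[2 * t] = np.sum(x[t] * k_even)     # spatial grid -> one scalar
    y[2 * t + 1] = np.sum(x[t] * k_odd)  # second scalar, new time step

print(y.shape)  # time axis doubled, spatial axes collapsed
```

In the full network one such filter pair exists per (input-channel, output-channel) combination, and the per-frame outputs feed a softmax over action classes; this sketch only shows the shape transformation that distinguishes a CDC filter from a plain convolution or deconvolution.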