Abstract
Spatiotemporal feature learning is of central importance
for action recognition in videos. Existing deep neural network models either learn spatial and temporal features independently (C2D) or jointly with unconstrained parameters (C3D). In this paper, we propose a novel neural operation which encodes spatiotemporal features collaboratively by imposing a weight-sharing constraint on the learnable parameters. In particular, we perform 2D convolution along three orthogonal views of volumetric video data,
which capture spatial appearance and temporal motion cues, respectively. By sharing the convolution kernels across the different views, spatial and temporal features are learned collaboratively and thus benefit from each other. The complementary features are subsequently fused by a weighted
summation whose coefficients are learned end-to-end. Our
approach achieves state-of-the-art performance on large-scale benchmarks and won first place in the Moments
in Time Challenge 2018. Moreover, based on the learned
coefficients of different views, we are able to quantify the
contributions of spatial and temporal features. This analysis sheds light on the interpretability of the model and may also guide the future design of algorithms for video recognition.
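For illustration, below is a minimal sketch of the operation described above, not the authors' released implementation. It assumes a PyTorch-style interface, an input tensor of shape (N, C, T, H, W), and softmax normalization of the learned fusion coefficients as one plausible design choice; the class name and all parameter names are hypothetical.

```python
# A minimal sketch of a collaborative spatiotemporal convolution:
# one shared 2D kernel is applied along the three orthogonal views
# of a video volume (H-W, T-W, T-H), and the three responses are
# fused by a learned, softmax-normalized weighted summation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CollaborativeSpatioTemporalConv(nn.Module):
    def __init__(self, in_channels, out_channels, k=3):
        super().__init__()
        # A single shared 2D kernel bank: (out, in, k, k).
        self.weight = nn.Parameter(
            torch.empty(out_channels, in_channels, k, k))
        nn.init.kaiming_normal_(self.weight)
        # Learnable fusion coefficients, one per view.
        self.alpha = nn.Parameter(torch.zeros(3))
        self.pad = k // 2

    def forward(self, x):
        # x: (N, C, T, H, W)
        w = self.weight
        # H-W view (spatial appearance): kernel (out, in, 1, k, k).
        hw = F.conv3d(x, w.unsqueeze(2), padding=(0, self.pad, self.pad))
        # T-W view (temporal motion): kernel (out, in, k, 1, k).
        tw = F.conv3d(x, w.unsqueeze(3), padding=(self.pad, 0, self.pad))
        # T-H view (temporal motion): kernel (out, in, k, k, 1).
        th = F.conv3d(x, w.unsqueeze(4), padding=(self.pad, self.pad, 0))
        # Weighted summation; coefficients are learned end-to-end.
        a = torch.softmax(self.alpha, dim=0)
        return a[0] * hw + a[1] * tw + a[2] * th


# Usage: an 8-frame RGB clip at 56x56 resolution.
x = torch.randn(2, 3, 8, 56, 56)
cost = CollaborativeSpatioTemporalConv(3, 16)
print(cost(x).shape)  # torch.Size([2, 16, 8, 56, 56])
```

After training, inspecting the normalized coefficients (here, `softmax(alpha)`) indicates how much each view, and hence spatial versus temporal information, contributes to the fused features.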