Abstract. Temporal relational reasoning, the ability to link meaningful
transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and
interpretable network module, the Temporal Relation Network (TRN),
designed to learn and reason about temporal dependencies between video
frames at multiple time scales. We evaluate TRN-equipped networks on
activity recognition tasks using three recent video datasets (Something-Something, Jester, and Charades) that fundamentally depend on temporal relational reasoning. Our results demonstrate that the proposed
TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. With only sparsely sampled video
frames, TRN-equipped networks can accurately predict human-object
interactions in the Something-Something dataset and identify various
human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks
and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and
interpretable visual common sense knowledge in videos.
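As a rough illustration of the idea described above, the sketch below shows one way a multi-scale temporal relation module could be written in PyTorch: small MLP heads score ordered tuples of sparsely sampled per-frame features at several tuple sizes (2-frame, 3-frame, and so on), and the per-scale class logits are summed. This is a minimal sketch under stated assumptions, not the paper's exact implementation; the class name MultiScaleTRN, the hidden width, and the max_subsample tuple-subsampling scheme are all illustrative choices.

```python
import itertools

import torch
import torch.nn as nn


class MultiScaleTRN(nn.Module):
    """Illustrative multi-scale temporal relation module (hypothetical sketch).

    Given per-frame CNN features, score ordered frame tuples of several
    sizes with small MLP heads and sum the per-scale class logits.
    """

    def __init__(self, feat_dim, num_frames, num_classes,
                 hidden=256, max_subsample=3):
        super().__init__()
        self.num_frames = num_frames
        self.scales = list(range(2, num_frames + 1))  # tuple sizes 2..N
        self.max_subsample = max_subsample  # tuples scored per scale (assumption)
        # One relation head per tuple size; input is the concatenated tuple.
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(scale * feat_dim, hidden),
                nn.ReLU(inplace=True),
                nn.Linear(hidden, num_classes),
            )
            for scale in self.scales
        )

    def forward(self, feats):
        # feats: (batch, num_frames, feat_dim) from sparsely sampled frames
        logits = 0.0
        for scale, head in zip(self.scales, self.heads):
            # Ascending index combinations preserve temporal order.
            combos = list(itertools.combinations(range(self.num_frames), scale))
            combos = combos[: self.max_subsample]  # cap tuples for efficiency
            for idx in combos:
                tuple_feats = feats[:, list(idx), :].flatten(1)
                logits = logits + head(tuple_feats)
        return logits


# Usage sketch: a base CNN (e.g., a pretrained ResNet applied per frame)
# would supply the frame features.
feats = torch.randn(4, 8, 512)            # (batch, num_frames, feat_dim)
trn = MultiScaleTRN(feat_dim=512, num_frames=8, num_classes=174)
print(trn(feats).shape)                    # torch.Size([4, 174])
```

Summing logits across tuple sizes is what lets the module reason at multiple time scales: short tuples capture brief transformations while longer tuples capture relations spanning the whole clip.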