Abstract
Human actions often involve complex interactions across
several inter-related objects in the scene. However, existing approaches to fine-grained video understanding or visual relationship detection often rely on single-object representations or pairwise object relationships. Furthermore,
learning interactions across multiple objects over hundreds of
video frames is computationally infeasible, and performance may suffer because a large combinatorial space has to
be modeled. In this paper, we propose to efficiently learn
higher-order interactions between arbitrary subgroups of
objects for fine-grained video understanding. We demonstrate that modeling object interactions significantly improves accuracy for both action recognition and video
captioning, while reducing computation by more than a factor of three compared with traditional pairwise relationship modeling. The proposed
method is validated on two large-scale datasets: Kinetics
and ActivityNet Captions. Our SINet and SINet-Caption
achieve state-of-the-art performance on both datasets even
though the videos are sampled at a maximum of 1 FPS. To
the best of our knowledge, this is the first work to model object interactions on open-domain, large-scale video datasets,
and we additionally model higher-order object interactions,
which further improve performance at low computational
cost.