Abstract
Learning in the space-time domain remains a very challenging problem in machine learning and computer vision. Current computational models for understanding spatio-temporal visual data are heavily rooted in the classical single-image based paradigm. It is not yet well understood how to integrate information in space and time into a single, general model. We propose a neural graph model, recurrent in space and time, suitable for capturing both the local appearance and the complex higher-level interactions of different entities and objects within the changing world scene. Nodes and edges in our graph have dedicated neural networks for processing information. Nodes operate over features extracted from local parts in space and time and over previous memory states. Edges process messages between connected nodes at different locations and spatial scales or between past and present time. Messages are passed iteratively in order to transmit information globally and establish long-range interactions. Our model is general and could learn to recognize a variety of high-level spatio-temporal concepts and be applied to different learning tasks. We demonstrate, through extensive experiments and ablation studies, that our model outperforms strong baselines and top published methods on recognizing complex activities in video. Moreover, we obtain state-of-the-art performance on the challenging Something-Something human-object interaction dataset.
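To make the abstract's description concrete, the following is a minimal NumPy sketch of one recurrent space-time message-passing step: nodes hold local features and memory states, spatial edges exchange messages iteratively, and a temporal edge fuses the present state with the past. All names (`message_passing_step`, `W_edge`, `W_time`) and the specific update rules are illustrative assumptions, not the authors' actual implementation, which uses trained neural networks for each node and edge.

```python
# Hypothetical sketch of one space-time message-passing step, using
# plain NumPy stand-ins for the learned node/edge networks described
# in the abstract. Names and update rules are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

N, D = 4, 8                      # 4 graph nodes, 8-dim features
H = rng.standard_normal((N, D))  # current node states (one per local region)
M = rng.standard_normal((N, D))  # recurrent memory from the previous time step

# Toy "learned" parameters; in the real model these are trained MLPs.
W_edge = rng.standard_normal((2 * D, D)) * 0.1
W_time = rng.standard_normal((2 * D, D)) * 0.1

def relu(x):
    return np.maximum(x, 0.0)

def message_passing_step(H, M, n_iters=3):
    """Iteratively exchange messages between all node pairs (space),
    then fuse each node's state with its past memory (time)."""
    for _ in range(n_iters):
        agg = np.zeros_like(H)
        for i in range(len(H)):
            for j in range(len(H)):
                if i == j:
                    continue
                # spatial edge network: message from node j to node i
                agg[i] += relu(np.concatenate([H[j], H[i]]) @ W_edge)
        H = H + agg / (len(H) - 1)   # residual spatial update
    # temporal edge: combine present state with past memory state
    H_next = relu(np.concatenate([H, M], axis=1) @ W_time)
    return H_next

H_next = message_passing_step(H, M)
print(H_next.shape)  # (4, 8)
```

Repeating the inner loop for several iterations is what lets information propagate globally across the graph, so that distant regions can establish the long-range interactions the abstract refers to.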