Abstract
Despite the remarkable progress in action recognition over the past several years, existing methods remain limited in both efficiency and effectiveness. Methods that treat appearance and motion as separate streams usually incur the cost of optical flow computation, while those relying on 3D convolutions over raw video frames often yield inferior performance in practice. In this paper, we propose a new ConvNet architecture for video representation learning that derives disentangled components of dynamics purely from raw video frames, without the need for optical flow estimation. Specifically, the learned representation comprises three components, representing static appearance, apparent motion, and appearance changes. We introduce 3D pooling, cost volume processing, and warped feature differences to extract these three components, respectively. These modules are incorporated as three branches of a unified network, which share the underlying features and are learned jointly in an end-to-end manner. On two large datasets, UCF101 [22] and Kinetics [16], our method achieves competitive performance with high efficiency, using only RGB frame sequences as input.
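As a rough illustration of the three-branch design described above, the following PyTorch-style sketch shows how static appearance (3D pooling), apparent motion (a local cost volume between adjacent frame features), and appearance changes (differences of motion-compensated features) might be computed from a shared feature map. All function names, tensor shapes, and the displacement range are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the three branches (hypothetical; not the paper's code).
import torch
import torch.nn.functional as F


def static_appearance(feats):
    """Static appearance via 3D (spatiotemporal) average pooling.

    feats: (B, C, T, H, W) shared backbone features.
    Returns a (B, C) pooled descriptor.
    """
    return feats.mean(dim=(2, 3, 4))


def cost_volume(f1, f2, max_disp=3):
    """Apparent motion via a local cost volume between adjacent frames.

    f1, f2: (B, C, H, W) features of frames t and t+1.
    Returns (B, (2*max_disp+1)**2, H, W) matching costs, one channel
    per candidate displacement in a (2*max_disp+1)^2 search window.
    """
    B, C, H, W = f1.shape
    f2_pad = F.pad(f2, [max_disp] * 4)  # zero-pad the spatial dims
    costs = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = f2_pad[:, :, dy:dy + H, dx:dx + W]
            # Correlation of f1 with a displaced crop of f2.
            costs.append((f1 * shifted).sum(dim=1, keepdim=True) / C)
    return torch.cat(costs, dim=1)


def warp(feats, flow):
    """Backward-warp feats by a dense flow field with bilinear sampling.

    feats: (B, C, H, W); flow: (B, 2, H, W) in pixel units (dx, dy).
    """
    B, _, H, W = feats.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, device=feats.device, dtype=feats.dtype),
        torch.arange(W, device=feats.device, dtype=feats.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # (B, H, W)
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid_x / max(W - 1, 1) - 1.0
    grid_y = 2.0 * grid_y / max(H - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(feats, grid, align_corners=True)


def appearance_change(f1, f2, flow):
    """Appearance changes as the residual left after motion compensation:
    warp frame t+1 features back to frame t, then take the difference."""
    return f1 - warp(f2, flow)
```

In the actual network, the flow used by the warping step could itself be derived from the cost volume branch; here it is treated as a given input so the sketch stays self-contained.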