Diversity Regularized Spatiotemporal Attention
for Video-based Person Re-identification
Abstract
Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. The network learns multiple spatial attention models and employs a diversity regularization term to ensure that the models do not all discover the same body part. Features extracted from local image regions are organized by spatial attention model and combined using temporal attention. As a result, the network learns latent representations of the face, torso, and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms state-of-the-art approaches by large margins on multiple metrics.
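For concreteness, the sketch below illustrates the two attention stages described above in PyTorch: per-frame spatial attention maps with a diversity penalty, followed by per-part temporal attention over the clip. All module names, tensor shapes, and the exact form of the diversity penalty here are illustrative assumptions, not the paper's released implementation; the penalty shown, ||A A^T - I||_F^2 on square-rooted attention maps, is one common way to encourage the maps to attend to disjoint regions.

```python
# Minimal sketch of diversity-regularized spatiotemporal attention.
# Shapes, module names, and the penalty form are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatioTemporalAttention(nn.Module):
    """K spatial attention maps per frame, diversity-regularized,
    followed by per-part temporal attention over the clip."""

    def __init__(self, feat_dim=256, num_parts=4):  # K = num_parts (assumed)
        super().__init__()
        self.num_parts = num_parts
        # One 1x1-conv head producing K spatial attention maps (assumed design).
        self.spatial = nn.Conv2d(feat_dim, num_parts, kernel_size=1)
        # Scores each part feature for temporal weighting (assumed design).
        self.temporal = nn.Linear(feat_dim, 1)

    def forward(self, x):
        # x: (T, C, H, W) conv features for one video clip of T frames.
        T, C, H, W = x.shape
        # Spatial attention: one softmax-normalized map per part, per frame.
        a = F.softmax(self.spatial(x).view(T, self.num_parts, H * W), dim=-1)
        # Part features: attention-weighted sums over spatial locations.
        feats = torch.einsum('tkn,tcn->tkc', a, x.view(T, C, H * W))
        # Diversity penalty: with A holding square roots of attention weights,
        # off-diagonal entries of A A^T measure overlap between two maps, so
        # ||A A^T - I||_F^2 pushes the K maps toward different body parts.
        A = a.sqrt()
        eye = torch.eye(self.num_parts, device=x.device)
        div_loss = ((A @ A.transpose(1, 2) - eye) ** 2).sum(dim=(1, 2)).mean()
        # Temporal attention: weight each part's per-frame features by learned
        # scores, so the best available image patches dominate the clip feature.
        w = F.softmax(self.temporal(feats).squeeze(-1), dim=0)  # (T, K)
        clip_feat = torch.einsum('tk,tkc->kc', w, feats)        # (K, C)
        return clip_feat.flatten(), div_loss
```

In such a sketch, `div_loss` would be added to the re-identification loss with a weighting hyperparameter, and the concatenated part features would feed the matching head.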