Video Person Re-identification with Competitive Snippet-similarity Aggregation
and Co-attentive Snippet Embedding
Abstract
In this paper, we address video-based person re-identification with competitive snippet-similarity aggregation and co-attentive snippet embedding. Our approach
divides long person sequences into multiple short video
snippets and aggregates the top-ranked snippet similarities for sequence-similarity estimation. With this strategy,
the intra-person visual variation within each sample is reduced for
similarity estimation, while diverse appearance and temporal information are maintained. The
snippet similarities are estimated by a deep neural network
with a novel temporal co-attention for snippet embedding.
The attention weights are obtained based on a query feature, which is learned from the whole probe snippet by an
LSTM network, making the resulting embeddings less affected by noisy frames. The gallery snippet shares the same
query feature as the probe snippet, so its embedding presents features that are more relevant for comparison with the probe, yielding a more accurate snippet
similarity. Extensive ablation studies verify the effectiveness of competitive snippet-similarity aggregation as well
as the temporal co-attentive embedding. Our method significantly outperforms the current state-of-the-art approaches
on multiple datasets.
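
As a rough illustration of the competitive aggregation described above, the sketch below splits two frame-feature sequences into fixed-length snippets, computes pairwise snippet similarities, and averages only the top-ranked ones as the sequence similarity. This is not the authors' implementation: the snippet length, the top-k count, cosine similarity, and the mean-pooled snippet embedding are assumptions for illustration (the paper instead uses the co-attentive embedding described in the abstract).

```python
import numpy as np

def split_into_snippets(seq_feats, snippet_len=8):
    """Split per-frame features (T x D) into non-overlapping snippets."""
    n = len(seq_feats) // snippet_len
    return [seq_feats[i * snippet_len:(i + 1) * snippet_len] for i in range(n)]

def snippet_embedding(snippet):
    """Placeholder embedding: mean-pool frame features and L2-normalize."""
    v = snippet.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def sequence_similarity(probe_feats, gallery_feats, snippet_len=8, top_k=3):
    """Competitive snippet-similarity aggregation (sketch):
    average only the top-k pairwise snippet similarities."""
    probe = [snippet_embedding(s) for s in split_into_snippets(probe_feats, snippet_len)]
    gallery = [snippet_embedding(s) for s in split_into_snippets(gallery_feats, snippet_len)]
    sims = np.array([[p @ g for g in gallery] for p in probe]).ravel()
    k = min(top_k, sims.size)
    return float(np.sort(sims)[-k:].mean())

# Toy usage with random frame features (T x D).
rng = np.random.default_rng(0)
probe_feats = rng.standard_normal((64, 128))
gallery_feats = rng.standard_normal((48, 128))
print(sequence_similarity(probe_feats, gallery_feats))
```

Because only the top-ranked snippet pairs contribute to the score, occasional occluded or poorly aligned snippets do not drag down the sequence-level similarity, which matches the motivation of suppressing intra-person variation while retaining diverse appearance and temporal information.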
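
The co-attentive embedding can be sketched in the same spirit. In the sketch below, the query feature that the paper obtains from an LSTM over the whole probe snippet is replaced by a simple mean-pooling placeholder, and the dot-product scoring and softmax temperature are assumptions for illustration; the key point shown is that the probe-derived query is shared when embedding both the probe and the gallery snippet.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def query_feature(probe_snippet):
    """Placeholder for the LSTM that summarizes the probe snippet
    into a query vector; here we simply mean-pool the probe frames."""
    return probe_snippet.mean(axis=0)

def co_attentive_embedding(snippet, query):
    """Weight each frame by its affinity to the shared query feature,
    then sum the weighted frames into a single snippet embedding."""
    scale = np.sqrt(snippet.shape[1])
    weights = softmax(snippet @ query / scale)   # one attention weight per frame
    return weights @ snippet                     # attention-pooled embedding

# Toy usage: both snippets are attended with the same probe-derived query.
rng = np.random.default_rng(0)
probe_snippet = rng.standard_normal((8, 128))    # 8 frames, 128-d features
gallery_snippet = rng.standard_normal((8, 128))
q = query_feature(probe_snippet)
probe_emb = co_attentive_embedding(probe_snippet, q)
gallery_emb = co_attentive_embedding(gallery_snippet, q)
cos = probe_emb @ gallery_emb / (np.linalg.norm(probe_emb) * np.linalg.norm(gallery_emb))
print(cos)
```

Since both embeddings are conditioned on the same probe-derived query, the gallery embedding emphasizes frames relevant to the probe and down-weights noisy ones, which is the intuition behind the more accurate snippet similarity claimed in the abstract.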