Abstract

This paper attacks the challenging problem of zero-example video retrieval. In such a retrieval paradigm, an
end user searches for unlabeled videos by ad-hoc queries
described in natural language text with no visual example
provided. Given videos as sequences of frames and queries
as sequences of words, an effective sequence-to-sequence
cross-modal matching is required. The majority of existing methods are concept based, extracting relevant concepts from queries and videos and accordingly establishing associations between the two modalities. In contrast,
this paper takes a concept-free approach, proposing a dual
deep encoding network that encodes videos and queries into
powerful dense representations of their own. Dual encoding is conceptually simple, practically effective, and end-to-end. As experiments on three benchmarks, i.e., MSR-VTT and the TRECVID 2016 and 2017 Ad-hoc Video Search tasks, show,
the proposed solution establishes a new state-of-the-art for
zero-example video retrieval.
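
To make the sequence-to-sequence matching setup concrete, below is a minimal sketch of a dual encoding model. It is not the paper's actual multi-level architecture: the GRU encoders, mean pooling, dimensions, and module names (SequenceEncoder, DualEncoding) are illustrative assumptions. It only shows the core idea of encoding frame sequences and word sequences into a common dense space and ranking videos by similarity to a text query.

```python
# Conceptual sketch of dual encoding (assumed simplification, not the paper's
# exact architecture): two independent encoders map frame features and word
# embeddings into a shared space; cosine similarity ranks videos for a query.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SequenceEncoder(nn.Module):
    """Encode a variable-length feature sequence into one unit-length vector."""

    def __init__(self, input_dim: int, hidden_dim: int, embed_dim: int):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim)
        out, _ = self.rnn(x)                           # (batch, seq_len, 2*hidden_dim)
        pooled = out.mean(dim=1)                       # mean-pool over the sequence
        return F.normalize(self.proj(pooled), dim=-1)  # L2-normalized embedding


class DualEncoding(nn.Module):
    """One branch for video frame features, one for query word vectors."""

    def __init__(self, frame_dim=2048, word_dim=300, hidden_dim=512, embed_dim=1024):
        super().__init__()
        self.video_enc = SequenceEncoder(frame_dim, hidden_dim, embed_dim)
        self.text_enc = SequenceEncoder(word_dim, hidden_dim, embed_dim)

    def forward(self, frames: torch.Tensor, words: torch.Tensor) -> torch.Tensor:
        v = self.video_enc(frames)   # (num_videos, embed_dim)
        t = self.text_enc(words)     # (num_queries, embed_dim)
        return t @ v.T               # cosine similarities: (num_queries, num_videos)


if __name__ == "__main__":
    model = DualEncoding()
    frames = torch.randn(4, 30, 2048)        # 4 videos, 30 frames of CNN features each
    words = torch.randn(1, 12, 300)          # 1 text query of 12 word embeddings
    sims = model(frames, words)              # (1, 4) query-video similarity scores
    print(sims.argsort(descending=True))     # video indices ranked for the query
```

Because the two encoders share no weights and need no explicit concept vocabulary, this concept-free formulation can be trained end-to-end directly from video-text pairs, which is the property the abstract emphasizes.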