Abstract
In this work, we contribute to video saliency research in two ways. First, we introduce a new benchmark for predicting human eye movements during dynamic scene free-viewing, a resource this field has long called for. Our dataset, named DHF1K (Dynamic Human Fixation), consists of 1K high-quality, carefully selected video sequences spanning a wide range of scenes, motions, object types, and background complexities. Existing video saliency datasets lack the variety and generality of common dynamic scenes and fall short of covering challenging situations in unconstrained environments. In contrast, DHF1K makes a significant leap in scalability, diversity, and difficulty, and is expected to boost video saliency modeling. Second, we
propose a novel video saliency model that augments the
CNN-LSTM network architecture with an attention mechanism to enable fast, end-to-end saliency learning. The
attention mechanism explicitly encodes static saliency information, thus allowing the LSTM to focus on learning a more flexible temporal saliency representation across successive frames. Such a design fully leverages existing large-scale
static fixation datasets, avoids overfitting, and significantly
improves training efficiency and testing performance. We
thoroughly evaluate our model against state-of-the-art saliency models on three large-scale datasets (i.e., DHF1K, Hollywood2, and UCF Sports). Experimental results over more than 1.2K testing videos containing 400K frames demonstrate that our model outperforms its competitors.
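
To make the second contribution concrete, the following is a minimal, runnable PyTorch sketch of an attention-augmented CNN-LSTM saliency predictor. It is an illustration under simplifying assumptions, not our released implementation: the two-layer CNN standing in for a pretrained backbone, the ConvLSTMCell, the feature width feat_channels=64, and the residual-style re-weighting feat * (1 + att) are all hypothetical choices made for brevity.

```python
# Illustrative sketch of an attention-augmented CNN-LSTM saliency model.
# Layer sizes, the ConvLSTM cell, and the re-weighting scheme are
# simplifying assumptions, not the exact released configuration.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A basic convolutional LSTM cell operating on 2D feature maps."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        # One convolution jointly produces the input/forget/output/candidate gates.
        self.gates = nn.Conv2d(2 * channels, 4 * channels,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class VideoSaliencyNet(nn.Module):
    def __init__(self, feat_channels=64):
        super().__init__()
        # Tiny stand-in for a pretrained CNN backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU())
        # Static attention head: a per-location saliency weight that can be
        # supervised with large-scale static fixation data.
        self.attention = nn.Sequential(
            nn.Conv2d(feat_channels, 1, 1), nn.Sigmoid())
        self.convlstm = ConvLSTMCell(feat_channels)
        self.readout = nn.Conv2d(feat_channels, 1, 1)  # per-frame saliency map

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        h = c = None
        maps = []
        for step in range(t):
            feat = self.cnn(frames[:, step])
            att = self.attention(feat)          # static saliency attention
            feat = feat * (1.0 + att)           # residual-style re-weighting
            if h is None:
                h = torch.zeros_like(feat)
                c = torch.zeros_like(feat)
            h, c = self.convlstm(feat, (h, c))  # temporal saliency dynamics
            maps.append(torch.sigmoid(self.readout(h)))
        return torch.stack(maps, dim=1)         # (batch, time, 1, H/4, W/4)

# Example: saliency maps for a batch of two 8-frame clips at 64x64.
model = VideoSaliencyNet()
clip = torch.randn(2, 8, 3, 64, 64)
print(model(clip).shape)  # torch.Size([2, 8, 1, 16, 16])
```

The design point the sketch mirrors is the division of labor described above: the attention head can be trained on static fixation datasets alone, leaving the ConvLSTM to model only the temporal saliency dynamics across successive frames.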