
Connecting Gaze, Scene, and Attention: Generalized Attention Estimation via Joint Modeling of Gaze and Scene Saliency

2019-10-21
Abstract. This paper addresses the challenging problem of estimating the general visual attention of people in images. Our proposed method is designed to work across multiple naturalistic social scenarios and provides a full picture of the subject's attention and gaze. In contrast, earlier works on gaze and attention estimation have focused on constrained problems in more specific contexts. In particular, our model explicitly represents the gaze direction and handles out-of-frame gaze targets. We leverage three different datasets using a multi-task learning approach. We evaluate our method on widely used benchmarks for single tasks such as gaze angle estimation and attention-within-an-image, as well as on the new and challenging task of generalized visual attention prediction. In addition, we have created extended annotations for the MMDB and GazeFollow datasets, which are used in our experiments and which we will publicly release.
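The abstract describes a shared model trained with multi-task learning to jointly predict gaze direction, an attention/saliency heatmap, and whether the gaze target lies outside the frame. The sketch below illustrates one plausible way to wire such a multi-head architecture and combine the per-task losses; the backbone, head designs, layer sizes, and loss weights are all illustrative assumptions, not the authors' actual model.

```python
# Minimal multi-task sketch, assuming a shared image backbone with three heads:
# gaze direction, attention heatmap, and in-frame vs. out-of-frame classification.
# All architectural choices here are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskGazeNet(nn.Module):
    def __init__(self, feat_dim=128, heatmap_size=64):
        super().__init__()
        # Shared convolutional backbone (placeholder for a pretrained encoder).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        flat = feat_dim * 8 * 8
        # Head 1: gaze direction as a 3D unit vector.
        self.gaze_head = nn.Linear(flat, 3)
        # Head 2: attention heatmap over the image (saliency-style output).
        self.heatmap_head = nn.Linear(flat, heatmap_size * heatmap_size)
        self.heatmap_size = heatmap_size
        # Head 3: binary logit for "gaze target is inside the frame".
        self.in_frame_head = nn.Linear(flat, 1)

    def forward(self, image):
        feat = self.backbone(image).flatten(1)
        gaze = F.normalize(self.gaze_head(feat), dim=1)
        heatmap = self.heatmap_head(feat).view(
            -1, self.heatmap_size, self.heatmap_size
        )
        in_frame_logit = self.in_frame_head(feat).squeeze(1)
        return gaze, heatmap, in_frame_logit


if __name__ == "__main__":
    model = MultiTaskGazeNet()
    images = torch.randn(2, 3, 224, 224)
    gaze, heatmap, in_frame = model(images)
    # In multi-task training, each dataset supervises only the heads for which
    # it has labels; the total loss is a weighted sum over the available tasks.
    gaze_target = F.normalize(torch.randn(2, 3), dim=1)
    heatmap_target = torch.rand(2, 64, 64)
    in_frame_target = torch.tensor([1.0, 0.0])
    loss = (
        (1 - (gaze * gaze_target).sum(dim=1)).mean()  # cosine gaze loss
        + F.binary_cross_entropy_with_logits(heatmap, heatmap_target)
        + F.binary_cross_entropy_with_logits(in_frame, in_frame_target)
    )
    print(gaze.shape, heatmap.shape, in_frame.shape, float(loss))
```

One design point this sketch highlights: because different datasets label different subsets of tasks (e.g., gaze angles only, or attention targets only), masking out the unlabeled losses per batch lets a single shared backbone benefit from all three sources of supervision.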
